Mindgard raises $8M to help enterprises with AI security testing

The funding will accelerate R&D and propel Mindgard’s expansion into the U.S. 

Mindgard, a pioneer in securing AI, today announced an $8 million funding round and the appointment of two industry leaders to the roles of Head of Product and VP of Marketing. The funding was led by .406 Ventures with participation from Atlantic Bridge, Willowtree Investments and existing investors IQ Capital and Lakestar. The new executives, Dave Ganly, a former Director of Product at Twilio, and Fergal Glynn, who most recently served as CMO at Next DLP (acquired by Fortinet), will play critical roles in the company’s product development and will spearhead Mindgard’s expansion into the North American market, establishing a leadership presence in Boston.

The deployment and use of AI introduce new risks, creating a complex security landscape that traditional tools cannot address. As a result, many AI products are launched without adequate security assurances, leaving organizations vulnerable, an issue underscored by Gartner findings that 29% of enterprises deploying AI systems have reported security breaches and that only 10% of internal auditors have visibility into AI risk. Many of these new risks, such as LLM prompt injection and jailbreaks, exploit the probabilistic and opaque nature of AI systems and manifest only at runtime. Mitigating these risks, which are unique to AI models and their toolchains, requires a fundamentally new approach.

Mindgard is revolutionizing AI security testing and automated AI red teaming with its award-winning Dynamic Application Security Testing for AI (DAST-AI) solution. This cutting-edge technology identifies and resolves AI-specific vulnerabilities that can only be detected during runtime. For organizations adopting AI or establishing guardrails, continuous security testing is essential for gaining risk visibility across the AI lifecycle. Mindgard’s solution integrates into existing automation, empowering security teams, developers, AI red teamers and pentesters to secure AI without disrupting established workflows.

“The rapid adoption of AI has introduced new and complex security risks that traditional tools cannot address,” said Greg Dracon, Partner at .406 Ventures. “Mindgard’s innovative approach, born out of the distinct challenges of securing AI, equips security teams and developers with the tools they need to deliver secure AI systems. Mindgard is well-positioned to lead this emerging market and we are thrilled to partner with them on this journey.”

“All software has security risks, and AI is no exception,” said Dr. Peter Garraghan, CEO of Mindgard and Professor at Lancaster University. “The challenge is that the way these risks manifest within AI is fundamentally different from other software. Drawing on our 10 years of experience in AI security research, Mindgard was created to tackle this challenge. We’re proud to lead the charge toward creating a safer, more secure future for AI.”

About Mindgard

Mindgard is the leader in Artificial Intelligence Security Testing. Founded at Lancaster University and backed by cutting-edge research, Mindgard enables organizations to secure their AI systems against new threats that traditional application security tools cannot address. Its industry-first, award-winning Dynamic Application Security Testing for AI (DAST-AI) solution delivers continuous security testing and automated AI red teaming across the AI lifecycle, making AI security actionable and auditable. For more information, visit mindgard.ai.