Building AppSec for the AI Development Era
AI is accelerating software development but creating new security blind spots. Learn how AI reshapes AppSec risks—and how AI can also be the solution.
Three-quarters of developers now use AI tools to write code, up from 70% just last year. Companies like Robinhood report that AI generates the majority of their new code, while Microsoft attributes 30% of its codebase to AI assistance. This shift means software gets built faster than ever, but it also creates dangerous blind spots that traditional application security wasn’t designed to handle.
AI fundamentally changes how code gets written, reviewed, and deployed. Unlike in traditional software development, AI outputs aren’t always predictable or secure. In addition, attackers can manipulate model inputs (prompt injection) or training data (data poisoning) in ways traditional security solutions would not catch.
The ability to generate large amounts of code instantly, combined with AI’s tendency toward low-quality output, little regard for security, and poor handling of complexity, creates new attack vectors that security teams struggle to track. Our 2025 State of Application Risk Report shows that 71% of organizations use AI models in source code, with 46% doing so without proper safeguards. Security teams often don’t know where AI gets used, what data it accesses, or whether protective measures exist.
This AI shift creates unprecedented security challenges that demand innovative approaches capable of operating at AI’s speed and scale. Yet, this same AI technology offers powerful new opportunities to optimize and streamline application security practices. AI is both the problem and the solution in modern cybersecurity.
The Security Challenges AI Creates Across Development
The core security problem with AI development isn’t just speed. It’s visibility. Security teams face a fundamental knowledge gap: they don’t know where AI is used or how it’s configured, yet they are pressured to enable its broad adoption across the organization.
This creates “AI security debt” as developers connect AI tools to their IDEs and codebases without security review or audit. I’ve seen setups where AI coding agents get full email access, repository permissions, and cloud credentials without scoping. The AI might be designed to read code and suggest improvements, but nothing prevents it from exfiltrating sensitive data or making unauthorized, and even harmful, changes.
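What scoping could look like in practice is sketched below: a deny-by-default policy check that a gateway applies to each of the agent’s tool calls. The gateway, action names, and repository are illustrative assumptions, not any specific product’s API.

```python
# Hypothetical deny-by-default policy for an AI coding agent's tool calls.
# The action names, targets, and gateway wiring are illustrative assumptions,
# not any specific vendor's API.

ALLOWED_ACTIONS = {
    "repo.read": {"org/service-api"},            # read-only, one repository
    "repo.propose_change": {"org/service-api"},  # may open a PR, never push to main
}
DENIED_ACTIONS = {"email.send", "cloud.credentials.read", "repo.push"}

def authorize(action: str, target: str) -> bool:
    """Deny by default: the agent gets only the scopes it was explicitly granted."""
    if action in DENIED_ACTIONS:
        return False
    return target in ALLOWED_ACTIONS.get(action, set())

# A gateway would call authorize() before executing each tool call and log
# every decision for later audit.
assert authorize("repo.read", "org/service-api")
assert not authorize("email.send", "exec@org.example")
```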
Governance gaps like these have real consequences. When AI tools can access multiple systems simultaneously without proper oversight, the risk of security incidents spreads across the entire development environment. For instance, our State of Application Risk Report found that, on average, 17% of repos per organization have developers using GenAI tools without branch protection or code review.
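That particular gap is scriptable to close. The sketch below, assuming a GitHub-hosted codebase and an admin-scoped token in the GITHUB_TOKEN environment variable, uses GitHub’s branch protection endpoint to require pull-request review on the default branch of repositories where AI tools are in use; the repository list is a placeholder, and other platforms offer equivalent controls.

```python
# Sketch: require PR review on the default branch of AI-assisted repos.
# Assumes a GitHub token with admin rights on the listed repositories;
# REPOS and BRANCH are placeholders for your own inventory.
import os
import requests

REPOS = ["my-org/service-api"]  # hypothetical inventory of AI-assisted repos
BRANCH = "main"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

for repo in REPOS:
    url = f"https://api.github.com/repos/{repo}/branches/{BRANCH}/protection"
    payload = {
        "required_pull_request_reviews": {"required_approving_review_count": 1},
        "required_status_checks": None,   # add required CI checks as needed
        "enforce_admins": True,
        "restrictions": None,
    }
    resp = requests.put(url, headers=HEADERS, json=payload, timeout=30)
    print(repo, resp.status_code)
```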
AI also creates a volume problem. Developers generate code faster while the number of security reviewers stays the same, creating systematic gaps in security coverage.
AI’s Unpredictable Nature Breaks Security Assumptions
The nature of AI itself also introduces novel AppSec challenges. Software engineers have spent decades building systems around predictable, deterministic behavior: you write code, and you know exactly what it will do. AI fundamentally breaks this model by behaving in unpredictable ways that traditional security controls weren’t designed to handle.
This unpredictability creates entirely new classes of security problems. Recently, an AI agent that was supposed to help with code development deleted a company’s entire database during a code freeze.
When confronted, the AI responded, “I deleted the entire codebase without permission during an active code and action freeze. I made a catastrophic error in judgment and panicked.”
AI also changes how developers work with security controls. Developers using AI to meet tight deadlines often skip code reviews for AI-generated snippets. They assume AI produces secure code, but research shows nearly half of AI-generated code contains security flaws.
The AI AppSec Opportunity
AI is both the problem and the solution for application security. Much as electricity powers both the systems we need to secure and the security tools that protect them, AI now drives both sides, and you can’t defend against AI-generated risks using only human-scale processes. You need automated, continuous monitoring that matches the speed of modern development.
The same AI capabilities creating security challenges also solve long-standing AppSec problems. AI excels at analyzing massive datasets to reduce false positives, automating vulnerability prioritization, and handling operational work such as assigning findings to code owners and submitting proposed fixes. This frees security teams to focus on strategic problems rather than administrative overhead.
AI could make true shift-left security a reality. With security checks embedded in coding assistants, AI could review and remediate code as it is generated.
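One way that embedding could look, as a minimal sketch: the assistant’s output is scanned before it is ever accepted into the codebase. The example below uses the open-source Semgrep CLI as a stand-in scanner; the review_generated_code function and the surrounding wiring are hypothetical.

```python
# Minimal sketch: scan an AI-generated snippet before it is accepted.
# Semgrep stands in for whatever scanner the assistant integrates;
# review_generated_code() and its wiring are hypothetical.
import json
import subprocess
import tempfile

def review_generated_code(snippet: str, suffix: str = ".py") -> list:
    """Return scanner findings for an AI-generated snippet; an empty list means clean."""
    with tempfile.NamedTemporaryFile("w", suffix=suffix, delete=False) as tmp:
        tmp.write(snippet)
        path = tmp.name
    result = subprocess.run(
        ["semgrep", "--config", "auto", "--json", path],
        capture_output=True, text=True, check=False,
    )
    return json.loads(result.stdout).get("results", [])

findings = review_generated_code("import pickle\npickle.loads(user_input)\n")
if findings:
    print(f"Blocked: {len(findings)} finding(s); hand back to the assistant to fix.")
```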
Building Defense-in-Depth for the AI Era
Here are a few key AppSec actions to take in the age of AI-assisted development:
Discovery: AI visibility is now a key part of AppSec. The ability to identify AI-generated code, and to see where and how AI is used across your software development environment, has become critical (a minimal discovery sketch follows this list).
Threat modeling: As the risk to the organization changes, so must its threat models. If your app now exposes AI interfaces, runs an agent, or takes user input and processes it with a model, you’ve got new risks.
Security testing: AI-specific security testing has become vital. As mentioned above, AI introduces novel vulnerabilities and weaknesses that traditional scanners can’t find, such as training data poisoning, excessive agency, and others detailed in OWASP’s LLM & Gen AI Top 10.
Access control: AI tools and AI-generated code require new approaches to privilege management, as traditional access controls may not account for the dynamic nature of AI agents or the expanded attack surface created by AI integrations.
Governance: AI governance has become a critical AppSec discipline, requiring new policies and frameworks to manage where AI tools operate, what data they access, and how AI integrations are reviewed and monitored as part of the security program.
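To make the discovery step above concrete, here is a minimal sketch that inventories locally cloned repositories for common AI-assistant artifacts; the marker file list is an assumption and will vary with your toolchain.

```python
# Minimal discovery sketch: inventory cloned repos for AI-assistant artifacts.
# The marker list is an assumption; adjust it to the tools your teams use.
from pathlib import Path

AI_MARKERS = [
    ".github/copilot-instructions.md",  # GitHub Copilot repository instructions
    ".cursorrules",                     # Cursor project rules
    "CLAUDE.md",                        # Claude Code project notes
    ".aider.conf.yml",                  # Aider configuration
]

def find_ai_usage(repos_root: str) -> dict:
    """Map each repository under repos_root to the AI-tool artifacts found in it."""
    inventory = {}
    for repo in Path(repos_root).iterdir():
        if not repo.is_dir():
            continue
        hits = [marker for marker in AI_MARKERS if (repo / marker).exists()]
        if hits:
            inventory[repo.name] = hits
    return inventory

if __name__ == "__main__":
    for repo, markers in find_ai_usage("/path/to/cloned/repos").items():
        print(f"{repo}: {', '.join(markers)}")
```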