Julie Davila

AI code is creating security bottlenecks for Australian businesses

Mon, 16th Feb 2026

AI coding assistants are accelerating software delivery, but also introducing new risks. Recent GitLab research among DevOps practitioners in Australia shows that while AI generates more than one-third of code, teams rank AI-driven security vulnerabilities and quality control as the biggest barriers to adoption.

As AI coding tools scale, these issues increasingly fall on the shoulders of security teams. While AI enables faster development, it is simultaneously creating security review bottlenecks that slow delivery down. Security engineers who once reviewed hundreds of lines of code per hour now face tens of thousands, as AI contributes across the codebase. AI increases code volume dramatically, but our capacity to defend has not kept pace.

At the same time, threat actors are using autonomous techniques to identify weaknesses faster than manual reviews can keep pace.

This pressure is exposing an inherent constraint. Security models built around human-driven review worked when code volumes were manageable. At AI scale, they no longer hold. Organisations risk falling behind both attackers and their own delivery teams if they don't change how they embed security into development workflows.

Here are two compounding failures driving these bottlenecks, and what Australian organisations can do to avoid them.

Scaling AI without redesigning security review workflows

The "shift left" movement aimed to address security bottlenecks by shifting security responsibility to developers earlier in the software development lifecycle. Adding security testing to development workflows sounds good in theory, but forcing developers to address security checks that often flag false positives is suboptimal. It can unintentionally add hours to their workday with no incentives. Developers find workarounds because they need to ship features on a deadline.

The shift-left approach failed to account for the entire SDLC, resulting in unintended downstream effects. Now, teams are repeating the same mistake with AI code assistants.

These assistants optimise for code generation while leaving the review process unchanged. The solution isn't adding more people or more tools in isolation. Instead, organisations should think holistically about their entire pipeline and map their value streams before adding more AI tools.

This also means documenting processes that rely on tacit, institutional knowledge; undocumented processes complicate how teams define and measure the value AI delivers. If AI makes an undocumented process more efficient, it is impossible to measure or prove that value.

Leaders should implement scalable review methodologies that combine AI with practical human oversight, establishing prioritisation frameworks based on measurable risk. For instance, code that touches sensitive customer data or production databases requires a much more intensive review than a feature to customise an application's theme.
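
As a minimal sketch of how such a prioritisation framework might look, the snippet below routes changed files to different review depths based on where they sit in the codebase. The risk tiers, path patterns, and review levels are hypothetical examples chosen for illustration, not a prescription from the research or any specific tool.

```python
# Illustrative sketch: triage changed files into review depths by risk tier.
# The patterns and tiers below are assumptions made for this example.
from dataclasses import dataclass

HIGH_RISK_PATTERNS = ("payments/", "customers/", "migrations/", "auth/")
LOW_RISK_PATTERNS = ("themes/", "docs/", "assets/")

@dataclass
class ReviewDecision:
    path: str
    tier: str    # "high", "standard", or "light"
    review: str  # who or what reviews the change

def triage(changed_path: str) -> ReviewDecision:
    """Route a changed file to a review depth based on measurable risk."""
    if any(changed_path.startswith(p) for p in HIGH_RISK_PATTERNS):
        # Touches customer data, production databases or auth: intensive human review.
        return ReviewDecision(changed_path, "high", "security engineer + automated scan")
    if any(changed_path.startswith(p) for p in LOW_RISK_PATTERNS):
        # Cosmetic or documentation changes: automated checks are usually enough.
        return ReviewDecision(changed_path, "light", "automated scan only")
    # Everything else gets standard peer review plus automated scanning.
    return ReviewDecision(changed_path, "standard", "peer review + automated scan")

if __name__ == "__main__":
    for path in ("payments/refund.py", "themes/dark_mode.css", "api/orders.py"):
        print(triage(path))
```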

Securing AI agents using frameworks built for a different era

Traditional security frameworks assume predictable human behaviour. AI agents don't follow those rules, and the result is an entirely new class of risk.

The complexity multiplies when agents interact with other agents across organisational boundaries. When your internal agent receives instructions from a third-party agent that itself received instructions from another external system, your security model must account for potentially malicious requests operating outside your direct observation.

Avoiding these issues requires developing security controls to limit permissions and monitor agent behaviour. Emerging approaches, like establishing composite identities for AI systems, can help tie AI activity to human accountability by tracking which agents performed specific actions and who authorised them.
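
As an illustrative sketch of these controls, the snippet below ties each agent action to both an agent identity and the human who authorised it, denying anything outside the agent's allowed scopes. The scope names, agent identifiers and log format are assumptions made for the example, not an established standard or a specific vendor's API.

```python
# Sketch of a composite-identity audit record and permission gate for an AI agent.
# All names, scopes and the log format are hypothetical.
import json
import time

ALLOWED_SCOPES = {
    "deploy-agent": {"read:repo", "run:tests"},  # what this agent is permitted to do
}

def perform_action(agent_id: str, authorised_by: str, action: str, scope: str) -> bool:
    """Deny actions outside the agent's allowed scopes and log every attempt
    against both the acting agent and the accountable human."""
    allowed = scope in ALLOWED_SCOPES.get(agent_id, set())
    record = {
        "timestamp": time.time(),
        "agent": agent_id,               # the acting AI system
        "authorised_by": authorised_by,  # the human who authorised it
        "action": action,
        "scope": scope,
        "allowed": allowed,
    }
    print(json.dumps(record))            # in practice, ship to an audit log
    return allowed

if __name__ == "__main__":
    perform_action("deploy-agent", "jane.doe", "run unit tests", "run:tests")  # permitted
    perform_action("deploy-agent", "jane.doe", "drop prod table", "write:db")  # denied
```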

In conjunction, fostering system design fluency within security teams can make it easier to accurately assess how a new AI implementation may impact existing security boundaries. Many security engineers today struggle to articulate how the backend of an LLM actually works, but understanding how an AI system is designed is fundamental to understanding AI security risks. This doesn't require deep engineering expertise for every component, but rather a basic understanding of how the pieces fit together to achieve outcomes, much like security professionals understand how web applications work.

The path forward

Most Australian organisations will spend the next two years building AI capabilities on systems they know are imperfect. Waiting for everything to be fixed is neither realistic nor necessary. There is no single blueprint for securing AI-driven development. The priority is to acknowledge risk, manage it deliberately, and improve controls as AI adoption scales.

Security teams cannot solve this alone. Recent DX research shows that while most developers now use AI tools and save several hours each week, organisational friction, including meetings, interruptions, slow reviews, and CI delays, often erodes those gains. Some teams see faster delivery and better stability, while others accumulate technical debt at pace.

The differentiator is not the AI tools themselves, but the strength of underlying engineering practices. As continuous delivery expert Bryan Finster notes, "AI is an amplifier. If your delivery system is healthy, AI makes it better. If it's broken, AI makes it worse."

AI is exposing foundational issues at scale. Security reviews sit downstream, absorbing the impact of weak processes.

To move forward, security teams must advocate for practices that enable secure AI adoption: documented workflows, rigorous testing, and continuous delivery approaches that embed security throughout the lifecycle. In many cases, the real constraint is the quality of what reaches security teams in the first place.

The organisations that succeed will be the ones that address these structural issues now, before AI-generated code volumes make them significantly harder to fix.