eCommerceNews Australia - Technology news for digital commerce decision-makers

The AI risk hiding in your finance team's browser tabs

Wed, 15th Apr 2026

Most finance leaders have a reasonably clear view of where their data lives. It sits in the ERP, the data warehouse, the consolidation tool. Access is controlled, audit trails exist, and the governance framework has been built up over years of careful design. What that framework was never built to account for is the moment a senior analyst opens a browser tab, pastes three months of AP data into a free AI tool, and asks it to spot anomalies.

That moment is happening every day across finance teams in Australia and New Zealand, and in most organisations, nobody has formally decided whether it should be allowed. In fact, a recent Annexa webinar poll of over 250 finance professionals found that 7 in 10 finance teams are moving data out of their systems to use AI - with just 8% having AI embedded directly inside their workflows.

The term that's emerged to describe this pattern is shadow AI - the use of personal or consumer AI accounts to process business data outside any sanctioned system or policy. It's not malicious: finance professionals are using the tools available to them to do their jobs faster, and those tools are genuinely useful. When asked how they are currently using AI with their business systems, 50% reported using tools like ChatGPT alongside their systems - and fewer than 5% are using AI effectively within operational workflows. The problem is that the data governance assumptions underpinning consumer AI products are fundamentally different from what finance teams require, and those differences are rarely visible at the point of use.

Consumer accounts, commercial data

The distinction that matters most sits inside the terms of service for the platforms your team is probably already using. In August 2025, Anthropic updated its consumer terms so that users on Free, Pro and Max plans are opted in to model training by default. Data from those accounts can be retained for up to five years if training remains enabled. Claude Pro - a paid subscription - is a consumer product under those terms, not a commercial one. The opt-out exists, but it requires the user to know it's needed and to take deliberate action.

ChatGPT operates under similar logic for consumer accounts. Standard personal accounts can use conversation history to improve models unless the user has turned that setting off.

The gap between consumer and commercial tiers on these platforms is not a minor footnote. For Claude for Work, the Anthropic API and Enterprise tiers, model training is off by default with no opt-out required. API log data is retained for seven days. Enterprise customers can negotiate Zero Data Retention agreements under which inputs and outputs are not stored beyond what is needed to screen for misuse. These are substantively different products operating under substantively different rules - and from a finance governance perspective, the distance between them is significant.

The implication is that if a member of your finance team is using a personal Claude or ChatGPT account to analyse ERP data, that data may be contributing to model training without any awareness at the organisational level. The fact that it feels like a productivity tool does not change what is happening to the data.

The permission gap inside your ERP

Shadow AI is one dimension of the risk. The other sits closer to the system itself, and it surfaces once you understand what actually happens when finance data leaves your ERP versus when AI is connected directly to it.

Cloud ERP NetSuite now allows AI platforms like Claude or ChatGPT to connect directly to live data inside the system, operating within the same permission controls that govern every other user. If a finance analyst cannot see a record, the AI cannot see it either. Every interaction is logged. The boundaries are the same ones your team already defined and your auditors already understand.

What that architecture makes visible is how different the governed approach is from the alternative. A finance analyst connecting Claude directly to NetSuite is operating inside a framework with permission controls, audit trails and defined boundaries. The same analyst exporting data and pasting it into a personal AI account is operating entirely outside that framework - with no log of what was shared, no control over what the platform does with it and no organisational visibility that it happened at all.

The gap between those two scenarios is a policy problem, and it is one that most ANZ finance teams have not yet formally resolved.

What a governed AI setup actually looks like

The practical steps are less complicated than the risk framing might suggest. For NetSuite customers, connecting AI platforms directly to live ERP data carries no additional licensing cost - the tools needed to do it are included. The setup is largely configuration rather than development, and the permission controls that govern what the AI can see and do are the same ones your team has already defined for every other user in the system.

The one area worth deliberate attention is the AI role itself. Access needs to be granted explicitly - it is off by default and cannot be assigned to administrator-level roles. The cleanest approach is a dedicated role carrying only the permissions needed for the specific workflows you intend to support. The same least-privilege principle that applies across your integration architecture applies here in exactly the same way.
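The least-privilege principle described above can be illustrated with a short sketch. The role and permission names below are hypothetical and do not reflect NetSuite's actual configuration API; the point is simply that a dedicated AI role carries only the permissions a specific workflow needs, admin-level roles are excluded, and anything not explicitly granted is denied by default.

```python
# Hypothetical sketch of a least-privilege AI role.
# Role and permission names are illustrative, not NetSuite's actual API.

APPROVED_AI_ROLE = {
    "name": "AI Assistant - AP Anomaly Review",
    "is_admin": False,              # admin-level roles cannot carry AI access
    "permissions": {                # only what this workflow needs
        "vendor_bills": "view",
        "vendor_payments": "view",
    },
}

def can_access(role: dict, record_type: str, action: str) -> bool:
    """Deny by default: access exists only if explicitly granted."""
    if role.get("is_admin"):
        raise ValueError("AI access cannot be assigned to admin roles")
    granted = role["permissions"].get(record_type)
    return granted == action or (granted == "edit" and action == "view")

# The AI can view AP records inside the defined workflow...
assert can_access(APPROVED_AI_ROLE, "vendor_bills", "view")
# ...but everything outside that scope is denied by default.
assert not can_access(APPROVED_AI_ROLE, "payroll", "view")
assert not can_access(APPROVED_AI_ROLE, "vendor_bills", "edit")
```

The design choice worth noting is the default: an empty permission set grants nothing, so the role can only ever widen through a deliberate, reviewable change.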

The policy question finance leaders need to answer

Technology configuration handles the technical side of the problem. The harder work is the organisational side, which requires a decision rather than a default.

The question finance leaders need to answer is not whether their team is using AI. The evidence strongly suggests they are - and that the appetite for it runs deeper than most governance frameworks have caught up with. Only 16% of finance teams say they are very confident their data is ready for AI-driven tools, yet 35% are already comfortable with AI taking some form of autonomous action and 54% favour a recommendation-and-approval model that mirrors how finance controls already work. The question is whether that usage is happening inside a governed framework or outside one, and whether the organisation has formally defined which tools are approved for use with finance data and on what terms.

That definition needs to be specific enough to distinguish between a commercial API account and a consumer subscription on the same platform. It needs to cover not just what tools are approved, but what data categories can be used with them and under what access model. And it needs to be reviewed periodically - AI platform terms have changed materially in the past twelve months and will continue to change, sometimes with limited notice.
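As an illustration only, a tool register specific enough to make those distinctions might be sketched as follows. The platform names, tiers, data categories, and review date are assumptions for the example, not a recommended policy; the structure simply shows that approval attaches to a platform-and-tier pair, not a platform alone.

```python
from datetime import date

# Hypothetical approved-tools register: illustrative entries, not a recommended policy.
AI_TOOL_POLICY = {
    "review_due": date(2026, 10, 1),  # platform terms change; review periodically
    "tools": [
        {
            "platform": "Claude",
            "tier": "Enterprise",      # commercial tier: training off by default
            "approved": True,
            "data_categories": ["AP", "AR", "GL summaries"],
            "access_model": "recommend-and-approve",
        },
        {
            "platform": "Claude",
            "tier": "Pro",             # consumer subscription on the same platform
            "approved": False,         # opted in to model training by default
            "data_categories": [],
            "access_model": None,
        },
    ],
}

def is_approved(policy: dict, platform: str, tier: str, category: str) -> bool:
    """A tool/tier pair is approved only for explicitly listed data categories."""
    for tool in policy["tools"]:
        if tool["platform"] == platform and tool["tier"] == tier:
            return tool["approved"] and category in tool["data_categories"]
    return False  # unlisted tools are unapproved by default

assert is_approved(AI_TOOL_POLICY, "Claude", "Enterprise", "AP")
assert not is_approved(AI_TOOL_POLICY, "Claude", "Pro", "AP")        # same platform, different tier
assert not is_approved(AI_TOOL_POLICY, "ChatGPT", "Personal", "AP")  # unlisted: denied
```

Keying approval on the platform-tier pair is what lets the register distinguish a commercial API account from a consumer subscription on the same platform, which is exactly the distinction the policy needs to capture.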

The AI capabilities available to finance teams in 2026 are useful. NetSuite's embedded AI features, the ability to connect governed AI assistants directly to live ERP data, and the agentic workflows now emerging represent a shift in what finance teams can do with the systems they already have. That value is real. The access model through which your team reaches it determines whether the governance framework built around your finance function is intact or quietly being bypassed one browser tab at a time.

Annexa and Oracle NetSuite recently ran a live webinar exploring how AI is showing up inside real finance workflows - from embedded NetSuite capabilities through to agentic AI tools and what's coming next with NetSuite Next. Watch the webinar replay here.