When AI Starts Making Decisions, Cybersecurity Becomes A Governance Issue
VINAYAK SREEDHAR
AI is no longer just generating content or analysing data – it is beginning to make decisions, trigger actions, and automate complex IT processes.
The shift towards agentic AI – systems capable of evaluating context and autonomously executing tasks – is reshaping how organisations manage technology, risk, and trust. But it is also exposing a gap in how many organisations approach cybersecurity.
Traditional IT automation has always been predictable. Scripts run predefined instructions: if a server reaches a certain threshold, scale it up. If a system crashes, reboot it.
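In code terms, that style of automation is little more than a fixed rule. A minimal sketch – with all names and thresholds invented for illustration – might look like this:

```python
# Traditional rule-based automation: every condition maps to a fixed
# action, so behaviour is fully predictable. All names are illustrative.
from dataclasses import dataclass

CPU_SCALE_THRESHOLD = 0.85  # scale up when CPU utilisation exceeds 85%

@dataclass
class Server:
    name: str
    cpu_utilisation: float
    is_responding: bool

def scale_up(server: Server) -> None:
    print(f"scaling up {server.name}")

def reboot(server: Server) -> None:
    print(f"rebooting {server.name}")

def run_automation(server: Server) -> None:
    # Predefined instructions: the same input always produces the same action.
    if server.cpu_utilisation > CPU_SCALE_THRESHOLD:
        scale_up(server)
    if not server.is_responding:
        reboot(server)

run_automation(Server("web-01", cpu_utilisation=0.92, is_responding=True))
```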

Agentic AI works differently.
These systems can analyse multiple variables, weigh trade-offs, and decide what action to take – whether that’s allocating resources, managing infrastructure, or responding to incidents.
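A simplified sketch makes the contrast clear. Rather than following a single rule, the agent scores competing actions against several weighted factors – the candidate actions, inputs, and scoring below are purely illustrative assumptions:

```python
# Agentic decision-making, heavily simplified: the agent weighs several
# variables and trade-offs, then chooses among competing actions.
# Actions, inputs, and scoring here are illustrative assumptions.

def choose_action(context: dict) -> str:
    candidates = {
        "shut_down_idle_resources": context["idle_ratio"] - context["restart_risk"],
        "scale_cloud_infrastructure": context["load_trend"] - context["cost_pressure"],
        "initiate_incident_remediation": context["threat_score"] * 2,
        "do_nothing": 0.1,  # inaction is itself a weighed option
    }
    # The highest-scoring action wins; change the inputs and the decision changes.
    return max(candidates, key=candidates.get)

print(choose_action({
    "idle_ratio": 0.6, "restart_risk": 0.2,
    "load_trend": 0.3, "cost_pressure": 0.5,
    "threat_score": 0.1,
}))  # prints "shut_down_idle_resources"
```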
In practice, that might mean an AI agent deciding to shut down idle resources, scale cloud infrastructure, or initiate remediation steps during a security incident. The productivity upside is enormous. But autonomy introduces a new category of risk.
When machines begin acting on behalf of organisations, identity and access governance suddenly becomes mission-critical. If an AI system has excessive permissions or insufficient oversight, the consequences are no longer theoretical. Autonomous systems can act quickly and at scale, which means mistakes – or malicious exploitation – can spread just as fast.
Many organisations still treat AI risk primarily as a model problem – worrying about hallucinations, bias, or inaccurate outputs. But the real governance challenge begins when AI systems start interacting with live systems, data, and infrastructure.
Agentic systems are not just answering questions. They are executing actions across business environments. This makes identity, access control, and visibility essential.
One of the most common risks is over-permissioned systems. Giving an AI agent broad administrative privileges may speed up deployment, but it also dramatically increases the attack surface.
Security experts increasingly recommend a staged approach: start with low-risk tasks, deploy AI agents in sandbox environments, and grant limited permissions before gradually expanding autonomy as trust is established.
In other words, autonomy must come with accountability.
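What might that staged approach look like in code? One possible shape – the tier names and action lists below are illustrative, not a prescribed standard – is a deny-by-default allow-list that expands only as an agent earns trust:

```python
# A staged-autonomy sketch: agents start in a sandbox tier and are
# promoted only as trust is established. Tier names and action lists
# are illustrative assumptions, not a prescribed standard.

AUTONOMY_TIERS = {
    "sandbox":    {"read_metrics", "generate_report"},
    "limited":    {"read_metrics", "generate_report", "restart_service"},
    "supervised": {"read_metrics", "generate_report", "restart_service",
                   "scale_infrastructure"},
}

def is_permitted(agent_tier: str, action: str) -> bool:
    # Deny by default: anything outside the tier's explicit allow-list fails.
    return action in AUTONOMY_TIERS.get(agent_tier, set())

assert is_permitted("sandbox", "read_metrics")
assert not is_permitted("sandbox", "scale_infrastructure")  # not yet earned
```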
The governance challenge is particularly relevant in New Zealand as organisations speed up modernisation across both the public and private sectors. New Zealand has moved from AI experimentation into implementation, with both government and enterprise pushing adoption through a national AI strategy and public-sector transformation programmes.
Additionally, following major cyber breaches – including one of the biggest in New Zealand’s history, which saw the potential exposure of the private medical details of more than 120,000 people – the New Zealand Cyber Security Strategy 2026-2030 was announced earlier this year. The country’s National Cyber Security Centre has also introduced Minimum Cyber Security Standards to strengthen baseline protections across government agencies. These standards emphasise foundational controls such as risk management, secure configuration, patching, multi-factor authentication, and least-privilege access.
While designed primarily for government agencies, the underlying principle applies far more broadly: cyber resilience depends on strong operational discipline.
As more organisations adopt AI, cloud platforms, and increasingly autonomous systems, these fundamentals become even more important.
Another factor pushing this shift is the growing importance of identity. Historically, cybersecurity focused on defending networks and devices. Today, most breaches occur through compromised identities rather than perimeter attacks.
AI agents amplify this trend.
Every autonomous system effectively becomes a digital identity inside the organisation – one capable of accessing systems, retrieving data and executing workflows.
Without clear identity governance, organisations risk creating a sprawling ecosystem of automated actors operating with limited oversight. From a cybersecurity perspective, this changes the entire risk model.
Protecting the network is no longer enough. Organisations must manage which systems – human or machine – have the authority to act.
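One way to make that concrete – with field names and lifetimes assumed for illustration – is to give each agent a first-class identity with an explicit scope and short-lived credentials, rather than a shared administrative account:

```python
# Sketch of an AI agent as a first-class digital identity: its own ID,
# an explicit least-privilege scope, and short-lived credentials.
# Field names and the one-hour lifetime are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
import uuid

@dataclass
class MachineIdentity:
    agent_name: str
    scopes: set[str]  # least privilege: only what this agent needs
    identity_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    expires_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc) + timedelta(hours=1)
    )

    def can(self, scope: str) -> bool:
        # An expired or out-of-scope identity cannot act.
        return datetime.now(timezone.utc) < self.expires_at and scope in self.scopes

agent = MachineIdentity("cost-optimiser", scopes={"read_billing", "stop_idle_vm"})
print(agent.can("stop_idle_vm"))     # True
print(agent.can("delete_database"))  # False: never granted
```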
There’s another challenge complicating the picture: skills.
The global shortage of cybersecurity and AI specialists means many organisations are deploying advanced technology without the internal expertise needed to govern it effectively. This talent gap is increasingly being recognised as a risk in its own right.
As AI becomes embedded in core systems, businesses need professionals who understand both cybersecurity and emerging AI architectures – particularly around identity governance, observability, and operational controls.
Without those capabilities, businesses risk deploying powerful technology without fully understanding how to secure it.
We’re moving beyond the flashy headlines of innovation. Governance will be the determining factor in success.
The answer will involve “safe autonomy,” where AI systems operate within tightly defined boundaries, with clear permissions, monitoring, and human oversight. Security teams must be able to observe what AI agents are doing, understand why decisions were made, and intervene when necessary.
This is where identity governance, privileged access management, and real-time monitoring become foundational infrastructure.
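A rough sketch of that plumbing – the risk labels and approval mechanism are assumed for illustration – logs every agent action with its stated rationale and pauses high-risk actions for a human decision:

```python
# "Safe autonomy" plumbing, sketched: every action is logged with the
# agent's rationale, and high-risk actions wait for human approval.
# Risk labels and the approval mechanism are illustrative assumptions.
import json
from datetime import datetime, timezone

HIGH_RISK_ACTIONS = {"scale_infrastructure", "shut_down_resources"}

def execute_with_oversight(agent_id: str, action: str, rationale: str) -> bool:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "action": action,
        "rationale": rationale,  # so reviewers can see why the decision was made
    }
    print(json.dumps(record))  # stand-in for a real audit-log sink

    if action in HIGH_RISK_ACTIONS:
        # Human-in-the-loop gate: the agent proposes, a person decides.
        return input(f"Approve '{action}'? [y/N] ").strip().lower() == "y"
    return True  # low-risk actions proceed, but remain fully auditable

if execute_with_oversight("cost-optimiser", "shut_down_resources",
                          "VMs idle for 14 days"):
    print("action executed")
```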
AI promises enormous productivity gains for organisations and governments alike. But as technology moves from assisting humans to acting on their behalf, the stakes rise dramatically.
Autonomous systems will increasingly manage infrastructure, analyse security events, and trigger operational responses. In some cases, they may even interact with customers.
The question isn’t whether to deploy AI. It’s whether it can be deployed safely.
In a world where machines are making decisions, governance over identity and access will be what protects trust.
Vinayak Sreedhar is the country head for Australia and New Zealand at ManageEngine.