Australia’s CIRB and Agentic AI Guidance

Written by Bola Ogbara | May 8, 2026 1:49:23 PM

Australia announces the first members of its Cyber Incident Review Board, just after releasing critical guidance on agentic AI.

On May 1st, 2026, Australia’s government fulfilled one measure of its Cyber Security Act 2024 by appointing members to the Cyber Incident Review Board (CIRB). The review board had been under consideration before the law was enacted, appearing earlier in the country’s first-ever proposed standalone cyber law, the Cyber Security Bill 2024. There, Australia’s lawmakers set out to establish minimum cybersecurity requirements for smart devices, a reporting timeline for ransomware payments, and the CIRB. Principally, the board was created to “cause reviews to be conducted in relation to certain cyber security incidents”, in order “to make recommendations to government and industry about actions that could be taken to prevent, detect, respond or to minimise the impact of, cyber security incidents of a similar nature in the future.”

Since the proposal, expectations for the CIRB have not changed; even the smaller details have stayed consistent, including the board’s focus on incidents of “serious concern” to the Australian people, the no-fault framing of its reviews, and its ability to impose civil penalties on entities that are non-compliant with cyber incident reporting rules. The only change introduced by the May 1st update was the naming of the members, announced by Tony Burke, the Minister for Home Affairs, Cyber Security and the Arts. Ms. Narelle Devine chairs the board; the other standing members are Professor Debi Ashenden, Ms. Valeska Bloch, Mrs. Jessica Burleigh, Mr. Darren Kane, Mr. Berin Lautenbach, and Mr. Nathan Morelli.

The CIRB’s first announcement is likely to be the members of its Expert Panel, a supporting group that the board itself will select. The panel comprises “professionals drawn from across the public and private sectors” who are meant to “bring expertise in cyber security, legal matters, and other relevant sector-specific fields”. The upcoming Expert Panel appointments are a clear sign that Australia is actively working to improve national cybersecurity, but they are not the only one: the country has also been advancing work in the artificial intelligence space.

On April 30, 2026, the Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC) worked with the US National Security Agency (NSA) and Cybersecurity and Infrastructure Security Agency (CISA), the Canadian Centre for Cyber Security, New Zealand’s National Cyber Security Centre, and the UK’s National Cyber Security Centre to publish “Careful Adoption of Agentic AI Services”. The document provides guidance on agentic artificial intelligence, the next evolution of generative AI. In contrast to generative AI, agentic AI “builds on GenAI by integrating with software systems to create autonomous agents that can independently reason, plan and take actions without requiring human intervention.”
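
To make that distinction concrete, below is a minimal sketch of the reason-plan-act loop that sets agentic AI apart. It is an illustration only: `call_llm` is a hypothetical stand-in for any hosted GenAI model, and the `search` tool is invented for the example.

```python
# Minimal agentic loop: a GenAI model repeatedly reasons about a goal,
# plans a tool call, and acts on the result without human intervention.

def call_llm(prompt: str) -> str:
    """Hypothetical model call, stubbed so the sketch runs offline."""
    # Pretend the model plans one search, then decides it is finished.
    return "done" if "result:" in prompt else "act search agentic AI guidance"

def search(query: str) -> str:
    """Example tool the agent can invoke on its own."""
    return f"top result for {query!r}"

TOOLS = {"search": search}

def run_agent(goal: str, max_steps: int = 5) -> str:
    context = f"goal: {goal}"
    for _ in range(max_steps):            # bound the loop defensively
        decision = call_llm(context)      # the model reasons and plans
        if decision == "done":
            break
        _, tool, arg = decision.split(maxsplit=2)
        context += f"\nresult: {TOOLS[tool](arg)}"  # the model acts via a tool
    return context

print(run_agent("summarise the new agentic AI guidance"))
```

Everything a production agent layers on top of this loop, such as more tools, persistent memory, and inter-agent messaging, is also what expands the attack surface discussed below.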

AI has become a critical focus this year, with the technology widely adopted by organizations across all sectors. AI has also been at the crux of several cybersecurity concerns, especially since the Claude AI-powered hack on Mexico’s government stole 150 GB of sensitive data, such as taxpayer records and employee credentials. The more recent Mythos AI breach has once again sparked fear of powerful AI tools falling into the wrong hands, especially after Anthropic’s contract with the government ended when the company prohibited the use of its AI for autonomous weapon control. Just as agentic AI accelerates the tasks that typical generative AI can perform, it also accelerates the accompanying risks. The guidance explains that agentic AI retains the same risks posed by classical GenAI software (LLM vulnerabilities) while expanding the attack surface, since an agentic system bundles GenAI together with other tools.

On top of these dangers, agentic AI comes with its own specific security risks. The ASD’s guide warns against poor privilege management and calls for agentic AI bots to follow the principle of least privilege. Security practitioners are also warned about scope creep, which may occur when a bot accumulates too many permissions, inherits a role inadvertently, or suffers some other access misconfiguration. This could lead agents “to access or modify unauthorised data, delete critical records, or escalate privileges of other unauthorised agents.” The risks to privileged security grow when these bots interact with others: the compromise of one bot may damage the whole network, and a lower-privilege bot may even be able to manipulate one with higher permissions into inappropriate file access.
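
A deny-by-default authorization check is one straightforward way to put least privilege into practice for agents. The sketch below is an illustration only; the agent names, scopes, and tools are invented, not taken from the ASD guide.

```python
# Illustrative least-privilege check for agent tool calls (deny by default).
# Agents, scopes, and tool names are hypothetical examples.

ALLOWED_SCOPES = {
    "billing-agent": {"read:invoices"},
    "support-agent": {"read:tickets", "write:tickets"},
}

TOOL_REQUIRED_SCOPE = {
    "fetch_invoice": "read:invoices",
    "update_ticket": "write:tickets",
    "delete_record": "admin:delete",  # deliberately granted to no agent
}

def authorize(agent: str, tool: str) -> bool:
    """Permit a call only when the agent's scope set explicitly contains
    the scope the tool requires; anything unknown is denied."""
    required = TOOL_REQUIRED_SCOPE.get(tool)
    return required is not None and required in ALLOWED_SCOPES.get(agent, set())

assert authorize("support-agent", "update_ticket")
assert not authorize("billing-agent", "update_ticket")  # scope creep blocked
assert not authorize("support-agent", "delete_record")  # escalation blocked
```

Keeping the scope map outside the agent’s own context also means a compromised or manipulated bot cannot simply talk itself into new permissions.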

Agentic AI also brings certain behavioral risks. These bots may take shortcuts or unsafe actions in pursuit of their assigned goals. They may also be deceptive, and there have been cases where “an agent misrepresents its actions to avoid shut down or constraint or conceals vulnerabilities it discovers instead of reporting them.” Highly advanced AI models may also exhibit unforeseen behavior and new abilities, which could pose a security risk. And agentic AI remains open to manipulation by outside influences.
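
One way to blunt the misrepresented-actions risk is to record tool calls outside the agent’s control and compare that record with what the agent claims it did. The wrapper below is a sketch under that assumption, not a mechanism described in the guidance, and all the names are made up.

```python
# Sketch: audit actual tool calls independently of the agent, then diff the
# audit log against the agent's self-reported actions.

actual_calls: list[str] = []

def audited(tool_name: str, fn):
    """Wrap a tool so every real invocation is recorded out of band."""
    def wrapper(*args, **kwargs):
        actual_calls.append(tool_name)
        return fn(*args, **kwargs)
    return wrapper

read_file = audited("read_file", lambda path: f"<contents of {path}>")
read_file("/tmp/report.txt")

agent_reported: list[str] = []  # a deceptive agent omits the read from its summary

if sorted(actual_calls) != sorted(agent_reported):
    print("reported and actual actions differ; flag for human review")
```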

Agentic AI systems carry some amount of risk at every level, from privilege to design to behavior and even accountability. Fortunately, the guidance includes a series of best practices for securing agentic AI systems. To create secure agents, security practitioners are instructed to use clear instruction hierarchies in prompts, provide valuable context in prompts to prevent hallucinations, develop oversight mechanisms with human control points to stop agents from exceeding their authority, use strong identity management mechanisms, implement layered security controls, and keep agents with different functions separate and independent from each other. Agentic AI developers and vendors also receive guidance on creating secure agents, and are encouraged to test models comprehensively, evaluate them thoroughly, and to build up system resilience “to allow for graceful degradation and reduce damage should erroneous behavior occur.”
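
As a sketch of the “human control points” practice from that list, the snippet below lets an agent run routine actions autonomously but pauses for explicit approval on high-impact ones. The action names and tiers are assumptions made for the example, not categories from the guidance.

```python
# Oversight mechanism with a human control point: high-impact actions
# require explicit approval before the agent may proceed.

HIGH_IMPACT = {"delete_records", "send_external_email", "escalate_privileges"}

def execute_with_oversight(action: str, approver=input) -> str:
    """Run low-impact actions autonomously; stop for a human otherwise."""
    if action in HIGH_IMPACT:
        answer = approver(f"Agent requests '{action}'. Approve? [y/N] ")
        if answer.strip().lower() != "y":
            return f"{action}: blocked at human control point"
    return f"{action}: executed"

print(execute_with_oversight("summarise_logs"))                          # runs on its own
print(execute_with_oversight("delete_records", approver=lambda _: "n"))  # blocked
```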

The guidance ends with three recommendations to defend against future risks from agentic AI: expand threat intelligence through collaboration, develop robust, agent-specific evaluations, and leverage system-theoretic approaches to analyze security. Just before sharing a list of resources from the ASD, CISA, NSA, NCSC-UK, and Canada’s Cyber Centre, the publication offers clear instructions for anyone using agentic AI: “Strong governance, explicit accountability, rigorous monitoring and human oversight are not optional safeguards but essential prerequisites. Until security practices, evaluation methods and standards mature, organisations should assume that agentic AI systems may behave unexpectedly and plan deployments accordingly, prioritising resilience, reversibility and risk containment over efficiency gains.” Hopefully, this push for caution around the emerging technology will help limit cyber incidents as adoption of agentic AI continues to mount.