Blog

NIST AI Security Concept Paper

Written by Bola Ogbara | Aug 22, 2025 2:04:52 PM

The NIST's AI Security concept paper proposes AI use cases to help specific communities develop custom standards to mitigate cybersecurity risks.

On August 14, the National Institute of Standards and Technology (NIST) released a concept paper for creating Control Overlays for Securing AI (artificial intelligence) Systems. Guidance on developing control overlays was first introduced in NIST Special Publication (SP) 800-53B, Control Baselines for Information Systems and Organizations. Controls, which may serve security or privacy purposes, are the safeguards or countermeasures employed by an organization and/or information system to meet security and privacy requirements. These requirements are based on legislation, policies, and standards, with baseline controls being general enough to protect the “needs of a group, organization, or community of interest” while acting as a “starting point for the protection of individuals’ privacy, information, and information systems.”

 

Overlays help organizations customize control baselines, tailoring the guidance and assumptions that go into control selection so a given community can secure its critical and essential processes and assets. NIST’s concept paper “outlines proposed AI use cases for the control overlays to manage cybersecurity risks in the use and development of AI systems, and next steps,” identifying five use cases in particular:

 

  1. Adapting and Using Generative AI: For organizations that plan to use AI to create content, this use case will contain examples of AI being used internally for business augmentation.
  2. Using and Fine-Tuning Predictive AI: For organizations that use predictive AI systems for business decisions, this use case will cover the specific cybersecurity risks - which depend on where the AI model is hosted and where the data comes from - across the predictive AI life cycle (model training, deployment, and maintenance).
  3. Using AI Agent Systems - Single Agent: For organizations using AI agent systems for automated workflows and business tasks, this use case will address examples of agent system use, like creating calendar events, simplifying workflows, and surfacing contextual insights in an enterprise copilot, or acting as a coding assistant.
  4. Using AI Agent Systems - Multi-Agent: Similar to the single-agent use case, but for organizations looking to automate more complicated tasks and workflows. This case is “the least mature in terms of adoption, but it is expected to evolve and be refined as understanding deepens, and implementation challenges are identified and addressed.”
  5. Security Controls for AI Developers: For AI developers, this case will tie security controls to model artifacts and the best practices for securing them, allowing for effective risk management.

 

The concept paper also shares a link to the NIST AI Overlay Slack channel, which will allow “all interested parties” to “get updates, engage in facilitated discussions with the NIST principal investigators and other subgroup members, share ideas, provide real-time feedback, and contribute to overlay development.” NIST specifically would like feedback on the quality of representation in the proposed AI use cases, possible gaps in the examples, how best to prioritize overlay deployment, and next steps. This feedback will inform a public draft expected in early FY26 (the federal fiscal year that begins October 1, 2025).

 

While the Slack channel is just over a week old, public opinion already reveals some worries about NIST’s work to secure AI. Experts agree that the work is necessary, with Brian Levine, CEO of FormerGov, a directory of former government and military security experts, saying the scramble to implement AI has not always been carefully weighed: “We are seeing that AI is becoming ubiquitous, and executives rushed to use it before they fully understood it and could grapple with the security issues. … [AI] is a little bit of a black box and everyone was rushing to incorporate it into everything they were doing. Over time, the more you outsource technology, the more risk you are taking.”

 

Others point out how the speed of AI development poses security risks and limits the applicability of any rules. Audian Paxson, principal technical strategist at Ironscales, said it was most important to “implement model retirement dates. An AI model trained six months ago is like milk left on the counter. It’s probably gone bad.” Vince Berk, partner at Apprentis Ventures, cast doubt on the usefulness of AI guidelines in such a rapidly changing environment: “... standards are typically formed after a large body of experiences have been gathered, and a common approach or vision to a particular area of engineering starts to form. For AI cybersecurity problems, this is very far from the case. Every day, new cases are discovered that were unanticipated and raise questions about the utility in a broad sense from AI at all.”

 

Erik Avakian, technical counselor at Info-Tech Research Group, even called attention to the potential cybersecurity risk that comes from NIST requesting community feedback online - AI agents could inundate the comments with “bad guy friendly” recommendations, effectively “poisoning the actual feedback.” According to Avakian, the most secure option would be to have “human interviews or regional workshops where they bring people in.”


With all these concerns, it’s a bit of a relief that NIST appears to be absorbing the feedback it receives, especially when it comes to AI. When Chief Information Security Officers (CISOs) first heard about NIST’s cyber AI profile, a tool meant to complement the NIST Cybersecurity Framework, they voiced anxiety about new guidance that would require more training on top of previously established guides. According to Katerina Megas, lead of the NIST Cybersecurity for the Internet of Things program, many essentially said “do not reinvent the wheel.” While the AI profile is still in the ‘reviewing comments’ phase, the AI control overlay concept paper does build on previous work like SP 800-53B, hopefully sidestepping the reinvented-wheel problem and instead creating a valuable resource for organizations.