The NIST's AI security concept paper proposes AI use cases to help specific communities develop custom control overlays that mitigate cybersecurity risks.
On August 14, the National Institute of Standards and Technology (NIST) released a concept paper for creating Control Overlays for Securing AI (artificial intelligence) Systems. Guidance on developing control overlays was first introduced in NIST Special Publication (SP) 800-53B, Control Baselines for Information Systems and Organizations. Controls, which may serve security or privacy purposes, are the safeguards or countermeasures an organization and/or information system employs to meet its security and privacy requirements. These requirements are based on legislation, policies, and standards, with baseline controls being general enough to protect the “needs of a group, organization, or community of interest” while acting as a “starting point for the protection of individuals’ privacy, information, and information systems.”
Overlays help organizations customize control baselines, allowing them to secure critical and essential processes and assets by tailoring the guidance and assumptions behind control selection to a particular community. The NIST’s concept paper “outlines proposed AI use cases for the control overlays to manage cybersecurity risks in the use and development of AI systems, and next steps” across five particular use cases.
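For a concrete sense of what tailoring a baseline can look like in practice, NIST’s Open Security Controls Assessment Language (OSCAL) profile model is one machine-readable way to express an overlay. The sketch below, a minimal Python script emitting OSCAL-style JSON, is illustrative only: the baseline filename, control IDs, and parameter value are hypothetical placeholders, and the concept paper itself does not prescribe any particular format.

```python
import json

# Minimal, illustrative sketch of a control overlay expressed in the style of
# NIST's OSCAL "profile" model. The baseline filename, control IDs, and
# parameter value are hypothetical placeholders, not from the concept paper.
overlay = {
    "profile": {
        "metadata": {
            "title": "Hypothetical overlay for an AI system",
            "version": "0.1",
        },
        # Import an existing control baseline, selecting only the controls
        # this community of interest needs (IDs are examples).
        "imports": [
            {
                "href": "sp800-53-rev5-moderate-baseline.json",  # hypothetical path
                "include-controls": [{"with-ids": ["ac-2", "ra-5", "si-4"]}],
            }
        ],
        # Tailor the selected controls, e.g., tighten an organization-defined
        # parameter such as an account-review frequency (ID is an example).
        "modify": {
            "set-parameters": [
                {"param-id": "ac-02_odp.06", "values": ["every 30 days"]}
            ]
        },
    }
}

print(json.dumps(overlay, indent=2))
```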
The concept paper also shares a link to the NIST AI Overlay Slack Channel, which will allow “all interested parties” to “get updates, engage in facilitated discussions with the NIST principal investigators and other subgroup members, share ideas, provide real-time feedback, and contribute to overlay development.” The NIST specifically would like feedback on whether the proposed AI use cases are representative, possible gaps in the examples, how best to prioritize overlay deployment, and next steps. This feedback will inform a public draft expected in early FY26, the fiscal year that begins October 1, 2025.
While the Slack channel is just over a week old, public opinion already reveals some worries about the NIST’s work to secure AI. Experts agree that the work is necessary. Brian Levine, CEO of FormerGov, a directory of former government and military security experts, said the scramble to implement AI has not always been well-weighed: “We are seeing that AI is becoming ubiquitous, and executives rushed to use it before they fully understood it and could grapple with the security issues. … [AI] is a little bit of a black box and everyone was rushing to incorporate it into everything they were doing. Over time, the more you outsource technology, the more risk you are taking.”
Others point out that the speed of AI development both poses security risks and limits the applicability of any rules. Audian Paxson, principal technical strategist at Ironscales, said it was most important to “implement model retirement dates. An AI model trained six months ago is like milk left on the counter. It’s probably gone bad.” Vince Berk, partner at Apprentis Ventures, cast doubt on the usefulness of AI guidelines in such a rapidly changing environment: “... standards are typically formed after a large body of experiences have been gathered, and a common approach or vision to a particular area of engineering starts to form. For AI cybersecurity problems, this is very far from the case. Every day, new cases are discovered that were unanticipated and raise questions about the utility in a broad sense from AI at all.”
Erik Avakian, technical counselor at Info-Tech Research Group, even called attention to the potential cybersecurity risk of the NIST requesting community feedback online: AI agents could inundate the comments with “bad guy friendly” recommendations, effectively “poisoning the actual feedback.” According to Avakian, the most secure option would be to hold “human interviews or regional workshops where they bring people in.”
With all these concerns, it’s a bit of a relief that the NIST appears to be absorbing the feedback it receives, especially when it comes to AI. When Chief Information Security Officers (CISOs) first heard about the NIST’s cyber AI profile, a tool meant to complement the agency’s Cybersecurity Framework, they voiced anxiety that new guidance would require more training on top of previously established guides. According to Katerina Megas, lead of the NIST Cybersecurity for the Internet of Things program, many essentially said, “do not reinvent the wheel.” While the AI profile is still in the ‘reviewing comments’ phase, the AI control overlay concept paper builds on prior work such as SP 800-53B, hopefully sidestepping the reinvention problem and instead creating a valuable resource for organizations.