The federal government is already using AI — it's time for a formal process to ensure the technology is safe

President Joe Biden signs an executive order on artificial intelligence in the East Room of the White House on Oct. 30, 2023, as Vice President Kamala Harris looks on. BRENDAN SMIALOWSKI/AFP via Getty Images

COMMENTARY | The rapid operationalization of AI will require federal agencies and industry to work together to achieve success.

The U.S. government recently published more than 700 artificial intelligence use cases among federal agencies on the AI.gov website. Notably, the Departments of Energy and Health and Human Services disclosed more than 150 use cases.

As an emerging technology, AI has significant potential to help agencies better serve their missions and the public. Federal agencies are increasingly adopting and using AI, whether directly, by implementing and purchasing automated solutions, or indirectly, by using AI-powered software or partnering with organizations that actively use AI solutions.

Automated technologies in their most basic form are everywhere. Many solutions and software products use at least some type of AI: chatbots serving citizens who interact with an agency, tools developers use to write and check code, and data analytics platforms informing critical decision-making in federal agencies.

The Biden administration has made clear that government adoption of AI is not just happening; its continued evolution and expanded use are inevitable. Understanding the cybersecurity challenges and potential threats to national security, the government is working to get ahead: the administration recently released an AI executive order, and the Office of Management and Budget also published AI guidance for the federal government to ensure safe implementation.

Start with existing frameworks to secure AI 

At the agency level, organizations recognize the benefits AI provides, but also the potential risks. For example, in a recent memo, Jane Rathbun, chief information officer for the U.S. Navy, addressed the risk AI could introduce to operational security. She noted that large language models in particular save prompts, and that their output must be verified and validated by humans.

While the federal government has compliance initiatives like FedRAMP, FISMA/RMF, and the Department of Defense's Cloud Computing Security Requirements Guide, none explicitly addresses the risks associated with AI's rapid development and implementation. Federal agencies no longer have the luxury of deciding whether to use AI: it has already entered their networks, and its presence will only grow.

Collectively, we can’t afford to wait until a mistake happens or an adversary takes advantage of a vulnerability. Government leaders must get ahead of the risks by quickly establishing a system to regulate and protect against AI threats.  

The fastest and safest way to do this is to look at what already exists and iterate rapidly. For instance, agencies can use the recently released NIST AI Risk Management Framework to augment their existing processes for establishing an authority to operate, or ATO, for an agency system. Such an approach can provide a scalable template and the groundwork for establishing an AI compliance initiative. The right processes need to be in place and the right people need to be working on the challenge, so it makes sense to use already developed assets and the brainpower behind them.

We need collective effort

The rapid operationalization of AI will require federal agencies and industry to work together to achieve success. Agencies should bring individuals from across the organization, including policy, data, acquisition, civil rights, and mission owners, together with industry experts to create a transformation tiger team that can handle the cross-functional challenges and opportunities posed by AI at scale.

The people responsible for implementing AI systems within agencies should be well versed in cybersecurity and data risk management. With that experience, they can adopt and augment an actionable existing framework quickly. NIST's AI Risk Management Framework provides excellent insights and recommendations on how to implement AI safely, but it still lacks specificity and targeted guidance. Federal CIOs and CISOs have an immediate need for a solution that allows them to deploy AI information systems responsibly while meeting their obligations under FISMA. As a community of modernization and transformation professionals, we have successfully incorporated new technologies like cloud computing before, and we need that collective experience to accelerate secure AI adoption.

AI can rapidly improve and advance our IT capabilities and enable mission success to a much higher degree. But it is also a powerful technology, and agencies should not approach it lightly. If properly adapted for AI, governance models such as the ATO process, fundamental data policies, and programs like FedRAMP can help us safely and securely deploy AI capabilities across the federal enterprise.

Maria Roat is a former deputy federal chief information officer. Richard Spires served as chief information officer of the Department of Homeland Security. Both are members of stackArmor's AI Risk Management Center of Excellence.