President’s AI advisory committee says the government needs clearer leadership on the technology

The National Artificial Intelligence Advisory Committee's inaugural report, released Tuesday, included recommendations like designating agency leaders to determine the appropriate use and mechanisms for AI.


Commerce Secretary Gina Raimondo said during a Tuesday meeting with the advisory committee that the government needs “to come up with some ways to do no harm [in AI] in the immediate – like this week, next month, three months, six months.”

The government needs more clarity around internal leadership, more staff and more funding as it looks to respond to a rapidly evolving artificial intelligence landscape, an AI advisory committee meant to counsel the White House said in its inaugural report released Tuesday.

The National Artificial Intelligence Advisory Committee was set up by lawmakers under the Commerce Department as part of the National Artificial Intelligence Initiative Act of 2020.

Its first report comes amid sustained attention to quickly evolving developments in AI, such as the release of OpenAI’s generative AI system, ChatGPT.

While AI has the potential to advance U.S. interests in national and economic security, it also poses risks related to misuse, bias, errors, privacy and security, said the committee, which pointed to the need for stronger leadership around AI.

“There is a lack of clarity on who is participating and leading the U.S. government’s current AI ecosystem,” the report said.

The committee recommended the creation of a “chief responsible AI officer” position for the entire government and for the White House to fill currently vacant top positions – the director of the National Artificial Intelligence Initiative and the chief technology officer.

The committee also wants the White House to ensure that each agency has a senior official designated to oversee AI with sufficient power and resources. 

That could mean creating chief AI officer positions in agencies – at least the Defense Department and the Department of Health and Human Services already have slots for such roles – or tapping existing chief technology or chief information officers. Those officials would help agencies determine if and where the use of AI is appropriate and put oversight mechanisms in place for systems’ development, deployment and use.

“Requirements should be implemented to foster agencies' strategic planning around AI, increase awareness about agencies' use and regulation of AI and strengthen public confidence in the federal government's commitment to trustworthy AI,” the report said.

Some frameworks for AI already exist.

The National Institute of Standards and Technology released a voluntary AI Risk Management Framework earlier this year, and the White House released an AI Bill of Rights framework for the design and use of AI last fall, although neither is enforceable.

The AI advisory committee recommended that the White House encourage agencies to use the NIST framework.

The committee also recommended shoring up the federal workforce and educating it on AI.

The section of the Justice Department’s Civil Rights Division focused on AI, for example, had only one attorney in fiscal year 2022, the report points out. Congress funded 16 attorney positions in the office in fiscal year 2023, according to the fiscal year 2024 budget justification, and the office is now requesting an increase to 25 attorneys.

The release of the group’s report coincided with a joint announcement from the Civil Rights Division, the Consumer Financial Protection Bureau, the Equal Employment Opportunity Commission and the Federal Trade Commission.

Top leaders told reporters in a Tuesday call that they intend to use existing legal authorities related to civil rights, non-discrimination, fair competition and consumer protection to address any bias or discrimination perpetuated by automated systems.

Federal Trade Commission Chair Lina Khan also spoke with reporters about the importance of workforce expertise, noting that there are “serious information asymmetries” regarding generative AI.

“The FTC recently launched a new Office of Technology where we're really focused on skilling up and bringing on the technologists and types of expertise that we need on board to make sure that we can ... really be grasping how these technologies are really functioning,” she said.

Asked about what the announcement means for federal agencies themselves, an EEOC spokesperson told FCW that "federal agencies are bound by the federal laws and regulations that the EEOC enforces. Employers may be responsible for the actions of vendors depending on the circumstances."

The EEOC has a particular initiative focused on the use of AI in the workplace. Although liability determinations require case-by-case analysis, the spokesperson said, "Generally speaking, however, if an employer utilizes AI or another automated system in an employment decision in a manner that results in unlawful discrimination, the employer may be liable, even if the AI, or automated system, was developed by an outside vendor."

For now, the government needs to focus on short-term steps around the use of AI, Commerce Secretary Gina Raimondo said during the public meeting of the National AI Advisory Committee, asking the panel, “How do we continue to innovate at pace, but do no harm and have enough guardrails to be responsible and transparent?”

“Proper regulation takes time,” she said. “Disinformation, deepfakes, privacy issues – it’s complicated. We have to come up with some ways to do no harm in the immediate, like this week, next month, three months, six months.”