Spamming democracy


The threat and promise of generative AI in the regulatory process.

The White House’s Office of Information and Regulatory Affairs is considering AI’s effect on the regulatory process, including the potential for generative chatbots to fuel mass campaigns or inject spam comments into federal agency rulemaking.

A recent executive order directed the office to consider using guidance or tools to address mass comments, computer-generated comments and falsely attributed comments, an effort an administration official told FCW that OIRA is "moving forward" on.

Mark Febrizio, a senior policy analyst at George Washington University’s Regulatory Studies Center, has experimented with OpenAI’s generative AI system ChatGPT to create what he called a “convincing” public comment submission to a Labor Department proposal.

“Generative AI also takes the possibility of mass and malattributed comments to the next level,” wrote Febrizio and co-author Bridget Dooling, a research professor at the center, in a paper published in April by the Brookings Institution.

The executive order comes years after astroturfing during the Federal Communications Commission’s 2017 rollback of net neutrality rules drew public attention. That rulemaking docket received a record-breaking 22 million-plus comments, but more than 8.5 million came from a campaign against net neutrality led by broadband companies, according to an investigation released by the New York Attorney General in 2021.

The investigation found that lead generators paid by these companies submitted many comments with real names and addresses attached, without the knowledge or consent of those individuals. The same docket also contained more than 7 million comments supporting net neutrality submitted by a computer science student, who used software to attach computer-generated names and addresses to his submissions.

While the numbers are staggering, experts told FCW that agencies aren't just counting comments when reading through submissions from the public. 

Steve Balla, co-director of the Regulatory Studies Center at George Washington University, said agencies are primarily looking for new information.

He and Dooling co-authored a 2021 report for the Administrative Conference of the United States on mass, computer-generated and falsely attributed comments.

“What people are really concerned about is the intersection of these three things,” Balla said. “That you basically have a massive amount of computer-generated information that's falsely attributed.” 

AI-generated comments could create more work for agencies during the regulatory process, but Dooling told FCW that agencies have experience fact-checking and evaluating the substance and merits of comments.

Balancing access and authenticity

Many agencies use the General Services Administration’s public-facing Regulations.gov for comment submission, as well as the back-end Federal Docket Management System, which has deduplication tools to zero in on comment spam, according to a GSA spokesperson.

“Where [AI] takes it to the next level is the volume,” said Dooling. “If agencies suddenly receive millions of comments that are unique and substantive, that could pose a challenge to their internal workforce and how they manage this.”

Several experts also pointed out that AI has the potential to help agencies detect computer-generated comments. 

Additionally, GSA added an API to the back-end system in 2021 to set up a bulk-posting mechanism for comments from membership or advocacy organizations that are third-party verified and certified. GSA also added Google’s reCAPTCHA tool in 2021 to screen out automated, computer-to-computer submissions to the docket.

In the legal context, the harms, if any, of having one’s identity attached to a comment one didn’t submit aren’t clear, said Dooling, asking, “How serious are those harms compared to, for example, making it harder to submit comments in the first place?”

Agencies could require identity verification as a prerequisite to commenting, but “that means that a number of people will just not comment, really,” she said. “You lose something by locking this down more.”

Another element to consider is how agencies look at comments now – based on substance, not authorship, said Balla. 

“Obviously there’s the dystopian view of bots just taking over, hijacking the comment process and drowning out human voices,” he said. “But I think a key thing to keep in mind is that agencies actually aren’t charged by law to react to comments based on the identity of the submitter.”

AI could also help people not normally involved in the regulatory process craft comments, said Dooling.

"The folks who generally are better at [writing comments] tend to be part of a specialized interest group that has participated in this process before," Dooling said, "whereas your average person probably doesn’t even know rulemaking exists, period.”