The United States Environmental Protection Agency blocked its employees from accessing ChatGPT, while staff at the US State Department in Guinea used it to draft speeches and social media posts.
Maine banned its executive branch employees from using generative artificial intelligence for the rest of the year out of concern for the state’s cybersecurity. In nearby Vermont, government workers are using it to learn new programming languages and write internal-facing code, according to Josiah Raiche, the state’s director of artificial intelligence.
The city of San Jose, California, wrote 23 pages of guidelines on generative AI and requires municipal employees to fill out a form every time they use a tool like ChatGPT, Bard, or Midjourney. Less than an hour’s drive north, Alameda County’s government has held sessions to educate employees about generative AI’s risks—such as its propensity for spitting out convincing but inaccurate information—but doesn’t see the need yet for a formal policy.
“We’re more about what you can do, not what you can’t do,” says Sybil Gurney, Alameda County’s assistant chief information officer. County staff are “doing a lot of their written work using ChatGPT,” Gurney adds, and have used Salesforce’s Einstein GPT to simulate users for IT system tests.
At every level, governments are searching for ways to harness generative AI. State and city officials told WIRED they believe the technology can improve some of bureaucracy’s most annoying qualities by streamlining routine paperwork and improving the public’s ability to access and understand dense government material. But governments—subject to strict transparency laws, elections, and a sense of civic responsibility—also face a set of challenges distinct from the private sector.
“Everybody cares about accountability, but it’s ramped up to a different level when you are literally the government,” says Jim Loter, interim chief technology officer for the city of Seattle, which released preliminary generative AI guidelines for its employees in April. “The decisions that government makes can affect people in pretty profound ways and … we owe it to our public to be equitable and responsible in the actions we take and open about the methods that inform decisions.”
The stakes for government employees were illustrated last month when an assistant superintendent in Mason City, Iowa, was thrown into the national spotlight for using ChatGPT as an initial step in determining which books should be removed from the district’s libraries because they contained descriptions of sex acts. The book removals were required under a recently enacted state law.
That level of scrutiny of government officials is likely to continue. In their generative AI policies, the cities of San Jose and Seattle and the state of Washington have all warned staff that any information entered as a prompt into a generative AI tool automatically becomes subject to disclosure under public record laws.
That information also automatically gets ingested into the corporate databases used to train generative AI tools and can potentially get spit back out to another person using a model trained on the same data set. In fact, a large Stanford Institute for Human-Centered Artificial Intelligence study published last November suggests that the more accurate large language models are, the more prone they are to regurgitating whole blocks of content from their training sets.
That’s a particular challenge for health care and criminal justice agencies.
Loter says Seattle employees have considered using generative AI to summarize lengthy investigative reports from the city’s Office of Police Accountability. Those reports can contain information that’s public but still sensitive.
Staff at the Maricopa County Superior Court in Arizona use generative AI tools to write internal code and generate document templates. They haven’t yet used the tools for public-facing communications but believe the technology has potential to make legal documents more readable for non-lawyers, says Aaron Judy, the court’s chief of innovation and AI. Staff could theoretically input public information about a court case into a generative AI tool to create a press release without violating any court policies, but, she says, “they would probably be nervous.”