Arizona state lawmaker used ChatGPT to write part of law on deepfakes

An Arizona state representative behind a new law regulating deepfakes in elections used an artificial intelligence chatbot, ChatGPT, to write part of the law — specifically the part that defines what a deepfake is.

Republican Alexander Kolodin’s bill, which passed unanimously in both chambers and was signed this week by the Democratic governor, will allow Arizona candidates or residents to ask a judge to declare whether an alleged deepfake is real or not, giving candidates a way to unmask AI-generated disinformation.

Kolodin said he used the chatbot ChatGPT to help define what “digital impersonation” is for the bill, in part because it was a fun way to demonstrate the technology. He provided a screenshot of ChatGPT’s response to the question of what a deepfake is, which was similar to the language included in the bill’s definition.

“I’m not a computer scientist by any means,” Kolodin said. “And so when I tried to write the technical part of it, in terms of what kind of technological processing makes something a deepfake, I struggled quite a bit with the terminology. So I thought to myself, I’ll just ask the subject matter expert. And so I asked ChatGPT to write a definition of what a deepfake is.”

That part of the bill “was probably the part that was tinkered with the least — people seemed to be pretty cool with that during the legislative process.” ChatGPT provided the “basic definition,” and then “I, the human, added in the protections for human rights, things like it excludes comedy, satire, criticism, artistic expression, things like that,” Kolodin said.


Kolodin has used ChatGPT a few times on other legislation, he said, to help write the first drafts of amendments and save time. “Why work harder when you can work smarter,” Kolodin responded on Twitter when an Arizona reporter tweeted about his use of ChatGPT in the bill.

The federal government has not yet regulated the use of AI in elections, although groups have pressured the Federal Election Commission to do so because the technology has evolved much faster than the law, raising concerns that it could disrupt this year’s election. The agency has said it expects to have more to say on the issue this summer.

The Federal Communications Commission, meanwhile, will consider whether to require disclaimers for AI-generated content in political ads airing on radio and TV, the Associated Press reported Wednesday. The FCC previously clarified that AI-generated voices in robocalls, such as a case in which President Joe Biden’s voice was spoofed in calls to voters in New Hampshire, are illegal.

In the absence of federal regulations, many states have introduced bills to regulate deepfakes. It has typically been an area of rare bipartisan agreement.

Some bills ban the use of deepfakes in political contexts in certain cases, while others require disclaimers indicating that content is AI-generated.


Kolodin’s bill takes a different approach to concerns about deepfakes in elections than many other states considering how to regulate the technology. Rather than ban or curb their use, Kolodin wanted to give people a mechanism by which the courts could weigh in on the veracity of a deepfake. Requiring its removal would be both pointless and a First Amendment issue, he said.

“Now at least their campaign has a statement from a court saying, it doesn’t look like it’s you, and they could use that for counter-narrative messaging,” he said.

The bill does allow a deepfake to be removed, and the person depicted can sue for damages, if the person is depicted sexually or nude, if the person in the deepfake is not a public figure, and if the publisher knew the deepfake was fake and refused to remove it.

The Arizona bill also takes a different approach to disclaimers. Rather than requiring them outright, as some state laws have done, it says that someone bringing a potential legal action would have no case if the publisher of the digital impersonation had communicated that the image or video was a deepfake or that its authenticity is in dispute, or if it would be obvious to a reasonable person that it was a deepfake.

Kolodin said disclaimers also pose speech problems for him because they shorten airtime or, in some cases, ruin the joke or essence of a message. He cited a recent example in which Arizona Agenda, a local publication covering state politics, created a deepfake of U.S. Senate candidate Kari Lake, making it clear to a viewer that the video was not real based on what Lake said. (Full disclosure: The reporter on this story co-founded the Arizona Agenda, but is no longer involved.)


“Any reasonable person would have realized that [it was fake], but if there had been a label on it, that would have ruined the joke, right?” said Kolodin. “It would have ruined the journalistic impact. And so I think a prescriptive label goes further than I wanted.”

In one case in Georgia, a state representative trying to convince fellow lawmakers to pass a bill banning deepfakes in elections used AI-generated images and audio of two people opposed to the bill, spoofing their voices to say they endorsed it.

Kolodin hopes his bill will become a model for other states, as he worries that well-intentioned efforts to regulate AI in elections could trample on speech rights.

“I think deepfakes can play a legitimate role in our political discourse,” he said. “And when politicians regulate speech, you kind of have a fox guarding the henhouse, so they’ll say, oh, anything that makes me look weird is a crime. I definitely hope other state legislators pick this up.”
