Research shows that an AI-powered bot army on X spread pro-Trump and pro-GOP propaganda


An army of political propaganda accounts powered by artificial intelligence has been posing as real people on X to advocate for Republican candidates and causes, according to a research report from Clemson University.

The report details a coordinated AI campaign that uses large language models (LLMs), the type of artificial intelligence that powers persuasive, humanlike chatbots such as ChatGPT, to reply to other users.

While it is unclear who operated or financed the network, the focus on certain political pet projects with no apparent connection to foreign countries indicates that it is a U.S. political operation, and not one run by a foreign government, the researchers said.

As the November elections approach, the government and other watchdogs have warned against attempts to influence public opinion through AI-generated content. The presence of a seemingly coordinated domestic influence operation using AI adds another wrinkle to a rapidly evolving and chaotic information landscape.

The network identified by Clemson researchers comprised at least 686 X accounts that have posted more than 130,000 times since January. It focused on four Senate races and two primaries and supported former President Donald Trump’s re-election campaign. Many of the accounts were removed from X after NBC News emailed the platform for comment. The platform did not respond to NBC News’ inquiry.

The accounts followed a consistent pattern. Many had profile pictures that appealed to conservatives, such as the far-right cartoon meme Pepe the Frog, a cross or an American flag. They often responded to a person talking about a politician or a polarizing political issue on X, often to support Republican candidates or policies or to denigrate Democratic candidates. Although the accounts generally had few followers, their habit of replying to popular posters made them more likely to be seen.

Fake accounts and bots designed to artificially boost other accounts have plagued social media platforms for years. But only with the arrival of widely available large language models in late 2022 did it become possible to automate persuasive, interactive human conversations at scale.

“I’m concerned about what this campaign shows is possible,” Darren Linvill, co-director of Clemson’s Media Hub and lead researcher on the study, told NBC News. “Bad actors are only now learning how to do this. They will definitely get better at it.”

The accounts took different positions on certain races. In Ohio’s Republican Senate primary, the network endorsed Frank LaRose over Trump-backed Bernie Moreno. In Arizona’s Republican congressional primary, it backed Blake Masters over Abraham Hamadeh. Both Masters and Hamadeh were endorsed by Trump over four other Republican candidates.

The network also endorsed the Republican candidate in Senate races in Montana, Pennsylvania and Wisconsin, as well as the Republican-led voter ID law in North Carolina.

A spokesperson for Hamadeh, who won the primary in July, told NBC News that the campaign noticed an influx of messages criticizing Hamadeh every time he posted on X, but did not know whom to report the phenomenon to or how to stop it.

The researchers determined that the accounts were in the same network by reviewing metadata and tracking the content of their replies and the accounts they responded to — sometimes the accounts repeatedly attacked the same targets together.
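The co-reply pattern the researchers describe can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical version of that idea, not the researchers' actual code: it flags pairs of accounts whose sets of reply targets overlap heavily. The data structure and the similarity threshold are illustrative assumptions.

```python
# Hypothetical sketch of the co-reply analysis described above: accounts
# that repeatedly reply to the same targets are grouped together.
# `replies` maps each account to the set of accounts it replied to;
# the structure and the 0.5 threshold are illustrative assumptions.
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Overlap between two sets of reply targets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def linked_pairs(replies: dict[str, set[str]], threshold: float = 0.5):
    """Yield account pairs whose reply targets overlap heavily."""
    for acct_a, acct_b in combinations(replies, 2):
        if jaccard(replies[acct_a], replies[acct_b]) >= threshold:
            yield acct_a, acct_b

# Example: two accounts attacking the same three targets get flagged.
sample = {
    "bot_1": {"@SenatorX", "@CandidateY", "@PunditZ"},
    "bot_2": {"@SenatorX", "@CandidateY", "@PunditZ", "@Other"},
    "human": {"@LocalNews", "@SportsTeam"},
}
print(list(linked_pairs(sample)))  # [('bot_1', 'bot_2')]
```

In practice such a signal would be combined with the metadata review the researchers mention; shared reply targets alone can produce false positives among highly active real users.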

Clemson researchers identified many accounts in the network when their posts “broke,” leaving text that referred to being written by AI. Initially, the bots appeared to be using ChatGPT, one of the most tightly controlled LLMs. In a post tagging Sen. Sherrod Brown, D-Ohio, one account wrote: “Hey there, I’m an AI language model trained by OpenAI. If you have any questions or need more help, please ask!” OpenAI declined to comment.
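That kind of leak is straightforward to scan for. As a rough illustration, not the researchers' actual method, a script can flag posts containing an LLM's boilerplate self-references; the phrase list below is an assumption based on the example quoted above.

```python
# Hedged sketch of the "broken bot" check: flag posts whose text
# contains an LLM's boilerplate self-reference. The phrase list is
# illustrative, drawn from the post quoted in this article.
import re

GIVEAWAY_PATTERNS = [
    r"I'?m an AI language model",
    r"trained by OpenAI",
    r"as an AI\b",
]
GIVEAWAY_RE = re.compile("|".join(GIVEAWAY_PATTERNS), re.IGNORECASE)

def looks_broken(post_text: str) -> bool:
    """True if the post leaks an AI self-reference."""
    return bool(GIVEAWAY_RE.search(post_text))

print(looks_broken("Hey there, I'm an AI language model trained by OpenAI."))  # True
print(looks_broken("Great turnout at the rally today!"))  # False
```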

In June, the researchers concluded that the network had switched to Dolphin, a smaller model designed to bypass restrictions like those on ChatGPT, which prohibit the use of the product to deceive others. Some tweets from the accounts included text with phrases such as “Dolphin here!” and “Dolphin, the uncensored AI tweet writer.”

[Image: tweets from accounts in the bot network showing the system “breaking.”]

Kai-Cheng Yang, a postdoctoral researcher at Northeastern University who studies misuse of generative AI but was not involved in the Clemson study, reviewed the findings at the request of NBC News. In an interview, he endorsed the findings and methodology, noting that the accounts shared a telltale quirk: unlike real people, they often made up hashtags to match their posts.

“They contain a lot of hashtags, but those hashtags are not necessarily the hashtags that people use,” Yang said. “Like when you ask ChatGPT to write a tweet for you and it contains made-up hashtags.”

For example, one post supporting LaRose in the Ohio Republican Senate primary used the hashtag “#VoteFrankLaRose.” A search on X for that hashtag shows that only one other tweet, from 2018, has used it.

Other hashtags in the network’s posts were similarly rare among human users.
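One way to operationalize Yang's observation, shown here as a hedged sketch rather than his actual method, is to compare each hashtag in a post against its frequency in a baseline corpus of human posts. The baseline counts and the rarity cutoff below are invented for demonstration.

```python
# Illustrative sketch of the rare-hashtag signal: hashtags that almost
# never appear in a baseline corpus of human posts are suspicious.
# `baseline` and the cutoff are assumptions for demonstration only.
import re
from collections import Counter

HASHTAG_RE = re.compile(r"#\w+")

def rare_hashtags(post_text: str, baseline_counts: Counter, cutoff: int = 5):
    """Return hashtags in a post seen fewer than `cutoff` times in the baseline."""
    tags = HASHTAG_RE.findall(post_text)
    return [t for t in tags if baseline_counts[t.lower()] < cutoff]

# Example: "#VoteFrankLaRose" is effectively unseen among human posts.
baseline = Counter({"#ohio": 12000, "#election2024": 54000, "#votefranklarose": 1})
print(rare_hashtags("Ohio needs leadership! #VoteFrankLaRose #Ohio", baseline))
# ['#VoteFrankLaRose']
```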

The researchers found evidence of the campaign only on X. Elon Musk oversaw major cuts when he took over the company, then called Twitter, including to parts of its trust and safety teams.

It’s not clear exactly how the campaign automated the process of generating and posting content to X, but several consumer products enable similar forms of automation and publicly available tutorials explain how to set up such an operation.

The report says part of the reason the researchers believe the network is a U.S. operation is its hyper-specific support for particular Republican campaigns. Documented foreign propaganda campaigns consistently reflect those countries’ priorities: China opposes U.S. support for Taiwan, Iran opposes Trump’s candidacy, and Russia supports Trump and opposes U.S. aid to Ukraine. All three have spent years denigrating the American democratic process and attempting to sow widespread division through propaganda campaigns on social media.

“All of these actors are driven by their own goals and agendas,” Linvill said. “This is most likely a domestic actor due to the specificity of most of the targeting.”

If the network is American, it’s probably not illegal, says Larry Norden, vice president of elections and government programs at NYU’s Brennan Center for Justice, a progressive nonprofit organization, and author of a recent analysis of state laws on AI in elections.

“There really isn’t a lot of regulation in this area, especially at the federal level,” Norden said. “There is currently nothing in the law that requires a bot to identify itself as a bot.”

If a super PAC were to hire a marketing firm or an employee to run such a bot farm, the expense wouldn’t necessarily show up as such on disclosure forms, Norden said; it might be listed only as a payment to a consultant or vendor.

Although the United States government has repeatedly taken action to neutralize deceptive foreign propaganda operations aimed at influencing U.S. political opinion, the U.S. intelligence community generally does not act against domestic disinformation operations.

Social media platforms routinely purge coordinated, fake personas they accuse of coming from government propaganda networks, particularly from China, Iran and Russia. But while these operations have sometimes hired hundreds of employees to write fake content, AI now allows most of that process to be automated.

Often these fake accounts struggle to gain organic followers before they are detected, but the network detected by Clemson researchers took advantage of existing follower networks by responding to larger accounts. LLM technology could also help avoid detection by enabling the rapid generation of new content, rather than copying and pasting.

While the Clemson network is the first clearly documented one to systematically use LLMs to reply to users and shape political conversations, there is evidence that others are also using AI in propaganda campaigns on X.

In a September press call on foreign operations to influence the election, a U.S. intelligence official said Iran’s and especially Russia’s online propaganda efforts have included directing AI bots to respond to users, though the official declined to discuss the extent of those efforts or share additional details.

Eric Hartford, the creator of Dolphin, told NBC News that he believes the technology should reflect the values of everyone who uses it.

“LLMs are a tool, just like lighters and knives and cars and phones and computers and a chainsaw. We don’t expect a chainsaw to only work on trees, right?”

“I am producing an instrument that can be used for good or for evil,” he said.

Hartford said he wasn’t surprised someone had used his model for a misleading political campaign.

“I would say this is just a natural consequence of the existence of this technology, and inevitable,” he said.

This article was originally published on NBCNews.com
