OpenAI says Russian and Israeli groups have used its tools to spread disinformation

OpenAI on Thursday released its first-ever report on how its artificial intelligence tools are being used for covert influence operations, revealing that the company had disrupted disinformation campaigns from Russia, China, Israel and Iran.

Malicious actors used the company’s generative AI models to create and post propaganda content on social media platforms, as well as translate their content into different languages. According to the report, none of the campaigns gained traction or reached a large audience.


As generative AI has become a booming industry, there is widespread concern among researchers and lawmakers about its potential to increase the quantity and quality of online disinformation. Artificial intelligence companies like OpenAI, which makes ChatGPT, have tried to address these concerns and put guardrails on their technology with mixed results.

OpenAI’s 39-page report is one of the most detailed accounts by an artificial intelligence company about the use of its software for propaganda. OpenAI said its researchers had found and banned accounts linked to five covert influence operations over the past three months, originating from a mix of state and private actors.


In Russia, two operations created and distributed content criticizing the US, Ukraine and several Baltic countries. One of the operations used an OpenAI model to debug code and create a bot that posted on Telegram. The Chinese influence operation generated text in English, Chinese, Japanese and Korean, which the operatives then posted on Twitter and Medium.

Iranian actors produced entire articles attacking the US and Israel, which they translated into English and French. An Israeli political firm called Stoic operated a network of fake social media accounts that created a range of content, including posts accusing American student protests against Israel’s war in Gaza of being anti-Semitic.

Several of the disinformation spreaders that OpenAI banned from its platform were already known to researchers and authorities. The US Treasury Department imposed sanctions in March on two Russian men allegedly behind one of the campaigns OpenAI discovered, while Meta also banned Stoic from its platform this year for violating its policies.


The report also highlights how generative AI is being incorporated into disinformation campaigns to improve certain aspects of content generation, such as crafting more persuasive messages in foreign languages, but notes that AI is not the sole tool of propaganda.

“All of these operations used AI to some extent, but none used it exclusively,” the report said. “Instead, AI-generated material was just one of many types of content they posted, alongside more traditional formats, such as hand-written texts or memes copied from the internet.”

While none of the campaigns had any significant impact, the use of the technology shows how malicious actors are discovering that generative AI allows them to scale up propaganda production. Writing, translating and posting content can now all be done more efficiently through the use of AI tools, lowering the bar for creating disinformation campaigns.

Over the past year, malicious actors in countries around the world have used generative AI in an attempt to influence politics and public opinion. Deepfake audio, AI-generated images and text-based campaigns have all been used to disrupt election campaigns, leading to increased pressure on companies like OpenAI to limit the use of their tools.


OpenAI said it plans to periodically release similar reports on covert influence operations and to remove accounts that violate its policies.
