Editor’s Note: OpenAI has confirmed that its AI tools were used in Russian disinformation campaigns, a significant and concerning development in cybersecurity and information governance. This is the first known instance of generative AI being deployed in such initiatives, underscoring the evolving nature of information warfare. The “Bad Grammar” campaign, which targeted Russian speakers in the Baltics and Moldova, is a stark reminder of how AI technologies can be misused. This article examines the implications of this development, including Russian actors’ growing reliance on foreign AI models, the broader struggles of Russian AI development, and the global ramifications of AI-driven disinformation. For professionals in cybersecurity, information governance, and eDiscovery, understanding these dynamics is essential to navigating the challenges posed by advanced technological threats.


Industry News – Artificial Intelligence Beat

OpenAI Confirms Use of its AI Tools in Russian Disinformation Campaigns

ComplexDiscovery Staff

On Friday, OpenAI, the company behind ChatGPT, confirmed that its generative AI tools have been used in Russian disinformation campaigns. Although these efforts were not particularly successful, they were significant because they relied on an American-developed AI model. One such campaign, dubbed “Bad Grammar,” targeted Russian speakers in the Baltics and Moldova via Telegram, using ChatGPT for content generation, translation, and coding. This marks the first acknowledged use of generative AI in Russian disinformation initiatives, highlighting a growing trend where foreign actors exploit advanced AI models for malicious purposes.

Russian leadership once had high hopes for their AI capabilities. In 2017, Vladimir Putin proclaimed, “Artificial intelligence is the future not only of Russia but of all of mankind.” Despite these ambitions, Russia’s domestic AI efforts have lagged behind their Western counterparts. The recent disinformation campaigns show that Russian actors are turning to OpenAI’s models, a move that underscores the growing technological gap between Russia and the West.

The situation also highlights Russia’s broader struggle with AI development. Putin has previously outlined Russia’s AI strategy and the substantial investments made in this area. He has also characterized American AI models like OpenAI’s as threats to Russian values. At an AI summit, he stated that Western models “reflect that part of Western ethics, those norms of behavior, public policy, to which we object.” This sentiment reflects the Kremlin’s concern over the potential influence of foreign AI on Russian society and its desire to maintain control over the narrative.

Despite significant investments, Russian AI models such as Yandex’s Alice and Sber’s GigaChat have struggled to compete with Western alternatives. Recent data shows that even newer versions of GigaChat lag behind OpenAI’s GPT-4 model in performance. The growing reliance on foreign AI, even for official purposes, points to the shortcomings of domestic alternatives. OpenAI’s tools have been used to generate a range of disinformation content, revealing a pattern in which Russian actors turn to superior American models despite nationalism-driven efforts to promote domestic AI.

Notably, the reliance on AI for disinformation is not limited to Russia. A recent OpenAI report revealed that other nations also employ such technology for influence operations. The report also described efforts such as the Chinese “Spamouflage” campaign and an operation linked to the Iranian regime. The capability of AI to produce and disseminate disinformation quickly has raised concerns among experts. Armin Grunwald from the Karlsruhe Institute of Technology noted that AI makes it easier to generate and publish false information rapidly, posing a threat to democratic processes.

The Kremlin, in an effort to mitigate the potential threats posed by AI, has underscored the need to develop indigenous capabilities. Foreign Ministry spokeswoman Maria Zakharova emphasized that Russian AI development is crucial to avoid the influence of Western biases. According to Zakharova, Western AI systems could unintentionally provide biased data even if they are not explicitly designed to do so. The role of AI in shaping perceptions underscores the heightened competition in the technological arena, as nations strive to assert their influence and protect their interests.

China has also demonstrated significant advancement in AI. According to a recent index, China invests substantially more in AI technology than Russia, and the Chinese government treats the field as a strategic priority. Putin’s ambition for Russia to lead in AI faces a challenging reality as Chinese and American models continue to dominate the field.

While OpenAI has acknowledged identifying these disinformation campaigns, the extent of AI’s role in shaping public discourse remains a growing concern. The minimal engagement these campaigns received suggests they have not significantly influenced public opinion yet; however, the risk remains as AI technology advances. OpenAI maintains that its tools have not substantially enhanced the effectiveness of these campaigns. Yet the deployment of AI in misinformation efforts reflects an evolving landscape in which technological advancement outpaces regulation and control.

As the world grapples with the implications of AI in disinformation, it is crucial to address the ethical concerns surrounding the use of these powerful tools. The international community must work together to establish guidelines and regulations that promote responsible AI development and usage. Collaboration between governments, tech companies, and civil society is essential to mitigate the risks associated with AI-generated disinformation and protect the integrity of democratic processes. Only through a concerted effort can we harness the potential of AI for the betterment of society while safeguarding against its misuse.


Assisted by GAI and LLM Technologies


Source: ComplexDiscovery OÜ


Have a Request?

If you have questions about our information or offerings, please let us know, and we will make responding to you a priority.

ComplexDiscovery OÜ is a highly recognized digital publication focused on providing detailed insights into the fields of cybersecurity, information governance, and eDiscovery. Based in Estonia, a hub for digital innovation, ComplexDiscovery OÜ upholds rigorous standards in journalistic integrity, delivering nuanced analyses of global trends, technology advancements, and the eDiscovery sector. The publication expertly connects intricate legal technology issues with the broader narrative of international business and current events, offering its readership invaluable insights for informed decision-making.

For the latest in law, technology, and business, visit ComplexDiscovery.com.


Generative Artificial Intelligence and Large Language Model Use

ComplexDiscovery OÜ recognizes the value of GAI and LLM tools in streamlining content creation processes and enhancing the overall quality of its research, writing, and editing efforts. To this end, ComplexDiscovery OÜ regularly employs GAI tools, including ChatGPT, Claude, Midjourney, and DALL-E, to assist, augment, and accelerate the development and publication of both new and revised content in posts and pages published (initiated in late 2022).

ComplexDiscovery also provides a ChatGPT-powered AI article assistant for its users. This feature leverages LLM capabilities to generate relevant and valuable insights related to specific page and post content published on ComplexDiscovery.com. By offering this AI-driven service, ComplexDiscovery OÜ aims to create a more interactive and engaging experience for its users, while highlighting the importance of responsible and ethical use of GAI and LLM technologies.