Governments across the globe are swiftly adopting the algorithms that have infused ChatGPT with a semblance of intelligence, drawn by the technology's anticipated economic benefits.
Recent reports indicate that nation-states are also rapidly adapting the technology into tools of disinformation, potentially igniting a worrying AI-driven arms race among major powers.
Researchers at RAND, a nonprofit think tank advising the U.S. government, have highlighted evidence of a Chinese military researcher discussing how generative AI could enhance information campaigns.
A research article from January 2023 even suggests employing large language models for such campaigns, citing a fine-tuned version of Google’s BERT, a precursor to the advanced language models that power chatbots like ChatGPT.
William Marcellino, an AI expert and senior behavioral and social scientist at RAND, who contributed to the report, clarifies, “There’s no evidence of it being done right now,” emphasizing that it’s more of a conceptual exploration.
However, he and his colleagues at RAND are alarmed at the potential for influence campaigns to gain unprecedented scale and potency through generative AI.
Marcellino notes, “Coming up with a system to create millions of fake accounts that purport to be Taiwanese, Americans, or Germans, and push a state narrative—I think that it’s qualitatively and quantitatively different.”
AI algorithms developed in recent years can mass-produce deceptive text, images, and video, and even sustain convincing interactions on social media platforms. The cost of launching such a campaign has been estimated at just a few hundred dollars.

The widespread availability of generative AI tools, including open-source language models, lowers the barrier for entry, enabling various actors, including technically sophisticated non-state entities, to engage in social media manipulation.
Another report from the Special Competitive Studies Project warns that generative AI could evolve into a means for nations to assert dominance.
It urges the U.S. government to heavily invest in generative AI, highlighting its potential to enhance multiple industries and provide military, economic, and cultural advantages to the nation that masters it first.
Both reports share a somber outlook, suggesting that the potential of generative AI may trigger a race to adapt the technology for military use and cyberattacks. If these assessments hold true, we may witness an information-space arms race that proves difficult to contain.
To avert the nightmare scenario of an internet overrun by AI bots programmed for information warfare, the solution lies in human collaboration and communication.