The headlines sounded dire: “China Will Use AI to Disrupt Elections in the US, South Korea and India, Microsoft Warns.” Another claimed, “China Is Using AI to Sow Disinformation and Stoke Discord Across Asia and the US.”
They were based on a report published earlier this month by Microsoft’s Threat Analysis Center, which outlined how a Chinese disinformation campaign is now employing artificial intelligence to inflame divisions and disrupt elections in the US and around the world. The campaign, which has already targeted Taiwan’s elections, uses AI-generated audio and memes designed to capture user attention and boost engagement.
But what those headlines, and Microsoft itself, didn’t adequately convey is that the Chinese-government-linked disinformation campaign, known as Spamouflage Dragon or Dragonbridge, has so far been almost entirely ineffective.
“I would describe China’s disinformation campaigns as Russia 2014. As in, they’re 10 years behind,” says Clint Watts, general manager of Microsoft’s Threat Analysis Center. “They’re trying a lot of different things, but their sophistication is still very weak.”
Over the past 24 months, the campaign has switched from pushing predominantly pro-China content to more aggressively targeting US politics. While these efforts have been large-scale and spread across dozens of platforms, they have largely failed to have any real-world impact. Still, experts warn that it could take just a single post amplified by an influential account to change all of that.
“Spamouflage is like throwing spaghetti at the wall, and they are throwing a lot of spaghetti,” says Jack Stubbs, chief intelligence officer at Graphika, a social media analysis company that was among the first to identify the Spamouflage campaign. “The volume and scale of this thing is massive. They’re putting out multiple videos and cartoons every day, amplified across different platforms at a global scale. The vast majority of it, at the moment, appears to be something that doesn’t stick, but that doesn’t mean it won’t stick in the future.”
Since at least 2017, Spamouflage has been ceaselessly churning out content designed to disrupt major world events, on topics as varied as the Hong Kong pro-democracy protests, the US presidential elections, and Israel and Gaza. Part of a wider multibillion-dollar influence operation by the Chinese government, the campaign has used millions of accounts on dozens of internet platforms, ranging from X and YouTube to more fringe sites like Gab, where it has been attempting to push pro-China content. It has also been among the first to adopt cutting-edge techniques such as AI-generated profile pictures.
Even with all of this investment, experts say the campaign has largely failed due to a number of factors, including issues of cultural context, China’s online separation from the outside world via the Great Firewall, a lack of joined-up thinking between state media and the disinformation campaign, and the use of tactics designed for China’s own heavily controlled online environment.
“That has been the story of Spamouflage since 2017: They’re massive, they’re everywhere, and nobody looks at them apart from researchers,” says Elise Thomas, a senior open source analyst at the Institute for Strategic Dialogue who has tracked the Spamouflage campaign for years.