In its latest election security report, the Office of the Director of National Intelligence (ODNI) says that Russia, Iran, and China are stepping up efforts to influence U.S. elections. The report, released in mid-September, warns of a growing trend in foreign influence operations, particularly the use of artificial intelligence (AI) to manipulate the U.S. information environment. According to ODNI, each of America's adversaries is deploying its own tactics to achieve its objectives.
Generative AI: A new tool for influence
According to the report, foreign powers, including Russia and Iran, are increasingly incorporating generative AI into their influence operations. Intelligence agencies cited in the report noted that while these powers are using AI to produce election-related content more efficiently, it has not yet “revolutionized” their operations. However, the intelligence community (IC) continues to assess that AI-generated content poses a real risk to U.S. elections.
“The IC has observed foreign actors, including Russia and Iran, using generative AI techniques to further their respective U.S. election influence efforts,” the report said, adding that these techniques are part of a broader pattern identified as early as July 2024. Generative AI is being used to speed up content creation across a range of media, including text, images, audio, and video.
But the report warns that the impact of AI-generated content depends on the sophistication of the actors involved. “The risk to U.S. elections from foreign AI-generated content depends on whether foreign actors can overcome the limitations built into many AI tools and remain undetected,” the report explains. A senior U.S. intelligence official said the IC continues to closely monitor attempts to inject deceptive AI-generated content into U.S. election discourse.
Russia
Russia is cited as the most prolific user of AI-generated content in the 2024 election cycle. According to ODNI, Russia has deployed AI to create content across all media and has published AI-generated images of prominent U.S. figures. In a July report, ODNI called Russia “the greatest threat to U.S. elections.”
ODNI specifically identifies the objectives of Russian covert operations as “enhancing [the former president's candidacy] and disparaging [the vice president and the Democratic Party].” To accomplish that, Russia often creates content that supports “conspiracy narratives” that align with its objectives.
The report states:
For example, the IC assessed that Russian influence figures were responsible for fabricating a video claiming that a woman was the victim of a hit-and-run accident committed by the Vice President and for altering a video of a speech by the Vice President.
These actors appear to have fabricated a story that Harris hit a 13-year-old girl with her car in 2011, a story that reportedly circulated on platforms such as X.
Earlier this month, the Microsoft Threat Analysis Center (MTAC) identified the Kremlin-linked “troll farm” Storm-1516 as the group behind the creation and distribution of the fabricated story, which used AI-generated content and actors. The center, which works with law enforcement, government, and the tech industry, warned that Russian election interference is shifting significantly “to target the Harris-Walz campaign.”
“Russian AI-generated content also seeks to highlight divisive issues in the United States, such as immigration,” the ODNI report added.
Immigration has consistently been one of the biggest concerns for voters, as many believe the Biden administration's open-border policies have placed serious economic strain not only on ordinary Americans but also on local and state governments. Much of the blame has been directed at Harris, who has been dubbed the administration's “border czar” since March 2021.
Iran
According to the ODNI report, Iran has similarly stepped up its efforts, using AI to generate social media posts and fabricate news articles, commonly referred to as “fake news.” These operations, carried out in multiple languages, including English and Spanish, aim to sow discord around contentious U.S. issues, such as the Israel-Gaza conflict and the 2024 presidential candidates. Iran is said to be generating false content about Israel's actions in Gaza to undermine U.S. public support for the 11-month military operation against Hamas. The issue remains highly contentious, with factions of the Democratic Party and some conservative figures alike criticizing the operation, which has resulted in more than 40,000 reported deaths in Gaza, figures that Israeli intelligence reportedly considers “largely accurate.” Despite the controversy, both Donald Trump and Kamala Harris have consistently voiced strong support for Israel and have reiterated that commitment on multiple occasions.
“Iranian actors are using AI to generate social media posts and write fake news articles on websites purporting to be real news sites,” the ODNI alleges, a tactic reportedly aimed at reaching U.S. voters across the political spectrum.
China
China, meanwhile, has focused its AI efforts on broader influence operations rather than direct election interference. The ODNI report outlined how pro-China actors have used AI-generated news anchors and fake social media accounts with AI-generated profile pictures to “stoke division” on topics such as drug use, immigration, and abortion. However, China's use of AI is primarily aimed at shaping global perceptions of the country rather than directly influencing U.S. election outcomes, according to ODNI.
AI and Elections
The rapid advancement of AI has raised concerns about its potential impact on elections. Deepfakes, hyper-realistic videos generated by AI, could easily be weaponized to skew political discourse, manipulate voter perceptions, and undermine the integrity of the democratic process. Experts warn that once these digital forgeries become indistinguishable from reality, they could be used to create fake speeches, events, and scandals about candidates.
AI has frequently been a major topic in discussions surrounding the current race for the White House.
X owner and Trump supporter Elon Musk came under fire for sharing an AI-generated video of Kamala Harris in which a cloned voice impersonating her describes her as a “deep state puppet,” the “ultimate diversity hire,” and someone who “knows nothing about running a country.”
Critics noted that the video appeared to violate the platform's own policies against synthetic and manipulated media that may “mislead or confuse people.”
In August, five secretaries of state pressed X to add election warnings to its Grok chatbot after the chatbot provided misleading voting information.
Similarly, OpenAI’s ChatGPT initially directed users to CanIVote.org for election-related inquiries, but after significant inaccuracies were discovered, it stopped answering election questions entirely.