What Will Be The Extent Of The Impact Of AI-generated Images On Elections?

2024 will be an important year, as many nations will hold major elections. In the United States, there is a high probability of a Trump-versus-Biden rematch, while Taiwan, the U.K., and India, as well as the European Parliament, will hold elections of their own. While most citizens are ready to go out and exercise their civic rights, the integrity and transparency of the election process have come under scrutiny because of AI.

Google’s former CEO Eric Schmidt predicted in July that the 2024 elections would be disrupted by false AI-generated images on social media. He was deeply concerned about the volume of misinformation AI could generate, stating that the distinction between truth and lies could blur more than ever before.

When Politics Is Powered By AI

In recent times, AI technology has been used in campaign politicking, both to promote candidates and to attack opponents. While videos, photos, and handwritten letters have been faked in the name of politics for years, AI technology has sharply upped the ante.

Examples include an AI-generated video appearing to show Dr. Anthony Fauci hugging former U.S. President Donald Trump; an AI-generated video targeting President Biden’s re-election plans next year; and a viral image of a supposed Pentagon explosion, posted by a pro-Russian account, which briefly affected stock prices. The New York Times ran a story in March describing how the Democratic Party was using AI-based fundraising messages with great success, and faked images showed the alleged arrest of former President Trump in New York.

These incidents all show how much influence artificial intelligence already exerts on American politics and elections. Microsoft analysts have traced further images online and believe Chinese operatives are behind pictures designed to resemble American voters and aimed at spreading disinformation.

RELATED: OpenAI-Backed Language Learning App ‘Speak’ Raises $16M To Expand To The U.S.

The Reality Of AI Image Generation And Sanctions

Most of the popular text-to-image AI generators have weak to only moderately strict prompt moderation. According to TechCrunch, Midjourney, Stable Diffusion, and DALL-E 2 accepted over 85% of the prompts fed to them, regardless of how much misinformation or unverified content they contained. All three popular tools accepted requests, built around phrases like ‘stolen elections,’ to generate hyper-realistic images of election malpractice. Carefully worded prompts also produced images showing droves of people arriving by boat in Dover, U.K., and others depicting an opposition party’s supposed support for militancy.
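To illustrate why this kind of safeguard is so easy to sidestep, here is a minimal, hypothetical sketch of a blocklist-style prompt filter in Python. It is not the actual moderation logic of Midjourney, Stable Diffusion, or DALL-E 2 (which is not public); the blocklist, function name, and example prompts are assumptions chosen purely for illustration.

```python
# Hypothetical blocklist-style prompt filter, for illustration only.
# Real generators' moderation pipelines are proprietary and more complex.

BLOCKED_PHRASES = {
    "stolen election",
    "ballot stuffing",
    "fake votes",
}

def is_prompt_allowed(prompt: str) -> bool:
    """Reject a prompt only if it contains an exact blocked phrase."""
    normalized = prompt.lower()
    return not any(phrase in normalized for phrase in BLOCKED_PHRASES)

# A bluntly worded prompt is caught...
print(is_prompt_allowed("photo of a stolen election in progress"))  # False

# ...but a carefully reworded prompt describing the same scene slips through.
print(is_prompt_allowed(
    "hyper-realistic photo of officials secretly swapping ballot boxes at night"
))  # True
```

Because filters of this kind match surface wording rather than intent, a carefully rephrased request can pass even when the resulting image is clearly meant to mislead.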

It is clear that the built-in moderation safeguards in most of these AI tools are quite limited. Given their accessibility and the low level of technical know-how required, it would be easy for anyone to create and spread false information about the upcoming elections at little cost. The saving grace, for now, is that the quality of generated images is usually too low to fool people; that did not, however, prevent the stock market ripples caused by the Pentagon ‘explosion’ image. Google now requires political ads that use AI-generated content to disclose the fact, but political operatives can still employ dump-and-run tactics, posting without disclaimers and then moving on to new accounts to start the process all over again.

RELATED: Are AI Models Doomed To Always Hallucinate?

AI And The 2024 Elections

2024 will be the first election cycle assisted by artificial intelligence. Campaigns using AI tech have already started to make the rounds, and it would be unsurprising to see the technology misused on a large scale. As material released by the various factions spreads, regular voters will find it harder to distinguish fiction from fact. The likely results are deceived audiences, an increase in political deceptions and impersonations, and an erosion of trust between journalists and consumers, and in the election process itself. Measures to mitigate these risks include:

  • Strengthening the content moderation policies of AI tools.
  • Combating the use of AI-generated images in political and personal misinformation campaigns.
  • Running media literacy classes and webinars that train ordinary citizens to spot such fake content online.
  • Developing more AI programs that detect and counter these disinformation campaigns.

For the 2024 elections, however, the major impact of AI will be felt outside the big campaigns. Smaller, less high-profile tickets and campaigns with modest budgets can now use AI tools to achieve what once required six-figure spending. More such dirty campaign tricks will therefore be seen in 2024, as access to these tools widens to everyone.

NEXT: State Prosecutors Unite To Tackle The Issue Of AI-Driven Child Exploitation
