Deceptive AI in elections: Technology giants including Amazon, Google, and Microsoft have collectively pledged to address what they describe as deceptive artificial intelligence (AI) in electoral processes. Twenty leading firms have signed an agreement committing them to combat deceptive content designed to mislead voters, vowing to deploy technological solutions to identify and counter such material. However, an industry expert has expressed scepticism, suggesting that a voluntary pact may not be sufficient to prevent the spread of harmful content.
The “Tech Accord to Combat Deceptive Use of AI in 2024 Elections” was unveiled at the Munich Security Conference, underscoring the urgency of addressing this issue as billions of people prepare to cast their votes in countries like the US, UK, and India. Among the commitments outlined in the accord are efforts to develop technologies aimed at mitigating risks associated with deceptive AI-generated election content, as well as ensuring transparency by disclosing actions taken by the firms. Additionally, the signatories have pledged to share best practices and educate the public on recognizing manipulated content.
While the accord represents a step forward in acknowledging the challenges posed by AI, computer scientist Dr Deepak Padmanabhan of Queen's University Belfast argues that more proactive measures are necessary. Waiting to take down harmful content after it is posted, he warns, means the most realistic AI-generated fakes may persist on platforms far longer than crude, easily detectable ones. Dr Padmanabhan also criticizes the accord's lack of nuance in defining harmful content, citing the example of AI-generated speeches by incarcerated politicians such as Imran Khan, which do not fit neatly into a blanket category of deception.
The signatories of the accord have pledged to target content that deceptively alters the appearance, voice, or actions of key electoral figures, as well as misinformation about voting procedures, emphasizing the importance of preventing these tools from being weaponized during elections. Microsoft's president, Brad Smith, has highlighted the responsibility of technology companies in safeguarding electoral integrity. This comes amid concerns voiced by US Deputy Attorney General Lisa Monaco about the potential for AI to amplify disinformation during elections. Google and Meta have already established policies on AI-generated content in political advertising, requiring advertisers to disclose the use of deepfakes or manipulated AI content.