Twenty technology companies involved in developing artificial intelligence (AI) jointly announced on Friday, February 16, a commitment to prevent their software from being used to interfere with elections, including those in the United States.
The agreement recognizes the significant risks posed by AI products, particularly in a year when approximately four billion people worldwide are expected to participate in elections. The document emphasizes concerns about deceptive AI in election-related content and its potential to mislead the public, thereby posing a threat to the integrity of electoral processes.
Furthermore, the agreement acknowledges the slow response of global lawmakers to the rapid advancements in generative AI, prompting the tech industry to explore self-regulation. Brad Smith, vice chair and president of Microsoft, expressed his support in a statement.
The 20 signatories are Microsoft, Google, Adobe, Amazon, Anthropic, Arm, ElevenLabs, IBM, Inflection AI, LinkedIn, McAfee, Meta, Nota, OpenAI, Snap, Stability AI, TikTok, TrendMicro, Truepic, and X.
However, the agreement is voluntary and does not impose an outright ban on AI content in elections. The 1,500-word document outlines eight steps the signatory companies commit to taking during 2024, including building tools to distinguish AI-generated images from authentic content and being transparent with the public about significant developments.
Free Press, an open internet advocacy group, criticized the commitment as an empty promise, arguing that tech companies failed to follow through on previous election-integrity pledges after the 2020 election. The group advocates instead for increased oversight by human reviewers.
U.S. Representative Yvette Clarke has expressed her support for the tech accord and hopes that Congress can further build on it. Clarke has sponsored legislation aimed at regulating deepfakes and AI-generated content in political advertisements.
On January 31, the Federal Communications Commission voted to prohibit robocalls that use AI-generated voices. The decision came in response to a fake robocall during January's New Hampshire primary that falsely claimed to be from President Joe Biden, an incident that heightened concerns about counterfeit voices, images, and videos in politics.