Meta, the parent company of Facebook and Instagram, has revealed how it plans to address the misuse of generative artificial intelligence (AI) and protect the integrity of the electoral process on its platforms ahead of the European Parliament elections in June 2024.
In a blog post published on February 25, Marco Pancini, Meta’s head of EU Affairs, emphasized that the platform’s “Community Standards” and “Ad Standards” will apply to AI-generated content as well. Pancini stated that AI-generated content will be subject to review and evaluation by independent fact-checking partners. One of the possible ratings, “altered,” will flag content, including audio, video, or photos, that has been faked, manipulated, or transformed.
Meta’s existing policies already require photorealistic images created using its AI tools to be clearly labeled. The recent announcement revealed that Meta is also working on new features to label AI-generated content posted by users, even if it was created using tools from companies like Google, OpenAI, Microsoft, Adobe, Midjourney, or Shutterstock.
Moreover, Meta plans to introduce a feature that lets users disclose when they share AI-generated video or audio so the content can be flagged and labeled accordingly. Failure to disclose such content may result in penalties.
In addition, Meta expects advertisers running political, social, or election-related ads that have been altered or created using AI to disclose that use. The company’s blog post noted that between July and December 2023, Meta removed 430,000 ads across the European Union for not carrying the required disclaimer.
This issue has gained significant importance as major elections are scheduled to take place worldwide in 2024. Both Meta and Google have previously set out rules for AI-generated political advertising on their platforms. On December 19, 2023, Google announced that it would restrict answers to election queries on its AI chatbot Gemini, previously known as Bard, and in its generative search feature in the lead-up to the 2024 U.S. presidential election.
OpenAI, the developer of the AI chatbot ChatGPT, has also taken steps to alleviate concerns about AI interference in global elections by establishing internal standards to monitor activity on its platforms.
On February 17, 20 companies, including Microsoft, Google, Anthropic, Meta, OpenAI, Stability AI, and X, signed a pledge to combat AI election interference, acknowledging the potential dangers if left uncontrolled.
Governments worldwide have also taken measures to address AI misuse ahead of local elections. The European Commission initiated a public consultation on proposed guidelines for election security to mitigate democratic threats posed by generative AI and deepfakes.
In the U.S., the use of AI-generated voices in automated phone calls was banned after a deepfake of President Joe Biden’s voice was circulated in scam robocalls, misleading the public.