The United States is among several countries heading into a major election cycle in 2024. The rise of publicly accessible artificial intelligence (AI) tools has fueled a surge in political deepfakes, forcing voters to develop new skills to distinguish what is real from what is fake.
On February 27, Senate Intelligence Committee Chair Mark Warner (D-Va.) said that America is “less prepared” for election fraud in the upcoming 2024 election than it was ahead of the 2020 vote, largely because of the surge of AI-generated deepfakes in the U.S. over the past year. According to data from identity verification service SumSub, North America saw a 1,740% increase in deepfakes between 2022 and 2023, part of a tenfold rise in the number of deepfakes detected worldwide in 2023.
In January, New Hampshire residents reported receiving robocalls that used an AI-cloned voice of U.S. President Joe Biden urging them not to vote in the state’s primary. The incident prompted U.S. regulators to ban AI-generated voices in robocalls, making them illegal under existing telemarketing law.
However, as is often the case with scams, where there is a will there is a way, regardless of what the law says. As the U.S. gears up for Super Tuesday on March 5, when the greatest number of states hold primary elections and caucuses, concern over false, AI-generated information and fakes is mounting.
Cointelegraph spoke with Pavel Goldman-Kalaydin, head of AI and machine learning at SumSub, about how voters can better prepare themselves to spot deepfakes and what to do when confronted with deepfake identity fraud.
Goldman-Kalaydin stressed that while the number of deepfakes worldwide is already substantial, he expects it to climb even higher during election seasons. He distinguished two types of deepfakes: those created by “tech-savvy teams” using advanced technology and hardware, which are harder to detect, and those produced by “lower-level fraudsters” with tools readily available on consumer computers.
“It’s crucial for voters to be vigilant in scrutinizing the content they see and to remain cautious when encountering video or audio content,” Goldman-Kalaydin said, pointing to several telltale signs that can give a deepfake away.
However, Goldman-Kalaydin cautioned that deepfake technology will continue to advance rapidly, making it increasingly difficult for the human eye to detect fakes without dedicated detection tools.
For Goldman-Kalaydin, the real issue lies in the creation and distribution of deepfakes. While broader access to AI has created opportunities, it has also fueled the proliferation of fake content, and in the absence of clear legal regulations and policies, that content spreads easily online as misinformation.
“This leaves voters misinformed, increasing the risk of making ill-informed decisions,” Goldman-Kalaydin warned. As potential remedies, he proposed mandatory checks on social media platforms to flag AI-generated or deepfaked content to users, along with platform-level user verification: verified users would take responsibility for the authenticity of the visual content they post, while non-verified users would be clearly labeled, prompting others to treat their content with caution.
Governments worldwide are beginning to weigh countermeasures in response to this uneasy climate. Ahead of its own 2024 elections, India has issued an advisory requiring local tech firms to obtain approval before releasing new “unreliable” AI tools to the public. In Europe, where multiple elections are also taking place, the European Commission has established AI misinformation guidelines for platforms operating in the region, and Meta, the parent company of Facebook and Instagram, has released its own strategy for the European Union to combat the misuse of generative AI in content on its platforms.