In our increasingly AI-driven world, blockchain technology could play a crucial role in preventing the harms pioneered by apps like Facebook from becoming widespread and normalized all over again in AI-powered platforms.
Artificial intelligence platforms such as ChatGPT and Google’s Bard have surged in popularity in recent years, but they have also faced criticism over biases that, critics argue, exacerbate political divisions. Popular films like “The Terminator,” “The Matrix” and “Mission: Impossible — Dead Reckoning Part One” reflect a broader anxiety: AI is a powerful force that we may struggle to control.
AI has the potential to transform the global economy, and civilization itself, with capabilities that range from running disinformation campaigns and operating killer drones to compromising individual privacy. The stakes are high enough that in May 2023, global tech leaders signed an open letter comparing the dangers of AI to those of nuclear weapons.
One of the major concerns surrounding AI is the lack of transparency in how models are trained and programmed, particularly in deep learning models whose inner workings are difficult to scrutinize. And because AI models are trained on sensitive data, the models themselves can be manipulated if that data is compromised.
In the future, blockchain technology is likely to be used alongside AI to bring transparency, accountability and auditability to the decision-making process. Storing training data on a blockchain ensures the provenance and integrity of that data, preventing unauthorized modifications, while recording a model’s training parameters, updates and validation results on-chain lets stakeholders track and verify how the model was built.
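To make the pattern concrete, here is a minimal sketch in Python. It stands in for a real blockchain with a simple hash-chained, append-only ledger; the `Ledger` class and the recorded fields are illustrative assumptions, not the API of any actual chain. The dataset itself stays off-chain, and only its cryptographic fingerprint and the training run’s metadata are anchored in the chain.

```python
import hashlib
import json
import time

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class Ledger:
    """A toy append-only, hash-chained record. Tampering with any entry
    breaks every later link, which is what makes an audit trail possible."""

    GENESIS = "0" * 64

    def __init__(self):
        self.blocks = []

    def append(self, payload: dict) -> dict:
        prev_hash = self.blocks[-1]["hash"] if self.blocks else self.GENESIS
        body = json.dumps(payload, sort_keys=True)
        block = {
            "timestamp": time.time(),
            "payload": payload,
            "prev_hash": prev_hash,
            "hash": sha256((body + prev_hash).encode()),
        }
        self.blocks.append(block)
        return block

    def verify(self) -> bool:
        # Recompute every hash from the stored payloads; any edit shows up.
        prev_hash = self.GENESIS
        for block in self.blocks:
            body = json.dumps(block["payload"], sort_keys=True)
            if block["prev_hash"] != prev_hash:
                return False
            if block["hash"] != sha256((body + prev_hash).encode()):
                return False
            prev_hash = block["hash"]
        return True

# The dataset stays off-chain; only its fingerprint and the run's
# metadata are anchored on the ledger for later verification.
training_data = b"...example training corpus..."  # stand-in for a real dataset

ledger = Ledger()
ledger.append({
    "dataset_sha256": sha256(training_data),
    "model": "example-model-v1",                  # hypothetical model name
    "hyperparameters": {"learning_rate": 3e-4, "epochs": 10},
    "validation_accuracy": 0.91,                  # hypothetical result
})

assert ledger.verify()                                  # chain is intact
ledger.blocks[0]["payload"]["validation_accuracy"] = 0.99  # tamper attempt
assert not ledger.verify()                              # the edit is exposed
```

On a real network, the same role is played by transactions on a public chain, but the integrity property is identical: once the fingerprint is recorded, any later change to the data or the metadata is detectable.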
This use case demonstrates how blockchain can prevent unintentional misuse of AI. However, the intentional misuse of AI poses a much more dangerous scenario that we may face in the coming years.
Even without AI, centralized Big Tech companies have a history of manipulating individuals and undermining democratic values for profit, as Facebook’s Cambridge Analytica scandal exemplifies. In 2014, the “Thisisyourdigitallife” app offered to pay users for personality tests that required access to their Facebook profiles, and to their friends’ profiles. In effect, Facebook allowed Cambridge Analytica to gather users’ data without their permission.
The consequences of this breach were two mass-targeted psychological public relations campaigns that are widely credited with shaping the outcomes of the 2016 US presidential election and the United Kingdom’s European Union membership referendum.

Has Meta (formerly Facebook) learned from its mistakes? Apparently not: its recently launched app, Threads, collects user data in much the same way as Facebook and Instagram. Threads users unknowingly gave Meta access to their GPS location, camera, photos, IP information, device type and device signals.

Web2 companies justify this practice by claiming that users agreed to their terms and conditions. Yet it would take the average internet user roughly 76 working days to read every privacy policy for each app they use. The result: Meta now has access to almost everything on the phones of more than 150 million users.
Combined with the analytical power of AI, this invasive surveillance could have far-reaching consequences. Blockchain technology offers a potential countermeasure, though it is not without challenges of its own.
One of the main dangers of AI lies in the data it can collect and then weaponize. Blockchain could strengthen data privacy and user control, which would help curb Big Tech’s data harvesting practices, but it is unlikely to stop Big Tech from accessing sensitive data entirely.
To effectively protect against the intentional dangers of AI and prevent future scenarios like Cambridge Analytica, decentralized social media platforms, preferably based on blockchain, are needed. These platforms would reduce the concentration of user data in one central entity, minimizing the potential for mass surveillance and AI-driven disinformation campaigns.
In essence, through blockchain technology, we already have the tools to protect our independence from AI at both the individual and national levels.
While measures proposed by OpenAI CEO Sam Altman, such as collaboration among major AI developers and global organizations on AI safety, are important steps, they fail to address the vulnerabilities created by centralized Web2 entities like Meta. To truly safeguard against AI, urgent progress is needed in rolling out blockchain-based technologies, particularly in cybersecurity, and in building a competitive ecosystem of decentralized social media apps.
Callum Kennard is the content manager at Storm Partners, a Web3 solutions provider based in Switzerland. He graduated from the University of Brighton in England.
This article provides general information and should not be taken as legal or investment advice. The views expressed here are solely those of the author and do not necessarily represent the views of Cointelegraph.