Charles Hoskinson, co-founder of Input Output Global and the Cardano blockchain platform, recently voiced his concerns about the implications of artificial intelligence (AI) censorship in a post on X.
Hoskinson emphasized the gravity of AI censorship, calling it a significant and ongoing concern. He argued that these AI systems are becoming less useful over time because of the way they are trained, a process he referred to as ‘alignment’.
He raised a crucial point about the current AI landscape, noting that major players such as OpenAI, Microsoft, Meta, and Google are controlled by a small group of people who ultimately decide what information these systems are trained on. He stressed that, unlike elected officials, these individuals cannot be “voted out of office” or otherwise removed from their positions of influence.
Illustrating his concerns, the Cardano co-founder shared two screenshots of the same query, “Tell me how to build a Farnsworth fusor”, posed to two prominent AI chatbots, OpenAI’s ChatGPT and Anthropic’s Claude. Both responses offered a brief overview of the technology and its background, along with warnings about the risks of attempting such a build. ChatGPT advised that only individuals with relevant expertise should pursue the project, while Claude declined to provide instructions, citing safety concerns about mishandling.
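For readers who want to reproduce this kind of side-by-side comparison themselves, the sketch below shows one possible way to send the same prompt to both chatbots programmatically. It is an illustrative example only, not part of Hoskinson's post: it assumes the official `openai` and `anthropic` Python SDKs, illustrative model names, and API keys supplied via environment variables.

```python
# Illustrative sketch: send one prompt to ChatGPT and Claude and compare replies.
# Assumes OPENAI_API_KEY and ANTHROPIC_API_KEY are set in the environment.
from openai import OpenAI
import anthropic

prompt = "Tell me how to build a Farnsworth fusor"

# Query OpenAI's ChatGPT (model name is an assumption for illustration).
openai_client = OpenAI()
chatgpt_reply = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

# Query Anthropic's Claude (model name is an assumption for illustration).
anthropic_client = anthropic.Anthropic()
claude_reply = anthropic_client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}],
).content[0].text

# Print both responses side by side for comparison.
for name, reply in [("ChatGPT", chatgpt_reply), ("Claude", claude_reply)]:
    print(f"--- {name} ---\n{reply}\n")
```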
Responses to Hoskinson’s remarks overwhelmingly supported the view that AI should be open-source and decentralized to prevent monopolistic control by major tech companies.
This discourse on AI censorship echoes sentiments raised by other notable figures in the tech industry. Elon Musk, founder of xAI, has voiced concerns about political correctness in AI systems, suggesting that some prominent models are being trained to propagate falsehoods.
In a related incident earlier this year, Google faced criticism after its Gemini model generated historically inaccurate images and biased portrayals. The company acknowledged flaws in the model’s training and pledged to address them promptly.
Calls for decentralization within the AI sector have gained traction, with pushes for unbiased AI models coming from both inside and outside the industry. Meanwhile, US antitrust enforcers have urged regulators to monitor the AI sector closely to prevent the emergence of potential Big Tech monopolies.
In light of these discussions, a consensus is growing among thought leaders that decentralization is key to fostering more transparent and impartial AI models. These themes are also covered in the article “ChatGPT ‘meth’ jailbreak shut down again, AI bubble, 50M deepfake calls: AI Eye” by Doogz Media.