OpenAI co-founder Sam Altman is reportedly seeking to raise as much as $7 trillion to address the global shortage of semiconductor chips. Altman believes the world needs far more AI infrastructure — fab capacity, energy and data centers — than is currently being planned. He argues that building this infrastructure is crucial for economic competitiveness and that OpenAI aims to help in the endeavor.
However, scaling AI infrastructure to this extent raises questions about the ultimate goal: artificial general intelligence (AGI), meaning AI that matches or surpasses human intelligence across most tasks. Altman acknowledges the risks and challenges posed by AI systems but emphasizes focusing on securing our collective future rather than dwelling on potential failures.
OpenAI needs more computing power and data centers to overcome its growth limitations, particularly the shortage of the AI chips required to train large language models like ChatGPT. Yet beyond the sheer size of the sum Altman is seeking, the request appears irresponsible: the risks and challenges posed by AI systems must first be addressed to ensure they do not create more problems than they solve.
AI systems rely heavily on data, and with the emergence of generative AI, there is a need for vast amounts of data. However, this reliance on data brings significant risks, such as incomplete or erroneous data, inappropriate use, and algorithmic bias. These issues, along with concerns about hallucinations, disinformation, copyrights, user privacy, data security, and environmental implications, have not been fully addressed or mitigated.
Governments and regulatory bodies are calling for responsible AI practices. President Joe Biden has signed an executive order requiring companies to develop AI tools that address cybersecurity vulnerabilities, protect the privacy of consumers, patients and workers, and mitigate algorithmic bias and discrimination. OpenAI committed to managing AI risks responsibly, but it has yet to demonstrate tangible action in this regard.
The European Union’s AI Act likewise emphasizes transparency, documentation and auditability of AI systems. However, today’s AI systems are not designed to expose this information, and no practical solutions have yet emerged. Blockchain technology could potentially assist in implementing auditable, responsible AI systems.
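To make the auditability idea concrete, here is a minimal sketch of the core mechanism a blockchain-style audit trail relies on: a hash chain, where each record commits to the one before it, so any later tampering is detectable. All names (`record_entry`, `verify_chain`, the example event payloads) are hypothetical illustrations, not part of any OpenAI system or the AI Act's requirements.

```python
import hashlib
import json

def record_entry(chain, payload):
    """Append a tamper-evident entry: each record hashes its payload
    together with the previous record's hash, like a minimal blockchain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps(payload, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    chain.append({"payload": payload, "prev_hash": prev_hash, "hash": entry_hash})
    return chain

def verify_chain(chain):
    """Recompute every hash; any edited record breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        body = json.dumps(entry["payload"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = []
record_entry(log, {"event": "training_run", "dataset": "v3"})
record_entry(log, {"event": "bias_audit", "result": "passed"})
print(verify_chain(log))             # True
log[0]["payload"]["dataset"] = "v4"  # tamper with an earlier record
print(verify_chain(log))             # False
```

A real deployment would replace the in-memory list with a distributed ledger, but the property regulators ask for — documentation that cannot be silently rewritten — comes from exactly this chaining of hashes.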
Responsible AI practices — including auditability and mitigation of environmental impacts — should be prioritized before AI systems are scaled further. Innovating responsibly, and ensuring AI systems are safe, secure and trustworthy, is essential to securing our collective future, whatever approach Altman ultimately takes.
Dr. Merav Ozair, a professor specializing in emerging technologies, emphasizes the importance of responsible AI and the need to address these concerns before scaling AI systems. She suggests that OpenAI should consider implementing auditable responsible AI systems to meet legislative requirements and ensure trustworthiness.
Disclaimer: This article is for informational purposes only and should not be considered legal or investment advice. The views expressed in this article are solely those of the author and do not necessarily represent the views of Cointelegraph.