OpenAI, the developer of ChatGPT, made headlines recently when its board fired CEO Sam Altman, citing a loss of confidence in his leadership. Altman’s return to the company after 90% of OpenAI staffers threatened to resign caused a further stir, and the episode spurred rival companies to court top-tier talent with offers to match OpenAI’s salaries.
The controversy surrounding Altman’s firing, and the lack of transparency that accompanied it, highlighted the need for regulation of AI development, particularly around security and privacy. As companies rapidly build out their artificial intelligence divisions, a reshuffling of talent could hand one company an advantage over others and test the limits of existing law. While President Joe Biden has taken steps to address the issue, his reliance on executive orders, which require no input from Congress, raises concerns: such orders depend on agency bureaucrats to interpret them and can change with each new administration.
Earlier this year, Biden signed an executive order on “safe, secure, and trustworthy artificial intelligence.” The order directed AI companies to protect workers from potential job losses and tasked various federal agencies with establishing governing structures. It also prompted the Federal Trade Commission (FTC) to evaluate whether its existing authority is sufficient to ensure fair competition in the AI marketplace and to protect consumers and workers from harms enabled by AI.
However, relying solely on executive orders has its limitations. The approach lacks permanence and breeds confusion and uncertainty, as seen in the SEC’s and CFTC’s competing attempts to classify cryptocurrencies as securities or commodities. Policies developed by agencies without legislative backing do not carry the same lasting weight. Agency rulemaking does allow for public comment, but the legislative process gives users of AI and digital assets a stronger voice and lets them contribute to laws that address the real problems they face, rather than ones conjured by ambitious bureaucrats.
Biden’s failure to address the ethical implications of widespread AI deployment is also concerning. Issues such as algorithmic bias, surveillance, and privacy invasion are not receiving adequate attention, and they should be taken up by Congress, a body of elected officials, rather than by agencies staffed with appointees.
Without the rigorous debate and deliberation required for Congress to pass laws, there is no guarantee that protections for the security and privacy of everyday users will ever be established. Users of AI need control over how their personal data is used and stored, especially since many do not fully understand the underlying technology or the security risks of sharing personal information. Laws should also require companies to conduct risk assessments and to maintain their automated systems responsibly.
Relying solely on regulations enacted by federal agencies will breed confusion and erode consumer trust in AI. This was evident in the digital assets domain after the SEC’s lawsuits against Coinbase, Ripple Labs, and other crypto-related institutions left some investors wary of involvement with crypto companies. A similar situation could unfold in AI if the FTC and other agencies sue AI companies and tie up crucial issues in the courts for years.
It is crucial that Biden engage with Congress on these matters rather than rely solely on the executive branch. In turn, Congress must rise to the occasion and craft legislation that addresses the concerns and aspirations of a diverse range of stakeholders. Without that collaboration, the United States risks repeating the missteps of the digital assets domain, falling behind other countries and hindering innovation. Most importantly, the security and privacy of Americans, and of people around the world, are at stake.