The European Securities and Markets Authority (ESMA) has stressed that banks and investment firms must put their clients' best interests first when using artificial intelligence (AI), according to a public statement released on May 30. The statement addresses the use of AI under the European Union's Markets in Financial Instruments Directive (MiFID) securities law and makes clear that financial institutions bear full legal responsibility for protecting consumers. ESMA acknowledges AI's potential to make retail investment services more efficient and innovative, but warns that the technology also carries significant implications for firms' conduct and for retail investor protection. Firms must therefore demonstrate an unwavering commitment to acting in clients' best interests regardless of the AI tools they employ, whether developed in-house or sourced from third parties, including generative AI chatbots such as OpenAI's ChatGPT and Google's Gemini.
Earlier this year, the EU adopted comprehensive AI regulation, becoming the first jurisdiction worldwide to do so. ESMA's statement, however, is distinct from the EU AI Act and concerns compliance with MiFID alone. The EU has also been active in other AI-related areas. On May 24, the EU Council agreed to leverage supercomputers to strengthen the region's AI ecosystem and support startups. And on May 27, the European Blockchain Observatory and Forum (EUBOF) published a report highlighting the potential synergy of integrating blockchain with AI, particularly in sectors such as healthcare and finance, where data security is paramount.
In a related development, science fiction author David Brin has proposed pitting AIs against one another as a safeguard against a potential AI apocalypse.