A voluntary pact between AI companies and Europe soon?

While waiting for the AI Act, the future European regulation on artificial intelligence, the European Commission has explained that it will conclude a pact with the giants of the sector. The objective: to try to fill the legal void before a formal law enters into force. But will these companies agree, beyond declarations of good intentions, to bind themselves to obligations that could harm their commercial interests?

On the desks of democracies around the world, the puzzle is the same. On the one hand, artificial intelligence (AI), a sector that has exploded since the launch of ChatGPT in November 2022, needs to be regulated, and quickly, to prevent AI systems from remaining in a gray area that offers inadequate protection to users, rights holders, and citizens. On the other hand, adopting laws takes time. Until regulations like the European Union's AI Act come into force, which is expected to take at least two and a half years, Thierry Breton, EU Commissioner for the Internal Market, has proposed an interim solution.

Companies like Google, which has developed Bard, a chatbot competing with ChatGPT, could actively engage with lawmakers. And these companies, if they are willing, could conclude a pact with the EU: a kind of commitment to fundamental principles to be respected. This idea was discussed in Brussels between Thierry Breton and Sundar Pichai, the CEO of Alphabet, the parent company of Google and YouTube, Reuters reported on Wednesday, May 24.

An agreement on fundamental principles pending the law

This pact could be signed by European and foreign companies in the artificial intelligence sector. Thierry Breton explained on his Twitter account on May 24: "Sundar and I agreed that we cannot afford to wait until AI regulation actually becomes applicable, and to work together with all AI developers to already develop an AI pact on a voluntary basis ahead of the legal deadline."

He added that the adoption of the regulation was expected by the end of the year, with its entry into force expected at the end of 2024 at the earliest. The European Commissioner for Competition, Margrethe Vestager, who also met the CEO of Google, struck the same note. Because there is "no time to lose" and "AI technology is changing extremely fast," she called for a "voluntary agreement on universal rules for AI" with companies, while insisting that the AI Act was being finalized.

What would these pacts contain? Would they be binding? For the moment, we only know that these rules would be provisional, pending the entry into force of the regulation on AI, and applied solely on a voluntary basis, like a "pre-regulation" that would settle fundamental questions, as Margrethe Vestager explained to our colleagues at Les Echos. It would therefore be a baseline of minimum rules, safeguards to be respected.

Will companies accept binding rules?

This initiative is, on the one hand, a good idea, because it will allow companies to actively participate in the development of standards; they will be all the more inclined to apply rules they helped design. But on the other hand, it is not certain that these companies will accept obligations in this pact that seem too heavy or too intrusive, or that are quite simply contrary to their commercial interests, barring public pressure. After the speeches of good intentions, will these companies play along?

The purpose of the Alphabet CEO's European visit was, moreover, to pass through Brussels to express his concerns about the future European regulations. Similar concerns have been raised by Sam Altman, whose start-up OpenAI developed ChatGPT. Each time, the burden of obligations that could hinder AI innovation is highlighted. The head of OpenAI went further, pointing out that the obligations in the AI Act could have an unintended consequence: depriving Europe of ChatGPT. He explained, during a round table organized by a British university, that ChatGPT would be classified as a high-risk system. High risk is one of the four risk categories of the AI Act; it obliges companies to meet fairly advanced transparency and governance obligations. These rules, he argued, would be particularly restrictive. "If we can comply [with the AI Act], we will, and if we can't, we'll cease operating... We will try. But there are technical limits to what's possible," he warned.


Other initiatives instead advocate the development of international standards to regulate the sector. In a blog post published this week, OpenAI pleaded for the establishment of an international agency, similar to what exists for nuclear energy with the IAEA, an official body of the United Nations. Margrethe Vestager, the European commissioner, also announced a partnership with the United States, where the majority of companies developing generative AI are based, to establish minimum standards for AI before the entry into force of the European regulation. It now remains to put these declarations into practice.
