What are the risks highlighted by the AI Act, the world's first piece of legislation on artificial intelligence?

After two years of work and thirty-seven hours of uninterrupted trilogue between the Parliament, the Council and the Commission of the European Union, here we are: the world's first law on artificial intelligence (AI Act) has finally been born. It is an unprecedented political achievement: the regulation of a technology that many consider the driving force of the fourth industrial revolution. In Brussels, however, the entry into force of the text by 2025 has provoked sharply opposed reactions.

On the one hand, Thierry Breton, Commissioner for the EU Internal Market, shares his enthusiasm: "Europe is going to become the best place in the world to do artificial intelligence."* On the other, Daniel Friedlaender, European head of the Computer and Communications Industry Association, a major sector lobby, points to the "potentially disastrous consequences for the European economy".

The EU has had to strike a difficult balance between freedom to innovate and the imperative to regulate. The aim was to limit the potential excesses of artificial intelligence without holding back European researchers, who already face a considerable lead by the United States and China in the sector. China envisages the development of AI under close state control, while its American rival shapes it according to the law of the market. Europe, for its part, wants to build it around respect for citizens' fundamental rights.

Since the appearance, in November 2022, of ChatGPT – a robot capable, in a few seconds, of writing essays or poems, or of holding a conversation with its user – public opinion has taken the measure of AI's dazzling advances. AI now touches the lives of each of us, in addition to being an economic, industrial and political issue. It is therefore essential to decide what we consider tolerable and what we do not. The European regulation advocates "regulation by risk" and classifies AI threats in a pyramid of four levels: unacceptable, high, moderate and minimal. Here is a short explanatory guide, with several specific examples.

* In La Tribune Dimanche, December 10, 2023.

What are the unacceptable risks of artificial intelligence according to the AI Act?

Certain uses, deemed "contrary to European values", are prohibited:

  • Social scoring of citizens. This is practiced in China, where social credit relies on facial recognition in public spaces and mass surveillance of social networks.
  • Categorizing individuals on the basis of their political, religious or philosophical opinions, their origins or their sexual orientation.
  • Emotion recognition in the workplace and in educational establishments: cameras are now capable of detecting discontent, anger and the like among employees (also practiced in China).

What are the high risks of artificial intelligence according to the AI Act?

In areas deemed sensitive for a country, AI is authorized but subject to reinforced constraints: humans must be able to intervene in the machines' operation and must know how to act in the event of a malfunction or deviation. These measures concern:

  • Critical infrastructure, such as industrial sites that use AI to manage their security systems or energy consumption.
  • Education, including learning systems and automated grading of exam papers.
  • Human resources departments using payroll, leave and personnel management software.
  • Law enforcement and facial recognition of individuals in public spaces. While real-time facial recognition is prohibited, notably to prevent authoritarian uses linked to mass surveillance, it will be permitted for certain police investigations and for the fight against terrorism.

What are the moderate risks of artificial intelligence according to the AI Act?

This category covers so-called "generative" AIs such as ChatGPT, software designed to create artificial texts, images, sounds or videos at a human's request. The regulation requires that the artificial nature of the content produced be made clearly apparent through:

  • A visual notice, in particular to prevent misinformation.
  • Respect for copyright and the clear identification of these productions as artificial.
  • Informing users of chatbots – the conversational robots now common on institutional websites – that they are talking not with a human but with software.