"Artificial intelligence is not dangerous in itself"

Artificial intelligence (AI) seems very recent to us. In fact, it is not…

No, the discipline was invented in 1956 with the aim of simulating all of human mental faculties (perception, memory, reasoning, communication, calculation, etc.). Its various current versions are only the stages of a project that humanity has been pursuing for a very long time: to build a machine capable of reasoning.

When did this ambition arise?

At the beginning of the reign of Louis XIV, a young Frenchman of barely 19 years old designed a mechanical calculator that could automatically add and subtract numbers. His name? Blaise Pascal. About his creation – the pascaline – he wrote: “The arithmetic machine produces effects that come closer to thought than anything animals do. But it does nothing that could lead one to say that it has will like animals.” These questions still motivate us today.

We are in the 21st century. What does the arrival of the ChatGPT conversational agent, whose use is spreading like wildfire in the world of work, change?

Humanity has reached a new milestone: that of talking machines accessible to the general public. For a long time, we considered that a language was above all a grammar, a lexicon, a syntax, and so on. What ChatGPT demonstrates is that it is also, and above all, the result of statistical relationships between words. Its language model is based on what researchers call “embedding”, an English term that evokes one object fitting precisely into another. In other words, ChatGPT does not really take the meaning of words into account; it relates them to other terms used in the same contexts. As a result, “dog” and “cat” have close embeddings, because they are often interchangeable. Conversely, “goat” and “cigar” have more distant embeddings.
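This closeness between embeddings can be sketched with a standard measure, cosine similarity, on toy vectors. The vectors below are invented for illustration; real models learn vectors with hundreds or thousands of dimensions from statistics over huge text corpora.

```python
import math

# Invented toy embeddings in 3 dimensions (real ones are learned, not hand-written).
embeddings = {
    "dog":   [0.9, 0.8, 0.1],
    "cat":   [0.8, 0.9, 0.2],
    "goat":  [0.5, 0.3, 0.1],
    "cigar": [0.1, 0.1, 0.9],
}

def cosine_similarity(a, b):
    """Similarity between two vectors: close to 1.0 = same direction, near 0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(embeddings["dog"], embeddings["cat"]))    # high: words used in the same contexts
print(cosine_similarity(embeddings["goat"], embeddings["cigar"])) # lower: words that rarely share contexts
```

With these made-up numbers, “dog”/“cat” score close to 1.0 while “goat”/“cigar” score much lower, which is the geometric picture behind “close” and “distant” embeddings.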

Are these machines really “intelligent”?

Their operation relies on impressive computing power, which is set to grow in the coming years. But it is essential to keep in mind that they are only parrots, and that they can make statistical errors. There is no mind, no intention in these software programs, and giving them one is not the goal of their development.

Are there different types of AI?

Yes, depending on their purpose and their level of learning. So-called “weak” AI solves specific problems in a specific domain, such as customer-service chatbots*. Simulation AI reproduces situations in virtual environments – a flight to train a pilot, for example.

Are there applications present in our daily lives without us being aware of them?

They are even commonplace. Facial recognition and fingerprint systems on our smartphones are AI, as are dictation and text-transcription tools, or GPS navigation, capable of calculating routes in real time…

Which sectors of activity will be disrupted in the years to come?

Tasks will obviously be automated. One of the jobs most affected by AI is already that of translators and interpreters. In the health sector, X-ray image analysis will probably soon be taken over by AI, at least partially, reducing the work of radiologists.

Other routine tasks such as cashiers are likely to be eliminated. But many jobs are multitasking and, as such, AI cannot fully replace them, only support those who perform them. Take agriculture: AI can supervise the watering of vegetables and fruits using sensors and weather data, to make it optimal and avoid waste.
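The kind of supervision described for watering can be sketched as a simple rule combining a soil-moisture reading with a rain forecast. The thresholds and values below are invented for illustration; a real system would calibrate them per crop, and would typically learn them from weather and sensor data rather than hard-code them.

```python
def should_water(soil_moisture_pct, rain_forecast_mm):
    """Toy decision rule: water only if the soil is dry AND no significant
    rain is expected. All thresholds here are invented for illustration."""
    SOIL_DRY_THRESHOLD = 30.0   # below this moisture %, the soil counts as dry
    RAIN_SKIP_THRESHOLD = 5.0   # expected rain (mm) that makes watering wasteful
    return soil_moisture_pct < SOIL_DRY_THRESHOLD and rain_forecast_mm < RAIN_SKIP_THRESHOLD

print(should_water(20.0, 0.0))   # dry soil, no rain expected  -> water
print(should_water(20.0, 12.0))  # dry soil, but rain is coming -> skip, avoid waste
print(should_water(55.0, 0.0))   # soil already moist           -> skip
```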

With the widespread use of text, image and video creation tools, anyone can produce and transmit information online. True or false… Does AI pose a risk of massive disinformation?

Fears about generative AI, which can indeed do all of that, are justified, because the harmful effects of disinformation are already being observed. Whether fake news articles or computer-generated images, this falsified content targets specific audiences, ready to believe these lies. Such harmful uses require regulation, but above all they challenge us in terms of media education. Because artificial intelligence is not intended to disinform; it only does what it is asked to do.

Is there a way to “trace” these productions?

One solution being considered is for software to insert marks that are imperceptible to the naked eye but detectable digitally, like the patterns embedded in the paper of certain official documents or banknotes to ensure their traceability – watermarks, in technical language. Some American giants have committed to stamping their artificial productions with an indelible seal so that they can be distinguished from human creations.
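As an illustration of the principle only (not of any company’s actual scheme), a mark can be hidden by tweaking the least significant bit of pixel values: a change of at most 1/255 in brightness, invisible to the eye but trivially read back by software. The pixel values and the hidden bits below are invented; real generative-AI watermarks are statistical and far more robust.

```python
def embed_watermark(pixels, bits):
    """Hide a sequence of bits in the least significant bit of each pixel value.
    Flipping the LSB changes brightness by at most 1 out of 255: imperceptible."""
    marked = list(pixels)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit
    return marked

def extract_watermark(pixels, n_bits):
    """Read the hidden bits back out of the least significant bits."""
    return [p & 1 for p in pixels[:n_bits]]

# Invented example: a few grayscale pixel values and a 4-bit mark.
original = [120, 121, 119, 200, 201, 50]
mark = [1, 0, 1, 1]
marked = embed_watermark(original, mark)
print(extract_watermark(marked, 4))                        # recovers [1, 0, 1, 1]
print(max(abs(a - b) for a, b in zip(original, marked)))   # at most 1: invisible change
```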

Last year, hundreds of scientists and businesspeople, including Tesla CEO Elon Musk, published an open letter calling for safeguards against AI deemed “dangerous for humanity”. What do you think?

This is a fine example of arsonists playing firefighter: those who develop this technology are calling for a pause to protect us from its dangers! There is no evidence to support concern about an existential risk to humanity. The “general” artificial intelligence that some fantasize about or fear – one that would develop a consciousness, a will of its own, capable of turning against its creator – is a chimera. AI is not dangerous in itself; it is its use that can be. “I’m not afraid of robots,” said Ray Bradbury, the famous American science-fiction author, “I’m afraid of the people behind the robots.”

* Conversational agent responding in real time to an Internet user’s questions.
