The Engineer – Sci-Fi Eye: A ghost in the machine

Following claims that a Google chatbot has achieved sentience, resident science fiction writer Gareth L. Powell examines what could happen if our machines start thinking for themselves

As I write this column, The Guardian reports that a Google engineer has been placed on leave after becoming convinced that one of the company’s chatbots had become sentient. Blake Lemoine has posted transcripts of a conversation between himself and the chatbot development system LaMDA (Language Model for Dialogue Applications), which he says indicate that the program has developed the ability to perceive, experience and express thoughts and feelings to an extent equivalent to those of a human child.

“If I didn’t know exactly what it was, which is this computer program we built recently, I would think it was a seven or eight-year-old kid who knows physics,” Lemoine, 41, told the Washington Post.

Although his employers strongly disagree with his findings, the incident raises a set of fascinating technological and ethical conundrums. For example, how could we determine whether a machine actually felt the emotions it claimed to feel?

So far, our best-known tool for assessing machine sentience is the Turing test, named after British computing pioneer and cryptanalyst Alan Turing, who proposed that if, after reviewing a transcript of an anonymised text conversation between a human and a machine, an observer is unable to tell which is which, then the machine is considered to have passed.
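For the technically minded, the shape of the test is easy to sketch in code. The snippet below is purely illustrative and not anything Turing (or Google) specified: the two respondent functions and the judge are hypothetical stand-ins, and the only point is that the judge sees anonymised transcripts and must guess which one came from the machine.

```python
import random

# Illustrative sketch of the Turing test protocol. The respondents and the
# judge are hypothetical placeholders, not a real chatbot or a real evaluator.

def human_respondent(question: str) -> str:
    # In a real test a person would type this reply.
    return "I stubbed my toe this morning and it still aches."

def machine_respondent(question: str) -> str:
    # Stand-in for a conversational program such as a chatbot.
    return "I find conversations about physics genuinely exciting."

def run_turing_test(questions, judge) -> bool:
    """Return True if the machine 'passes', i.e. the judge cannot pick it out."""
    pair = [("human", human_respondent), ("machine", machine_respondent)]
    random.shuffle(pair)  # anonymise: the judge only ever sees labels 'A' and 'B'
    labelled = {"A": pair[0], "B": pair[1]}
    transcripts = {
        label: [(q, reply_fn(q)) for q in questions]
        for label, (_, reply_fn) in labelled.items()
    }
    guess = judge(transcripts)  # the judge names the label it believes is the machine
    return labelled[guess][0] != "machine"
```

Everything interesting, of course, hides inside the judge: Lemoine’s claim is essentially that, faced with LaMDA’s transcripts, he could no longer tell.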

The Turing test may have inspired the Voight-Kampff test used in the film Blade Runner (and in the book on which it is based, Philip K. Dick’s Do Androids Dream of Electric Sheep?) to determine whether a suspect is a human or a dangerous replicant.

In science fiction, artificial intelligence is often portrayed as a threat to humanity. In the Terminator franchise, the Skynet defence system turns on its human masters and attempts to wipe them out by instigating nuclear war. Likewise, in The Matrix, humans and machines find themselves unable to live together, and the machines end up enslaving humanity in a vast virtual reality world.

Of course, the granddaddy of them all is Arthur C. Clarke’s HAL 9000 from 2001: A Space Odyssey. Faced with a contradiction in its programming, it decides to dispose of the crew of its expedition in order to safeguard the objectives of the mission. HAL isn’t malicious; it’s simply trying to resolve a paradox, and its human designers forgot to include safeguards to prevent it from harming humans.

Isaac Asimov invented the Three Laws of Robotics to prevent artificial intelligences from causing trouble. These were encoded into each artificial brain and ran as follows:

First Law – A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law – A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

Third Law – A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Of course, they are not foolproof or applicable in every situation, and there is room for a variety of interpretations. For example, the second part of the First Law could be read as meaning that a robot must not allow a human being to drink alcohol or engage in any behaviour that carries a risk of injury, such as playing football or crossing the street.
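To see why interpretation matters, it helps to imagine the laws written down as an ordered list of checks. The sketch below is only a toy illustration, with made-up fields standing in for judgements a real robot could never make so cleanly; the hard part is precisely that “harm” and “inaction” resist this kind of tidy encoding.

```python
from dataclasses import dataclass

# Toy illustration of Asimov's Three Laws as priority-ordered checks.
# Every field is a hypothetical simplification: deciding what actually
# counts as harm, or as culpable inaction, is the real (unsolved) problem.

@dataclass
class Action:
    harms_human: bool          # would carrying this out injure a human?
    permits_human_harm: bool   # would it, through inaction, allow a human to come to harm?
    ordered_by_human: bool     # was it commanded by a human?
    endangers_robot: bool      # does it put the robot itself at risk?

def permitted(action: Action) -> bool:
    # First Law: overrides everything else.
    if action.harms_human or action.permits_human_harm:
        return False
    # Second Law: obey humans, provided the First Law is satisfied.
    if action.ordered_by_human:
        return True
    # Third Law: otherwise, the robot should not endanger itself.
    return not action.endangers_robot
```

Feed the strict reading of the First Law into a checker like this and almost nothing a human enjoys would survive it, which is exactly the interpretive trap described above.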

But war with the machines is only one of the risks associated with the development of artificial intelligence. The other is the threat of a runaway technological singularity, in which one computer designs a computer smarter than itself, which in turn designs another computer smarter than itself, and so on, until they reach levels of speed and intelligence that we can’t even begin to comprehend. They could experience generations of thought and growth in the time it takes us to utter a sentence. To such beings we would be slow, dull, unimportant creatures, no more relevant to their concerns than trees are to ours.

But let’s put aside the pessimism for a moment and imagine a society in which humans and artificial intelligences could live in cooperation. If hugely intelligent machines were able to apply their intelligence to managing the economy, designing engineering projects, running supply chains, and even tackling the challenges of climate change and global politics, what could they (and we) accomplish?

Gareth L. Powell writes science fiction about extraordinary characters grappling with the question of what it means to be human. He has won and been shortlisted for several major awards, including the BSFA, Locus, British Fantasy and Seiun, and his Embers of War novels are being adapted for television.
