Amazon Prime’s Darkest Sci-Fi Reveals Controversial Robot Debate
The internet is abuzz with talk of whether artificial intelligence can truly become sentient after a Google chatbot claimed to be just that.
But while the internet may be freaking out about AI sentience right now, Hollywood has been obsessed with the subject for decades, specifically exploring the idea through robot-human clashes in films like I, Robot and The Terminator franchise.
Perhaps the most recent iteration of this contentious AI debate appears in Archive, a 2020 sci-fi film streaming on Amazon Prime. The film delves into a range of sci-fi topics, including uploading your consciousness into a digital afterlife, but its central premise teases one particular question: can we create robots that are truly “equivalent” to human beings?
Inverse sat down with two experts in the field of robotics and artificial intelligence to unpack the heavy science at the heart of this impossibly dark sci-fi film and ask whether it misrepresents AI in its quest for human equivalence.
“There are many different definitions of human-level AI,” Baobao Zhang, an assistant professor of political science at Syracuse University who studies AI governance, tells Inverse.
Reel Science is an Inverse series that reveals the real (and fake) science behind your favorite movies and shows.
Will AI become “equivalent” to humans?
The protagonist of Archive, a lone scientist named George Almore, has built three different android prototypes – each more advanced than the last – in an effort to create a truly “human-equivalent” AI. An android is a robot that looks like a human being. He tests his prototypes’ ability to display inherently human qualities, such as empathy, through a video game that involves playing with a puppy.
“You are all an attempt at the same thing. Multi-level learning. Artificial intelligence. The human equivalent of the Holy Grail,” George tells his third and most advanced prototype, J3.
But if you ask an AI researcher if humans can build an “equivalent” robot – or perhaps one that surpasses humans – they’ll give you a more complicated answer than the movie offers.
“I think it’s important to break down what ‘AI outperforms humans’ means. We need to distinguish between AI outperforming humans in specific tasks and AI outperforming humans in all or almost all tasks,” says Zhang.
For example, Zhang’s research focuses on defining what she calls “human-level” AI in relation to work.
The trailer for the 2020 sci-fi movie Archive.
“In our study, we defined ‘human-level machine intelligence’ as when machines can perform 90% or more of tasks better than the median human worker paid to perform that task,” says Zhang.
Sven Nyholm, an assistant professor of philosophy at Utrecht University and author of the book Humans and Robots: Ethics, Agency, and Anthropomorphism, echoes Zhang.
“Well, it would be nice to know the answer to this question: equivalent in what?” asks Nyholm.
Nyholm says it’s plausible that humans could develop an AI that acts similarly to, or mimics, human behavior in a “restricted set of situations.” But a robot that performs at a level equivalent to humans in all situations seems much less realistic.
And if we’re talking about an AI that matches not just the cognitive intelligence of humans but our full emotional range and capacity for empathy, that’s even more unlikely, despite what Archive suggests.
“If we have AI technology that lacks an animal- or human-like brain or nervous system, it’s hard to see how it could experience feelings or have affective states similar to ours or those of animals,” says Nyholm.
But there is another, less obvious definition of “equivalence”: a moral one. Nyholm describes the research of John Danaher, who argues that if a robot behaves in a manner equivalent to a human being, it should have the same moral status as a human being. Nyholm is a little less certain.
“But that’s, of course, a big ‘if’, because creating robots that behave equivalently to how humans behave is very hard to do,” says Nyholm.
Can we really compare AI to humans?
In Archive, George uses a lot of vaguely scientific gibberish to explain how he built his prototypes, throwing in real terms like “deep learning” – an AI training method modeled after how humans learn. But how much of the film’s science is rooted in actual AI research? Not much.
Real-world deep learning research does measure AI performance against human benchmarks, according to Zhang, but that’s about the only thing the movie gets right. It’s in the details that Archive leaves the realm of real AI research and enters science fiction.
The film’s scientist, George, developed three different versions of androids, each more developmentally advanced than the last. The first prototype stopped developing mentally at the age of five and is emotionally monotonous. The second prototype is mentally more developed than the first and expresses basic emotions like jealousy. The third and final prototype is meant to be “equivalent” to a human, containing all of the complexities of being human. The film compares each prototype to brain scans of humans at different ages to show how far it has progressed.
But real-life experts say that’s an overly simplistic comparison between AI and human development.
“The idea that we could map AI development to human development — as in, say, a certain AI system equating to a five-year-old child — is pretty unrealistic,” says Nyholm.
The reason this is unrealistic is that AI systems work very differently from human minds. For example, while an AI can become very good at a specific task, like playing the game Go, it is often very bad at tasks outside its area of expertise.
Zhang agrees. “I think it’s hard to match AI development with human development,” she says.
Do AI researchers care about “equivalence” as much as Hollywood?
For all the attention Hollywood has devoted to films examining robots that have matched or surpassed humans, it’s not really such a big concern for researchers working in this field.
“I wouldn’t say it’s a big concern among AI researchers. The majority of AI researchers are working on much narrower topics,” Nyholm says, though he adds that researchers sometimes draw on science-fiction scenarios to inspire their work.
Researchers are interested in making sure AI doesn’t harm people, but not because they’re worried about robots getting too smart, as Hollywood likes to suggest. One example is racial bias in algorithm-driven facial recognition technology.
“I think it’s really important that the ‘narrow’ AI systems deployed today are safe, fair, and robust,” says Zhang.
Zhang adds that Hollywood movies like Archive tend to focus on storylines that anthropomorphize robots – ascribe human qualities to them – which can lead the general public to misunderstand how AI actually works in our daily lives, in software applications like chatbots or search engines.
“Most AI systems today are not androids or robots; instead, they are embedded in software applications with no physical representation,” says Zhang.
“I think it probably doesn’t quite fit with a lot of real-world AI research. However, it does make for an engaging narrative in a sci-fi story,” Nyholm adds.
Archive is streaming now on Amazon Prime.