Science fiction writers have long predicted a terrifying clash between humans and their own creations, but perhaps no one saw the threat more clearly – or earlier – than the 20th-century writer Isaac Asimov.
Decades after the publication of his short story collection I, Robot, Hollywood adapted the book into an epic movie starring Will Smith. The movie is now streaming on HBO Max, and it's worth revisiting for several reasons, according to Sergio Suarez Jr., founder and CEO of TackleAI, a company that works on real-life implementations of artificial intelligence.
"I think what scares a lot of people is that we're not that far away from that being a reality," Suarez Jr. tells Inverse.
Although I, Robot's suggestions about AI consciousness remain more fiction than fact, the film also raises darker implications about the nature of AI – ones unfolding in real time today. Let's take a closer look.
Reel Science is an Inverse series that reveals the real (and fake) science behind your favorite movies and series.
Can AI challenge humans?
The central premise of I, Robot rests on Asimov's "Three Laws of Robotics," rules that all robots must obey by virtue of their programming:
- A robot cannot injure a human being or, through inaction, allow a human being to be injured.
- A robot must obey orders given to it by human beings, unless those orders conflict with the First Law.
- A robot must protect its own existence as long as that protection does not conflict with the First or Second Law.
In the real world, it is now possible for programmers to code similar “universal truths” for AI to follow. However, this mechanism could cause robots to disobey real-world laws to follow their universal truth – similar to a starving human who will steal food to survive.
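The priority ordering described above – each law yields only to the laws before it – can be sketched as a simple veto filter on proposed actions. This is a toy illustration, not any real robotics system; the `Action` fields and action names are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False       # would this action injure a human?
    ordered_by_human: bool = False  # was this action commanded by a human?
    endangers_self: bool = False    # does it risk the robot's existence?

def permitted(action: Action) -> bool:
    """Apply Asimov-style directives in strict priority order."""
    if action.harms_human:
        return False                # First Law: never injure a human
    if action.ordered_by_human:
        return True                 # Second Law: obey, even at risk to itself
    return not action.endangers_self  # Third Law: otherwise, self-preserve
```

Note how the ordering already creates the tension the article describes: an ordered rescue that endangers the robot is permitted, because obedience outranks self-preservation.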
"If its directive is to continue to exist or to continue helping the human, then the only logical thing is to defy laws of any kind so that it can keep helping," Suarez Jr. says.
In the film, Detective Del Spooner (Will Smith) realizes that a robot called Sonny is capable of disobeying these directives. Spooner is determined to blame Sonny for the murder of U.S. Robotics head Alfred Lanning.
The premise of I, Robot was prescient in many ways. In real life, AI is perfectly capable of defying human orders – and that worries Suarez Jr.
“Even if you give an AI a very black and white directive, it will be able to navigate it, mostly by interpreting it differently,” he says. “It’s already happening.”
So your Roomba can be programmed to clean your hardwood floors in a straight line, but if an object blocks its path, the cleaning robot can drive around the obstruction rather than simply stopping.
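The Roomba behavior described above boils down to a simple sense-act loop. The sketch below is an assumed simplification for illustration, not iRobot's actual firmware logic; the function names and the two maneuvers are inventions of the example.

```python
def next_move(bump_sensor_hit: bool) -> str:
    """Return the robot's next maneuver given one bump-sensor reading."""
    if bump_sensor_hit:
        return "turn"      # obstruction detected: detour around it
    return "forward"       # path clear: keep cleaning in a straight line

def plan(sensor_readings):
    """Map a stream of bump-sensor readings to a sequence of maneuvers."""
    return [next_move(hit) for hit in sensor_readings]
```

Even this trivial loop shows the article's point: "clean in a straight line" gets reinterpreted the moment the environment intervenes.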
"I think what this movie is getting at…is we don't really know why," Suarez Jr. said. "We don't understand why [AI] makes some decisions."
After all, a bot doesn't leave you a journal entry explaining why it makes certain decisions – it just makes them. As neural networks (the systems used to train AI, loosely modeled on how the human brain works) become more sophisticated, many developers no longer fully understand the AI they are programming, leaving little hope for the rest of us.
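The opacity described above can be seen even in a toy two-neuron "network." In the sketch below – a hand-rolled illustration with arbitrarily chosen weights, not a real trained model – the decision is just arithmetic over learned numbers, with no human-readable rationale attached to any of them.

```python
import math

def sigmoid(x: float) -> float:
    """Squash a value into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# "Learned" parameters are opaque constants; nothing here says *why* they work.
W_HIDDEN = [[0.9, -1.2], [-0.4, 0.7]]
W_OUT = [1.5, -0.8]

def decide(inputs):
    """Run inputs through one hidden layer and return a yes/no decision."""
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in W_HIDDEN]
    score = sigmoid(sum(w * h for w, h in zip(W_OUT, hidden)))
    return "act" if score > 0.5 else "wait"
```

Scale these few numbers up to billions of parameters and you have the auditing problem Suarez Jr. describes: the output is reproducible, but the "reason" for it is buried in the weights.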
"I would say 99.9% of developers in the AI industry don't understand how neural networks are put together or how they work," says Suarez Jr.
Will there be robot-human conflicts?
The film’s emotional conflict hinges on the mistrust between Spooner and Sonny, the robot suspected of murdering Alfred Lanning.
Spooner doesn't trust Sonny because he hates robots. Years earlier, Spooner was involved in an accident and begged a robot to rescue a young girl instead of him, but the robot chose Spooner, calculating that he had a better chance of survival than the girl.
The conflict isn’t necessarily about Spooner being right and the robot being wrong, but about a difference in perspective. Many humans would likely use similar logic to justify saving a grown man rather than a child.
“What percentage of humans would choose – if they had this information and if they knew the statistics – I wonder how many humans would still have chosen the child?” wonders Suarez Jr.
This kind of robot-human conflict propels the film forward, but it also plays out in real life because of differences in perspective among humans themselves. Humans are the ones programming the AI, so robots incorporate the perspective of whoever created them, leading to potential conflict when the AI does something its creator didn't want or anticipate.
“Each AI is going to have their own unique views and perspective based on the information they’ve been given,” Suarez Jr. says.
In an age when misinformation abounds, it is more important than ever to carefully guard the information that humans feed to AI in order to prevent future conflicts. Suarez Jr. compares it to a parent preventing their children from watching violent movies.
“It’s not so much about AI, but about curating the information that we give it,” he says.
With careful curation, AI can be an invaluable tool in helping humans by automating tedious tasks — like sorting through piles of spreadsheets — rather than harming them.
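As a concrete flavor of the tedious automation mentioned above, here is a hedged example of the spreadsheet-wrangling kind of task AI-adjacent tooling often handles. The file contents and the "amount" column are assumptions made for illustration.

```python
import csv
import io

def sort_rows(csv_texts, key="amount"):
    """Combine several CSV documents and sort all rows by a numeric column."""
    rows = []
    for text in csv_texts:
        # DictReader turns each CSV row into a {column: value} mapping
        rows.extend(csv.DictReader(io.StringIO(text)))
    return sorted(rows, key=lambda r: float(r[key]))
```

A task like merging and ranking hundreds of such files is exactly the sort of drudgery that automation absorbs so humans don't have to.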
Is AI awareness a real possibility?
By the end of the film, the robots outgrow their programming, become independently sentient, and turn against their human overlords. While several aspects of I, Robot trouble Suarez Jr., he isn't especially concerned about AI becoming sentient in the near future.
Why not? For one thing, it's hard to define what constitutes consciousness or sentience – that imprecise self-awareness that separates human beings and some animals, like primates, from all other living organisms.
"How do we know we are alive? How do we know we count?" asks Suarez Jr. "We don't know why we know. It's one of those things that you can't really describe."
Anyone who has spent time dealing with an automated customer service chatbot knows that AI can mimic humans quite well, giving the illusion of consciousness, but it can’t necessarily generate independent thoughts or deviate beyond certain types of responses.
But AI doesn’t need to become conscious to seriously harm society. Ethicists have sounded the alarm about the dangers of AI, such as biases in facial recognition technology used by law enforcement, which can perpetuate discrimination against people with darker skin.
"I'm incredibly concerned about this," says Suarez Jr. He explains that AI relies on the information it is given to make decisions, so having a diverse group of developers is important to reduce the likelihood of biases finding their way into the AI – and to limit their potential harm to society.
"You have to make sure that the people who curate this information are not just one specific group of people, but as diverse as possible," says Suarez Jr. "That will be the key to AI."
I, Robot is now streaming on HBO Max.