From tin-can robots to sophisticated, sentient virtual environments, artificial intelligence (AI) is a dominant theme in science fiction. With real-world advances in machine learning and deep learning, the gap between fact and fiction is narrowing.
Real AI isn’t all about robots and self-aware computers. From Siri, search engines and motion-sensing video games to medical imaging and diagnostics, artificial intelligence is an increasingly significant part of our lives. Cray systems are used every day to solve artificial intelligence problems through machine learning and deep learning approaches.
In this three-part blog series, we’ll look at a few examples of AI in sci-fi and see how they match up with reality. First up: robots.
The Robot Revolution
Robots — especially humanoid robots — are often the first thing that comes to mind when we think about artificial intelligence.
In 1927, the first big-screen robot appeared in the German film Metropolis, set in the year 2026. This “Maschinenmensch” was a female robot created in the likeness of a human, and she helped usher in the age of the evil robot.
Addressing the fear of a robot takeover, Isaac Asimov introduced his Three Laws of Robotics in the 1940s:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Asimov’s laws were influential in science fiction, and he later extended them with a “zeroth” law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
Some fictional robots were designed to be more cuddly than scary. In the 1970s, Star Wars introduced the lovable C-3PO and his quirky sidekick, R2-D2, who stole the hearts of millions of human moviegoers. Similarly, the 2008 movie WALL-E starred a cute little garbage-collecting robot, a sympathetic character who falls in love and embarks on a great adventure. We like WALL-E because his emotions are so relatably human.
Fan favorite Data from Star Trek: The Next Generation looked and acted nearly human — more so as the series went on — and, just as Pinocchio wanted to be a real boy, Data wanted to be human.
In the 21st-century reboot of Battlestar Galactica, the Cylons were indistinguishable from our own race. The fact that they looked and acted just like us made them more frightening than they would have been as purely mechanical antagonists.
And the movie Blade Runner, based on Philip K. Dick’s 1968 sci-fi classic Do Androids Dream of Electric Sheep?, follows a bounty hunter hired to track down rogue robots who look like humans. The only way he can tell they’re robotic is by administering an empathy test, reminiscent of the Turing test, the well-known benchmark for judging whether a machine’s behavior can pass as human.
In the world of real science, the terms “artificial intelligence” and “machine learning” first appeared in the 1950s. So did the first AI program and artificial neural network. A mobile, reasoning robot followed in 1966, described by a Stanford Research Institute scientist as “the first electronic person.” The robot, called “Shakey,” could look around and maneuver himself on wheels. He “may not seem like much,” according to a 1970 Life magazine article. “No death-ray eyes, no secret transistorized lust for nubile lab technicians. But in fact he is a historic achievement.”
Many of today’s robots don’t look like the stuff of science fiction. They may be factory robots composed of complex circuitry and a mechanical arm, or a device like the Amazon Echo, which gives users easy access to information and communications tools and control over connected home devices. Other designs are creepily realistic in resembling the human form, like the “world’s first android newscaster” created in Japan.
But robotic technology hasn’t yet produced a realistic replica of human function. Machines have been successful at finding hidden patterns in data and beating humans at games like chess, Go and Jeopardy. They can score well on intelligence tests and perform high-level reasoning — but, as Moravec’s paradox observes, they struggle with the sensorimotor skills humans find effortless, like perception, attention, visualization, coordinated movement and social interaction.
Robots don’t experience emotion like Data and WALL-E — and they don’t yearn to be human.
Stay tuned for Part 2 in the series, Beyond Robots: Advanced AI in Sci-Fi.