Fifteen minutes into the series premiere of Almost Human, a robot gets casually pushed out of a moving car. He, or maybe it’s an it, despite the fact that the MX model of police bot has a distinctly male face and male voice, tumbles onto the highway and is immediately pulverized by other vehicles. The show’s protagonist, Detective John Kennex (played by Karl Urban), doesn’t register anything. Not the dark pleasure of someone who’s just destroyed an uppity printer, or the momentary panic that might follow destroying millions of dollars (one can only assume) worth of property, while also endangering the driving public. He pulls the passenger-side door shut, and goes about his grizzled business.
As narrative beats go, this brief bit of robocide gets a lot done. It establishes that Kennex does not like androids, who, during the time he’s spent in a coma, have become mandatory partners for all human cops. It also cues up his next partner, a mothballed model whose entire line was pulled from service because of its aggravating tendency to experience feelings, and, correspondingly, suffer the occasional emotional breakdown. The DRN, pronounced Dorian (played by Michael Ealy), is the more human of the two police officers, a big robotic softie who has to remind his fleshier counterpart to open up. Dorian has to endure neo-racist quips and dismissals—“Synthetic off,” Kennex commands during an early interaction, inspiring the same reaction in Dorian that any human might have—as well as the strangely casual threat of being the second robot partner in a row to be sent hurtling down the interstate. The intention is obvious. We’re meant to root for Kennex, but connect with the plight of Dorian. And nowhere within the first or second episodes of Almost Human is there a hint of a robot uprising to come. Technology in 2048, we’re told, is unregulated and out of control. Sentient machines are part of the solution, not the problem.
For mainstream science fiction, this is an interesting departure. It addresses, in its oafish, Hollywood way, the growing global discussion of robot ethics and robot rights, a largely preemptive attempt to lay the groundwork for the culpability of machines and their makers. That Almost Human intersects those issues might not be coincidental. The show’s executive producer, J.J. Abrams, became part of the first batch of Director’s Fellows at MIT’s Media Lab earlier this year, where he chatted with researcher Kate Darling about the then-in-development series. Darling, who co-taught a class on robot rights with Larry Lessig at Harvard Law School, has become increasingly drawn into topics related to robot law and ethics. That informal meeting led to a one-hour conference call with the show’s writers and series creator J.H. Wyman, where they dug into her research about how humans and robots currently interact, and might in the future.
“They asked me how society is going to perceive robots in the future,” says Darling. “I told them, That’s kind of up to you. These TV shows and movies tend to shape popular perception of robots more than anything else.” Fictional stories casting robots as enemies of our species are then echoed by robotics articles in the popular press. “The way this show is made is going to shape the debate over this technology.”
There are familiar sci-fi tropes in Almost Human, such as the noble robotic exception, whose emotional intelligence evokes empathy in the viewer, even while we dismiss the setting’s other advanced automatons as soulless hardware, or vessels for malice. It’s the role filled by the T-800 in Terminator 2, with its learning mode and heroic sacrifice. Or Roy Batty in Blade Runner, showing mercy in the rain. Even the lamentable 1992 TV show Mann & Machine featured an android cop coming to grips with her feelings.
That fictional roboticists could so effortlessly create machines capable of real emotions—an accomplishment almost incomprehensible in its complexity, the artificial intelligence field’s equivalent of building a teleportation chamber—is standard-issue Hollywood hand-waving. But where Almost Human diverges from the norm is in showing a society that takes for granted the integration of robots. If there were debates about arming fully autonomous police bots, or letting sexbots sell their wares, they appear to have been settled by 2048. Advanced weapons and unsettling biotech might be running amok, but robots are fully under control.
As Darling points out, it isn’t J.J. Abrams’ or J.H. Wyman’s responsibility to detail the legal battles and societal hurdles that will stand in the way of humans putting assault rifles in the hands of robots, and shrugging off the occasional gunning down of a bystander. “Sci-fi nerds, who are really into this stuff, will probably be disappointed in this show, and think it needs to look at those gray areas,” says Darling (a self-described sci-fi nerd, and fan of the show). “In terms of changing public perception, though, the first step is to simply raise this issue, that we might actually accept robots as life-like creatures in society.”
What’s most compelling about Almost Human, at least in terms of its potential impact on human-robot interaction, isn’t its exploration of what it feels like to be a feeling robot. The nuances and ramifications of a technological capability that’s indistinguishable from magic can make for great stories, and poor scientific speculation. What’s relevant is how we feel about robots.
In that regard, the show doesn’t mesh with Darling’s own research, much of which focuses on how humans sympathize with the unfeeling machines of our own era, robots that don’t even simulate emotions, much less experience them. In her work, which includes studying the online reactions posted on a video of the Pleo robotic dinosaur toy being beaten, strangled, and otherwise abused, and the sense of loss that military personnel feel when their bomb-disposal bots are irreparably damaged, the evidence is consistent. “We’re going to empathize with those things. Even when they are not designed to elicit emotional responses from humans, we bond with them,” says Darling.
Robotic partners, for example, will become something like a buddy, or at least a pet. Kennex may not have developed an attachment to the MX he pushed into traffic—it was their first day together—but the curt, master-and-slave tone that other officers seem to use with their partners seems unrealistic. Dorian may be a born charmer, but the faceless, voiceless explosive-ordnance-disposal robots that receive ad hoc military funerals in Afghanistan aren’t exactly the life of the party. Surly as they are, the MXs would grow on their human counterparts. “I don’t even know how you could design robots to minimize the empathy we eventually feel for them,” says Darling.
It might not matter, though, what the show’s humans think of their bots. Fiction is for our benefit, not its characters’. If one of the goals of Almost Human is to change our perception of robots, the end of the second episode was a solid start. An illegally manufactured sexbot is slated for deactivation. Dorian, being Dorian, requests to be there. She is not a very bright bot, and it’s unclear whether she has rudimentary emotions, or is merely programmed to create bonds through familiarity, to better service clients. So it’s unclear whether she understands why she’s in this white, antiseptic room, her back against an upright stretcher-like platform, with a technician milling about behind her. Maybe she’s afraid. Maybe not.
“Where am I going?” she asks, smiling a little.
“To a better place.”
“Will you be there?” she asks.
He pauses. “I will remember you.”
She dies.
Robophobia is pervasive, deeply ingrained, and often pretty fun. But if you watch that scene, and it does nothing, brace yourself: You might not be human, either.