Last month, a robot car hit a human-driven vehicle for what appears to be the first time. The human occupying the robot car, a Google employee, trusted the machine’s judgment when it changed lanes. There may be no exact precedent for a Google driverless-car fender-bender, but that doesn’t mean there’s no precedent at all. To find it, we’ll need to go back to 1950, to robots as functional metaphors, and to the existing history of robot law.
Here’s the case, from University of Washington assistant law professor Ryan Calo’s new study, “Robots in American Law”:
While the 1950s may have imagined driverless cars, they didn’t need that imagination for robots to work as a metaphor. A teenager giving his date step-by-step driving instructions was enough for a jury to treat the teenager as the car’s operator, rather than the person physically behind the wheel (it probably didn’t help that she was a woman, at a time when gender inequality was even more profound than it is today).
Calo, who recently co-edited the book Robot Law, says he wrote this latest paper not just to anticipate the future of robot law, but to uncover where in the past the law has specifically addressed robots. Defined broadly to include factory machines, America has had robots for around 60 years. (“Robot” as a term comes from the 1921 Czech play R.U.R., about artificial workers who revolt against humanity.) Just 30 years after the play’s publication, American juries were familiar enough with the idea of automatons and robots to decide a traffic case on it.
Frye v. Baskin is “the first example of a court saying that a person is a robot of another person, and therefore they’re not in trouble,” Calo told Popular Science, “and that’s happened many, many, many times since.”
“Another case involving an actual robot was an early autopilot case, where a pilot for the Navy was flying a plane on autopilot, an old mechanical autopilot, and ran into a couple in a small plane,” Calo said, describing the 1949 case Brouse v. United States. “The question was whether or not the fact that it was on autopilot absolves him of responsibility. And the court says no, you have a responsibility as the human in the loop in order to monitor the system. That precedent has come to be invoked again and again, all the way into the 1990s. This technology from the 1950s is dictating how people think about a human in the loop in the ’90s.”
So we’re entering the 21st century with a court system that already understands robots better than we might think. Yet there’s a major gap in that knowledge.
“What they don’t grapple with is ‘emergent behavior,’” Calo said, “like the Twitterbot that threatened to kill a fashion show, or like Google bots labeling a picture of African Americans as gorillas, or a stock market flash crash. Courts are not yet dealing with robots that are capable of emergent behavior. But they’ve grappled with every other thing.”