4 Reasons America's Laws Governing Robots Are Terrifyingly Outdated

Robots are evolving faster than the laws that rule their existence.

by Sarah Sloat

The “Three Laws of Robotics” that Isaac Asimov dreamt up for his Robot series remain an entirely fictional concept. In the real world, which is now full of robots, there are very few statutes governing the behavior of automatons. Right now, the person doing the most to raise alarm over this potentially troublesome hole in our legal system is Ryan Calo, a law professor at the University of Washington. Calo recently authored the paper “Robots in American Law,” which details six decades of confused jurisprudence. Because robots blur the line between person and instrument, Calo asserts, they fall neatly into the cracks of our current legal system.

“The challenges robots pose will only become more acute in light of the explosive growth of the robotics industry over the next decade,” writes Calo. “We are in the midst of a robotics revolution.”

While the hypothetical legal situations involving robots may be as endless and unknowable as their potential uses, we can already point to specific issues where legal clarification is needed. These are the known unknowns of robotics law, and they are potentially disastrous. The unknown unknowns, well, they could be worse. Who knows?

A 1988 Samsung advertisement

The Impersonationbot Problem

Humans like humanoid robots for deeply impractical psychological reasons. This leads to a lot of Madame Tussauds-style uncanny valley nonsense, some of which may be actionable. But before anyone can debate whether a robot is actually stealing someone’s identity, there has to be an agreed-upon definition of what a robot is. Calo believes that to officially be considered a robot, a machine has to meet these criteria (a rough sense-process-act loop, sketched in code after the list):

  • A robot can sense its environment.
  • A robot has the capacity to process the information it senses.
  • A robot is organized to act directly upon its environment.
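
To make those three criteria concrete, here is a minimal sketch of the sense-process-act loop they describe. This is purely illustrative: the class and method names (Sensor, Actuator, SimpleRobot, and so on) are hypothetical and do not come from Calo’s paper; any machine structured roughly this way would plausibly count as a robot under his definition.

```python
# Illustrative sketch of Calo's three-part definition of a robot.
# All names here are hypothetical, chosen only to mark the criteria.
from dataclasses import dataclass
import random


@dataclass
class Reading:
    obstacle_ahead: bool


class Sensor:
    """Criterion 1: the machine can sense its environment."""

    def read(self) -> Reading:
        # Stand-in for a real sensor: randomly "detects" an obstacle.
        return Reading(obstacle_ahead=random.random() < 0.3)


class Actuator:
    """Criterion 3: the machine is organized to act directly on its environment."""

    def perform(self, action: str) -> None:
        print(f"performing: {action}")


class SimpleRobot:
    def __init__(self, sensor: Sensor, actuator: Actuator):
        self.sensor = sensor
        self.actuator = actuator

    def decide(self, reading: Reading) -> str:
        """Criterion 2: the machine processes the information it senses."""
        return "stop" if reading.obstacle_ahead else "advance"

    def step(self) -> None:
        reading = self.sensor.read()   # sense
        action = self.decide(reading)  # process
        self.actuator.perform(action)  # act


if __name__ == "__main__":
    robot = SimpleRobot(Sensor(), Actuator())
    for _ in range(3):
        robot.step()
```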

With that in mind, maybe one of the most famous legal cases involving a robot is White v. Samsung: the suit Wheel of Fortune’s Vanna White brought against the South Korean technology company for running an ad featuring a female-shaped robot. White believed the robot, with its blonde wig and hostess-like presentation, violated her right of publicity and falsely implied she endorsed Samsung products. She lost the first round in court but won on appeal, where the Ninth Circuit ruled that Samsung had violated her right of publicity by appropriating her likeness.

But this hasn’t been the final word in all such cases; Calo writes that the courts are still struggling with whether “a robot version of a person can be said to represent that person in the way the law cares about.” A prime example is the idea that impersonation, like performance, hinges on the ability to be purposeful and spontaneous. In 1989, a Maryland special appeals court ruled that Chuck E. Cheese’s animatronic puppets were not legal performers, meaning the restaurant didn’t owe an extra tax, because the robots’ shows didn’t include the possibility of “spontaneous human flaw.”

But, in the future, robots will be spontaneous in some sense. Robots can already “learn” tasks by watching demonstrations or through trial and error, and they can adapt their likeness to suit the task at hand.

The Treasurebot Issue

Boston Dynamics messes with its robot.

According to Calo, courts are still figuring out how robots can legally be considered extensions of people. These situations primarily come up in moments straight out of a spy thriller: shipwreck salvage and burglary. In the late 1980s, for example, a court decided that the unmanned robotic submersible sent to the depths by the Columbus-America Discovery Group could establish legal possession of the S.S. Central America, which sank carrying heaps of gold in 1857. Until then, the rule was that to take possession of a wreck, human divers had to reach it. The case opened the door to a whole world of robotic underwater exploration, enabling treasure hunters to go deeper than ever before.

With burglary, there is less consensus. The increasing presence of robots makes it difficult to determine intent: in some cases, a robot could be sent in by a thief to steal, and in others, a robot already in place could be compromised by the thief. This is the sort of hypothetical that trips the legal system up.

“Imagine a thief were to take control of a robot already in the home and use it to drop an expensive item — car keys or jewelry — out of the mailbox,” writes Calo. “This activity would violate federal laws against hacking. But is it burglary? The robot had permission to enter the facility; the owner placed it there.”

The Deathbot Dilemma

Robots sometimes kill by accident.

Robots consistently hurt people: Legal cases concerning harm caused by robots can be traced back to 1948, and today the Occupational Safety and Health Administration says that about two people die at the hands of a robot in a United States factory every year. That’s not a whole lot, but it’s not zero, and humans must figure out whether robots need to face legal ramifications.

“Someone will have to decide whether manufacturers of robots that are increasingly designed to run third-party code will be liable when that code leads to physical harm,” writes Calo. “The prospect that robots will behave in ways that are not foreseeable to the designer or user is probably closer than many legal scholars admit.”

Figuring out whether a robot can be held responsible will be particularly difficult as more and more robots are programmed to learn things on their own. As robots gain autonomy in movement and action, the legal system is going to have to adapt: It can be assumed that a Roomba can’t be blamed for causing someone to trip, but it’s increasingly obvious that sophisticated robots capable of choosing their own actions are just around the corner. For now, the legal system is essentially baffled by these machines, which is why Calo proposes that the government create something like a Federal Robotics Commission to deal with self-driving cars once they decide to go rogue.

The Toybot Trouble

Bender does not respect the laws of the land.

Tariffs aren’t the sexiest of subjects, but they’re pretty important if you’re a business trying to make a profit abroad. Historically, “animate” objects have been taxed about 20 percent less than “inanimate” objects. Here’s where it gets weird: Dolls have traditionally been considered animate because they represent animate life. But an X-Men figurine doesn’t get that break because, as courts have decided, in the hypothetical “life” of the toy, once it became a mutant it was no longer human.

Robots have consistently been considered inanimate, but that definition will be (or maybe already is) outdated. Can robotic limbs, with their resemblance to human flesh, be considered animate? What about robots and supercomputers that run on biotech built from cell proteins and neurons? If the definition of “animate” is resembling biological life, then robots are there. The implications go beyond tariff fees: being considered animate could directly affect other legal issues, like the capacity to be liable for harm.

There are few solutions here so far, but researchers from the University of Nevada have an idea: Treat robots like domestic animals. Alive yet not morally culpable, a robot would be a lot like a dog that can’t be put in jail for biting the neighbor.