How to Get Lost in Augmented Reality
There are no laws against projecting misinformation. That's good news for pranksters, criminals, and advertisers.
For several weeks this summer, the world was peppered with Pokémon and possibility. People rushed out, phones in hand, to chase Charizards and explore neighborhoods both old and new. For most of July and part of August, it felt like the augmented reality future had finally arrived. To a certain degree, it had. But Pokémon Go was and is a basic program. Though it did start a conversation about how A.R. works across race and class, it did not force early adopters to confront how manipulation and consent work in mixed reality.
The next A.R. phenomenon likely will.
Augmented reality offers designers and engineers new tools and artists a new palette, but there’s a dark side to reality-plus. Because A.R. technologies will eventually allow individuals to add flourishes to the environments of others, they will also facilitate a new type of misinformation and unwanted interaction. There will be advertising (there is always advertising) and there will also be lies perpetrated with optical trickery.
Two computer scientists-turned-ethicists are seriously considering the problematic ramifications of a technology that allows for real-world pop-ups: Keith Miller at the University of Missouri-St. Louis and Bo Brinkman at Miami University in Ohio. Both men are dismissive of Pokémon Go because smartphones are actually behind the times when it comes to A.R. The cutting-edge stuff is mostly used in industrial settings because of the expense (a representative from NGRAIN declined to comment on prices, but said that their technology was frequently used for inspections of Lockheed Martin’s F-35s and F-22s) and functionality. A.R. provides very practical assistance when dealing with complex systems. Gaming applications aside, that’s the role it’s likely to occupy in most people’s lives — for better or worse.
“A very important question is who controls these augmentations,” Miller says. “It’s a huge responsibility to take over someone’s world — you could manipulate people. You could nudge them.”
For now, this issue has remained largely undiscussed for the simple reason that the market isn’t saturated. Google Glass bombed and nothing has yet stepped into that space. But Miller says it won’t take long — maybe a few years — for A.R. to become common, if not ubiquitous. Beyond construction work, augmented reality is being tossed around as a potential alternative to exposure therapy for patients, a way for doctors to practice surgical maneuvers before doing a procedure, and a tool for consumers to make better decisions. Will a headset be required in a decade? Not if Magic Leap has anything to say about it.
The problem with free A.R. and cheap A.R. and common A.R. is that any technology designed to help us understand our world can also be used to do the opposite.
One of the most immediate problems both Brinkman and Miller see in augmented reality’s future is that it empowers people to blur the lines between their fantasies and the realities of others. As researchers Batya Friedman and Peter Kahn put it in their paper, New Directions: A Value-Sensitive Design Approach to Augmented Reality:
At times, augmented reality attempts to create a system such that the user cannot tell the difference between the real world and the augmentation of it. Yet, when all is said and done, and the technology is turned off, many users will want to know what was “real” and what was “augmented computation.” Was the TV news reporter really standing in front of gunfire in Bosnia? Or was the news reporter in a quiet studio with an augmented backdrop?
Which means that our world and our perception of that world could become different things (more than they already are). There’s an incentive for corporations and groups alike to misinform and deceive, to augment reality very specifically and very cynically.
“We’ve done experiments looking at how augmented reality can use optical illusions to trick you when you’re evaluating the size of smartphones,” Brinkman says. “Advertising graphics can make the smaller one seem bigger than the other and convince you that the smaller one is better.”
Brinkman’s experiment actually points to the main controversy saddling augmented reality: its ability to deceive users without their informed consent. Miller and Brinkman acknowledge that augmented reality applications usually ask for consent to use the data you send back to companies — location, frequency of use, etc. — but suggest that most companies assume that if you’re using their technology, you’re automatically allowing them to monitor your everyday actions, and to change their algorithms to access more of your daily activity and personal information than you might realize. And for the most part, they’re legally correct: Both Brinkman and Miller said they know of no laws that prohibit a company from using information gathered through augmented reality.
The problem is partially biological. Consider your eyes: They are a delivery system for information, incapable of discerning what is digital from what is real when the two appear very similar. When your retina processes what you see and sends it to your brain via the optic nerve, your brain cannot tell the difference between real and unreal; if the image is seamless enough — and seamlessness is exactly what augmented reality is aiming for — your brain takes it as truth.
We’ve been wrestling with how augmented reality can muddle perception since at least 1996, when researchers first sketched out the future of “mixed reality” and how we would see the real and virtual worlds together. In the 20 years since, neither neuroscience nor the technology itself has advanced far enough for us to gauge the challenges of this brave new world. Studies linking perception to augmented reality are hard to find; most focus instead on whether we perceive augmented reality as inferior because of its poor visual quality. Brinkman and Miller disagree with that emphasis: Instead of the quality of augmented reality, they say, we should focus on how it will affect us. The problem? Augmented reality isn’t integrated enough into our world — outside of Pokémon Go — to be studied with concrete data and evidence. And that’s the reason there are no laws in place.
And that is frightening, given the morally questionable — yet perfectly legal — things you could do with augmented reality. You could tempt a diabetic or heart disease patient to purchase something they shouldn’t. You could take cyberbullying to a whole new level by unleashing a phobia on a victim. You could torture a prisoner in their own environment with electric shocks or waterboarding or what have you, without ever touching the prisoner yet intimidating them to the point of confession. The point is this: You could use augmented reality to harm someone psychologically, perhaps physically, maybe even to the point of death — and you could walk away scot-free.
It’s not just the obvious darker uses of augmented reality that are problematic; the so-called “good” aspects of augmented reality can be twisted towards the dark side, too. Miller mentions augmented reality as a potential therapeutic avenue for pedophiles: a simulated outlet for their impulses that might keep them from acting on them in real life. On the surface, this means pedophiles could contain their urges in private, without harming the children outside their doors, and live a “normal” life without legal consequences. But the other side of the coin is that these people would be abusing children — even imaginary ones — and that’s morally troublesome.
“I don’t know of any policy or anything being done,” Miller said. Brinkman echoed that sentiment. “I think policy tends to be reactive,” he said, noting that Google and Microsoft are leaders in this area but also extremely eager to “avoid something terrible” — a vague way of saying that corporations realize augmented reality could be used for more morbid purposes but are hoping the safeguards built into current applications will suffice. “So far, it’s a conversation that hasn’t been fully had yet, because we don’t know how people will want to use these [augmented reality tools], and we don’t know what the challenges will be.”
Even if we give humans the benefit of the doubt and hope that augmented reality won’t encourage sordid behavior, consent is a real concern. Miller thinks the first step is having informed consent that is explicit and checks in often. “If you’re going to build these augmented realities, people should have control of what that device is doing to them and have the chance to choose how much augmented reality they’d like to take part in,” he said.
Both Miller and Brinkman brought up the very real, extremely complicated privacy risks that make taping over your laptop camera seem like child’s play. Almost all augmented reality systems work by identifying your location and landmarks around you, then sending a visual fingerprint of sorts to a cloud. “It’s possible to do that in a privacy preserving way,” Brinkman says. “But it’s also possible to screw that up.” And if all that information is just chilling in a cloud, hackers have a total treasure trove of information should they be able to figure out where you are, what you’re doing, and how you’re making decisions.
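To make Brinkman’s point concrete, here is a minimal sketch — in Python, with hypothetical names, not any real A.R. vendor’s pipeline — of the “privacy preserving” version: instead of uploading a raw landmark descriptor that reveals where you are, the device uploads only a salted one-way hash, which the cloud can match against its own hashed index without ever learning the raw data.

```python
import hashlib

def fingerprint(landmark_descriptor: bytes, salt: bytes) -> str:
    """Return a salted one-way hash of a visual landmark descriptor.

    The cloud service can match this digest against a pre-hashed index
    without receiving the raw descriptor, so a breach of the cloud
    store leaks opaque hashes rather than a user's actual surroundings.
    """
    return hashlib.sha256(salt + landmark_descriptor).hexdigest()

# Client side: hash the descriptor before it ever leaves the device.
salt = b"per-user-secret"                     # hypothetical per-user salt
descriptor = b"edge-histogram-of-storefront"  # hypothetical raw feature data
uploaded = fingerprint(descriptor, salt)

# Only the 64-character digest is transmitted, never the descriptor.
print(uploaded)
```

The “screw that up” version Brinkman warns about is simply the same pipeline without the hashing step: raw descriptors sitting in a cloud database, ready to reconstruct where a user was standing and when.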
Augmented reality has the capability to make our lives much better — more entertaining, more helpful, more interesting, less expected. In the interest of speeding mass adoption of a great new tool, Miller says it’s incumbent on officials and experts to anticipate public concerns instead of waiting to troubleshoot until after the trouble starts, because by then, it’ll be too late.
“I don’t know of anyone doing anything to protect users from the future of augmented reality,” Miller said. “And that’s, frankly, not good.”