A.I. Policy Is Lagging Way Behind in America
At the A.I. Now symposium, techies and policy wonks come together to discuss the incredible potential of A.I. to transform our world for the better — and the obstacles standing in the way.
Ed Felten, the deputy U.S. chief technology officer for the White House Office of Science and Technology Policy, says humans have two major responsibilities when it comes to the development and advancement of artificial intelligence.
The first, he says, is “to make the benefits of A.I. a reality.” The second: “to address the risks of A.I.”
Felten was speaking to a roomful of people at New York University’s Skirball Center for the Performing Arts at A.I. Now — a summer lecture series co-sponsored by the White House that sought to examine and discuss key issues related to the future of A.I. technology.
A.I. is at a crossroads, A.I. Now co-chairs Kate Crawford (a researcher at Microsoft Research) and Meredith Whittaker (the founder and lead of Google Open Research) pointed out. The private and public sectors need to work together to create some sort of feasible A.I. policy. But the problem is that while tech companies are making tremendous strides in the code and architecture that make A.I. such a powerful force, our current policy structures are outdated or, worse, non-existent.
For too long, A.I. has been dismissed as a futuristic concept with no bearing on the present. But A.I. has quietly worked its way into urban policy, sifting through loads of data and delivering services to people at a scale no human workforce could match. Felten cited examples of algorithms using data to link people to affordable housing, or to enforce transparency so that the public has access to valuable information.
That’s not to say that A.I. is perfect; it’s very much not. During the evening’s main panel, Latanya Sweeney, an A.I. researcher at Harvard University, discussed a story she called “The Day My Computer Was Racist.”
A reporter who interviewed Sweeney searched for her name on Google and discovered that it was popping up in advertisements for sites offering to collect and share criminal arrest records. Sweeney had never been arrested, and her name wasn’t even in the website’s database — yet it was splashed prominently across the ad. Like any good scientist, Sweeney turned her personal experience into a study, and found that these ads were significantly more likely to appear for the names of black individuals than for those of white individuals. The reason? Google delivers these ads with an algorithm that gives more weight to whatever people click on when ads appear. When a black name was Googled, more people clicked on these ads; the system learned the pattern and began delivering those ads with greater frequency.
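To see how that feedback loop compounds, here is a minimal toy simulation in Python (a sketch only: the ad names, click rates, and weighting scheme are invented assumptions, not Google's actual ad system). An ad that gets clicked even slightly more often earns more future exposure, which earns it more clicks, until it dominates delivery.

import random

# Toy model of click-driven ad delivery. All names and numbers are
# invented for illustration; this is not Google's actual system.
ads = {"arrest_record_ad": 1.0, "neutral_ad": 1.0}  # delivery weights

def simulate_search():
    # Show an ad in proportion to the current delivery weights.
    shown = random.choices(list(ads), weights=list(ads.values()))[0]
    # Assume a modest click-rate gap: 10% vs. 5% (an assumption).
    click_rate = 0.10 if shown == "arrest_record_ad" else 0.05
    if random.random() < click_rate:
        ads[shown] += 1.0  # clicked ads earn more future exposure

for _ in range(10_000):
    simulate_search()

total = sum(ads.values())
print({name: round(weight / total, 2) for name, weight in ads.items()})
# A small initial gap in clicks compounds into lopsided delivery.

Note that nothing in the loop mentions race; the lopsided outcome emerges entirely from the feedback between user clicks and delivery weights, which is exactly the pattern Sweeney's study documented.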
It’s a valuable lesson that grounds the promise of A.I.: humans ultimately power technology, and racist actions on the part of humans can shape design, algorithms, and, yes, even A.I. itself.
Google could easily detect those biases in its ad services and work to correct them. “They chose not to,” Sweeney argued.
Could a more modern policy framework compel Google to remediate this problem? Perhaps. Nicole Wong, Felten’s predecessor from 2013 to 2014, emphasized that many people — including A.I. researchers themselves — had a real concern about a “growing asymmetry in power” between the people who use big data and the people ultimately affected by it, from the subjects of the data to those affected by the decisions it informs.
These concerns aren’t just confined to the private sector. Roy Austin, deputy assistant to the president at the White House Domestic Policy Council, touched on how A.I. could greatly improve law enforcement — or enable massive overreach and abuses of civilians’ privacy. “The question becomes, ‘What do we do with this data?’” he said. It’s one thing to have the data, but who gets access to it, and for how long? Again, we don’t have policy answers to these questions — and that is troubling as we barrel towards a future more and more controlled by A.I.
A third pillar of concern had less to do with nefarious uses of A.I. and more to do with how A.I. and autonomous systems are displacing humans from jobs. Henry Siu, an economist at the University of British Columbia, discussed “routine” occupations (where employees perform a very specific set of tasks that rarely deviate from a set routine), and how these jobs are the ones most vulnerable to technological disruption. Automation has driven a jarring downsizing in these jobs — and they aren’t coming back.
Sounds like the same old story of industrialization, but it’s not quite. While “this revolution may already be here … [it] may be less exotic than we envisioned,” cautioned David Edelman, special assistant to the president for Economic and Technology Policy. Job loss “won’t happen all at once.” Both he and Siu emphasized that the solution is to create an educational climate where people don’t stop going to school, but instead keep acquiring new skills and specializations that let them adapt alongside technology.
It might be comforting to policymakers to realize that the United States isn’t alone in tackling these issues. But if America intends to continue leading the way for A.I., it’s got to step it up in the policy arena.
Mustafa Suleyman, the cofounder of Google DeepMind, discussed the potential for A.I. to aid healthcare systems by letting doctors rely on machine algorithms to diagnose certain diseases and illnesses — freeing up time for humans to devise treatment methods. For Suleyman, who is British, it didn’t seem farfetched to set up such a system within hospitals. Sweeney pointed out, however, that “in the U.S., you do not have control over your own data” — there are no regulatory measures ensuring that information is not abused. And that’s a huge problem.
“I want everything that I can squeeze out of every success of technology,” said Sweeney. “The problem isn’t the technology side; the problem is that we’re out of pace with public policy.”