Why the Artificial Intelligence Community Doesn't Like Elon Musk

"I've never heard anybody have anything good to say about Elon Musk."

by Joe Carmichael

Elon Musk is a polarizing figure. He has promised that his two companies exist solely to free humanity from its current tethers: Tesla, with its autonomous electric cars, and SpaceX, with its plans for Mars-bound spacecraft. Together, these companies have the potential to release us from our dependence on fossil fuels, virtually eliminate car-accident fatalities, and get us off Earth. Musk’s vision for the future of the species is bright. As long as you don’t let him talk about artificial intelligence.

When he does bring up A.I., people start to don tin foil hats. He’ll invoke Skynet, call it “our greatest existential threat,” and say that, by developing A.I., “we are summoning the demon.” These claims ensure that Musk and A.I. remain in the headlines and that the possibility of an A.I.-wrought apocalypse does not go ignored. But many, if not most, actual A.I. experts aren’t too happy about their de facto poster boy’s tendencies. And that’s putting it nicely.

“You’re putting a red cloth in front of a bull,” said Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, when asked about Musk’s habit. “Elon Musk is irresponsible, disingenuous, and I just can’t understand why he does what he does.” Etzioni, given his stature and role within the A.I. community, knows the insiders well. “I’ve never heard anybody have anything good to say about Elon Musk,” he told Inverse. “They might be out there, I just haven’t heard it. Not a fan.”

That’s not to say that Etzioni thinks anyone who fears malevolent A.I. is irrational. (He’s not without reservations: “I wouldn’t say that I’m scared of anybody — with the possible exception of the NSA,” he said.) In his opinion, malevolent A.I. “is a reasonable thing to think about in the 50-year-plus, 100-year-plus time frame.” He admires Nick Bostrom’s work at the Future of Humanity Institute, for instance, which studies and prepares for potential existential risks — even though he respectfully disagrees with many of Bostrom’s conclusions. “Nick is a philosopher, somebody who thinks on the long-term scale. He’s building a serious research center. I do think that investigating that has merit when we’re thinking in rational terms, and rational timescales.”

But when the investigations are harebrained and speculative, and distract from the actual, pressing issues, Etzioni starts eyeing the matador. “[Musk] invests in A.I., heavily — not just in Tesla, but in other investments that he’s made,” Etzioni explained. “At the same time, he uses literally religious imagery to attack and impugn the field. So, I don’t get it.”

Bostrom, however, thinks Musk is better than other doomsayers, though he does admit that too much negativity is bad for the field. “I admire Elon Musk for not so much his talking, but for doing something in addition to that,” Bostrom told Inverse. “He actually donated some money to do useful research on this. So, I think that distinguishes him from the vast majority of other people in this space, who maybe talk the talk but are not actually doing anything to help.”

Musk is indeed fostering A.I.’s advancement, with Tesla, OpenAI, and personal investments. Teslas drive themselves with an in-house, “narrow” A.I. — an A.I. that’s limited to doing one task really well. If you believe Musk, we need only fear general A.I.s, or superintelligences, which would be capable of reasoning and acting on their own. He’s doing what he can to make sure that no such entity arises — at least not without the necessary protections in place.

Near the end of 2015, Musk (along with Y Combinator President Sam Altman) launched OpenAI, a nonprofit A.I. research company dedicated to advancing A.I. while keeping it safe, beneficial, and benevolent. (OpenAI declined to comment for this story.) That same year, he signed an open letter alongside other powerful names (Etzioni and Bostrom included) pledging support for beneficent A.I. He was also an early investor in DeepMind, acquired by Google in 2014, a stake he said he took just to keep an eye on the technology and monitor its inevitable ascension. (He also mentioned Terminator — again.)

When he elaborates on his fears and steps outside the confines of a 140-character tweet, he seems like less of a rabble-rouser; he just wants to democratize A.I. and make sure that no despot, human or machine, ends up controlling it. “The timeframe is not immediate, but we should be concerned,” he wrote on Reddit in January. “There needs to be a lot more work on AI safety.” A sovereign A.I., or a superintelligence ruled by a powerful few, seems likely to produce a bad future. Musk just wants a good future.

But some believe that Musk is harming the field more than helping it. In Etzioni’s mind, two examples of actual issues are A.I.’s impact on jobs and on privacy. When Musk cites Terminator, even in jest, his six million Twitter followers lose sight of these real, necessary conversations. “With artificial intelligence, we are summoning the demon,” Musk told an audience at MIT in 2014, a line that exasperated Etzioni. “I don’t want to attribute motivation to him,” Etzioni said, “but I do think saying that you’re unleashing the demon is just irresponsible for somebody who’s building self-driving cars.”

In Bostrom’s mind, Musk is doing his best to navigate delicate terrain. “The issue needs to be identified, so that people can see that it exists, and that it’s important to work on it, and so that somebody funds it and talent goes in,” Bostrom explained. “Beyond that, though, I think the alarm is counterproductive. What you don’t want is to alienate the A.I. research community, because they are ultimately the ones who will need to implement whatever safety techniques are developed.” Both sides need to cooperate, and be patient. Those who wish to prepare people for what they see as the world’s inevitable demise “will need to be very sensitive to the desire to avoid provoking public backlashes, or alarmism, or crazies.” Meanwhile, the A.I. community must acknowledge that the demagogues’ fears are not wholly unfounded. “Yes, this is a big technology, and there needs to be some discussion about where it will lead; it doesn’t do to just pretend that there is no possible thing that could go wrong,” Bostrom said.

This discussion is ongoing, and its outcome could alter our collective fate. The A.I. research community tends to be wide open with its advances: two of the biggest players, Facebook and Google, routinely share open-source code with the world, for instance. But this generosity can cut both ways: If any individual or organization can bootstrap a powerful A.I. at no cost, the probability of a good future plummets. Not to mention the government, with its many surveillance and warfare agencies and initiatives; based on its track record, it seems unreasonable to trust that the government’s motives will always be pure.

Etzioni, for his part, thinks competition is good. “I would say that we truly benefit from an ecosystem, and I say that very sincerely,” he said. “So, I think it’s great that the major corporations are investing in A.I., and the government.” Without the Defense Advanced Research Projects Agency and its decades-long effort to develop natural language processing, he said, we’d have no Alexa.

Etzioni has taken two steps to demonstrate that A.I.-induced chaos remains either a fiction or a long, long way off. First, he set up a $50,000 challenge: get an A.I. to pass part of an eighth-grade science test. All 8,000-plus participants got an F. Then, he surveyed the leading experts in A.I., asking each to predict when a superintelligence might arise. Of the 80 respondents, 92.5 percent said it would either never happen or take more than 25 years.

Regardless, the road ahead will be paved with money and (for now, human) brainpower. Stephan Bugaj is Vice President of Creative at Hanson Robotics, which makes A.I.-powered, exceptionally lifelike androids. Bugaj said that Musk represents a disconcerting trend: the wealthy people and companies “driving the future the hardest” also seem to be the most afraid of A.I. “You have guys like Elon Musk going around, these big shots who are basically saying we should be afraid of robots,” Bugaj told Inverse. “I would say, well, why? Are they making robots that we should be afraid of? And if not, why aren’t they saying anything, loudly? Say, ‘No, we do create the future that we will have for ourselves, and we’re not doing that.’”

“[Musk] says, literally, that humanity is incapable of the moral and ethical decision-making needed to have a positive A.I. future,” Bugaj said. He and Etzioni think humanity is up to the task. A.I. will undeniably change our world, but there’s no reason to believe in a Skynet future. “It’s not an urgent thing. It’s not by any way imminent,” Etzioni said.

Musk will doubtless continue his crusade, and his followers will gladly echo back his proclamations. The A.I. community will continue to develop beneficent A.I., and continue to call his prophecies into question.

Hanson Robotics, one emblematic member of that community, is “spending money on trying to make positive robots. That’s kind of all we do,” Bugaj said. With few exceptions, Musk prefers to point out the flaws rather than the beauty, thereby misleading the public. Bugaj would rather empower the public.

“We believe that humanity is up to the task if we want to be,” he said. “If we decide to spend all of our money on killer robots, instead, to blow each other up for oil and land and politics, then, yup: That’s what we’ll get.”
