Elon Musk Stresses Out Over the Tesla Model 3 and A.I.
"It scares the hell out of me."
When asked what keeps him up at night, Tesla and SpaceX CEO Elon Musk had two answers: Tesla Model 3 production and artificial intelligence.
It’s been five years since Musk was at South by Southwest. This year’s return to the festival included a Q&A session earlier today called, simply enough, “Elon Musk Answers Your Questions!” Jonathan Nolan, co-creator of Westworld and frequent collaborator with his brother Christopher Nolan, hosted the event, which included questions from the audience. When the subject of A.I. came up, Musk expressed his growing concern about it.
“The danger of A.I. is much greater than nuclear warheads by a lot. Why don’t we have any oversight? This is insane,” Musk said.
When asked why A.I. experts don’t share the same concern as he does, Musk answered that none of the experts predicted A.I. would improve so rapidly. The example he used was AlphaGo, a program developed by DeepMind (a subsidiary of Google) to play the board game Go. In a matter of months, the program was able to defeat the best human players in the world. Then came AlphaGo Zero, software that learned the game by playing against itself, which beat AlphaGo 100-0.
Musk has been at the forefront of raising the warning flag over the future of A.I. He made his case to state governors at last year’s National Governors Association meeting and in a joint letter to the U.N. In 2015, Musk and several well-known Silicon Valley leaders launched OpenAI with the goal of advancing A.I. in a way that benefits humanity.
“I’m very close to the cutting edge of A.I. It scares the hell out of me,” Musk said.
Nolan followed up by asking how we should deal with the future of A.I. Although Musk admitted he was not one for government oversight, he pointed out that there is oversight to stop the creation of nuclear warheads, and to him, the threat of A.I. is far more dangerous.
In the best-case scenario for the world, Musk thinks humanity would need to create a symbiosis with A.I., one that tries to maximize freedom for people.
“If humanity decides superintelligence is the right mode, we need to do it carefully. This is the most important thing we can possibly do,” he said.