
OpenAI: Elon Musk’s Former Firm Is on the Brink of a New A.I. Era

The company is making great strides.

by Mike Brown

OpenAI, the non-profit artificial intelligence research organization co-founded by Elon Musk in late 2015, has been making big advancements, even after Musk parted ways with the firm amid disagreements about its direction. Its researchers have developed systems that can play games, write news articles, and move physical objects with groundbreaking levels of dexterity.

OpenAI has caused controversy with its research. Last week, it announced the development of a language model, GPT-2, that can generate coherent text from limited prompts. Given the human-written prompt “Miley Cyrus was caught shoplifting from Abercrombie and Fitch on Hollywood Boulevard today,” the system produced a believable complete story that continued with “the 19-year-old singer was caught on camera being escorted out of the store by security guards.”

It also created this original story about talking unicorns based on a short prompt:

The A.I. program that created this talking unicorn story could be weaponized to influence opinions and sway elections, which is why OpenAI chose not to release it.

OpenAI

“I’ve not been involved closely with OpenAI for over a year & don’t have [management] or board oversight,” Musk commented on Twitter over the weekend. “I had to focus on solving a painfully large number of engineering & manufacturing problems at Tesla (especially) & SpaceX. Also, Tesla was competing for some of same people as OpenAI & I didn’t agree with some of what OpenAI team wanted to do. Add that all up & it was just better to part ways on good terms.”

The firm originally seemed an ideal way for Musk to achieve his goals of harnessing A.I. for the good of humanity. The entrepreneur has warned before about the “fundamental risk” to civilization posed by super-smart machines, urging governments to regulate “like we do food, drugs, aircraft & cars.”

Musk initially pledged $10 million to get the firm started, and its mission statement focuses on “discovering and enacting the path to safe artificial general intelligence.”

It’s not difficult to see how programs that generate stories like the one shared by OpenAI executive Greg Brockman could be used by a relatively small number of people to influence the opinions of a less-savvy public.

“Due to our concerns about malicious applications of the technology, we are not releasing the trained model,” the company explained in a blog post. “As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper.”
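For researchers curious what experimenting with that smaller released model looks like, here is a minimal sketch of prompt-based generation. It assumes the Hugging Face transformers library as the interface, which is not part of OpenAI’s own release, and the sampling settings are illustrative only:

```python
# Minimal sketch of prompt-based text generation with the small,
# publicly released GPT-2 model. Assumes the Hugging Face
# "transformers" library; OpenAI's own release used TensorFlow.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # the small released model
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = ("Miley Cyrus was caught shoplifting from Abercrombie "
          "and Fitch on Hollywood Boulevard today.")
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample a continuation token by token; top-k sampling keeps the text
# coherent while still varying between runs.
output = model.generate(
    input_ids,
    max_length=100,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Continuations from a small model like this tend to be noticeably less coherent than the samples OpenAI published, which is part of why the full model was the one deemed too risky to release.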

The announcement led to concerns that the technology could be used to generate fake news. Analytics India warned that, with India’s general election coming later this year, the A.I. could wreak havoc with the democratic process. OpenAI’s researchers themselves state that, combined with research into producing synthetic audio and video, “the public at large will need to become more skeptical of text they find online.”

Ryan Lowe, an A.I. scientist and former OpenAI intern, succinctly summarized the threat of a program like this being used to create fake news in a widely shared Medium post:

…[An] automated system could: (1) enable bad actors, who don’t have the resources to hire thousands of people, to wage large-scale disinformation campaigns; and (2) drastically increase the scale of the disinformation campaigns already being run by state actors.

Not all of OpenAI’s research has proved so controversial. Dactyl, unveiled in July 2018, is a robot hand that adapts to real-world physics to manipulate physical objects. The hand is highly complex, with 24 degrees of freedom, yet the system can move objects around using just three RGB cameras and the coordinates of the hand itself.

Robot hands: a bit less controversial.

OpenAI
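To make that setup concrete, the sketch below shows the shape of the control loop the article describes: three RGB camera views plus the hand’s coordinates in, target positions for the 24 joints out. Every name, shape, and stand-in function is hypothetical; OpenAI has not published Dactyl in this form.

```python
# Hypothetical sketch of the Dactyl control loop as described in the
# article: three RGB camera views plus the hand's coordinates go in,
# target positions for the 24-degree-of-freedom hand come out.
# All names, shapes, and stand-in functions here are illustrative.
import numpy as np

NUM_CAMERAS = 3
IMAGE_SHAPE = (200, 200, 3)   # assumed camera resolution
NUM_JOINTS = 24               # the hand's degrees of freedom

def estimate_object_pose(images):
    """Stand-in for the vision model that infers the object's pose
    from the three RGB views (a trained network in the real system)."""
    return np.zeros(6)        # x, y, z position plus 3 rotation angles

def policy(object_pose, fingertip_coords):
    """Stand-in for the reinforcement-learned policy that maps the
    observed state to target joint positions."""
    return np.zeros(NUM_JOINTS)

# One step of the control loop.
images = [np.zeros(IMAGE_SHAPE) for _ in range(NUM_CAMERAS)]
fingertips = np.zeros((5, 3))  # 3-D coordinates of the five fingertips
action = policy(estimate_object_pose(images), fingertips)
assert action.shape == (NUM_JOINTS,)
```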

The robot hand takes advantage of the same general-purpose reinforcement learning algorithms used to beat humans at video games. The “OpenAI Five” team faced off against a team of professional Dota 2 players in 2018. The system was trained on the equivalent of 180 years’ worth of games per day, taking advantage of a staggering 128,000 processor cores and 256 graphics processors. While it beat a human team of ex-professionals and Twitch streamers, it lost to professional players.
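As a toy illustration of what “general-purpose” means here, the core recipe of sampling moves from a policy, playing against a copy of itself, and nudging the policy toward whatever won fits in a few lines. The sketch below is purely illustrative REINFORCE-style self-play on rock-paper-scissors; OpenAI Five’s actual training used far more sophisticated methods at vastly larger scale:

```python
# A toy, hypothetical sketch of self-play reinforcement learning:
# two copies of one policy play rock-paper-scissors, and the policy is
# updated from the outcome (a REINFORCE-style gradient step). Purely
# illustrative; not OpenAI Five's actual algorithm or scale.
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(3)  # preferences over the 3 moves

def sample_move(params):
    probs = np.exp(params) / np.exp(params).sum()  # softmax policy
    return rng.choice(3, p=probs), probs

def outcome(a, b):
    # Cyclic dominance: a beats b exactly when a == (b + 1) % 3.
    if a == b:
        return 0.0
    return 1.0 if (a - b) % 3 == 1 else -1.0

for _ in range(2000):
    a, probs = sample_move(theta)  # "our" agent
    b, _ = sample_move(theta)      # the self-play copy
    grad = -probs
    grad[a] += 1.0                 # gradient of log pi(a) w.r.t. theta
    theta += 0.05 * outcome(a, b) * grad

# In this game the preferences cycle rather than settle, which is
# exactly the kind of dynamic self-play training has to contend with.
print("move preferences:", np.round(theta, 2))
```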

Musk used the Dota 2 matches to draw attention to Neuralink, his plan to avoid a robot takeover by creating a symbiotic relationship between human brains and A.I.

In its text generator announcement, OpenAI said it plans to discuss its broader strategy in six months’ time. The firm hopes to spark a nuanced discussion about the dangers of artificial intelligence, a move that could help avoid some of the disaster scenarios envisioned by Musk. However, even the company admits it is “not sure that” withholding the text generator “is the right decision today.” With a schism already formed between two major entities warning about the dangers of A.I., it’s unlikely that legislators will have an easy time developing solutions.
