
An A.I. in London Is Writing Its Own Music and It Sounds Heavenly

'Folk-RNN' doesn't want to replace humans, just help them.

by Mike Brown

London has given the world some of its greatest musical acts, but its next trick could be something altogether more robotic. In the city that spawned David Bowie, Pink Floyd, and the Spice Girls, two university lecturers are working on an artificial intelligence capable of making its own music. And it’s already played its first show.

“One way in which A.I. people think about music is as a sequence of notes, and that’s a mistake,” Oded Ben-Tal, senior lecturer in music at Kingston University, tells Inverse. “Music is a social activity, music is a cultural activity, and I think that’s part of the thing of what interests us.”

The race is on to see whether A.I. can add something meaningful to this cultural activity. Ben-Tal has been working with Bob Sturm, lecturer in digital media at Queen Mary, University of London, to build a system that generates traditional Celtic folk music. The resulting project, “folk-rnn,” is open source. But it isn’t meant to replace composers: the pair want to develop a tool that musicians can use in the creative process, a new source of inspiration.

Many of the original folk tunes, passed down through generations, have been transcribed into ABC notation, a plain-text system invented by Chris Walshaw of the University of Greenwich. Ben-Tal and Sturm fed more than 23,000 ABC transcriptions into the system, training it to generate notations of its own with the existing songs as inspiration. The result was encouraging: of one volume of 3,000 generated tunes, musician Daren Banarsë found that around one in five were surprisingly good.
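For a sense of what the training data looks like: ABC notation encodes a tune as plain text, with header fields such as the index (X:), meter (M:), unit note length (L:), and key (K:), followed by the melody as letters and bar lines. The sketch below uses an illustrative fragment, not one of the project’s actual transcriptions, and shows the simplest character-level view of the data, one way a recurrent network like folk-rnn’s can be trained to predict the next symbol in a tune:

```python
# Illustrative ABC fragment (a hypothetical tune, not from the folk-rnn corpus).
# Header fields: X = index, T = title, M = meter, L = unit note length, K = key.
tune = """X:1
T:Example Reel
M:4/4
L:1/8
K:Dmaj
|:d2fd Adfd|cdec Agfe|d2fd Adfd|cAGE D4:|"""

# A character-level recurrent model sees the tune as one long symbol stream;
# training pairs each character with the character that follows it.
chars = sorted(set(tune))
index = {c: i for i, c in enumerate(chars)}

encoded = [index[c] for c in tune]
inputs, targets = encoded[:-1], encoded[1:]  # predict the next symbol

print(f"vocabulary size: {len(chars)}")
print(f"training pairs: {len(inputs)}")
print(f"first input/target pair: {tune[0]!r} -> {tune[1]!r}")
```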

“There was a lot more better ones than I thought there were going to be,” Banarsë, a regular player on London’s Irish folk music scene, tells Inverse. “There was a lot of stuff that could be used that wasn’t far from sounding authentic.”

Banarsë has incorporated some of the pieces into his performances in pubs around the King’s Cross area of north London, sometimes telling the audience that an upcoming piece is A.I.-generated. Audiences can’t tell which tunes are which, though some performers are uneasy about playing tunes divorced from any historical meaning.

“You can feed a system a sequence of notes, and it can produce some other sequences of notes that are kind of interesting, but that’s not music yet,” Ben-Tal says. “But when you actually invite people to think about these notes, to work their stuff through them, then I think it can become a bit more interesting.”

Can A.I.-produced music imitate the real thing?

With that in mind, the pair invited a number of musicians to come together for a show called “Partnerships,” a reference to the relationship between human and machine. The show featured a mix of compositions, all performed by humans, with varying levels of input from the A.I. Some compositions took the computer’s work as a starting point, some used the project as inspiration, while others directly played the generated work as it stood.

“In this concert, we were interested in exploring how useful this is to creating music,” Sturm tells Inverse. “We don’t want to replace people, we want to augment one’s creative explorations.”

The event was held at St. Dunstan’s church in east London’s Stepney district. A church has stood on the site since the year 952, serving the parish for over a thousand years. From the outside, you probably couldn’t tell that it was playing host to the future of music:

Hear that? It's the sound of the future.

Much like the venue, the music carries a deep history. Úna Monaghan, a composer and researcher currently based at Cambridge University, performed the A.I.’s compositions and fed recordings of those performances back into the machine. Monaghan says she’ll sometimes hear the computer produce a unique take that uses the traditional music as inspiration. Depending on her mood, she’ll call it “innovating,” a “breakdown,” or simply “not good enough.”

“I looked at my notes yesterday and I have something called ‘QBW,’ which is shorthand I’ve used to describe the parts where it has quirks but it works,” Monaghan says. “So whenever this happens I note that as something that is clearly strange, something to do with the computer system and wouldn’t really occur in the normal traditional music that I know…but it also works, so in some way it still manages to convince me or I’m still able to play in the same style.”

Ben-Tal performed a four-movement piece called “Bastard Tunes.” He asked the A.I. to generate folk music, then adjusted its parameters to pull the output further and further from the source material, and arranged the results into a larger piece. The title refers to compositions that have been pulled away from their origins.
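The article doesn’t spell out which parameters Ben-Tal adjusted, but one common knob in generative models of this kind is the sampling temperature, which controls how adventurous each next-note choice is: low temperatures stick close to what the training tunes suggest, high temperatures wander further from them. A minimal, hypothetical sketch of temperature sampling over a model’s next-token scores (all names here are illustrative):

```python
import numpy as np

def sample_next_token(logits, temperature, rng):
    """Sample one token index from the model's raw scores. Higher
    temperature flattens the distribution, making unlikely (less
    'traditional') continuations more probable."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                      # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

# Hypothetical scores a trained model might assign to four candidate notes.
logits = [2.5, 1.0, 0.2, -1.0]
rng = np.random.default_rng(0)

for t in (0.5, 1.0, 2.0):
    picks = [sample_next_token(logits, t, rng) for _ in range(1000)]
    counts = np.bincount(picks, minlength=len(logits))
    print(f"temperature {t}: choice frequencies {counts}")
```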

Another piece was performed by Richard Salmon on organ. For this, the team took some of the generated folk tunes, sent them to Sony Computer Science Laboratories in Paris, and fed them through a second program called DeepBach, an A.I. that produces harmonizations based on J. S. Bach’s chorales.
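DeepBach itself is a neural model trained on Bach’s chorales, and none of its actual interface is shown here. Purely to illustrate what “harmonization” means in this pipeline, choosing chords to sit beneath a given melody, here is a deliberately naive rule-based toy in C major (all names hypothetical, and no relation to DeepBach’s real method):

```python
# Toy harmonizer: for each melody pitch in C major, pick the first diatonic
# triad that contains it. A lookup-table stand-in for the idea of
# harmonization, nothing like DeepBach's learned approach.
TRIADS = {
    "I":  {"C", "E", "G"},
    "IV": {"F", "A", "C"},
    "V":  {"G", "B", "D"},
}

def harmonize(melody):
    """Return one chord symbol per melody note."""
    chords = []
    for note in melody:
        for name, pitches in TRIADS.items():
            if note in pitches:
                chords.append(name)
                break
        else:
            chords.append("?")  # chromatic note: no diatonic triad fits
    return chords

melody = ["E", "D", "C", "F", "G", "C"]
print(list(zip(melody, harmonize(melody))))
```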

It’s not just traditional folk music that played a role. Nick Collins, a reader in music composition at Durham University, used the A.I. to create three compositions based on three different artists: Ed Sheeran, Iannis Xenakis, and Adele.

During the show, audience members were invited to fill out a questionnaire about what they thought of the performance. Many were surprised at how organic the music sounded, and enjoyed how the performances experimented with different ways of using the technology. One respondent noted that the human musicians obscured the work of the machine, since skilled performers can elevate dull material and make it sound interesting.

Some people expressed concerns about the role of artificial intelligence in music composition. One person wrote, “The computer generated pieces ‘miss’ something — would we call this ‘spirit’, emotion, or passion?”

Another wrote, “I think the science is fascinating and it’s important to explore and push boundaries, but I’m concerned for the cultural impact and the loss of the human beauty and understanding of music.”

It’s not the first time the system has encountered skepticism.

“I haven’t met one musician that I’ve told about this that hasn’t reacted with something close to the negative side of things,” Monaghan says. “Their reaction has been from slightly negative, to outright ‘why are you doing this?’”

Rather than a bid to replace musicians, “Partnerships” was a celebration of how A.I. could work with them. None of the performers advocated for machines to take over their jobs; instead, they explored how these technologies could aid their work.

“I’m a composer myself, so I can’t imagine not wanting to have some input,” Monaghan says.

For now, though, the job of the composer is probably safe. Many respondents named two of Monaghan’s pieces, “The Choice” and “Chinwag,” as their favorites. Neither used the A.I.’s compositions.
