How AI and Machine Learning Are Being Used to Make Art

By BrainStation, May 13, 2019

In the age of automation, one job was supposed to remain beyond the capacity of any robot: artist.

But recently, technological advances have upended that idea, and machine learning and artificial intelligence (AI) are now increasingly being used to create art across all disciplines, from music to film to painting and even literature.

If that sounds like the stuff of sci-fi, here are some examples of how AI is shaking up the arts.

Metal Machine Music

For her third album, Proto, San Francisco experimental composer Holly Herndon decided to enlist the help of an unusual collaborator named Spawn.

Spawn is, in fact, an AI recording system. After writing and recording a score with an ensemble in her studio, Herndon would feed the results to Spawn, which can mimic musical ideas, and then take Spawn’s contributions back to the band and record the songs again. On songs like “Eternal,” it’s nearly impossible to differentiate Herndon’s voice from Spawn’s, or even to discern how many voices one is hearing at any given moment.

Herndon has said that while most attempts at using AI to make music are motivated by economics – for instance, Warner Music recently acquired the startup Endel, an app that generates personalized “soundscapes” – she wanted to use the technology to enhance the role of the artist rather than remove the artist entirely. The album has been receiving strong reviews.

“I know I’m known as ‘laptop girl,’ but I’m always asking myself: where does the human performer fit into this? How do we continue to develop without automating us off the stage? This frees us up to be human together,” she told The Guardian.

Herndon is far from the only musician taking an interest in AI.

Ambient producer Sevenism has used several tools from Google’s Magenta project to fuel his prolific output, including NSynth – a neural network trained on over 300,000 sounds – and Piano Genie. And singer-songwriter Taryn Southern’s 2018 album I Am A.I. was produced entirely using four tools: Amper Music, IBM’s Watson Beat, Magenta, and AIVA.

U.K. artist Ash Koosha, meanwhile, even introduced a virtual singer named Yona, an “auxiliary human” that uses a text-to-speech process with the goal of ultimately replicating the voice of a pop singer.

“My hypothesis is that singers will become redundant because this machine will be able to convey every range of the human voice – an anti-pop manifesto of sorts,” he told The Fader.

A Novel Idea

It makes sense that AI and machine learning could have a major impact on technology-imbued disciplines like film and music, but surely an author’s analog life is the exception?

Actually, writers are beginning to explore AI’s potential to assist in composing something as complex as a novel.

Author Robin Sloan

After Robin Sloan received positive critical feedback for his debut novel Mr. Penumbra’s 24-Hour Bookstore, he took a different approach for his second book. Using software he built that finishes his sentences at the press of a key, Sloan would write a snippet of text, hit tab, and see what the computer suggested should come next.

He seeded the computer’s database of texts with old science-fiction magazines, but found their language too limited. He then added work by John Steinbeck, Joan Didion, and Philip K. Dick, along with Johnny Cash’s poetry and a range of other texts.
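
Sloan hasn’t published his tool’s internals – he reportedly trained a neural network on that corpus – so the following is only a minimal Python sketch of the write-a-snippet, hit-tab loop, using a toy bigram Markov chain in place of his actual model. The corpus.txt filename and function names are illustrative, not Sloan’s.

```python
import random
from collections import defaultdict

# Toy tab-to-complete suggestion via a bigram Markov chain.
# Sloan's real tool reportedly used a neural language model; this
# sketch only illustrates the corpus-in, suggestion-out loop.

def build_model(corpus: str) -> dict:
    """Map each word to the list of words observed to follow it."""
    words = corpus.split()
    model = defaultdict(list)
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def suggest(model: dict, snippet: str, length: int = 8) -> str:
    """Continue a snippet by repeatedly sampling a likely next word."""
    word = snippet.split()[-1]
    continuation = []
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break  # the model has never seen this word
        word = random.choice(followers)
        continuation.append(word)
    return " ".join(continuation)

corpus = open("corpus.txt").read()  # hypothetical stand-in for Sloan's corpus
model = build_model(corpus)
print(suggest(model, "The ship drifted toward the"))
```

The richer the corpus, the less repetitive the suggestions – which is presumably why the limited vocabulary of old sci-fi magazines sent Sloan hunting for Steinbeck and Didion.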

“I have read some uncounted number of books and words over the years that all went into my brain and stewed together in unknown and unpredictable ways, and then certain things come out,” Sloan told The New York Times. “The output can’t be anything but a function of the input.”

The “Art” in Artificial Intelligence

In October 2018, Christie’s became the first auction house to put a work of art created by an algorithm up for bidding. It sold for $432,500 – nearly 45 times its high estimate.

Obvious’ AI art

The painting – a portrait of a stocky gentleman in 18th-century dress – was created by the Paris-based collective Obvious. A human portrait was thought to be harder for an AI to pull off than an abstract work or a landscape – people readily notice irregularities in a representation of a person – but that challenge is part of what intrigued the team.

They fed the system a data set of 15,000 portraits painted between the 14th and 20th centuries. The algorithm itself was composed of two parts: the Generator, which created new images based on the set, and the Discriminator, which tried to distinguish the images made by the algorithm from those made by people.
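
That two-part setup is a generative adversarial network (GAN). Obvious’ training code isn’t public, so what follows is a generic PyTorch sketch of a single GAN training step, with toy fully-connected networks and image sizes standing in for whatever architecture the collective actually used.

```python
import torch
import torch.nn as nn

# Generic GAN training step (not Obvious' actual code). The Generator
# maps random noise to a flattened image; the Discriminator scores
# images as real (from the data set) or fake.

LATENT, IMG = 100, 64 * 64  # toy sizes; real portrait GANs are far larger

generator = nn.Sequential(
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, IMG), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(IMG, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    """One adversarial update; real_images has shape (batch, IMG)."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator: learn to separate real portraits from fakes.
    fakes = generator(torch.randn(batch, LATENT))
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fakes.detach()), fake_labels))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: learn to produce fakes the discriminator calls real.
    g_loss = loss_fn(discriminator(fakes), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

Published image GANs replace these linear layers with convolutions, but the adversarial loop – the Generator tries to fool, the Discriminator tries to catch – is the same one Obvious describes.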

“We did some work with nudes and landscapes, and we also tried feeding the algorithm sets of works by famous painters,” said Hugo Caselles-Dupré of Obvious.

“But we found that portraits provided the best way to illustrate our point, which is that algorithms are able to emulate creativity.”

AI experiments in the art world seem poised to continue, and they’re not limited to painting. New York artist Ben Snell sold a sculpture created with an algorithm trained on an archive of more than 1,000 classical sculptures, while Mario Klingemann sold his AI-created video installation Memories of Passersby I for $52,000 earlier this year.

Movie Magic

There probably aren’t many Hollywood directors who could wrap a film shoot in just 48 hours, but Benjamin isn’t like most Hollywood directors.

Benjamin is an AI system that created a film called Zone Out for a two-day AI filmmaking challenge. Starring Silicon Valley actor Thomas Middleditch and Elisabeth Gray, the six-minute film was assembled from thousands of hours of old films and green-screen footage of the two actors.

While it probably isn’t going to leave Steven Spielberg scanning the job ads, the film is nevertheless an impressive step forward from Benjamin’s first film, Sunspring. L.A.-based director Oscar Sharp – who calls himself “the director of the director” – decided to let Benjamin do everything on the Zone Out project, including writing the script, selecting scenes, and assembling sentences from recordings of the actors.
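
Benjamin has been reported to be a long short-term memory (LSTM) recurrent neural network trained on science-fiction screenplays. Its code isn’t public, so the sketch below is a generic character-level LSTM text generator in PyTorch that illustrates the same shape of approach; the screenplays.txt corpus is a hypothetical stand-in, and the model would need training before producing anything script-like.

```python
import torch
import torch.nn as nn

# Generic character-level LSTM text generator (not Benjamin's code).
# Trained on screenplays, a model like this emits new text one
# character at a time.

class CharLSTM(nn.Module):
    def __init__(self, vocab_size: int, hidden: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, x, state=None):
        out, state = self.lstm(self.embed(x), state)
        return self.head(out), state

@torch.no_grad()
def sample(model: CharLSTM, stoi: dict, itos: dict,
           seed: str, length: int = 200) -> str:
    """Generate text one character at a time from a seed string."""
    x = torch.tensor([[stoi[c] for c in seed]])
    state, text = None, seed
    for _ in range(length):
        logits, state = model(x, state)
        probs = torch.softmax(logits[0, -1], dim=0)
        idx = torch.multinomial(probs, 1).item()
        text += itos[idx]
        x = torch.tensor([[idx]])
    return text

text = open("screenplays.txt").read()  # hypothetical training corpus
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
itos = dict(enumerate(chars))
model = CharLSTM(len(chars))
print(sample(model, stoi, itos, seed=text[:8]))  # untrained: gibberish
```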

“What I was really trying to do is attempt to automate each part of the human creative process to see if we learn anything about what it really is to be a human person creating films,” Sharp told Wired.

Meanwhile, at least one member of his team was thinking long-term.

“If this fails, I’ll be employable for the rest of time,” Gray told Wired. “And if it in fact works, then I may not be employable as an actor, but at least I will have been there at the moment when we realized we were going to be replaced by computers.”
