Ross Goodwin
Jun 9, 2016

Due to the popularity of Adventures in Narrated Reality, Part I, I’ve decided to continue narrating my research concerning the creative potential of LSTM recurrent neural networks here on Medium. In this installment, I’ll begin by introducing a new short film: Sunspring, an End Cue film, directed by Oscar Sharp and starring Thomas Middleditch, created for the 2016 Sci-Fi London 48 Hour Film Challenge from a screenplay generated with an LSTM trained on science fiction screenplays.
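The post doesn't show the generation code itself, but one standard ingredient when sampling text from a character-level LSTM is temperature-based sampling over the model's output scores: lower temperatures make the output more conservative, higher ones more surprising. The sketch below is a hypothetical illustration, not the actual Sunspring pipeline; the function name and toy scores are assumptions for demonstration.

```python
# Hypothetical sketch: temperature-based sampling, the step a trained
# char-level LSTM uses to pick the next character from its output scores.
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    # Scale the raw scores by temperature, then softmax into probabilities.
    scaled = [score / temperature for score in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to the resulting distribution.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# Toy example: scores a model might emit over a three-character vocabulary.
vocab = ["a", "b", "c"]
logits = [2.0, 1.0, 0.1]
random.seed(0)
next_char = vocab[sample_with_temperature(logits, temperature=0.8)]
```

At generation time this step runs in a loop: the sampled character is fed back into the network as the next input, and the screenplay grows one character at a time.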

Today, Sunspring made its debut on Ars Technica, accompanied by a superb article by Annalee Newitz. Have a look!

To call the film above surreal would be a dramatic understatement. Watching it for the first time, I almost couldn’t believe what I was seeing — actors taking something without any objective meaning, and breathing semantic life into it with their emotion, inflection, and movement.

After further consideration, I realized that actors do this all the time. Take any obscure line of Shakespearean dialogue and consider that 99.5% of the audience who hears that line in 2016 would not understand its meaning if they read it on paper. In a play, however, they do understand it, based on its context and the actor’s delivery.

As Modern English speakers, when we watch Shakespeare, we rely on actors to imbue the dialogue with meaning. And that’s exactly what happened in Sunspring, because the script itself has no objective meaning.

On watching the film, many of my friends did not realize that the action descriptions as well as the dialogue were computer generated. After examining the output from the computer, the production team made an effort to choose only action descriptions that realistically could be filmed, although the sequences themselves remained bizarre and surreal. The actors and production team’s interpretations and realizations of the computer’s descriptions was a fascinating case of human-machine collaboration.