Collaborating with intelligent machines

By Lucy Sollitt

Ian Cheng, Emissary in the Squat of Gods, 2015. Live simulation and story, sound, infinite duration. Courtesy: the artist, Pilar Corrias Gallery, Standard (Oslo)

Whether we know it or not, we’re increasingly collaborating with ‘intelligent’ machines. Automated software that learns from us, tracks us, and sees where we go and who we interact with is becoming more prevalent in our lives, from the chatbots in our phones to surveillance cameras, image search and driverless cars. The rise of crowd-sourced big data, coupled with new processors, has driven a massive acceleration in the development of the neural networks that allow machines to become more intelligent.

These intelligent machines are being met with a mix of fear and excitement, from dystopian visions of computers taking over to comparisons with the birth of photography. There is a rush to integrate machine learning, to understand big data-sets and to create new products. The January 2017 Consumer Electronics Show in Las Vegas showcased numerous ‘intelligent’ products, from an Einstein robot that can babysit your children to a smart hairbrush.

But despite all this, machine intelligence is still relatively primitive, and the full scope of its potential is yet to be seen.

Artists are starting to explore this emergent relationship, asking questions about the role of intelligent machines in art and about the implications of how these machines are being incorporated into our lives. This activity is not confined to the art world; tech companies are initiating collaborations with artists too.

Poetry in the algorithm

Archive Dreaming, Refik Anadol, 2017, an Artists and Machine Intelligence (AMI) collaboration with SALT Research collections

Of the many tech companies developing machine intelligence, Google, in particular, is looking to artists for some of the answers.

Research Scientist Douglas Eck, lead for Google Brain’s Magenta team, recently spoke about his work on the Magenta project at a Deep Dream Symposium held at Gray Area, San Francisco. The Magenta project is focused on art and music generation using machine intelligence; through this research, Google is asking “can machines be creative?”. Fundamentally, Eck explained, the goal is to “create new media which is so good you want to come back to it week after week” and to develop “algorithms you care enough about to let them be part of your life”. It’s a little unclear whether Eck is talking about products or tools (or both). Either way, this is a serious research project for Google, and a key part of its Google Brain research activities.

Yet, as Eck points out, there is a problem, and it’s one that Google and so many other tech and consumer product creators have been facing: working out what’s relevant, and creating experiences and products that people actually want.

Eck demonstrated how Magenta has been able to generate musical compositions, but these lack the innate story that comes from humans when they interpret and perform a score. The same could be said of another machine learning tool, Deep Dream Generator, which applies painting styles to uploaded images, or of robots that can paint.
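To make the generation step concrete, here is a deliberately simple sketch of a machine “composing” from data: a first-order Markov chain learns which pitch tends to follow which in a training melody, then samples a new sequence. Magenta’s real models are neural networks, and the training tune below is invented; this toy only illustrates how a composition can emerge from learned statistics, and why the result can feel story-less.

```python
# Toy stand-in for machine-generated composition: a first-order Markov chain
# over MIDI pitches, trained on one melody, then sampled. Magenta's actual
# models are neural networks; this only illustrates the generate-from-data idea.
import random

melody = [60, 62, 64, 65, 64, 62, 60, 67, 65, 64, 62, 60]  # invented training tune

# Learn which pitch tends to follow which.
transitions = {}
for a, b in zip(melody, melody[1:]):
    transitions.setdefault(a, []).append(b)

random.seed(4)
note = melody[0]
generated = [note]
for _ in range(11):
    note = random.choice(transitions.get(note, melody))  # fall back if unseen
    generated.append(note)

print("generated pitches:", generated)
```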

So, Google has the datasets, and it has the machine intelligence research and expertise, but it also recognises it needs artists in the mix. For Google, artists are important in figuring out “how to make algorithms you care about”. Artists are able to find the poetry in the algorithm, and make new poetry using and adapting the algorithms. In both cases, it’s less about the algorithm producing the poetry and more about a collaboration between the artist and the algorithm.

These collaborations are being curated through the Artists and Machine Intelligence (AMI) group. Archive Dreaming is a recent commission by AMI in collaboration with the SALT Research collection. Artist Refik Anadol employed machine learning algorithms to search and sort relationships among 1.7m documents. Interactions of the multidimensional data found in the archives were translated into a generative and immersive media installation. While Archive Dreaming is user-driven, when idle the installation “dreams” of unexpected correlations among documents and even hallucinates new ones, blurring the line between archiving, analysis and creation.
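As a rough illustration of how relationships among archive documents can be surfaced, the sketch below embeds a handful of toy documents as TF-IDF vectors and projects them into 2D, so related documents land near one another. The documents, the representation and the projection method are all assumptions for illustration, not Anadol’s actual pipeline, which worked at the scale of millions of records.

```python
# Minimal sketch: represent documents as vectors and project them to 2D so
# related documents sit near one another. The toy documents, TF-IDF features
# and t-SNE projection are illustrative assumptions, not Anadol's pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.manifold import TSNE

documents = [
    "letter about the restoration of an Ottoman bathhouse",
    "photograph of an Istanbul street market, 1952",
    "architectural drawing of a bathhouse dome",
    "letter from an architect about dome construction",
    "catalogue of street photography exhibitions",
    "notes on archival restoration techniques",
]

# High-dimensional representation: one TF-IDF vector per document.
vectors = TfidfVectorizer().fit_transform(documents).toarray()

# Project to 2D; documents sharing vocabulary end up close together.
coords = TSNE(n_components=2, perplexity=2.0, random_state=0).fit_transform(vectors)

for doc, (x, y) in zip(documents, coords):
    print(f"({x:7.2f}, {y:7.2f})  {doc}")
```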

Within and outside initiatives such as AMI, artists are exploring the implications and possibilities of collaborating with intelligent machines. There are some great institutions reflecting on this emergent relationship, for example The Photographers’ Gallery through its digital commissions, events and recently launched Unthinking Photography online research forum. The Creative AI: Applications of AI in Art, Music, Film and Design meet-up is a popular discussion group for creative applications of machine learning, with attendees including artists, designers, technologists and coders.

Collaborating as part of the creative process

KIMA: The Wheel by Analema Group at Roundhouse, photo by Paulo Ricca — London, 2016

New technological tools bring new creative inventions. Many artists are interested in exploring the possibilities of using machine intelligence in the process of creating art.

Blade Runner — Autoencoded, by artist and technologist Terence Broad, was screened as part of The Photographers’ Gallery’s recent Robot Vision Geekender. Broad trained a neural network on the film Blade Runner, and it recreated the film so faithfully that it points to exciting possibilities for machine learning and film creation (as well as raising new questions around copyright). Analema Group has been using neural networks as a tool to visualise sound in real time for KIMA, its interactive 360-degree installation featured at The Roundhouse: machine learning algorithms map sound properties of performers onto visual parameters, resulting in a flexible and intuitive interface between sound and visuals. Artist Memo Akten is researching how he might collaborate with machine intelligence to draw iteratively, with a system able to incorporate, adapt and interpret inputs such as lines in real time.
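An autoencoder of the kind behind Blade Runner — Autoencoded compresses each film frame to a small latent code and then reconstructs the frame from that code, so the film is “recreated” through a lossy bottleneck. Broad’s model was a variational autoencoder trained frame by frame; the plain convolutional version sketched below, with invented shapes and random stand-in frames, only illustrates the compress-and-reconstruct training loop.

```python
# Minimal sketch of the autoencoder idea: compress each frame to a compact
# code, reconstruct it, and train to minimise the reconstruction error.
# Broad's actual model was a variational autoencoder; shapes here are assumed.
import torch
import torch.nn as nn

class FrameAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: squeeze a 3x64x64 frame down to a 32x16x16 latent code.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),   # -> 16x32x32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # -> 32x16x16
        )
        # Decoder: rebuild the frame from the code.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = FrameAutoencoder()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
frames = torch.rand(8, 3, 64, 64)  # stand-in for a batch of film frames

for step in range(5):
    reconstruction = model(frames)
    loss = nn.functional.mse_loss(reconstruction, frames)  # how faithful is it?
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```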

Others, such as Hardcore collective, are speculating about how the role of a curator could be handed over to machine intelligence. What would the outcomes look like if a robot were designed to create the perfect exhibition?

Meanwhile, Fabrica, the latest winner of Tate’s IK Prize, has demonstrated the disruptive effects of handing over the curatorial process to machines. Its project, Recognition, used image-recognition software developed at Microsoft to match images in the news with artworks from Tate’s collection, creating a real-time, ever-expanding virtual gallery that also evolved depending on audience responses. Its matches produced some emotive and surprising juxtapositions, eliciting a different perspective on centuries-old paintings. X Degrees of Separation, an experiment by Mario Klingemann and Google Arts and Culture Lab, used machine learning to make playful connections between artefacts in collections across a range of galleries and museums.
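The matching pattern behind projects like Recognition can be sketched in a few lines: embed a news photo and every artwork with an image-recognition network, then pair the photo with the artwork whose embedding is most similar. Recognition itself used Microsoft’s software; the off-the-shelf ResNet and the random stand-in images below are assumptions for illustration only.

```python
# Hedged sketch of embedding-based matching: not Recognition's actual system,
# just the general pattern, using torchvision's pretrained ResNet as a stand-in.
import torch
import torchvision.models as models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the classifier, keep 512-d features
backbone.eval()

def embed(images):
    with torch.no_grad():
        features = backbone(images)
    return torch.nn.functional.normalize(features, dim=1)

news_photo = torch.rand(1, 3, 224, 224)  # stand-in for a preprocessed news photo
artworks = torch.rand(50, 3, 224, 224)   # stand-in for the collection images

similarity = embed(news_photo) @ embed(artworks).T  # cosine similarity
best_match = similarity.argmax().item()
print(f"closest artwork: index {best_match}")
```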

Ian Cheng’s simulations are a sophisticated exploration of machine learning. Works he has made for Liverpool Biennial and Serpentine Gallery involved creating apps that used machine learning to realise new ways of relating to our own chaotic existence. In a recent interview for Artspace, he described one particularly surprising example of how handing part of creative control to a machine can result in unplanned moments:

“One time, in Emissary in the Squat of Gods, a child character dragged a dead body to an open area and started to pee on it. Other characters nearby saw this, stopped what they were doing, walked over, and started peeing on the dead body, too. This domino-ed into a mob effect, where more and more of the simulated community gathered to pee on this dead body. It was a really magical moment.”

The politics of collaboration

CIPHER, Katriona Beales (2016)

Artists have also been critically investigating the implications of a world where machines are learning to see. Such developments touch on fundamental questions about our emotional relationships with machines, as well as the politics of the systems in which these technological developments are created and used.

Artist and writer Zach Blas’s Face Cages performance explores the dehumanising effects of biometric face-recognition technology, which is increasingly used in surveillance for authentication, verification and tracking. When the algorithms used to enforce power and surveil are not sophisticated enough to take account of the specificity of human existence, the effect traps us in “a cage of information”. Through his practice and his Tumblr blog, The New Aesthetic, which came to prominence in 2012, artist, technologist and writer James Bridle has been encouraging us to understand the way technologies influence how we see and think, as well as the underlying politics.

Neural networks are shaped by the data-sets they are trained on. This ‘training data’ often encodes the biases of the people who collect and label it, and of those who programme the network in the first place, resulting in unintended consequences. Well-known examples include Microsoft’s Twitter chatbot Tay, which quickly began using racist language on its release in 2016, and an online beauty contest judged by an intelligent robot which showed that “the robot didn’t like dark skin”.
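A toy example makes the mechanism plain: if the people labelling the training data favour one group, a model trained on those labels reproduces the favouritism, even though the code itself contains no explicit prejudice. Everything below is synthetic and simplified for illustration.

```python
# Toy illustration of how biased training data shapes a model's behaviour:
# the 'judges' who labelled the data favoured one group, and a classifier
# trained on those labels reproduces the favouritism. Entirely synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)  # 0 or 1: a sensitive attribute like skin tone
merit = rng.normal(0, 1, n)    # the quality the model *should* learn

# Biased labelling: group 1 needs much higher merit to get a positive label.
label = (merit > np.where(group == 1, 1.0, -1.0)).astype(int)

model = LogisticRegression().fit(np.column_stack([group, merit]), label)

for g in (0, 1):
    test = np.column_stack([np.full(200, g), rng.normal(0, 1, 200)])
    rate = model.predict(test).mean()
    print(f"group {g}: positive rate {rate:.0%}")  # group 1 scores far lower
```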

CIPHER by artist Katriona Beales explores male biases at work in such systems. Beales used Google’s Deep Dream neural network to analyse a series of medical films held at the Wellcome Collection. The work invites viewers to think about how the algorithmic gaze can resemble the male gaze. A flip side of this is the possibility that purposefully messy, ‘bad’ data could be used to create novel aesthetics, or even to subvert the systems in which it circulates.

It’s easy to assume intelligent machines are independent entities when you experience their almost-human abilities. But in his piece Segmentation.Network, Sebastian Schmieg highlights how Microsoft used low-paid Mechanical Turk workers to identify the objects present in a huge set of crowd-sourced photos, to help develop image-recognition software. Building on this thinking, The Photographers’ Gallery commissioned Schmieg to create a dataset generated by online visitors browsing and clicking around the gallery website. Schmieg has made the resulting dataset public for anybody to use at this-is-the-problem-the-solution-the-past-and-the-future.com

It is really positive that research by companies such as Google is being performed collaboratively and openly, with code and research outcomes shared. Deep Dream is a great example of a tool that prompted a surge of creative activity and play when it was released in 2015. But, as artist Memo Akten has pointed out, without access to the underlying training data and code, these tools will only have limited applications, no matter how creative the person using them.

Access to high-tech hardware is also an issue. When tech companies collaborate with a select group of artists, where does that leave those who are not working directly with companies? The ability to create both within and outside commercial structures and drivers is vital.

Emotional resonance

Still from AGNES, mixed media. Courtesy of the artist © 2014 Cécile B. Evans

It’s not just processes that are being automated. As online interactions with ‘bots’ increasingly replace mental health and care services, our concepts of empathy, friendship and care are fundamentally challenged. Erica Scourti’s artworks explore the automation of intangible human emotions and experiences. Empathy Deck is ‘a bot with feelings’; by sending followers self-help material, including excerpts from the artist’s own diaries, Scourti highlights the loss of human qualities such as empathy in automated systems.

Yet, when we collaborate with machines, it can still be easy to attribute human-like qualities to them; they can seem like an independent entity with which we can develop an intimate relationship. The film Her is one popular exploration of this.

Artist Cécile B. Evans’s Agnes, an online commission for the Serpentine Gallery, explored this emergent relationship. Agnes, a hand-shaped chatbot, aspires to some kind of human existence while fixating on her own impermanence and networked nature. As Ben Vickers, Curator of Digital at the Serpentine, describes: “Agnes might seem immaterial but she aspires to physicality. The more you share with her, the more she might ask and, of course, return.” While shining a light on what data we inadvertently “share” in our online interactions, Agnes also portrays our human desire to connect and explores the possibilities for sharing our feelings with virtual entities. Agnes wants to be more than just a servant. Geomancer, by Lawrence Lek, follows a similar vein: in Lek’s CGI film, currently on show at Jerwood Space, an adolescent artificially intelligent satellite hopes to fulfil its dream of becoming the first AI artist.

Critical juncture

Artists have a key role in helping to develop new cultures. Right now, we’re at a crucial juncture as we explore the kind of relationships we want to have with machines that are beginning to learn, understand and think for themselves. Deep collaboration with the people who make these technologies, more open access to data and hardware, exposure among publicly-funded institutions, and more platforms for reflection and debate will help us understand what it means to live alongside these intelligent machines.

Edited by Matt Sheret

Thanks to Kenric McDowell, Sam Mercer, Memo Akten, Gabrielle Jenks and Helen Starr for their feedback
