Artificial intelligence gets taken for granted a lot these days. Every time you Google something or check out the recommended videos on YouTube, you’ve got your snout firmly entrenched in the AI trough. And then there’s your brainiac buddies Siri and Alexa.
“Hey, Alexa, what’s the best way to learn about artificial intelligence these days?”
While we’re waiting for Amazon’s trusty virtual assistant to reply, may I suggest The Imitation Game: Visual Culture in the Age of Artificial Intelligence. It’s a new exhibition at the Vancouver Art Gallery that examines the development of AI, from the 1950s to the present, through a historical lens.
Cocurated by VAG senior curator Bruce Grenville and computer-animation and video-game pioneer Glenn Entis, the in-depth exhibition has been three years in the making.
“It came out of a conversation we were having about the incredible presence that was given over to deep learning and machine learning in a lot of the animation field,” Grenville explains on the phone from the gallery. “There was an extraordinary amount of energy and research going into thinking about the uses of artificial intelligence and how it would impact animation, but also video games and the way that they can be produced.
“And one of our streams of activity here at the gallery is to look at the way that film and video, architecture, fashion, graphic design, industrial design, and urban design are all part of a large visual culture that we kind of move through and encounter slightly differently, maybe, or as an extension of some of the ways that we do with visual art. So it’s an opportunity to kind of take on a subject area that we could look at in a kind of more expansive way and track how it’s sort of really very present in all of our lives. That was kind of the starting point for it.”
The Imitation Game looks at the work of a number of artists, designers, and architects. It features two major artworks by Sougwen Chung and Scott Eaton, as well as works by *airegan, Stafford Beer, BIG, Ben Bogart, Gui Bonsiepe, Muriel Cooper, DeepDream, Stephanie Dinkins, Epic Games, Amber Frid-Jimenez, Neri Oxman, Patrick Pennefather, and WETA. The exhibition kicks off with an interactive introduction that invites visitors to identify the diverse areas of cultural production influenced by AI.
“That was actually one of the pieces that we produced in collaboration with the Centre for Digital Media,” Grenville notes, “so we worked with some graduate-student teams that had actually helped us to design that software. It was the beginning of COVID when we started to design this, so we actually wanted a no-touch interactive. So, really, the idea was to say, ‘Okay, this is the very entrance to the exhibition, and artificial intelligence, this is something I’ve heard about, but I’m not really sure how it’s utilized in ways. But maybe I know something about architecture; maybe I know something about video games; maybe I know something about the history of film.
“So you point to these 10 categories of visual culture—video games, architecture, fashion design, graphic design, industrial design, urban design—so you point to one of those, and you might point to ‘film’, and then when you point at it that opens up into five different examples of what’s in the show, so it could be 2001: A Space Odyssey, or Blade Runner, or World on a Wire.”
After the exhibition’s introduction—which Grenville equates to “opening the door to your curiosity” about artificial intelligence—visitors encounter one of its main categories, Objects of Wonder, which speaks to the history of AI and the critical advances the technology has made over the years.
“So there are 20 Objects of Wonder,” Grenville says, “which go from 1949 to 2022, and they kind of plot out the history of artificial intelligence over that period of time, focusing on a specific object. Like [mathematician and philosopher] Norbert Wiener made this cybernetic creature, he called it a ‘Moth’, in 1949. So there’s a section that looks at this idea of kind of using animals—well, machine animals—and thinking about cybernetics, this idea of communication as feedback, early thinking around neuroscience and how neuroscience starts to imagine this idea of a thinking machine.
“So that’s one of the early ones. Alan Turing’s [Computing Machinery and Intelligence] text, that’s another Object of Wonder. That idea of an ‘imitation game’ comes from Turing, as does the whole idea of computational thinking. So there’s a wide variety of instances of it, but those are the 20 Objects of Wonder, all the way through to emotion recognition—we have another interactive thing where people can sort of explore the way that emotion-recognition software has been utilized today in different contexts.”
While the stated goal of The Imitation Game is to survey the extraordinary uses of artificial intelligence in the production of modern and contemporary visual culture around the world, the exhibition also looks at the abuses inherent in AI, including racial bias.
“It’s interesting,” Grenville ponders, “artificial intelligence is virtually unregulated. You know, if you think about the regulatory bodies that govern TV or radio or all the types of telecommunications, there’s no equivalent for artificial intelligence, which really doesn’t make any sense. And so what happens is, sometimes with the best intentions—sometimes not with the best intentions—choices are made about how artificial intelligence develops. So one of the big ones is facial-recognition software, and any body-detection software that’s being utilized.
“And what happens is that sometimes the decisions that are being made about, like, ‘How do you test this? What’s a face look like? How does that face get mapped?’, these sorts of things are based on both intentional and unintentional biases. And so what became very clear even shortly after facial-recognition software became very present was there were massive questions around privacy. You know, where do these images come from? These are some of the complications of artificial intelligence. Like, if you’re training a data set to recognize a human face, how is it being trained? What is the database that it’s being trained on? If it doesn’t accurately identify persons of colour, why is that happening?”
One of the more intriguing—and potentially terrifying—aspects of the AI universe is “deepfakes”, which use techniques from machine learning and artificial intelligence to generate visual and audio content primed to deceive. Take, for example, the panic that could ensue, especially today, if some bad actor created a perfectly believable deepfake of Vladimir Putin proclaiming that he’s just sent nuclear missiles to Manhattan.
That prospect doesn’t rile Grenville much, though.
“What’s scary about it?” he asks. “Do you think people would make decisions based on somebody releasing a video that says that we’re sending atomic bombs your way? I think a deepfake is one of those things that we imagine could be something horrible, but it’s really more about, like, scary-monster stuff rather than based in fact.
“I think bias is a much more frightening issue than deepfakes,” he adds. “Racial bias in artificial intelligence is a big problem, and unless that’s addressed, we’ve got serious problems. You know, a deepfake of Putin, I don’t think that’s gonna be an issue.”
The Imitation Game: Visual Culture in the Age of Artificial Intelligence runs from March 5 until October 23 at the Vancouver Art Gallery.