Artist and academic Terence Broad in conversation with curator Luba Elliott as well as Leyla Fakhr and Jamie Gourlay.
LF: I’m joined today not only by Jamie, my colleague, but also by our wonderful guests: Luba Elliott, and Terence Broad. To give a bit of context, Terence will be releasing a body of work with SOLOS called (un)stable equilibrium, and Luba is the curator of this exhibition.
I believe this is the first time SOLOS is collaborating with a curator, and I’m particularly pleased that Luba accepted. I’ve said this before but this is a very fast-moving industry. My background is in contemporary art, but there’s a depth of knowledge I lack, especially around AI art. I’ve always admired Luba for her expertise and what she brings, particularly in relation to artworks involving AI.
Luba and I have spoken several times about artists she’s excited about, and she introduced me to Terry’s work. So I thought we could start with you, Luba. Could you tell us a bit about what you do (even though many here are probably familiar with your academic work) and also, why Terry?
LE: Thank you for the introduction, Leyla. It’s thrilling to hear I’m the first curator SOLOS is working with; it’s a great honour. In my practice, I specialise in creative AI, or AI art, and I’ve been working in this space since 2016, organising exhibitions and events, often alongside academic AI conferences like NeurIPS, CVPR, and ICCV (lots of acronyms related to AI and computer vision).
I’ve worked with all sorts of people in this space, but I’m really excited about the upcoming release of Terry Broad’s work. I’ve often felt that Terry is underappreciated in this scene. Back in 2016, when I was starting my career and running a creative AI meetup in London, Terry’s work stood out to me immediately.
He had a project called Blade Runner – Autoencoded, which involved training a neural network to reconstruct the film Blade Runner frame by frame. It got a lot of media attention at the time and was exhibited widely, from the Whitney to the Barbican.
LF: Is that the first time you saw Terry’s work—through Blade Runner?
LE: Yes, I think it was either the artwork itself or through mutual circles. Terry was studying at Goldsmiths at the time, and Memo Akten, another AI artist you might know, was doing a PhD there too. So it was a combination of seeing the work and being in overlapping networks.
LF: I went to Goldsmiths too, Terry; we talked about that, didn’t we?
TB: Yes, we did.
LE: Wait, you went to Goldsmiths, Leyla? I didn’t know that.
LF: I did! I studied something very impractical, curating. I got my MA in Curating at Goldsmiths. I loved it. It’s a great university.
LE: Yes, it’s got so many different departments. A really good place.
LF: Maybe you could tell us more about Blade Runner – Autoencoded, Terry. It was the piece that really put you on the map right after university. I believe it was shown at the Whitney; was it part of their biennial? You’ll have to remind me. But I’d love to hear more about the thinking behind the work and how it was made.
TB: Yes, so it was a project I did for my master’s at Goldsmiths. I was doing a research MA in Creative Computing, and this was just as generative AI was starting to take off. You could just about generate images (very small ones) with things like GANs in 2015.
My supervisor tasked me with researching how to generate images with AI, but I was more interested in generating video. Since it was a research degree, I spent a lot of time training models on videos—initially, quite restricted material like trains driving around Norway, where there wasn’t much variation.
One weekend, I got bored of looking at these train video generations and thought, “I’ll give Blade Runner a go.” It was something I had always planned to do for fun after finishing the course, but around two-thirds of the way through, I decided to just try it. I was really surprised by how well it worked.
At the time, the general belief was that GANs could generate images, but only ones that were quite similar—like handwritten digits or faces where features are in the same place. The idea that you could feed it a dataset of varied, complex visuals and get meaningful output was still considered unworkable. But this actually worked surprisingly well.
Initially, we thought we might train it on Blade Runner and then feed in other films to see how they'd look interpreted through Blade Runner's memory. But what really caught people’s attention was the reconstruction of Blade Runner itself—training a model on the film and then putting each frame back through that model to see how it remembered them.
I did that for the entire film, frame by frame. I reconstructed the full sequence and uploaded it to Vimeo. I wrote a blog post on Medium and shared it on Reddit. It ended up on the front page of Hacker News.
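For readers who want to picture that process, a loose sketch of the reconstruction pass might look something like the following. This is hypothetical PyTorch-style code with a stand-in model and made-up file paths, not the original TensorFlow pipeline Terry describes later on.

```python
# A loose, hypothetical sketch of the reconstruction step: push every frame of
# the film through an autoencoder trained on those same frames and save what
# the model "remembers". The tiny model and the paths here are placeholders.
import glob
import os

import torch
import torch.nn as nn
from PIL import Image
from torchvision import transforms
from torchvision.utils import save_image

autoencoder = nn.Sequential(  # stand-in for the model trained on the film
    nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),              # encode
    nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # decode
)
# In practice the weights would come from a model already trained on the frames.

to_tensor = transforms.Compose([
    transforms.Resize((144, 256)),  # the sort of resolution mentioned below
    transforms.ToTensor(),
])

os.makedirs("reconstructed", exist_ok=True)
with torch.no_grad():
    for i, path in enumerate(sorted(glob.glob("frames/*.png"))):
        frame = to_tensor(Image.open(path).convert("RGB")).unsqueeze(0)
        save_image(autoencoder(frame), f"reconstructed/{i:06d}.png")
```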
About a week later, Warner Brothers issued a Digital Millennium Copyright Act takedown of the film on Vimeo. I felt like a complete idiot. But then a journalist from Vox got in touch. She noticed the video had been taken down but was still embedded in the blog. She interviewed me, and after I gave her my usual spiel, she contacted Warner Brothers for comment. They ended up rescinding the takedown notice.
She published the article, and it went viral—something like three million views. Within weeks, I was getting emails from curators at the Whitney, the Barbican, and museums around the world asking if they could show the work. So yeah, that’s how it all took off.
LF: Do you think people at the time really knew what they were looking at? This was quite a while ago now.
TB: It was definitely one of the first pieces of its kind, remaking a film using AI like that. It doesn’t sound very impressive now, but back then the standard for GANs was around 32x32 or maybe 64x64 pixels. I managed to train my model at 256x144, which was quite high-res at the time and a 16:9 aspect ratio. That level of fidelity with generative AI was unheard of, and I think that’s why it caught people’s imagination.
But no, I don’t think many people really knew what to make of it. It was all still very new.
JG: Sorry, your first-ever exhibition was at the Whitney?
TB: Yes. It was surreal. An absolutely wild experience.
LF: I’m sorry, Warner Brothers really need to lighten up. What did they think they had to lose?
JG: And what did it actually mean to train a model on Blade Runner at that time? I know what training a model means now, but what did it involve back then? It might be silly to ask, but was it very hard?
TB: It was a very difficult process. It took about eight months - something you could probably now do in a weekend using open-source code and Google Colab.
At the time, TensorFlow had just been released. It was Google’s machine learning library, but it was still in its early days - buggy and not very well documented. I had to teach myself machine learning and deep learning from scratch. It took about four months just to get a functioning codebase that could be trained on the film, and another two to three months to actually train the model.
I used a very modest GPU and a basic gaming PC. By today’s standards, it’s laughable - you couldn’t run even a basic text-to-image model on that setup now.
JG: And the only training data was Blade Runner?
TB: Yes, exactly. I extracted every frame from the film, which amounted to just under 200,000 images. At the time, most people trained GANs with 10,000 to 100,000 images, so it was in the ballpark (maybe a bit larger) but definitely still feasible.
JG: So those frames were just a sequence of still images?
TB: Right. I used software called FFmpeg, which lets you extract all the frames from a video file. A video is essentially a sequence of images; your computer plays them at, say, 60 frames per second, and your brain interprets that as motion.
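As an illustration, that extraction step boils down to a single FFmpeg command. A minimal sketch, with placeholder filenames, could look like this:

```python
# Illustrative only: dump every frame of a video to numbered PNGs with FFmpeg.
# The input filename and output folder are placeholders.
import os
import subprocess

os.makedirs("frames", exist_ok=True)
subprocess.run(
    [
        "ffmpeg",
        "-i", "film.mp4",            # input video
        "frames/frame_%06d.png",     # one numbered image file per frame
    ],
    check=True,
)
```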
What was interesting with Blade Runner was the variation in frames. Some sequences were very static - you might get a thousand nearly identical frames - while others had constant motion. The model could remember and reconstruct the static scenes very well, but struggled with scenes that changed a lot frame to frame. That contrast made the reconstructions feel compelling.
JG: I can’t imagine there were many people back in 2016 with both the technical knowledge and the interest in AI who chose to use it to make art. That doesn’t sound like a typical path. What made you decide to explore AI in a creative capacity?
TB: Well, a bit of backstory: I left school and went to art college. I did an art foundation at Newcastle College, then went to Camberwell College of Arts to study sculpture. I was teaching myself coding and electronics on the side, and I was really interested in digital and electronic installation art.
At the time, being into technology wasn’t cool. People had brick phones and typed essays on typewriters. It was very anti-tech, and Camberwell wasn’t the right fit for me. Then I found out about this degree at Goldsmiths called Creative Computing. It sounded like exactly what I was struggling to teach myself.
Shoutout to Goldsmiths, they more or less invented Creative Computing. I recently learned I was part of the first-ever Creative Computing degree. It was run by the computing department, so we were mostly learning computer science, but through project-based, creative practice. We studied graphics rendering, audio signal processing, synthesis, all kinds of things.
I originally thought it was an art degree, but it turned out to be very programming-heavy, which actually worked in my favour. By the time I started my master’s, I had a strong technical foundation. I’d already done a research project on computational photography and light fields, so I was interested in how computers could be used in image-making.
When generative AI started to emerge, it felt obvious that this was a new paradigm in visual creation. Even early on, when researchers at MIT or DeepMind were producing tiny, low-res thumbnails, I could see the creative potential. Because of my background in fine art and creative computing, it felt inevitable that I’d end up working with this technology in an artistic context.
LF: That’s fascinating, I didn’t know you came from a sculpture background. I also wanted to ask: why Blade Runner? You mentioned early on you always knew you wanted to work with it, once you were more confident with the coding and model training. Why that particular film?
TB: It seemed like the obvious choice. If you were going to remake any film using generative AI, it had to be Blade Runner. For those unfamiliar, it’s based on the book Do Androids Dream of Electric Sheep? by Philip K. Dick. The story is about androids, many of whom have artificial memories. The main character, Rick Deckard, is tasked with hunting them down, but he starts to question his own memories and whether he himself might be an android.
So the film explores the blurred lines between real and artificial memory. That resonated with me, because with neural networks, you feed them training data and they "remember" things from it. Reconstructing a film from memory, via a neural network, felt conceptually aligned with Blade Runner.
I also found it frustrating that the datasets people were using to train models were so boring—handwritten digits, faces, repetitive images. I thought, why not use something more compelling, like film? There’s a long history of artists treating film as a material, so Blade Runner just made sense. And honestly, I was also really anxious someone else would do it first.
LF: That’s when you know it’s a good idea – when you’re constantly worrying someone else might beat you to it.
Luba, from your perspective, given your deep knowledge of artists working with AI, was this one of the first times (that awful ‘first’ word, I’m sorry!) an artist had recreated a full feature film using these techniques? At that length and resolution?
LE: That’s a tricky one. There were other artists working with AI at the time, but with different types of models, and I’m not sure any had fully reconstructed an entire film. What stood out to me about Blade Runner – Autoencoded was that Terry chose this specific science fiction film and asked: what would a neural network remember from it?
I was fascinated by the shapes, colours, and patterns the model extracted, and how different the reconstructed version was from the original. The reception also struck me, particularly the Warner Brothers takedown. Of course, films have always been pirated, but it was amusing to see this blurry, low-fidelity reconstruction flagged as potential copyright infringement. It clearly looked different to human eyes, but apparently not to automated systems. That was quite revealing.
LF: And it didn’t even have audio!
LE: Exactly. I’m not an expert in piracy detection, but it’s telling that it was flagged in the first place.
LF: You curated an exhibition in 2021 that included Terry’s work, right? A group show? If I remember correctly you had works by Entangled Others, Anna Ridler, Mario Klingemann, Helena Sarin, Libby Heaney, a really strong line-up.
LE: Yes, that was probably the first time Terry and I worked together in an exhibition context.
Before that, I was also organising a workshop at NeurIPS, the big academic AI conference with over 10,000 attendees, lots of researchers from DeepMind, MIT, Oxford, etc. I ran a smaller event there with around 300 people, combining paper presentations with an art gallery. I think Terry may have featured there too.
But in terms of art world or NFT-related work, the exhibition on Feral File was the first NFT exhibition I ever curated, and the first real collaboration between Terry and me.
I titled the exhibition Reflections in the Water, partly because aside from climbing and hiking, I really enjoy swimming, which is possible even in London. Water is always changing depending on the season, quality, and type of body it's in. I saw parallels between how water systems behave and how AI functions. You try to generate something in a particular way, but it doesn’t always work out as planned. You have to keep iterating.
I think (un)stable equilibrium would’ve fit perfectly into that show if it had been ready at the time.
LF: Yes, indeed.
LE: What I really appreciated about Terry’s contribution to that exhibition was how well it aligned with the theme. I think he liked the water framing too. His work Fragments of Self feels like a reflection, his reflection, on a shifting surface, maybe water or a disappearing background. It felt like a perfect fit.
LF: I feel it’s a good moment to segue into (un)stable equilibrium, which is the series being presented on Verse via SOLOS. Luba, could you describe the body of work from a curatorial perspective, and why it matters? It’s already received strong media attention, including a great article in MIT Technology Review.
LE: Absolutely. (un)stable equilibrium is a project Terry made in 2019. I’m sure many listening have seen images or videos from it, these beautifully abstract, colour field-like works.
What’s particularly special is that, as we've established, Terry is not just an artist but also a researcher with a PhD in Creative Computing. He’s capable of building his own AI systems from scratch. In this project, he worked with two neural networks generating images, without any training data. That’s quite radical.
Typically, as with Blade Runner – Autoencoded, artists use datasets to train a model, which then produces images in the style of that data. But in (un)stable equilibrium there’s no dataset at all. The networks begin generating without reference, and a feedback loop emerges, one network tries to generate images similar to the other.
What fascinated me was how Terry created this in 2019, when most artists were still obsessed with datasets, either building them or finding them, and when the dominant goal was realism: making AI produce high-quality faces, objects, and lifelike visuals. Terry instead turned to abstraction. He built a system that didn’t imitate but originated.
LF: That’s such a sign of a great artist, isn’t it?
LE: While everyone was focused on datasets, Terry was doing something completely different. It made me laugh.
LF: What was your thinking at the time, Terry? Did you know what you were aiming for, or was it the result of experimentation?
TB: Coming off the Blade Runner project, it had been a whirlwind. It went viral, got exhibited globally, but it was also incredibly stressful. I remember flying to New York for the Whitney opening, absolutely panicking, convinced I’d get a cease and desist from Warner Bros. The curators just laughed and said no one was going to sue the Whitney, but I was genuinely worried. That fear hung over every show.
So (un)stable equilibrium was the first work I made after that, and also the first major breakthrough in my PhD. My thesis was about using generative AI without imitating or deriving value from someone else’s work. Even beyond the legal concerns, I felt uneasy about having built a reputation on top of an existing film. I wanted to find a way of working with AI that didn’t rely on training data, that didn’t involve copying anything.
I spent a year experimenting and trying different ideas, loss functions, diving deep into the maths. But nothing was working. Eventually, I thought: what if I just train something without any data at all? That would force it to be original.
I had no idea what it would look like. Usually, when training a model, you monitor the loss curves and periodically check outputs every 10,000 iterations or so. But I took a completely different approach. I watched the output from every single iteration. I wrote code, started the training, and just observed.
That’s how I originally learned to code, using visual tools like Processing. I think I’m a visual thinker, and this approach just made sense. I spent a long time, very stubbornly, trying to get it to work. With my background, both in industry and research, I had a decent sense of how to shape it, but it was still largely experimental.
Eventually, it started producing what you see now. The turning point was when I added competition between the two networks. At first, they were just copying each other, which led to everything converging into a single, dull image—grey blobs. But when I made them compete on colour diversity - pushing each other to generate more colours - it changed everything. Suddenly the system entered a more dynamic, abstract visual space.
And what surprised me was how tasteful the colour palettes became. That wasn’t programmed; it just emerged as a happy accident.
LF: Do you remember exactly what the instruction was? Was it literally, “produce more colour”? Because you can’t go beyond the full spectrum, what were they actually trying to do?
TB: I could explain it mathematically, but in simple terms: the two networks were in a feedback loop. At first, they were just trying to replicate each other’s outputs. But that made them converge too quickly, and the output became very uniform.
So I introduced a constraint: each had to produce more colour diversity within each batch of images. That incentive to expand the palette is what broke the system out of stasis and led to the visual complexity and vibrancy the series became known for.
When training, each network would generate a batch of images. Within that batch, one network had to produce more colours than the other. It’s all in my PhD, including the mathematical formulation for anyone interested in the technical implementation, but it was quite a crude method.
LF: So the images were otherwise visually identical, except one had to contain more colours?
TB: Exactly. Each batch had to have greater colour diversity. Both networks were copying each other, but they were also competing to be more colourful. That competition led to divergence, not just in colour but in shape and texture too.
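For the technically minded, a very rough sketch of that arrangement might look like the code below. The architectures, loss weights and the crude colour-diversity measure are all illustrative guesses standing in for the formulation in Terry’s thesis, not his actual code.

```python
# Two small generators in a feedback loop: each copies the other's output while
# also being rewarded for producing more colour variety within its batch.
# Everything here (model size, weights, diversity measure) is a guess.
import torch
import torch.nn as nn

def make_generator(latent_dim=64):
    # Tiny fully connected generator producing flattened 32x32 RGB images.
    return nn.Sequential(
        nn.Linear(latent_dim, 256), nn.ReLU(),
        nn.Linear(256, 3 * 32 * 32), nn.Sigmoid(),
    )

def colour_diversity(images):
    # Crude proxy for "more colours": per-channel variance across each image.
    return images.view(images.size(0), 3, -1).var(dim=-1).mean()

g_a, g_b = make_generator(), make_generator()
opt_a = torch.optim.Adam(g_a.parameters(), lr=1e-4)
opt_b = torch.optim.Adam(g_b.parameters(), lr=1e-4)

for step in range(10_000):
    z = torch.randn(16, 64)          # shared random latent input, no dataset
    out_a, out_b = g_a(z), g_b(z)

    # Copy the other network (feedback loop) but also try to be more colourful
    # than it, which keeps the pair from collapsing into uniform grey.
    loss_a = (out_a - out_b.detach()).pow(2).mean() - 0.1 * colour_diversity(out_a)
    loss_b = (out_b - out_a.detach()).pow(2).mean() - 0.1 * colour_diversity(out_b)

    opt_a.zero_grad(); loss_a.backward(); opt_a.step()
    opt_b.zero_grad(); loss_b.backward(); opt_b.step()
```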
The final piece is a video with two images side by side, representing the output of the two different networks. They're not identical, but close enough that the system considers them similar. Still, each network's response diverges in subtle ways.
JG: And the outputs you placed next to each other, did they just happen to work visually, or were they from the same time in the training process?
TB: In each video, you’re seeing the outputs of the two networks trained together. There were six experiments in total. Each animation is built from pairs generated at the same point in time using the same input. I didn’t curate them for aesthetic harmony; they’re simply the direct outputs of the system, shown in synchrony.
To generate the videos, I used a method called latent space interpolation. Each generator receives a number, called a latent vector, and produces an output. By gradually changing this number, you generate a continuous sequence of images. Both networks received the same input and produced slightly different results.
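In code, that interpolation step could be sketched roughly as follows, assuming g_a and g_b are the two trained generators from the sketch above; the frame count and image size are arbitrary.

```python
# Rough sketch of latent space interpolation: walk smoothly between two random
# latent vectors and render a frame at each step from both generators,
# saving the pair side by side.
import torch
from torchvision.utils import save_image

latent_dim, n_frames = 64, 120
z_start, z_end = torch.randn(latent_dim), torch.randn(latent_dim)

with torch.no_grad():
    for i in range(n_frames):
        t = i / (n_frames - 1)
        z = ((1 - t) * z_start + t * z_end).unsqueeze(0)  # blend the two points
        frame_a = g_a(z).view(3, 32, 32)  # same input to both networks...
        frame_b = g_b(z).view(3, 32, 32)  # ...slightly different outputs
        save_image(torch.cat([frame_a, frame_b], dim=-1), f"pair_{i:04d}.png")
```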
LF: It’s such an unusual experiment. Has anyone else done something similar?
TB: There’s Joel Simon’s project Dimensions of Dialogue, which also dates to 2019. I believe we developed our projects in parallel. Then there’s Alex Romeo Santos, a researcher in Paris, who used untrained networks to generate live audio for performances.
So a few others have done related things, but it’s rare. As for this idea of two networks interacting in a feedback loop, cooperating and competing, that’s something I haven’t seen elsewhere.
At the time, most artists were focused on curating personal datasets, building handcrafted systems for expressive outputs. What I was doing felt closer to traditional generative art, writing code, not fully knowing what it would do, but experimenting until something interesting emerged.
LF: And what’s strange is that you made this in 2019, yet only recently have articles started circulating about it.
TB: Exactly. It’s been a bit surreal. I wasn’t actively promoting the work when those articles came out. In fact, Luba and I were already talking before the press appeared. It’s odd that these pieces, made six years ago, are only now getting media attention.
LF: Especially in tech journalism, which is usually obsessed with what’s new.
TB: Yes, and I think it’s because of the current conversations around data. There’s a lot of controversy around how AI models are trained, especially around copyright, scraping, and data ownership. This project didn’t use any data, which suddenly feels very relevant.
But I also think it helps people realise that AI doesn’t have to work one way. There are other creative possibilities.
JG: Did you see these outputs as artworks at the time? NFTs weren’t common back then, so what did you imagine they would be?
TB: Yes, I saw them as artworks. I’d already sold Blade Runner – Autoencoded to a museum as a video edition, not as an NFT, but through a traditional acquisition with a legal contract.
With (un)stable equilibrium, someone did approach me about making NFTs. This was in 2019, before there were established platforms. I tried to figure it out, but I couldn’t. I had these huge, 40GB ProRes video files and no clear way to link them to the blockchain. So it never happened. I still kick myself a little, it was very early days.
LF: And how did you originally present the work?
TB: In 2019, I uploaded short, looping videos to YouTube, one-minute versions, as well as hour-long, meditative sequences. I imagined them as immersive installations. I remember seeing early experimental films by Oskar Fischinger at the Whitney and thinking this could work in a similar space - slow, abstract, ambient.
The hour-long version was shown for a couple of days in 2020, but the pandemic hit and the exhibition was shut down. After that, I started producing stills and selling them as aluminium prints, single images from one network, not the paired video outputs. With this Verse release, I’m going back to the original dual-network video presentation.
JG: And the short video versions now?
TB: They’re 12 seconds long; shorter, but they capture the essence. The quality is also better than on YouTube, which heavily compresses files. YouTube compression doesn’t suit this kind of synthetic imagery.
LF: And how do the loops work? It feels like there’s some co-creation between the system and you.
TB: In 2019, before prompt-based systems like text-to-image emerged, I was working with traditional GANs. The input is a latent vector – a multi-dimensional number. You can think of it as a point in a very high-dimensional space. Every point corresponds to a different visual output.
To create a loop, I move from one point to another, tracing a path through this space, and eventually return to the starting point. That creates a seamless loop. It’s called latent space interpolation – it was quite a popular technique in 2018. Modern diffusion models don’t really work like that anymore.
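A minimal sketch of such a seamless loop, assuming a generic generator network, is to trace a closed circle through latent space so the path ends exactly where it began:

```python
# Trace a closed circle through latent space: two fixed random directions span
# a plane, and moving around a circle in that plane gives a latent path whose
# last point joins back up with the first. The radius and frame count are
# arbitrary; `generator` stands in for any trained generator network.
import math
import torch

def loop_latents(latent_dim=64, n_frames=240, radius=2.0):
    u, v = torch.randn(latent_dim), torch.randn(latent_dim)
    for i in range(n_frames):
        angle = 2 * math.pi * i / n_frames
        yield radius * (math.cos(angle) * u + math.sin(angle) * v)

# frames = [generator(z.unsqueeze(0)) for z in loop_latents()]
```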
LF: Do you think you’ve pushed this system as far as it can go? Or is there more to explore?
TB: When I made the first six experiments, I saw them as the beginning of an ongoing series. Then I got busy with my PhD and life.
But recently, I started a second series using a different approach. It’s on my YouTube – a single network trained without data. Instead of generating outputs after training, it shows the training process itself as animation.
I’m definitely returning to this territory. I probably won’t keep using the exact same setup – two networks competing to produce more colours – but I want to explore new arrangements. For example, taking already-trained networks and combining them in unexpected ways. Can their weights be merged or cross-pollinated? Can they be remixed creatively?
There’s a lot of potential, especially now that I’ve finished my PhD. I can focus fully on creative practice without tying everything back to a research thesis.
JG: Do you still have the models from 2019?
TB: Yes, I do. And they still run fine. The code’s stable, so there’s lots of scope to do new things with them.
LF: When you started the project, did you envision selling them? Did you think they could be owned as digital artworks?
TB: I always thought of them as artworks. With Blade Runner, I’d already sold a video edition to a museum, so I was thinking in those terms. But NFTs weren’t part of that picture. I did try to explore it, but the infrastructure wasn’t there. Someone contacted me in 2019, asking if I could make it into an NFT, and I was open to it, but I couldn’t figure out how to do it. The file sizes were huge, and I didn’t know how to integrate that into a system designed for permanence and immutability. So it didn’t happen.
Now, of course, it's obvious how you'd adapt it. But back then, it wasn’t.
LF: Well Terry, Luba, thank you both so much for joining us today, it’s been an absolute pleasure to hear more about the work.
Terence Broad is an artist and researcher working in London. His research-led practice takes a hacking approach to generative AI systems, treating them as artistic materials. Through practice, he interrogates and makes visible the complex web of computational contingencies that underlie contemporary generative AI systems.
Terence Broad has a PhD in computational arts from...
Luba Elliott is a curator and researcher specialising in AI art. She works to educate and engage the broader public about the developments in AI art through talks and exhibitions at venues across the art, business and technology spectrum including The Serpentine Galleries, arebyte, ZKM, V&A Museum, Feral File, CVPR and NeurIPS. She is an Honorary Senior Research Fellow at the UCL Centre for...