verse
INTERVIEW

Mario Klingemann on 'Teratoma': An Exploration of AI and the Human Form


Mario Klingemann in conversation with Mimi Nguyen, foreword by Luba Elliott.

Klingemann's Teratoma series is a haunting exploration of the human form through the lens of early generative adversarial networks (GANs), AI models that the artist trained on a dataset of portraits and naked human bodies. These works are a form of ‘neurography’, a term coined by the artist to describe this form of image-making with neural networks instead of the camera. The black and white colour scheme lends the pieces a timeless quality, associating them with classical photography, whilst the GAN models used for their creation situate the works at a particular point in time in the development of AI technology. Early GAN models such as those used in this series are known for making mistakes in their depiction of facial features, frequently misplacing the ears and eyes. Klingemann uses these to his advantage, creating an abstraction of the human face and body amongst fluid textures reminiscent of flesh, an image that takes the viewer time to assemble and comprehend. The title Teratoma solidifies the link to the processes of the human body, drawing parallels between tumours made up of different tissues and the way GAN models process the dataset to generate images that mix up various body parts.

- Luba Elliott

Mimi Nguyen: AI - is it a tool or a future artist?

Mario Klingemann: AI is an instrument. A "parascope"—an instrument of inspection—that allows us to observe and discover things which are beyond our reach, like the microscope or the telescope did before. But at the same time an instrument of creation and expression, like the piano. Can such an instrument eventually emancipate itself from its creators and become an artist itself? I do believe so, which is why I created "Botto" two years ago in order to find out. But having the technical potential to be an artist and becoming one is dependent on so many complex factors that it is still too early to tell.

Mimi Nguyen: How did your interest in utilising machine learning first emerge?

Mario Klingemann: I am driven by curiosity, but at the same time I get bored very quickly. What bores me most is reaching the point in a process where I believe I have understood the parameters that define a system, which makes executing that system just a matter of repetitively following a recipe. That is when I look for ways to automate these processes so I don't have to waste my own time on them but can delegate those steps to a machine.

Coming from generative art I faced the problem that whilst I enjoyed the process of inventing, tweaking and exploring art-generating algorithms, the final step of having to curate the outputs these systems gave me started to feel tedious at some point, since with any system that is worth exploring the ratio between "good" and "bad" results should be somewhere in the range of 1% to 5%. And if you think that such a low ratio just means I am very bad at writing my algorithms, I would say that if one finds 50% of a model's or an algorithm's output interesting, that is rather a sign of an underdeveloped taste.

This ratio means that one has to look at 99% mediocre results all the time in order to find the few pearls in the mud. So I started to look into ways I could teach a machine to "see" like me and learn my aesthetic preferences. This was back in 2007, when "machine learning" already existed but "deep learning" as we know it now was still not around. Given my background as an autodidact I first had to learn the very basics of image analysis and the ways images could be turned into meaningful numbers that would allow a machine to make informed decisions about whether an input had some aesthetic value. My early results were of course far from what is possible now, but it did work, and the little successes along the way got me hooked on trying to keep improving the ways I could make my machines "see" more and deeper.
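The idea of turning images into meaningful numbers and learning a personal taste from examples can be sketched in a few lines. This is only an illustration, not Klingemann's actual system: the three hand-picked features and the nearest-centroid "taste model" are hypothetical stand-ins for whatever image analysis one might actually use.

```python
import numpy as np

def features(img):
    """Reduce an image to a tiny feature vector: brightness, contrast,
    and a rough edge-density measure from neighbouring-pixel differences."""
    gx = np.abs(np.diff(img, axis=1)).mean()
    gy = np.abs(np.diff(img, axis=0)).mean()
    return np.array([img.mean(), img.std(), gx + gy])

class TasteModel:
    """Nearest-centroid 'taste' model: learns the average features of
    liked and disliked examples, then scores a new image by which
    centroid it sits closer to."""
    def fit(self, liked, disliked):
        self.liked = np.mean([features(i) for i in liked], axis=0)
        self.disliked = np.mean([features(i) for i in disliked], axis=0)
        return self

    def score(self, img):
        f = features(img)
        # Positive score = closer to the liked centroid.
        return np.linalg.norm(f - self.disliked) - np.linalg.norm(f - self.liked)

# Stand-ins for curated examples: noisy high-contrast images vs. flat grey ones.
rng = np.random.default_rng(0)
liked = [rng.random((32, 32)) for _ in range(10)]
disliked = [0.5 + 0.01 * rng.random((32, 32)) for _ in range(10)]

model = TasteModel().fit(liked, disliked)
score = model.score(rng.random((32, 32)))  # a fresh noisy image scores positive
```

With better features (today, embeddings from a deep network) the same fit-and-score loop scales from this toy to a genuinely useful pre-filter for generative output.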

And of course once deep learning came onto the stage I was hungry to discover all the new possibilities neural networks would give me - at first for classification only but soon afterwards for generation, too.

Mario Klingemann with 'A.I.C.C.A.', 2023. Courtesy of ONKAOS.

Mimi Nguyen: How do you manage to maintain a harmonious balance between human artistic intent and the generative capabilities of AI in your projects?

Mario Klingemann: Achieving that balance is quite the tightrope act. I like to think of it as conducting a sophisticated orchestra, where the AI and its generative abilities are the virtuoso players. You don't dictate each and every note, instead, you give them the score, the mood, and the tempo. They interpret it, add their nuances and virtuosity, and you shape the collective sound. You listen attentively, guide gently, and every now and then, when needed, assert your direction more sharply. It’s a dance, really. One where the conductor needs to respect the orchestra just as much as the orchestra respects the conductor.

The role of the human artist in this context shifts from the creator to the curator, the director of semi-autonomous processes. But these processes are to some extent shaped by my ideas, preferences, and biases, which makes every AI-based work just as much about me as about my systems.

Mimi Nguyen: What is "neurography" and how does it encapsulate your unique approach to working with neural networks and machine learning algorithms?

Mario Klingemann: I like to see myself as an explorer of new worlds and ideas—some people have called me a pioneer, which of course is very flattering—I guess in the end I am just not a big fan of crowds and like to go to places where I hope to find some solitude for a while. One perk (or maybe I should rather call it "urge") of the explorer type is that they get to give a new name to the things they encounter on their journeys and believe to be unknown. Someone at some point invented the term "Photography", which as we know now is the name that stuck, but back in those days there were quite a few contenders (I believe "Lumography" was one of them), and of course Monsieur Daguerre managed to make us remember his name forever by naming his technique the "Daguerreotype".

So "Neurography" was my attempt to plant my flag in this new territory, by giving this new way of creating artificial images a name. It was in January 2017 when I realised that the concept of using the latent spaces of neural networks for generating images was a new paradigm which had the potential to become an entirely new medium of expression. My original definition was "Neurography - the process of framing and capturing images in latent spaces. The Neurographer controls locations, subjects and parameters". One of the core concepts of neural networks is that of multidimensionality, but in the end this is not really a new thing - also classic photography can be framed as a multidimensional process that could be broken down in a list of parameters or variables: the time and location of where a photo is taken, the type of camera, lens, aperture, exposure time etc.  - in the end could probably encode all these values into a 100 or 200-dimensional feature vector which - if we were able to rewind time - would allow us to take exactly the same photo again. But obviously that is impossible since traditional photographers (at least those who use a camera) are eventually bound by the natural laws. The Neurographer on the other hand is unbound by them, or at least not bound by the same ones. Also and I think that is the most fascinating part about it - Neurographers can create their entirely new universes, each with their own rules and properties and each single model and its latent space comes with its own behaviors and character.

Mimi Nguyen: Can you tell us more about the Teratoma series, which will be exhibited at the Paris Photo Art Fair this year?

Mario Klingemann: Teratomas are a form of tumour that produces different kinds of human tissue in places where they do not belong: hair, skin, teeth and so on. As with any cancer, it is our own mutated cells that do this, which is why our immune system does not perceive them as intruders. The models I employed for this series are trained on details of human body parts, and since that is all they have learned and know about the world, they inadvertently transform any kind of input into what is in their "DNA": clusters of pixels that appear to us like deformed, uncanny close-ups of human tissue. At moments they seem to make sense, but then again they don't, and eventually they land us in a perceptual limbo.

Mimi Nguyen: Your use of neural networks to create cameraless photography is fascinating. Can you walk us through the technical aspects of how you train your networks and what sources you feed into them to generate your images? What are the challenges and ethical considerations that you encounter while employing AI in your creative process?

Mario Klingemann: Whilst I understand the general curiosity to learn more about how the sausage is made—in particular if it is a technological one—I don't think it is really something that I want to dive into too deeply. Just as it would not help me paint like Frida Kahlo to know which type of brush she used, or to make photos like Robert Mapplethorpe to buy the same camera and lenses he used, it will not be of much use to anyone if I outline the exact models and frameworks I use to make my art. In the end, working with neural networks is a process of transformation, just like any other art form. As in other art forms, it is practice and experience that allow you to see the possibilities and limitations of your instruments, and of course it is a learning process that never ends.

There is this idea of "first word art / last word art" which applies to any medium. First word art happens in the early phase when a new medium is discovered and being explored. It often focuses on the technical novelties of that medium and rarely tries to make the medium become invisible again and only use its new possibilities to tell a story. It is also often about overcoming the technical challenges that a new medium might impose, since it has not yet been turned into a convenience tool that everyone can operate without having to understand its inner workings. Last word art, on the other hand, is made by people who have mastered the medium and no longer need to fixate on its techniques—or gimmicks—but can use it to make artworks that pay more attention to delivering a message, conveying an emotion or telling a tale.

"Teratoma'' clearly belongs into the first-word art category since for me it was the result of trying to overcome several technical hurdles that the early generative networks put in my way: first of all was the problem that the model architectures and GPU in 2017 (at least the ones available to me) were by default outputting bitmaps of 256x256 pixels which as you can guess was not really sufficient to output anything that could be printed out at large size and hung on a wall. Actually, some artists did that nevertheless and surprisingly to me they got away with it since many people thought that blotchy pixellated look is what AI art has to look like. For me that kind of quality was unacceptable so I spent months and months on research trying to improve model architectures to get bigger outputs with detailed textures. 

A second problem was that all the publicly available pre-trained models out there at that time were trained on "test data" consisting of categories that from an artistic perspective did not really seem very interesting: things like digital watches, fishes, cars or a never-ending list of different dog breeds. What was almost entirely missing from these models were humans and high-quality photography. Humans were in there almost only by accident: a guy holding a fish, a hand next to a digital watch or an eye behind sunglasses. So in order to get humans into these models I had to create my own datasets and train my own models. Getting lots of images was of course not a problem; all it took was writing some web scrapers. Curating the images was a bit more involved and initially a manual process, since in order to create categories of images to train my models on I first had to find images that fit those categories. But the great thing about AI is that after a while it can learn what it is that I am looking for, if I give it enough representative examples, and can then find me more of the same on its own. It also helped me identify and discover other categories outside of what I was initially interested in by showing me thematic clusters based on the features of those images.
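The curation loop described above (seed a category with hand-picked examples, then let the machine retrieve "more of the same" from a scraped pool) can be sketched like this. The two-number feature function and the synthetic image pool are illustrative assumptions, not the actual features or data behind the series.

```python
import numpy as np

def feats(img):
    """Crude stand-in for learned image features: brightness and contrast."""
    return np.array([img.mean(), img.std()])

rng = np.random.default_rng(1)

# A 'scraped' pool: 50 dark images followed by 50 bright ones.
pool = [0.3 * rng.random((8, 8)) for _ in range(50)] + \
       [0.7 + 0.3 * rng.random((8, 8)) for _ in range(50)]

# A handful of hand-curated seed examples of the wanted category (bright).
seeds = [0.7 + 0.3 * rng.random((8, 8)) for _ in range(5)]
centroid = np.mean([feats(s) for s in seeds], axis=0)

# Rank the whole pool by feature distance to the seed centroid;
# the closest images are 'more of the same' and join the training set.
ranked = sorted(range(len(pool)),
                key=lambda i: np.linalg.norm(feats(pool[i]) - centroid))
matches = ranked[:50]
```

Clustering the same feature vectors (e.g. with k-means) instead of ranking them against one centroid is what surfaces the unexpected thematic groups mentioned at the end of the answer.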

Once I had enough training material, the next challenge was the training itself. It turned out that training a model is not as straightforward as it sounds. It often felt like trying to lay a too-large carpet in a room that is too small: there was always some bubble popping up somewhere, and if you flattened it, it popped up in a different corner. In practice this meant, for example, that I had managed to get my face model to generate very convincing eyes, but then it created nasty artefacts in the smoother skin areas. Trying to fix that by training more or changing the architecture resulted in nice skin details, but suddenly the eyes got smudged again. There seemed to be a limit to how much such a model could learn, which is, for example, the reason why the "Teratoma" series is black-and-white: by not having to learn colour, the model had more capacity to focus on textures.
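The black-and-white decision has a simple quantitative side: dropping colour cuts the pixel data the model must reproduce by a factor of three. A minimal sketch of a standard luminance conversion (assuming ITU-R BT.601 weights, not necessarily what was used for this series):

```python
import numpy as np

def to_grayscale(rgb):
    """Luminance conversion with ITU-R BT.601 weights (weights sum to 1)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

# A toy batch of 4 RGB images, 64x64, channels last.
rgb_batch = np.random.default_rng(3).random((4, 64, 64, 3))
gray_batch = to_grayscale(rgb_batch)

# One channel instead of three: a third of the data for the model to
# reproduce, freeing capacity for texture detail.
assert gray_batch.shape == (4, 64, 64)
assert gray_batch.size * 3 == rgb_batch.size
```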

When it comes to ethics and AI I am quite uncompromising: models can be trained on any data that is publicly available to humans. If you do not want your data to be seen or read, then do not put it into public view. When it comes to what to do with the output of these models, I believe that is the responsibility of the person or artist discovering them: everyone intending to call an AI-generated work "theirs" should exercise due diligence and first research how close it is to existing work out there, then decide whether it encroaches on another artist's territory or, even worse, accidentally plagiarises it.

Mimi Nguyen: As technology continues to advance, how do you envision the convergence of AI, photography, and art? What do you think of off-the-shelf AI tools, and how will they affect what we consider art?

Mario Klingemann: My hope is that the shock and awe of the AI novelty effect will wane soon. After almost 10 years the story that something was "made with AI" should not really be the focus of our interest anymore but we should rather go back to the question of what the artist was trying to say with their work.

As for off-the-shelf AI tools, their proliferation was inevitable, just as it happened with the camera or the personal computer. When cameras were invented, not everyone could afford or operate them. Over time, they became accessible to the masses, and today we all have a camera in our pocket. This has not devalued the craft of photography; on the contrary, it has made us more aware of the value of truly skilled photographers. I believe the same is already happening to AI art. Having access to the tools does not automatically make one an artist. Artistry lies not just in the craft, but in the unique ideas, the curiosity, the experimentation, the critique, and the perceived beauty or emotion. And we should never forget humour - something we can be fairly sure machines will not understand for quite a while, since it is one of the most sophisticated skills our wonderful human brain possesses.

One problem I see is the increasing fatigue we experience with the avalanche of creative work being unleashed upon us through all our channels. And I blame the off-the-shelf tools somewhat for worsening this effect, since they are heavily biased towards certain "pretty" aesthetics and make it rather difficult to escape them in the streamlined and guard-railed creation process, which again gives the world the impression that AI art always has to look this way. But then again, maybe this is exactly the chance for those who dare to create outside the common denominators to find their niche, at least for a while, before it becomes popular.

Mimi Nguyen

Mimi is a Creative Director at verse. She is an assistant professor at Central Saint Martins, University of the Arts London, where she leads the CSM NFT Lab. Her background is in New Media Art, having previously studied at the Berlin University of the Arts (UdK) and the Academy of Fine Arts in Warsaw. She now also teaches at Imperial College London, Faculty of Engineering, where she leads Mana Lab - a “Future...

Mario Klingemann

Mario Klingemann (born 1970) is a German artist who uses algorithms and artificial intelligence to create and investigate systems. He is particularly interested in human perception of art and creativity, researching methods in which machines can augment or emulate these processes. Thus his artistic research spans a wide range of areas like neurography, generative art, cybernetic aesthetics...
