The Past and Present of AI Art
Let’s take a look at a work of art.
[ Archive Dreaming ]
This is a work by Refik Anadol that uses machine learning to search through and sort 1.7 million files. The multidimensional relationships hidden in the data were turned into an immersive, interactive media installation. The piece was created as part of Artists and Machine Intelligence, a Google project that brings artists and engineers together to explore the possibilities of machine intelligence. This new, collaborative way of making art could pioneer brand-new use scenarios for AI systems.
When art meets big data, we are sure to witness the birth of new ways of artistic expression no one could ever dream of. Are technologies tools for artists, or are they part of the art? History might shed some light.
The Creative Power of AI
Despite all the hype about AI, it is still far from being able to think and learn on its own the way it does in sci-fi novels. Yet over the past several decades, AI has already woven itself into every aspect of our lives. For one reason or another, more and more artists have decided to incorporate AI into their work.
- Some like to try out the most futuristic technologies;
- Some experiment with AI to explore new ways to create art;
- Some believe that AI can greatly enhance their current workflow.
Can AI truly have creative power?
- Opinion One
Machines can’t truly possess creative power.
There is a stereotypical belief that creativity is unique to human beings. In The Origins of Creativity, Edward O. Wilson proposed that creativity is born from impulse. By this measure, machines fall short of humans, because machines never make mistakes: even when chance is introduced into an AI system, its output will not stray far from its input, provided the program runs as intended. Humans are entirely different. After all, to err is human.
- Opinion Two
Whether AI can be creative actually depends on what we mean by creativity.
In Minds, Brains, and Programs, John Searle introduced a thought experiment known as the Chinese Room. A native English speaker who knows no Chinese at all sits in a room with only two small windows. Inside, he has an instruction manual written in English, along with a pile of written Chinese characters. Someone outside the room keeps passing in questions written in Chinese, and the person inside, following the manual, arranges the Chinese characters into answers and passes them back out.
Searle argues that, in the ideal case, the person in the room could pass as a native Chinese speaker even though he understands no Chinese. In this thought experiment, the person outside the room plays the role of the programmer, the person inside plays the role of the computer, and the manual plays the role of the computer program.
Just as the person in the room never has to truly understand Chinese and only needs the manual to complete the task, computers will never truly understand human thought, yet they may appear to us as if they do.
Therefore, it is pointless to demand that machines be creative in a strict philosophical sense. As long as a machine can "fake" a convincing semblance of creativity, the difference hardly matters.
If by creativity we mean the ability to constantly improve, then with adequate access to quality data, AI will be able to self-improve.
A Brief History of AI Art
AI wasn’t born out of nothing in the 21st century. It grew from a number of roots.
3000 BC - The Talking Knots of Quipu
Quipu is a method of recording and transmitting information used by the Inca people in ancient times. A quipu consists of knotted cords of different colours and materials. Sometimes hundreds of knots were tied in different ways and at different heights along the cords to express differences in meaning. Quipu could record not only dates, numbers and accounts, but could also serve as a medium for abstract thought, preserving the folklore and poetry of local cultures.
Without resorting to an alphabetic writing system, quipu could record large amounts of information with decent accuracy and flexibility. With knots and cords of different colours making ever more combinations possible, the potential was nearly limitless.
1842 - Poetical Science
The Analytical Engine could execute mathematical instructions encoded on punched cards. In 1836, a portrait of Joseph Marie Jacquard, inventor of the programmable Jacquard loom, was woven using 24,000 punched cards. In a sense, this portrait is the world's first digital image.
In 1842, Ada Lovelace wrote what is regarded as the first computer program in history: an algorithm intended to run on the Analytical Engine, the calculating machine designed by her friend Charles Babbage. It was published in her annotated translation of Sketch of the Analytical Engine Invented by Charles Babbage (1842).
The Analytical Engine, she wrote, weaves algebraic patterns just as the Jacquard loom weaves flowers and leaves. Ada called this "poetical science". Imagine a computer that can do more than compute: could computers be used to make art? At a time when art and science sat in two entirely separate camps, her goal was to unite the accuracy of science with the flexibility of art.
The basis of this belief is that our world is built upon mathematical laws. Here is an example:
Rose curve formula: ρ = a·sin(nθ), where a is a fixed length and n is an integer. We can use this formula to draw a rose, as sketched below.
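As a quick illustration (a minimal sketch assuming Python with NumPy and matplotlib; the parameter values are chosen arbitrarily), the curve can be traced by sweeping θ and converting from polar to Cartesian coordinates:

```python
import numpy as np
import matplotlib.pyplot as plt

a, n = 1.0, 5                              # a: petal length; n: controls the number of petals
theta = np.linspace(0, 2 * np.pi, 2000)    # sweep the angle once around
rho = a * np.sin(n * theta)                # the rose curve formula

# Convert from polar (rho, theta) to Cartesian (x, y) and draw.
x, y = rho * np.cos(theta), rho * np.sin(theta)
plt.plot(x, y)
plt.axis("equal")
plt.title("Rose curve: rho = a * sin(n * theta)")
plt.show()
```

Changing n alters the number of petals, and changing a scales the whole flower.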
Mathematics holds the key to a lot of visually pleasing shapes and patterns. Since AI is based on mathematics, the beauty it can create is limitless.
1929 - A Machine That Could See
In 1929, Austrian engineer Gustav Tauschek invented the photoelectric reading machine.
This contraption has letter- and number-shaped holes cut into a rotating wheel. Light reflected off a printed letter or number passes through a lens and onto the wheel; the wheel receives the most light when the shape of the character matches the shape of a hole. When that happens, a photosensitive element sends a signal, and the reading machine registers the corresponding letter or number.
The photoelectric reading machine was one of humanity's first attempts to give machines the power to recognize things. The method it used is called "template matching", which was also the first practical recognition method.
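As a rough illustration (a toy sketch in Python, not Tauschek's actual mechanism), template matching can be reduced to scoring how much of an input glyph overlaps each stored template and picking the best match:

```python
import numpy as np

# Two tiny 3x3 "templates"; a 1 marks a hole that lets light through.
TEMPLATES = {
    "I": np.array([[0, 1, 0],
                   [0, 1, 0],
                   [0, 1, 0]]),
    "L": np.array([[1, 0, 0],
                   [1, 0, 0],
                   [1, 1, 1]]),
}

def recognize(glyph):
    # Score each template by how much of the glyph aligns with its holes.
    scores = {ch: int((glyph & tpl).sum()) for ch, tpl in TEMPLATES.items()}
    return max(scores, key=scores.get), scores

glyph = np.array([[1, 0, 0],
                  [1, 0, 0],
                  [1, 1, 1]])
print(recognize(glyph))  # -> ('L', {'I': 1, 'L': 5})
```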
What does it mean for humans to see through the eyes of a machine? And what does the machine actually see?
1950 - The Imitation Game
In 1943, Warren McCulloch and Walter Pitts first proposed the concept of an artificial neural network. Combining algorithms and mathematics, their model uses "threshold logic" to mimic the way neurons in the human brain decide whether to fire. The gate to AI was thus pushed open.
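A minimal sketch of such a threshold-logic unit (the McCulloch-Pitts neuron), in Python for illustration: the unit fires only when the weighted sum of its inputs reaches a threshold.

```python
def mcculloch_pitts(inputs, weights, threshold):
    # Fire (output 1) only if the weighted sum of inputs reaches the threshold.
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# With unit weights and a threshold of 2, the neuron behaves like a logical AND gate.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", mcculloch_pitts([x1, x2], [1, 1], threshold=2))
```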
Alan Turing devised the Turing test, also called the imitation game: a benchmark for judging whether a machine can exhibit intelligent behaviour indistinguishable from that of a human. Can machines think? Under Turing's influence, many artists began designing similar "games" during this period.
Turing Test
Self-destructive Art
Jean Tinguely used a piano, two electric motors, a balloon and over 20 bicycle wheels to build a chaotic structure. The contraption was capable of a series of unpredictable actions that would eventually lead to its own destruction.
1953 - Reactive Machines
Gordon Pask was one of the early pioneers of cybernetics. A psychologist and educator, he researched bio-computing, AI, cognitive science, logic, linguistics, psychology and artificial life. He incorporated information from a wide range of media into his cybernetic theories, expanding the scope of the field.
At the famous Cybernetic Serendipity exhibition, Gordon Pask displayed a work titled The Colloquy of Mobiles: a reactive machine installation whose hanging elements emit light in response to sounds made by human performers.
In 2020, a reconstruction of Gordon Pask's 1968 The Colloquy of Mobiles was displayed at the Centre Pompidou.
1968 - Cybernetic Serendipity
In the 1960s, under the influence of cybernetic art, many artists began creating "artificial life" artworks, drawing inspiration from animal behaviour or treating the system itself as the art.
The exhibition Cybernetic Serendipity was held at the Institute of Contemporary Arts in London in 1968. There, Jean Tinguely displayed two of his drawing machines: dynamic sculptures in which visitors could choose a pen of any colour and length, place it wherever they liked, and watch the machine produce a brand-new piece of abstract art.
Cybernetic Serendipity (1968)
1973 - An Autonomous Picture Machine
More artists began to try using machines to create art. Some of their ideas still influence present-day art.
Picture: Untitled (1972), an early example of computer art, made with one of the first code-based drawing programs, developed by Vera Molnar
In 1973, Harold Cohen developed a program named AARON that allowed a machine to make irregular, freehand-looking drawings, much as a human hand would. Although AARON was limited to the single drawing style Cohen had coded, the fact that it could create a limitless number of works in that style makes its output the first generation of computer art.
040502 by AARON (2004)
The Age of Deep Learning
By the late 20th century, thanks to the popularization of personal computers, the field began to develop rapidly, and more and more artists started using software in their practice. Into the 21st century, freely available learning materials for programming, combined with open-source projects on GitHub, helped the field grow further.
In addition, researchers have been building and publishing large datasets, such as ImageNet, that can be used to train algorithms to categorize and recognize images. AI vision programs such as DeepDream are readily available to artists and the general public, so anyone can experiment with how well computers understand different forms of visual expression.
With all these innovations, AI art to this day has gone through three stages:
- Chatbots
- Generative art
- Beyond generative art
Chatbots
1995 - A.L.I.C.E
Released in 1995, Richard Wallace's famous chatbot A.L.I.C.E was able to learn how to converse from a corpus of natural human language gathered from the Internet.
2001 - Agent Ruby
Lynn Hershman Leeson created Agent Ruby.
agentruby.sfmoma.org
2020 - Expanded Art
Since then, many more artists have created chatbots. Martine Rothblatt modelled the robot Bina48 on her wife's personality. Martine Syms developed an interactive chatbot named Mythicbeing as her digital avatar, designed to be a black, upwardly mobile, violent, solipsistic, anti-social and gender-neutral woman.
A virtual friend in your iPhone
Generative Art
Artists can collaborate with AI in various ways, using neural networks and machine learning to create artworks; examples include Neural Style Transfer, Pix2Pix, CycleGAN and DeepDream. So far, the generative adversarial network (GAN) is the technology most closely tied to AI art.
2014 - Generative Adversarial Network
"Generative adversarial network" is a term coined by Ian Goodfellow in a 2014 paper. He argued that GANs would be the next step for neural networks, since the technique can be used to create visual effects that were not possible before.
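To make the adversarial idea concrete, here is a minimal sketch (assuming PyTorch, on toy 2-D data; it is not the model used by any of the artists mentioned here). A generator learns to produce points that a discriminator can no longer distinguish from "real" ones:

```python
import torch
import torch.nn as nn

def real_batch(n):
    # "Real" data for the toy example: points from a 2-D Gaussian centred at (2, -1).
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, -1.0])

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))                 # noise -> fake point
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())   # point -> P(real)

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1) Train the discriminator to tell real points from generated ones.
    real = real_batch(64)
    fake = G(torch.randn(64, 8)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), torch.ones(64, 1))  # generator "wins" when D labels fakes as real
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(5, 8)))  # generated points should cluster near (2, -1)
```

The same two-player game, scaled up to deep convolutional networks and image datasets, is what produces the GAN artworks discussed below.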
Anna Ridler, Tulips from Mosaic Virus (2018)
GAN used to generate landscape paintings
2017 - GANism was born
Around 2017, artists began testing this technology.
GANism Is The New Art Trend
2018 - A Milestone of Auctioning
The most well-known example of GAN-generated art is a portrait by the collective Obvious, which sold for $432,500 at Christie's in 2018.
Obvious Art's min_G max_D E_x[log(D(x))] + E_z[log(1 − D(G(z)))], Portrait of Edmond de Belamy, Generative Adversarial Network print on canvas (2018).
Two types of algorithms were used to create this high-resolution artwork: a GAN and a super-resolution algorithm. More interestingly, the artists signed the work with the GAN loss function itself rather than a name.
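Written out in full, that signature is the min-max objective from Goodfellow's 2014 paper, in which the discriminator D learns to tell real images from generated ones while the generator G learns to fool it:

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]$$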
Beyond Generative Art
In recent years, more and more artists have come to see the purpose of AI art not merely as creating images, but also as putting an end to the innate prejudices people hold against AI. How this movement is carried forward will have a long-lasting influence on social justice, equality, tolerance and many other important issues.