To see whether people could easily differentiate computer-generated art from art produced by humans, we asked 20 people at the St. Gallen Symposium to look at a series of four portraits. Three were painted by humans; the fourth was digitally generated: a machine-learning system applied an artist's style to a photo, and robotic arms painted the result. Could our participants see the difference between the human and the computerized touch?
As they examined the four paintings, the uncertainty was unanimous: how could they tell for certain that one looked more “human” than the others? “There are so many types of art. We have seen a hundred different styles through the centuries, so the robot could have imitated pretty much anything,” author and publisher Gerhard Schwarz pointed out.
In the end, only four out of twenty participants managed to spot the “robot” painting, the pink portrait of a woman. That is worse than chance: guessing at random among four paintings would have been right one time in four, or five out of twenty.
The survey highlighted a challenge: What does robotic art look like? The answer is unsettling: anything. A machine can copy any style you want it to, provided it can be trained to learn that style.
How the piece itself is made poses another challenge, though. Robots can add filters to your photos. They can learn to apply an artist’s style to a photo, or blend it with a scribble. They can push the deceit a step further and, with a robotic arm, brush, and paint, create a painting visually similar to a human’s. A robot can even develop an approach of its own, using machine learning to derive a style from data on hundreds of human paintings.
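The style-learning step described above is often done by matching texture statistics between two images. A minimal NumPy sketch of that idea, using the Gram-matrix style loss popularized by neural style transfer (with made-up feature arrays standing in for a real network's activations), might look like this:

```python
import numpy as np

def gram_matrix(features):
    # features: (channels, height*width) activations from one network layer
    c, n = features.shape
    return features @ features.T / n  # channel-to-channel correlations

def style_loss(generated_feats, style_feats):
    # Mean squared difference between Gram matrices: small when the
    # generated image's texture statistics match the style image's.
    g_gen = gram_matrix(generated_feats)
    g_sty = gram_matrix(style_feats)
    return float(np.mean((g_gen - g_sty) ** 2))

# Toy stand-ins for feature maps (8 channels, 64 spatial positions).
rng = np.random.default_rng(0)
style = rng.standard_normal((8, 64))
other = rng.standard_normal((8, 64))

print(style_loss(style, style))  # identical statistics -> 0.0
print(style_loss(other, style))  # mismatched statistics -> positive
```

In a full system, an optimizer would repeatedly nudge the generated image's pixels to shrink this loss while also preserving the photo's content, which is how "hundreds of human paintings" can be distilled into a reusable style.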