WikiArt (formerly known as WikiPaintings) is a visual art wiki, active since 2010.
The developers are based in Ukraine.[1] Since 2010, the editor-in-chief of WikiArt has been the Ukrainian art critic Kseniia Bilash.[2]
In April 2022, access to WikiArt was restricted in Russia at the request of the Prosecutor General's Office, according to Roskomsvoboda.[3]
WikiArt is frequently used by researchers studying artificial intelligence, who train models on its data to test their ability to recognize, classify, and generate art.
In 2015, computer scientists Babak Saleh and Ahmed Elgammal of Rutgers University used images from WikiArt to train an algorithm to look at paintings and detect each work's genre, style, and artist.[4] Later, researchers from Rutgers University, the College of Charleston, and Facebook's AI Lab collaborated on a generative adversarial network (GAN), training it on WikiArt data to distinguish works of art from photographs or diagrams and to identify different styles of art.[5] They then designed a creative adversarial network (CAN), also trained on the WikiArt dataset, to generate new works that do not fit known artistic styles.[6]
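The CAN's distinguishing idea is to reward the generator for images the discriminator accepts as art but cannot assign to any known style. A minimal numpy sketch of such an objective is shown below; the function name, the two-term loss decomposition, and the logit inputs are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def can_generator_loss(art_logit, style_logits):
    """Illustrative CAN-like generator objective.

    Combines two pressures on the generator:
    - the discriminator should rate the image as 'art'
      (high sigmoid(art_logit)), as in a standard GAN;
    - the discriminator's posterior over known style classes
      should be maximally uncertain, measured as cross-entropy
      against the uniform distribution.
    """
    p_art = 1.0 / (1.0 + np.exp(-art_logit))      # discriminator's art probability
    art_term = -np.log(p_art + 1e-12)             # standard GAN generator loss
    p_style = softmax(np.asarray(style_logits, dtype=float))
    # cross-entropy H(uniform, p_style): minimized when the style
    # posterior is flat, i.e. the image fits no known style
    ambiguity_term = -np.mean(np.log(p_style + 1e-12))
    return art_term + ambiguity_term
```

Under this toy loss, an image the discriminator confidently labels with one style (e.g. `style_logits = [8, 0, 0, 0]`) is penalized more than one producing a flat style posterior (`[0, 0, 0, 0]`), which is the sense in which the CAN is pushed away from known styles.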
In 2016, Chee Seng Chan (associate professor at the University of Malaya) and his co-researchers trained a convolutional neural network (CNN) on WikiArt datasets and presented their paper "Ceci n'est pas une pipe: A Deep Convolutional Network for Fine-art Paintings Classification".[7] They released ArtGAN to explore the possibilities of AI in relation to art. In 2017, a new study with an improved ArtGAN was published: "Improved ArtGAN for Conditional Synthesis of Natural Image and Artwork".[8]
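The general pipeline behind such CNN-based painting classification (convolutional feature extraction followed by a classification layer producing per-style probabilities) can be illustrated with a toy numpy forward pass. The miniature network below is a hypothetical, untrained stand-in, not the architecture from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernel):
    """Valid 2-D cross-correlation (the 'convolution' of deep-learning
    libraries) of a single-channel image with a small kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def classify(img, kernels, w, b):
    """Conv layer -> ReLU -> global average pooling -> linear -> softmax."""
    feats = np.array([relu(conv2d(img, k)).mean() for k in kernels])
    return softmax(w @ feats + b)

# Toy grayscale 'painting' and randomly initialized (untrained) parameters.
img = rng.standard_normal((16, 16))
kernels = rng.standard_normal((4, 3, 3))  # 4 learned filters in a real CNN
w = rng.standard_normal((3, 4))           # 3 hypothetical style classes
b = np.zeros(3)

probs = classify(img, kernels, w, b)      # one probability per style class
```

A trained classifier of this shape would output the highest probability for the style it predicts; here the parameters are random, so `probs` is only a valid probability vector, not a meaningful prediction.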
In 2018, the portrait Edmond de Belamy, produced by a GAN, sold for US$432,500 at a Christie's auction. The algorithm had been trained on a set of 15,000 portraits from WikiArt spanning the 14th to the 19th century.[9]
In 2019, Eva Cetinic, a researcher at the Rudjer Boskovic Institute in Croatia, and her colleagues used images from WikiArt to train machine-learning algorithms that explore the relationship between the aesthetics, sentiment, and memorability of fine art.[10]
In 2020, Panos Achlioptas, a researcher at Stanford University, and his co-researchers collected 439,121 affective annotations, comprising emotional reactions and written explanations of them, for 81,000 artworks from WikiArt. Their study involved 6,377 human annotators and resulted in the first neural-based speaker model to show non-trivial Turing-test performance on emotion-explanation tasks.[11]