CIFAR-10 explained

The CIFAR-10 dataset (Canadian Institute For Advanced Research) is a collection of images commonly used to train machine learning and computer vision algorithms. It is one of the most widely used datasets for machine learning research.[1] [2] The dataset contains 60,000 32x32 color images in 10 classes,[3] with 6,000 images per class.[4] The classes are airplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks.
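
These properties are easy to check programmatically. Below is a minimal sketch using the Keras dataset loader, one of several common libraries that ship CIFAR-10 (torchvision provides an equivalent loader):

    # Minimal sketch: load CIFAR-10 with the Keras loader and verify
    # the image and class counts described above.
    import numpy as np
    from tensorflow.keras.datasets import cifar10

    (x_train, y_train), (x_test, y_test) = cifar10.load_data()

    # 50,000 training + 10,000 test images = 60,000 total,
    # each a 32x32 RGB image (height, width, channels).
    print(x_train.shape)  # (50000, 32, 32, 3)
    print(x_test.shape)   # (10000, 32, 32, 3)

    # Labels are integers 0-9; each class has 6,000 images in total
    # (5,000 in the training split and 1,000 in the test split).
    print(np.bincount(y_train.ravel()))  # [5000 5000 ... 5000]
    print(np.bincount(y_test.ravel()))   # [1000 1000 ... 1000]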

Computer algorithms for recognizing objects in photos often learn by example, and CIFAR-10 is a set of labeled images that can be used to teach a computer to recognize objects. Because the images are low-resolution (32x32), the dataset allows researchers to quickly try different algorithms and see what works.

CIFAR-10 is a labeled subset of the 80 Million Tiny Images dataset (released in 2008); CIFAR-10 itself was published in 2009. When the dataset was created, students were paid to label all of the images.[5]

Various kinds of convolutional neural networks (CNNs) tend to perform best at classifying the images in CIFAR-10.
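
The models listed in the table below are large and carefully tuned, but a toy convolutional network illustrates the basic shape of a CNN classifier for 32x32 inputs. The following is only an illustrative sketch in Keras, not any particular published architecture:

    # An illustrative small CNN for CIFAR-10 (a sketch, not a
    # state-of-the-art model): two conv blocks, then a classifier head.
    from tensorflow.keras import layers, models

    model = models.Sequential([
        layers.Input(shape=(32, 32, 3)),           # 32x32 RGB input
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),                     # 32x32 -> 16x16
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),                     # 16x16 -> 8x8
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(10, activation="softmax"),    # one output per class
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])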

Research papers claiming state-of-the-art results on CIFAR-10

The table below lists some of the research papers that claim to have achieved state-of-the-art results on the CIFAR-10 dataset. Papers are not standardized on the same pre-processing techniques, such as image flipping or image shifting. For that reason, a paper's state-of-the-art claim can report a higher error rate than an older state-of-the-art claim and still be valid.
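
As an illustration of such pre-processing, the sketch below implements the two augmentations most often reported for CIFAR-10: random horizontal flipping and random shifting (a 4-pixel pad followed by a random 32x32 crop). The exact padding and crop sizes here are typical choices, not a standard fixed by the dataset.

    # Sketch of common CIFAR-10 training-time augmentations.
    import tensorflow as tf

    def augment(image):
        # Pad to 40x40, then crop a random 32x32 window: an image
        # shift of up to 4 pixels in each direction.
        image = tf.image.resize_with_crop_or_pad(image, 40, 40)
        image = tf.image.random_crop(image, size=(32, 32, 3))
        # Image flipping: mirror left-right with probability 0.5.
        image = tf.image.random_flip_left_right(image)
        return image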

Paper title | Error rate (%) | Publication date
Convolutional Deep Belief Networks on CIFAR-10[6] | 21.1 | August 2010
Maxout Networks[7] | 9.38 | February 13, 2013
Wide Residual Networks[8] | 4.0 | May 23, 2016
Neural Architecture Search with Reinforcement Learning[9] | 3.65 | November 4, 2016
Fractional Max-Pooling[10] | 3.47 | December 18, 2014
Densely Connected Convolutional Networks[11] | 3.46 | August 24, 2016
Shake-Shake regularization[12] | 2.86 | May 21, 2017
Coupled Ensembles of Neural Networks[13] | 2.68 | September 18, 2017
ShakeDrop regularization[14] | 2.67 | February 7, 2018
Improved Regularization of Convolutional Neural Networks with Cutout[15] | 2.56 | August 15, 2017
Regularized Evolution for Image Classifier Architecture Search[16] | 2.13 | February 6, 2018
Rethinking Recurrent Neural Networks and other Improvements for Image Classification[17] | 1.64 | July 31, 2020
AutoAugment: Learning Augmentation Policies from Data[18] | 1.48 | May 24, 2018
A Survey on Neural Architecture Search[19] | 1.33 | May 4, 2019
GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism[20] | 1.00 | November 16, 2018
Reduction of Class Activation Uncertainty with Background Information[21] | 0.95 | May 5, 2023
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale[22] | 0.5 | 2021

Benchmarks

CIFAR-10 is also used as a performance benchmark by teams competing to train and run neural networks faster and more cheaply. DAWNBench, for example, publishes CIFAR-10 benchmark results for training time, training cost, and inference latency on its website.

Notes and References

  1. "AI Progress Measurement". Electronic Frontier Foundation. 2017-06-12. Retrieved 2017-12-11.
  2. "Popular Datasets Over Time | Kaggle". www.kaggle.com. Retrieved 2017-12-11.
  3. Hope, Tom; Resheff, Yehezkel S.; Lieder, Itay (2017-08-09). Learning TensorFlow: A Guide to Building Deep Learning Systems. O'Reilly Media, Inc. pp. 64–. ISBN 9781491978481. Retrieved 22 January 2018.
  4. Angelov, Plamen; Gegov, Alexander; Jayne, Chrisina; Shen, Qiang (2016-09-06). Advances in Computational Intelligence Systems: Contributions Presented at the 16th UK Workshop on Computational Intelligence, September 7–9, 2016, Lancaster, UK. Springer International Publishing. pp. 441–. ISBN 9783319465623. Retrieved 22 January 2018.
  5. Krizhevsky, Alex (2009). "Learning Multiple Layers of Features from Tiny Images".
  6. "Convolutional Deep Belief Networks on CIFAR-10".
  7. Goodfellow, Ian J.; Warde-Farley, David; Mirza, Mehdi; Courville, Aaron; Bengio, Yoshua (2013-02-13). "Maxout Networks". arXiv:1302.4389 [stat.ML].
  8. Zagoruyko, Sergey; Komodakis, Nikos (2016-05-23). "Wide Residual Networks". arXiv:1605.07146 [cs.CV].
  9. Zoph, Barret; Le, Quoc V. (2016-11-04). "Neural Architecture Search with Reinforcement Learning". arXiv:1611.01578 [cs.LG].
  10. Graham, Benjamin (2014-12-18). "Fractional Max-Pooling". arXiv:1412.6071 [cs.CV].
  11. Huang, Gao; Liu, Zhuang; Weinberger, Kilian Q.; van der Maaten, Laurens (2016-08-24). "Densely Connected Convolutional Networks". arXiv:1608.06993 [cs.CV].
  12. Gastaldi, Xavier (2017-05-21). "Shake-Shake regularization". arXiv:1705.07485 [cs.LG].
  13. Dutt, Anuvabh (2017-09-18). "Coupled Ensembles of Neural Networks". arXiv:1709.06053 [cs.CV].
  14. Yamada, Yoshihiro; Iwamura, Masakazu; Kise, Koichi (2018-02-07). "ShakeDrop Regularization for Deep Residual Learning". IEEE Access. 7: 186126–186136. doi:10.1109/ACCESS.2019.2960566. arXiv:1802.02375. S2CID 54445621.
  15. DeVries, Terrance; Taylor, Graham W. (2017-08-15). "Improved Regularization of Convolutional Neural Networks with Cutout". arXiv:1708.04552 [cs.CV].
  16. Real, Esteban; Aggarwal, Alok; Huang, Yanping; Le, Quoc V. (2018-02-05). "Regularized Evolution for Image Classifier Architecture Search". arXiv:1802.01548 [cs.NE].
  17. Nguyen, Huu P.; Ribeiro, Bernardete (2020-07-31). "Rethinking Recurrent Neural Networks and other Improvements for Image Classification". arXiv:2007.15161 [cs.CV].
  18. Cubuk, Ekin D.; Zoph, Barret; Mane, Dandelion; Vasudevan, Vijay; Le, Quoc V. (2018-05-24). "AutoAugment: Learning Augmentation Policies from Data". arXiv:1805.09501 [cs.CV].
  19. Wistuba, Martin; Rawat, Ambrish; Pedapati, Tejaswini (2019-05-04). "A Survey on Neural Architecture Search". arXiv:1905.01392 [cs.LG].
  20. Huang, Yanping; Cheng, Youlong; Chen, Dehao; Lee, HyoukJoong; Ngiam, Jiquan; Le, Quoc V.; Chen, Zhifeng (2018-11-16). "GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism". arXiv:1811.06965 [cs.CV].
  21. Kabir, Hussain (2023-05-05). "Reduction of Class Activation Uncertainty with Background Information". arXiv:2305.03238 [cs.CV].
  22. Dosovitskiy, Alexey; Beyer, Lucas; Kolesnikov, Alexander; Weissenborn, Dirk; Zhai, Xiaohua; Unterthiner, Thomas; Dehghani, Mostafa; Minderer, Matthias; Heigold, Georg; Gelly, Sylvain; Uszkoreit, Jakob; Houlsby, Neil (2021). "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale". arXiv:2010.11929.