Self-supervised learning explained

Self-supervised learning (SSL) is a paradigm in machine learning where a model is trained on a task using the data itself to generate supervisory signals, rather than relying on external labels provided by humans. In the context of neural networks, self-supervised learning aims to leverage inherent structures or relationships within the input data to create meaningful training signals. SSL tasks are designed so that solving them requires capturing essential features or relationships in the data. The input data is typically augmented or transformed to create pairs of related samples: one sample serves as the input, and the other is used to formulate the supervisory signal. The augmentation can involve adding noise, cropping, rotation, or other transformations. Self-supervised learning more closely imitates the way humans learn to classify objects.[1]
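As a concrete illustration of how such related sample pairs are built, the sketch below (a minimal, hypothetical numpy example; the function name, crop size, and noise level are invented for illustration) creates two related "views" of one image by random cropping plus noise:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_views(image, crop=24):
    """Create two related 'views' of one image via random crop + noise.

    The pair itself supplies the supervisory signal: a model can be
    trained to map both views to similar representations.
    """
    views = []
    for _ in range(2):
        h, w = image.shape
        y = rng.integers(0, h - crop + 1)   # random crop position
        x = rng.integers(0, w - crop + 1)
        patch = image[y:y + crop, x:x + crop]
        views.append(patch + rng.normal(0.0, 0.05, patch.shape))  # add noise
    return views

image = rng.random((32, 32))
v1, v2 = make_views(image)   # two different transformations of one sample
```

Any pretext task can then treat `v1` as the input and derive its target from `v2`, with no human label involved.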

The typical SSL method is based on an artificial neural network or another model such as a decision list.[2] The model learns in two steps. First, an auxiliary or pretext classification task is solved using pseudo-labels, which helps to initialize the model parameters.[3][4] Second, the actual task is performed with supervised or unsupervised learning.[5][6][7] Other auxiliary tasks involve pattern completion from masked input patterns, such as silent pauses in speech or image regions masked in black.
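The two-step scheme can be sketched as follows. This is a toy numpy illustration, not a published recipe: the rotation-prediction pretext task, the synthetic images with a bright corner patch, and the learning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Step 1: pretext task with pseudo-labels (here: predict rotation) ---
# The pseudo-label comes from the data itself: each image is rotated by
# k * 90 degrees and k is the label, so no human annotation is needed.
# (Toy images carry a bright top-left patch, so rotation is detectable.)
images = rng.normal(0.0, 0.1, size=(200, 8, 8))
images[:, :2, :2] += 1.0

X, y = [], []
for img in images:
    k = int(rng.integers(0, 4))          # pseudo-label, generated for free
    X.append(np.rot90(img, k).ravel())
    y.append(k)
X, y = np.stack(X), np.array(y)

# Train a linear softmax classifier on the pretext task; its weights can
# then initialize the model for the actual downstream task.
W = np.zeros((64, 4))
for _ in range(300):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W -= 0.5 * X.T @ (p - np.eye(4)[y]) / len(X)  # cross-entropy step

# --- Step 2: the actual task reuses the pretext-trained representation ---
pretext_features = X @ W
```

In step two, `pretext_features` (or the pretrained weights themselves) would feed a supervised or unsupervised learner for the real task.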

Self-supervised learning has produced promising results in recent years and has found practical application in audio processing; it is used by Facebook and others for speech recognition.[8]

Types

Autoassociative self-supervised learning

Autoassociative self-supervised learning is a specific category of self-supervised learning where a neural network is trained to reproduce or reconstruct its own input data.[9] In other words, the model is tasked with learning a representation of the data that captures its essential features or structure, allowing it to regenerate the original input.

The term "autoassociative" comes from the fact that the model is essentially associating the input data with itself. This is often achieved using autoencoders, which are a type of neural network architecture used for representation learning. Autoencoders consist of an encoder network that maps the input data to a lower-dimensional representation (latent space), and a decoder network that reconstructs the input data from this representation.

The training process involves presenting the model with input data and requiring it to reconstruct the same data as closely as possible. The loss function used during training typically penalizes the difference between the original input and the reconstructed output. By minimizing this reconstruction error, the autoencoder learns a meaningful representation of the data in its latent space.
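A minimal linear autoencoder in numpy makes this loop concrete; the dimensions, learning rate, and synthetic data below are illustrative assumptions, not a reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data with low-dimensional structure: 2 latent factors embedded in 10-D.
Z_true = rng.normal(size=(200, 2))
X = Z_true @ rng.normal(size=(2, 10))

# Linear autoencoder: encoder W_e (10 -> 2) and decoder W_d (2 -> 10).
W_e = rng.normal(scale=0.1, size=(10, 2))
W_d = rng.normal(scale=0.1, size=(2, 10))

def mse(a, b):
    return ((a - b) ** 2).mean()

lr = 0.05
loss_before = mse(X, X @ W_e @ W_d)
for _ in range(500):
    Z = X @ W_e                      # encode into the latent space
    X_hat = Z @ W_d                  # decode: reconstruct the input
    G = 2.0 * (X_hat - X) / X.size   # d(MSE)/d(X_hat)
    grad_d = Z.T @ G                 # gradient w.r.t. decoder weights
    grad_e = X.T @ (G @ W_d.T)       # gradient w.r.t. encoder weights
    W_d -= lr * grad_d
    W_e -= lr * grad_e
loss_after = mse(X, X @ W_e @ W_d)
```

Minimizing the reconstruction error drives `X @ W_e` toward a 2-dimensional code that preserves the information needed to rebuild the 10-dimensional input.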

Contrastive self-supervised learning

For a binary classification task, training data can be divided into positive examples and negative examples. Positive examples are those that match the target. For example, if the task is learning to identify birds, the positive training data are those pictures that contain birds. Negative examples are those that do not.[10] Contrastive self-supervised learning uses both positive and negative examples. Contrastive learning's loss function minimizes the distance between positive sample pairs while maximizing the distance between negative sample pairs.
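One classic distance-based formulation of such a loss can be sketched as follows; the margin value and the example vectors are illustrative, and real systems typically use batched, similarity-based variants.

```python
import numpy as np

def contrastive_loss(z1, z2, label, margin=1.0):
    """Pairwise contrastive loss on two representations.

    label == 1: positive pair, penalized by its squared distance
                (pulls the pair together).
    label == 0: negative pair, penalized only if closer than `margin`
                (pushes the pair apart).
    """
    d = np.linalg.norm(z1 - z2)
    if label == 1:
        return d ** 2
    return max(0.0, margin - d) ** 2

a = np.array([0.0, 0.0])
b = np.array([0.1, 0.0])   # representation close to a
c = np.array([2.0, 0.0])   # representation far from a
```

As expected, a nearby pair is cheap when treated as positive and expensive when treated as negative, while a pair already beyond the margin incurs no negative-pair penalty.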

Non-contrastive self-supervised learning

Non-contrastive self-supervised learning (NCSSL) uses only positive examples. Counterintuitively, NCSSL converges on a useful local minimum rather than reaching a trivial solution with zero loss; in the binary classification example, such a solution would trivially classify every example as positive. Effective NCSSL requires an extra predictor on the online side that does not back-propagate on the target side.
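A toy numpy sketch of this asymmetry, loosely in the style of BYOL/SimSiam, is given below. The linear networks, dimensions, and learning rate are illustrative simplifications; real methods use deep encoders and, in BYOL's case, an exponential-moving-average target network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Online branch: encoder + extra predictor head. Target branch: the same
# encoder, but wrapped in a stop-gradient so no error signal flows back.
W_enc = rng.normal(scale=0.1, size=(8, 4))   # encoder
W_pred = rng.normal(scale=0.1, size=(4, 4))  # predictor (online side only)

def stop_gradient(x):
    # Stands in for detach()/stop_gradient() in an autodiff framework:
    # the target branch contributes no gradients.
    return x.copy()

x1 = rng.normal(size=(16, 8))                    # view 1 of a batch
x2 = x1 + rng.normal(scale=0.05, size=x1.shape)  # view 2 (positive pair)

def loss():
    return ((x1 @ W_enc @ W_pred - x2 @ W_enc) ** 2).mean()

lr = 0.1
loss_before = loss()
for _ in range(100):
    z1 = x1 @ W_enc                  # online representation
    p1 = z1 @ W_pred                 # predictor tries to match the target
    z2 = stop_gradient(x2 @ W_enc)   # target representation, no gradient
    diff = 2.0 * (p1 - z2) / z2.size
    W_pred -= lr * (z1.T @ diff)     # gradient flows on the online side only
    W_enc -= lr * (x1.T @ (diff @ W_pred.T))
loss_after = loss()
```

Only positive pairs appear in the loss; the predictor plus the stop-gradient on the target branch supplies the asymmetry described above.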

Comparison with other forms of machine learning

SSL belongs to supervised learning methods insofar as the goal is to generate a classified output from the input. At the same time, however, it does not require the explicit use of labeled input-output pairs. Instead, correlations, metadata embedded in the data, or domain knowledge present in the input are implicitly and autonomously extracted from the data. These supervisory signals, generated from the data, can then be used for training.[1]

SSL is similar to unsupervised learning in that it does not require labels in the sample data. Unlike unsupervised learning, however, it does not rely solely on inherent data structures such as clusters; instead, it derives explicit supervisory signals from the data.

Semi-supervised learning combines supervised and unsupervised learning, requiring only a small portion of the learning data to be labeled.[4]

In transfer learning a model designed for one task is reused on a different task.[11]

Training an autoencoder intrinsically constitutes a self-supervised process, because the output pattern needs to become an optimal reconstruction of the input pattern itself. However, in current jargon, the term 'self-supervised' has become associated with classification tasks that are based on a pretext-task training setup. This involves the (human) design of such pretext task(s), unlike the case of fully self-contained autoencoder training.[9]

In reinforcement learning, self-supervised learning from a combination of losses can create abstract representations in which only the most important information about the state is kept in a compressed form.[12]

Examples

Self-supervised learning is particularly suitable for speech recognition. For example, Facebook developed wav2vec, a self-supervised algorithm, to perform speech recognition using two deep convolutional neural networks that build on each other.[8]

Google's Bidirectional Encoder Representations from Transformers (BERT) model is used to better understand the context of search queries.[13]

OpenAI's GPT-3 is an autoregressive language model that can be used in language processing. It can be used to translate texts or answer questions, among other things.[14]

Bootstrap Your Own Latent (BYOL) is an NCSSL method that produced excellent results on ImageNet and on transfer and semi-supervised benchmarks.[15]

The Yarowsky algorithm is an example of self-supervised learning in natural language processing. From a small number of labeled examples, it learns to predict which word sense of a polysemous word is being used at a given point in text.
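A minimal, hypothetical self-training loop in the spirit of the Yarowsky algorithm is sketched below. The contexts, seed words, and sense names are invented for illustration; the real algorithm ranks collocations by log-likelihood and applies the "one sense per discourse" constraint, both omitted here.

```python
# Start from a few seed collocations, label the contexts they cover, then
# harvest new indicative words from the confidently labeled contexts and
# repeat ("one sense per collocation").
contexts = [
    "river bank water fishing",
    "bank loan money interest",
    "muddy river bank shore",
    "savings bank money account",
    "fishing by the bank shore",
    "interest rate at the bank branch",
]
seeds = {"river": "LAND", "money": "FINANCE"}   # one seed per sense

labels = {}
rules = dict(seeds)
for _ in range(3):                    # a few self-training rounds
    # 1. Label every context matched by a current rule.
    for i, ctx in enumerate(contexts):
        for word, sense in rules.items():
            if word in ctx.split():
                labels[i] = sense
    # 2. Harvest new rules: words that only ever co-occur with one sense.
    seen = {}
    for i, sense in labels.items():
        for word in contexts[i].split():
            seen.setdefault(word, set()).add(sense)
    rules = {w: next(iter(s)) for w, s in seen.items() if len(s) == 1}
```

Each round, newly harvested collocations (e.g. "shore", "interest") extend coverage to contexts the seeds never matched, while ambiguous words such as "bank" itself are excluded from the rule set.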

DirectPred is an NCSSL method that sets the predictor weights directly instead of learning them via gradient updates.

Self-GenomeNet is an example of self-supervised learning in genomics.[16]

Notes and References

  1. Bouchard, Louis (2020-11-25). "What is Self-Supervised Learning? Will machines ever be able to learn like humans?". Medium. Retrieved 2021-06-09.
  2. Yarowsky, David (1995). "Unsupervised Word Sense Disambiguation Rivaling Supervised Methods". Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics. Cambridge, MA: Association for Computational Linguistics. pp. 189–196. doi:10.3115/981658.981684.
  3. Doersch, Carl; Zisserman, Andrew (October 2017). "Multi-task Self-Supervised Visual Learning". 2017 IEEE International Conference on Computer Vision (ICCV). IEEE. pp. 2070–2079. doi:10.1109/iccv.2017.226. arXiv:1708.07860.
  4. Beyer, Lucas; Zhai, Xiaohua; Oliver, Avital; Kolesnikov, Alexander (October 2019). "S4L: Self-Supervised Semi-Supervised Learning". 2019 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE. pp. 1476–1485. doi:10.1109/iccv.2019.00156. arXiv:1905.03670.
  5. Doersch, Carl; Gupta, Abhinav; Efros, Alexei A. (December 2015). "Unsupervised Visual Representation Learning by Context Prediction". 2015 IEEE International Conference on Computer Vision (ICCV). IEEE. pp. 1422–1430. doi:10.1109/iccv.2015.167. arXiv:1505.05192.
  6. Zheng, Xin; Wang, Yong; Wang, Guoyou; Liu, Jianguo (April 2018). "Fast and robust segmentation of white blood cell images by self-supervised learning". Micron. 107: 55–71. doi:10.1016/j.micron.2018.01.010. PMID 29425969.
  7. Gidaris, Spyros; Bursuc, Andrei; Komodakis, Nikos; Perez, Patrick; Cord, Matthieu (October 2019). "Boosting Few-Shot Visual Learning with Self-Supervision". 2019 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE. pp. 8058–8067. doi:10.1109/iccv.2019.00815. arXiv:1906.05186.
  8. "Wav2vec: State-of-the-art speech recognition through self-supervision". ai.facebook.com. Retrieved 2021-06-09.
  9. Kramer, Mark A. (1991). "Nonlinear principal component analysis using autoassociative neural networks". AIChE Journal. 37 (2): 233–243. doi:10.1002/aic.690370209.
  10. "Demystifying a key self-supervised learning technique: Non-contrastive learning". ai.facebook.com. Retrieved 2021-10-05.
  11. Littwin, Etai; Wolf, Lior (June 2016). "The Multiverse Loss for Robust Transfer Learning". 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. pp. 3957–3966. doi:10.1109/cvpr.2016.429. arXiv:1511.09033.
  12. François-Lavet, Vincent; Bengio, Yoshua; Precup, Doina; Pineau, Joelle (2019). "Combined Reinforcement Learning via Abstract Representations". Proceedings of the AAAI Conference on Artificial Intelligence. arXiv:1809.04506.
  13. "Open Sourcing BERT: State-of-the-Art Pre-training for Natural Language Processing". Google AI Blog. 2 November 2018. Retrieved 2021-06-09.
  14. Wilcox, Ethan; Qian, Peng; Futrell, Richard; Kohita, Ryosuke; Levy, Roger; Ballesteros, Miguel (2020). "Structural Supervision Improves Few-Shot Learning and Syntactic Generalization in Neural Language Models". Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Stroudsburg, PA: Association for Computational Linguistics. pp. 4640–4652. doi:10.18653/v1/2020.emnlp-main.375. arXiv:2010.05725.
  15. Grill, Jean-Bastien; Strub, Florian; Altché, Florent; Tallec, Corentin; Richemond, Pierre H.; Buchatskaya, Elena; Doersch, Carl; Pires, Bernardo Avila; Guo, Zhaohan Daniel; Azar, Mohammad Gheshlaghi; Piot, Bilal (2020-09-10). "Bootstrap your own latent: A new approach to self-supervised learning". arXiv:2006.07733 [cs.LG].
  16. Gündüz, Hüseyin Anil; Binder, Martin; To, Xiao-Yin; Mreches, René; Bischl, Bernd; McHardy, Alice C.; Münch, Philipp C.; Rezaei, Mina (2023-09-11). "A self-supervised deep learning method for data-efficient training in genomics". Communications Biology. 6 (1): 928. doi:10.1038/s42003-023-05310-2. PMID 37696966.