Multimodal sentiment analysis

Multimodal sentiment analysis is an extension of traditional text-based sentiment analysis that incorporates additional modalities, such as audio and visual data.[1] It can be bimodal, combining two of these modalities, or trimodal, combining all three.[2] With the extensive amount of social media data available online in different forms, such as videos and images, conventional text-based sentiment analysis has evolved into more complex models of multimodal sentiment analysis,[3] which can be applied in the development of virtual assistants,[4] analysis of YouTube movie reviews,[5] analysis of news videos,[6] and emotion recognition (sometimes known as emotion detection), such as depression monitoring,[7] among others.

Similar to traditional sentiment analysis, one of the most basic tasks in multimodal sentiment analysis is sentiment classification, which assigns different sentiments to categories such as positive, negative, or neutral.[8] The complexity of analyzing text, audio, and visual features to perform such a task requires the application of different fusion techniques, such as feature-level, decision-level, and hybrid fusion. The performance of these fusion techniques and of the classification algorithms applied is influenced by the type of textual, audio, and visual features employed in the analysis.

Features

Feature engineering, which involves the selection of features that are fed into machine learning algorithms, plays a key role in sentiment classification performance.[9] In multimodal sentiment analysis, a combination of different textual, audio, and visual features is employed.

Textual features

Similar to conventional text-based sentiment analysis, some of the most commonly used textual features in multimodal sentiment analysis are unigrams and n-grams, that is, single words or sequences of consecutive words in a given textual document.[10] These features are applied using bag-of-words or bag-of-concepts feature representations, in which words or concepts are represented as vectors in a suitable space.[11] [12]
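As a minimal sketch of such a representation (not drawn from any of the cited studies), scikit-learn's CountVectorizer can build a bag-of-words matrix over unigrams and bigrams; the example utterances are invented placeholders.

```python
# Minimal sketch: unigram + bigram bag-of-words features for the textual modality.
# The utterances are invented placeholders.
from sklearn.feature_extraction.text import CountVectorizer

utterances = [
    "the movie was absolutely wonderful",
    "the plot was dull and the acting was worse",
    "an average film, nothing special",
]

# ngram_range=(1, 2) yields both unigrams and bigrams as features.
vectorizer = CountVectorizer(ngram_range=(1, 2))
text_features = vectorizer.fit_transform(utterances)  # sparse matrix: utterances x n-grams

print(text_features.shape)
print(vectorizer.get_feature_names_out()[:10])
```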

Audio features

Sentiment and emotion characteristics are prominent in the various phonetic and prosodic properties contained in audio features.[13] Some of the most important audio features employed in multimodal sentiment analysis are mel-frequency cepstral coefficients (MFCC), spectral centroid, spectral flux, beat histogram, beat sum, strongest beat, pause duration, and pitch. OpenSMILE[14] and Praat are popular open-source toolkits for extracting such audio features.[15]
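As a rough illustration of how such low-level descriptors can be extracted, the sketch below uses the librosa library rather than OpenSMILE or Praat; librosa, the file path, and the choice of frame statistics are assumptions made for this example only.

```python
# Rough sketch of low-level audio descriptors, using librosa instead of the
# OpenSMILE/Praat toolkits named above (an assumption for illustration only).
import librosa
import numpy as np

# Hypothetical mono speech recording.
y, sr = librosa.load("speaker_clip.wav", sr=16000)

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # mel-frequency cepstral coefficients
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)  # spectral centroid per frame
flux = np.abs(np.diff(np.abs(librosa.stft(y)), axis=1))   # crude spectral-flux-style measure
f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)             # pitch (fundamental frequency) track

# One utterance-level audio feature vector built from frame statistics.
audio_features = np.concatenate([
    mfcc.mean(axis=1), mfcc.std(axis=1),
    [centroid.mean()], [flux.mean()], [np.nanmean(f0)],
])
print(audio_features.shape)
```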

Visual features

One of the main advantages of analyzing videos, as opposed to texts alone, is the presence of rich sentiment cues in visual data.[16] Visual features include facial expressions, which are of paramount importance in capturing sentiments and emotions, as they are a main channel for conveying a person's present state of mind. Specifically, the smile is considered to be one of the most predictive visual cues in multimodal sentiment analysis. OpenFace is an open-source facial analysis toolkit for extracting and understanding such visual features.[17]
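For instance, per-frame facial action unit intensities exported by a tool such as OpenFace can be summarized into utterance-level visual features; the CSV path and the AU12 column label (the lip-corner puller associated with smiling) are assumptions here, since output formats vary between tool versions.

```python
# Sketch: turning per-frame facial action unit estimates (e.g. from OpenFace's
# CSV output) into utterance-level visual features. The file name and the exact
# "AU12_r" column label are assumptions; they vary across tool versions.
import pandas as pd

frames = pd.read_csv("video_openface_output.csv")
frames.columns = [c.strip() for c in frames.columns]  # some exports pad column names

smile = frames["AU12_r"]                       # AU12 intensity, commonly linked to smiling
visual_features = [smile.mean(), smile.max()]  # mean and peak smile intensity per clip
print(visual_features)
```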

Fusion techniques

Unlike traditional text-based sentiment analysis, multimodal sentiment analysis undergoes a fusion process in which data from different modalities (text, audio, or visual) are fused and analyzed together. Existing approaches to data fusion in multimodal sentiment analysis can be grouped into three main categories: feature-level, decision-level, and hybrid fusion. The performance of the sentiment classification depends on which type of fusion technique is employed.

Feature-level fusion

Feature-level fusion (sometimes known as early fusion) gathers all the features from each modality (text, audio, or visual) and joins them together into a single feature vector, which is eventually fed into a classification algorithm.[18] One of the difficulties in implementing this technique is the integration of the heterogeneous features.
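A minimal sketch of this scheme, using synthetic arrays in place of real extracted features and an SVM chosen arbitrarily for illustration:

```python
# Minimal sketch of feature-level (early) fusion: per-modality feature vectors are
# concatenated into one vector per utterance and fed to a single classifier.
# All arrays are synthetic stand-ins for real extracted features and labels.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 100
text_feats = rng.random((n, 300))    # e.g. bag-of-words or embedding features
audio_feats = rng.random((n, 40))    # e.g. MFCC statistics
visual_feats = rng.random((n, 17))   # e.g. facial action unit statistics
labels = rng.integers(0, 3, n)       # 0 = negative, 1 = neutral, 2 = positive

# Early fusion: one (heterogeneous) feature vector per utterance.
fused = np.concatenate([text_feats, audio_feats, visual_feats], axis=1)

clf = SVC(probability=True).fit(fused, labels)
print(clf.predict(fused[:5]))
```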

Decision-level fusion

Decision-level fusion (sometimes known as late fusion) feeds data from each modality (text, audio, or visual) independently into its own classification algorithm, and obtains the final sentiment classification by fusing the individual results into a single decision vector. One of the advantages of this fusion technique is that it eliminates the need to fuse heterogeneous data, and each modality can utilize its most appropriate classification algorithm.
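A minimal sketch of late fusion on the same kind of synthetic setup as above; the per-modality classifier choices and the simple probability averaging are assumptions, not the method of any cited work.

```python
# Minimal sketch of decision-level (late) fusion: each modality is classified by
# its own model, and only the resulting class probabilities are fused (here by
# simple averaging). Features and labels are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 100
text_feats = rng.random((n, 300))
audio_feats = rng.random((n, 40))
visual_feats = rng.random((n, 17))
labels = rng.integers(0, 3, n)  # 0 = negative, 1 = neutral, 2 = positive

# Each modality may use the classifier best suited to it.
text_clf = LogisticRegression(max_iter=1000).fit(text_feats, labels)
audio_clf = SVC(probability=True).fit(audio_feats, labels)
visual_clf = SVC(probability=True).fit(visual_feats, labels)

# Late fusion: average the per-modality class probabilities into one decision.
fused_proba = (text_clf.predict_proba(text_feats)
               + audio_clf.predict_proba(audio_feats)
               + visual_clf.predict_proba(visual_feats)) / 3.0
print(fused_proba.argmax(axis=1)[:5])
```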

Hybrid fusion

Hybrid fusion is a combination of feature-level and decision-level fusion techniques, which exploits complementary information from both methods during the classification process. It usually involves a two-step procedure: feature-level fusion is first performed between two modalities, and decision-level fusion is then applied to fuse that initial result with the remaining modality.[19] [20]
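A minimal sketch of this two-step scheme, again on synthetic data, with text and audio fused at feature level and the visual modality fused at decision level; the classifier choices are assumptions for illustration.

```python
# Minimal sketch of the two-step hybrid scheme described above: text and audio
# features are fused at feature level first, and that classifier's output is then
# fused at decision level with a separate visual classifier. Synthetic data only.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 100
text_feats = rng.random((n, 300))
audio_feats = rng.random((n, 40))
visual_feats = rng.random((n, 17))
labels = rng.integers(0, 3, n)

# Step 1: feature-level fusion of two modalities (text + audio).
text_audio = np.concatenate([text_feats, audio_feats], axis=1)
text_audio_clf = SVC(probability=True).fit(text_audio, labels)

# Step 2: decision-level fusion with the remaining (visual) modality.
visual_clf = SVC(probability=True).fit(visual_feats, labels)
fused_proba = (text_audio_clf.predict_proba(text_audio)
               + visual_clf.predict_proba(visual_feats)) / 2.0
print(fused_proba.argmax(axis=1)[:5])
```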

Applications

Similar to text-based sentiment analysis, multimodal sentiment analysis can be applied in the development of different forms of recommender systems, such as in the analysis of user-generated videos of movie reviews and general product reviews,[21] to predict the sentiments of customers and subsequently create product or service recommendations.[22] Multimodal sentiment analysis also plays an important role in the advancement of virtual assistants through the application of natural language processing (NLP) and machine learning techniques. In the healthcare domain, multimodal sentiment analysis can be utilized to detect certain medical conditions such as stress, anxiety, or depression. It can also be applied to understanding the sentiments contained in video news programs, which is considered a complicated and challenging domain, as sentiments expressed by reporters tend to be less obvious or neutral.[23]

Notes and References

  1. Soleymani, Mohammad; Garcia, David; Jou, Brendan; Schuller, Björn; Chang, Shih-Fu; Pantic, Maja (September 2017). "A survey of multimodal sentiment analysis". Image and Vision Computing. 65: 3–14. doi:10.1016/j.imavis.2017.08.003.
  2. Karray, Fakhreddine; Alemzadeh, Milad; Saleh, Jamil Abou; Arab, Mo Nours (2008). "Human-Computer Interaction: Overview on State of the Art". International Journal on Smart Sensing and Intelligent Systems. 1: 137–159. doi:10.21307/ijssis-2017-283.
  3. Poria, Soujanya; Cambria, Erik; Bajpai, Rajiv; Hussain, Amir (September 2017). "A review of affective computing: From unimodal analysis to multimodal fusion". Information Fusion. 37: 98–125. doi:10.1016/j.inffus.2017.02.003.
  4. "Google AI to make phone calls for you". BBC News. 8 May 2018. Retrieved 12 June 2018.
  5. Wöllmer, Martin; Weninger, Felix; Knaup, Tobias; Schuller, Björn; Sun, Congkai; Sagae, Kenji; Morency, Louis-Philippe (May 2013). "YouTube Movie Reviews: Sentiment Analysis in an Audio-Visual Context". IEEE Intelligent Systems. 28 (3): 46–53. doi:10.1109/MIS.2013.34.
  6. Pereira, Moisés H. R.; Pádua, Flávio L. C.; Pereira, Adriano C. M.; Benevenuto, Fabrício; Dalip, Daniel H. (9 April 2016). "Fusing Audio, Textual and Visual Features for Sentiment Analysis of News Videos". arXiv:1604.02612 [cs.CL].
  7. Zucco, Chiara; Calabrese, Barbara; Cannataro, Mario (November 2017). "Sentiment analysis and affective computing for depression monitoring". 2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). IEEE. pp. 1988–1995. doi:10.1109/bibm.2017.8217966. ISBN 978-1-5090-3050-7.
  8. Pang, Bo; Lee, Lillian (2008). Opinion Mining and Sentiment Analysis. Hanover, MA: Now Publishers. ISBN 978-1601981509.
  9. Sun, Shiliang; Luo, Chen; Chen, Junyu (July 2017). "A review of natural language processing techniques for opinion mining systems". Information Fusion. 36: 10–25. doi:10.1016/j.inffus.2016.10.004.
  10. Yadollahi, Ali; Shahraki, Ameneh Gholipour; Zaiane, Osmar R. (25 May 2017). "Current State of Text Sentiment Analysis from Opinion to Emotion Mining". ACM Computing Surveys. 50 (2): 1–33. doi:10.1145/3057270.
  11. Pérez Rosas, Verónica; Mihalcea, Rada; Morency, Louis-Philippe (May 2013). "Multimodal Sentiment Analysis of Spanish Online Videos". IEEE Intelligent Systems. 28 (3): 38–45. doi:10.1109/MIS.2013.9.
  12. Poria, Soujanya; Cambria, Erik; Hussain, Amir; Huang, Guang-Bin (March 2015). "Towards an intelligent framework for multimodal affective data analysis". Neural Networks. 63: 104–116. doi:10.1016/j.neunet.2014.10.005.
  13. Wu, Chung-Hsien; Liang, Wei-Bin (January 2011). "Emotion Recognition of Affective Speech Based on Multiple Classifiers Using Acoustic-Prosodic Information and Semantic Labels". IEEE Transactions on Affective Computing. 2 (1): 10–21. doi:10.1109/T-AFFC.2010.16.
  14. Eyben, Florian; Wöllmer, Martin; Schuller, Björn (2009). "OpenEAR — Introducing the Munich open-source emotion and affect recognition toolkit". IEEE. doi:10.1109/ACII.2009.5349350. ISBN 978-1-4244-4800-5.
  15. Morency, Louis-Philippe; Mihalcea, Rada; Doshi, Payal (14 November 2011). "Towards multimodal sentiment analysis: harvesting opinions from the web". ACM. pp. 169–176. doi:10.1145/2070481.2070509. ISBN 9781450306416.
  16. Poria, Soujanya; Cambria, Erik; Hazarika, Devamanyu; Majumder, Navonil; Zadeh, Amir; Morency, Louis-Philippe (2017). "Context-Dependent Sentiment Analysis in User-Generated Videos". Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). pp. 873–883. doi:10.18653/v1/p17-1081.
  17. "OpenFace: An open source facial behavior analysis toolkit". IEEE. March 2016. doi:10.1109/WACV.2016.7477553. ISBN 978-1-5090-0641-0.
  18. Poria, Soujanya; Cambria, Erik; Howard, Newton; Huang, Guang-Bin; Hussain, Amir (January 2016). "Fusing audio, visual and textual clues for sentiment analysis from multimodal content". Neurocomputing. 174: 50–59. doi:10.1016/j.neucom.2015.01.095.
  19. Shahla, Shahla; Naghsh-Nilchi, Ahmad Reza (2017). "Exploiting evidential theory in the fusion of textual, audio, and visual modalities for affective music video retrieval". IEEE. doi:10.1109/PRIA.2017.7983051.
  20. Poria, Soujanya; Peng, Haiyun; Hussain, Amir; Howard, Newton; Cambria, Erik (October 2017). "Ensemble application of convolutional neural networks and multiple kernel learning for multimodal sentiment analysis". Neurocomputing. 261: 217–230. doi:10.1016/j.neucom.2016.09.117.
  21. Pérez-Rosas, Verónica; Mihalcea, Rada; Morency, Louis-Philippe (1 January 2013). "Utterance-level multimodal sentiment analysis". Long Papers. Association for Computational Linguistics (ACL).
  22. Chui, Michael; Manyika, James; Miremadi, Mehdi; Henke, Nicolaus; Chung, Rita; Nel, Pieter; Malhotra, Sankalp. "Notes from the AI frontier. Insights from hundreds of use cases". McKinsey & Company. Retrieved 13 June 2018.
  23. Ellis, Joseph G.; Jou, Brendan; Chang, Shih-Fu (12 November 2014). "Why We Watch the News: A Dataset for Exploring Sentiment in Broadcast Video News". ACM. pp. 104–111. doi:10.1145/2663204.2663237. ISBN 9781450328852.