Lexical entrainment explained

Lexical entrainment is the phenomenon, studied in conversational linguistics, whereby a speaker adopts the reference terms of their interlocutor. In practice it acts as a mechanism of the cooperative principle: both parties to the conversation use lexical entrainment progressively to develop "conceptual pacts"[1] (a temporary, working conversational terminology) that ensure maximally clear reference between them. This process is needed to overcome the ambiguity[2] inherent in the multitude of synonyms that exist in language.

Lexical entrainment arises through two cooperative mechanisms:[3]

Violation of Grice's maxim of quantity

Once lexical entrainment has settled the phrasing for a referent, both parties will use that terminology for the referent for the rest of the conversation, even when doing so violates the Gricean maxim of quantity. For example, if one wants to refer to a brown loafer out of a set of shoes that consists of the loafer, a sneaker, and a high-heeled shoe, one will not use "the shoe" to describe the object, as this phrasing does not unambiguously pick out one item in the set under consideration. Nor will one call the object "the brown loafer", which would violate Grice's maxim of quantity by being over-informative. The speaker will settle on the term "the loafer", as it is just informative enough without giving too much information.[4]
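The "just informative enough" preference can be sketched computationally. The following is a minimal illustration of the idea (my own sketch, not an algorithm from the cited literature): candidate labels are ordered from least to most specific, and the speaker picks the first label that applies to the target but to no other object in the context.

```python
# Illustrative sketch: pick a referring expression that is just informative
# enough, in the spirit of Grice's maxim of quantity. The labels and objects
# below are hypothetical, chosen to mirror the shoe example in the text.

def choose_expression(target, context, candidate_labels):
    """Return the least specific label that uniquely picks out `target`.

    `candidate_labels` maps each label to the set of objects it can describe,
    and is iterated from least to most specific (dicts preserve insertion
    order in Python 3.7+).
    """
    for label, extension in candidate_labels.items():
        distractors = [o for o in context if o != target and o in extension]
        if target in extension and not distractors:
            return label
    return None  # no unambiguous label available

objects = {"brown loafer", "sneaker", "high-heeled shoe"}
labels = {
    "the shoe": {"brown loafer", "sneaker", "high-heeled shoe"},  # too vague
    "the loafer": {"brown loafer"},                               # just right
    "the brown loafer": {"brown loafer"},                         # over-informative
}

print(choose_expression("brown loafer", objects, labels))  # → the loafer
```

"The shoe" is rejected because the sneaker and high-heeled shoe are distractors; "the loafer" already identifies the target uniquely, so the more specific "the brown loafer" is never reached.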

Another important factor is lexical availability: the ease of conceptualizing a referent in a certain way and then retrieving and producing a label for it. For many objects, the most available labels are basic-level nouns; for example, the word "dog". Instead of saying "animal" or "husky" for the referent, most subjects will default to "dog". Even when referring to a husky in a set of objects consisting of a husky, a table, and a poster, people are still most likely to use the word "dog". This is technically a violation of Grice's maxim of quantity, as the term "animal" would already be sufficient.

Applications

Lexical entrainment has applications in natural language processing in computers, as well as in human–human interaction. Until recently, computers had only a limited ability to adapt their referring expressions to the terms of a human interlocutor, so entrainment adaptation relied entirely on the human operator. The emergence of large language models (LLMs) may change this drastically: evidence is now emerging that LLMs such as GPT-4 demonstrate alignment capabilities similar to those of humans.[5]
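One common way computational work quantifies lexical entrainment is by measuring how much of one speaker's vocabulary reuses words the other speaker has already produced. The sketch below is my own minimal illustration of that general idea, not the specific metric used in reference [5]; the example dialogue turns are hypothetical.

```python
# Minimal sketch of a lexical-overlap entrainment measure: the fraction of
# speaker B's word tokens that also appear in speaker A's prior turns.
# Real NLP metrics are more sophisticated (lemmatization, stop-word
# filtering, turn-by-turn windows); this only shows the core intuition.

def entrainment_score(speaker_a_turns, speaker_b_turns):
    """Fraction of B's word tokens already used by A (0.0 to 1.0)."""
    vocab_a = {w.lower() for turn in speaker_a_turns for w in turn.split()}
    words_b = [w.lower() for turn in speaker_b_turns for w in turn.split()]
    if not words_b:
        return 0.0
    shared = sum(1 for w in words_b if w in vocab_a)
    return shared / len(words_b)

a_turns = ["pass me the loafer", "the loafer is brown"]
b_turns = ["here is the loafer"]

print(round(entrainment_score(a_turns, b_turns), 2))  # → 0.75
```

A higher score after several turns suggests that speaker B is converging on speaker A's terminology, which is the behavioral signature that entrainment studies look for.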

References

  1. Brennan, Susan (1996). "Lexical entrainment in spontaneous dialog". Proceedings, 1996 International Symposium on Spoken Dialogue, 96, 41–44.
  2. Deutsch, Werner; Pechmann, Thomas (1982). "Social interaction and the development of definite descriptions". Cognition, 11(2), 159–184. doi:10.1016/0010-0277(82)90024-5. PMID 6976880.
  3. Garrod, Simon; Anderson, Anthony (1987). "Saying what you mean in dialogue: A study in conceptual and semantic co-ordination". Cognition, 27(2), 181–218. doi:10.1016/0010-0277(87)90018-7. PMID 3691025.
  4. Brennan, Susan; Clark, Herbert H. (1996). "Conceptual Pacts and Lexical Choice in Conversation". Journal of Experimental Psychology: Learning, Memory, and Cognition, 22(6), 1482–1493. doi:10.1037/0278-7393.22.6.1482. PMID 8921603.
  5. Wang, Boxuan; Theune, Mariët; Srivastava, Sumit (2024). "Examining Lexical Alignment in Human-Agent Conversations with GPT-3.5 and GPT-4 Models". In Chatbot Research and Design, Lecture Notes in Computer Science, vol. 14524. Cham: Springer Nature Switzerland, pp. 94–114. doi:10.1007/978-3-031-54975-5_6. ISBN 978-3-031-54975-5. https://link.springer.com/chapter/10.1007/978-3-031-54975-5_6