Lexical entrainment is the phenomenon, studied in conversational linguistics, whereby a speaker adopts the reference terms of their interlocutor. In practice, it acts as a mechanism of the cooperative principle: both parties to a conversation use lexical entrainment progressively to develop "conceptual pacts"[1], a temporary working terminology for the conversation, in order to ensure maximally clear reference between them. This process is necessary to overcome the ambiguity[2] inherent in the multitude of synonyms that exist in language.
Lexical entrainment arises by two cooperative mechanisms:[3]
Once lexical entrainment has determined the phrasing for a referent, both parties will continue to use that terminology for the duration of the conversation, even where it comes to violate the Gricean maxim of quantity. For example, to refer to a brown loafer in a set of shoes consisting of the loafer, a sneaker, and a high-heeled shoe, a speaker will not say "the shoe", since that phrasing does not unambiguously pick out one item in the set under consideration. Nor will they call the object "the brown loafer", which would be over-informative and so violate Grice's maxim of quantity. The speaker will settle on "the loafer", as it is just informative enough without giving too much information.[4]
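This selection of a "just informative enough" description can be sketched programmatically. The following is a minimal illustration in the spirit of incremental referring-expression generation (as in Dale and Reiter's Incremental Algorithm); the scene, attribute set, and preference order are toy assumptions, not drawn from the sources cited above.

```python
# Toy scene: one target ("the brown loafer") plus two distractors.
SCENE = [
    {"type": "loafer", "color": "brown"},
    {"type": "sneaker", "color": "white"},
    {"type": "high-heeled shoe", "color": "black"},
]

# Attributes are tried in a fixed (assumed) preference order; we stop as
# soon as the description picks out exactly one object, so no redundant
# words such as "brown" are added.
PREFERENCE_ORDER = ["type", "color"]

def describe(target, scene, order=PREFERENCE_ORDER):
    distractors = [obj for obj in scene if obj is not target]
    description = {}
    for attr in order:
        value = target[attr]
        # Keep only the distractors this attribute fails to rule out.
        remaining = [d for d in distractors if d.get(attr) == value]
        if len(remaining) < len(distractors):
            description[attr] = value
            distractors = remaining
        if not distractors:
            break  # the referent is now unambiguous; stop adding detail
    return description

print(describe(SCENE[0], SCENE))  # {'type': 'loafer'} -> "the loafer"
```

Because the loop halts as soon as the referent is unambiguous, the redundant modifier "brown" is never added, mirroring the preference for "the loafer" over "the brown loafer".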
Another important factor is lexical availability: the ease of conceptualizing a referent in a certain way and then retrieving and producing a label for it. For many objects, the most available labels are basic-level nouns; for example, the word "dog". Rather than "animal" or "husky", most speakers will default to "dog" for such a referent. Even when the set of objects consists of a husky, a table, and a poster, people are still most likely to use the word "dog". This is technically a violation of Grice's maxim of quantity, as the more general term "animal" would already be sufficient.
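The pull of lexical availability can be added to such a sketch as a preference score over candidate labels. In the toy model below, every label in the referent's taxonomy distinguishes it from the distractors, yet the basic-level noun wins because it is assumed to be easiest to retrieve; the taxonomy and availability scores are invented for illustration.

```python
# Toy taxonomy: labels for the referent, from specific to general, and
# invented availability scores (higher = easier to retrieve and produce;
# the basic-level noun "dog" is assumed to score highest).
TAXONOMY = {"husky": ["husky", "dog", "animal"]}
AVAILABILITY = {"husky": 0.3, "dog": 0.9, "animal": 0.5}

def choose_label(referent, distractor_labels):
    # Every label that none of the distractors share is informative enough.
    candidates = [lab for lab in TAXONOMY[referent]
                  if lab not in distractor_labels]
    # Pick the most available candidate, even though the most general
    # sufficient label ("animal") would already satisfy the maxim of
    # quantity.
    return max(candidates, key=AVAILABILITY.get)

# A husky among a table and a poster: "animal" suffices, "dog" is chosen.
print(choose_label("husky", {"table", "poster"}))  # -> dog
```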
Lexical entrainment has applications both in natural language processing for computers and in human–human interaction. Until recently, computers had only a limited ability to adapt their referring terms to those of a human interlocutor, so the adaptation required by entrainment always fell to the human operator. The emergence of large language models (LLMs) may change this drastically: evidence is now emerging that LLMs such as GPT-4 demonstrate alignment capabilities similar to those of humans.[5]
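To make the notion of alignment concrete, one simple (hypothetical) way to quantify lexical entrainment in a transcript, whether human–human or human–LLM, is the share of a speaker's content words that their partner has already used. The tokenizer, stopword list, and sample dialogue below are illustrative assumptions, not a standard metric from the literature.

```python
STOPWORDS = {"the", "a", "an", "is", "it", "to", "of", "and", "i", "you"}

def content_words(utterance):
    """Crude tokenizer: lowercase, strip punctuation, drop stopwords."""
    return {w.strip(".,!?").lower() for w in utterance.split()} - STOPWORDS

def entrainment_score(dialogue):
    """dialogue: ordered list of (speaker, utterance) pairs."""
    vocab = {}              # speaker -> content words used so far
    reused, total = 0, 0
    for speaker, utterance in dialogue:
        words = content_words(utterance)
        partner_words = set()
        for s, v in vocab.items():
            if s != speaker:
                partner_words |= v
        reused += len(words & partner_words)
        total += len(words)
        vocab.setdefault(speaker, set()).update(words)
    return reused / total if total else 0.0

dialogue = [
    ("A", "Can you pass me the loafer?"),
    ("B", "Sure, here is the loafer."),  # B reuses A's term
]
print(f"{entrainment_score(dialogue):.2f}")
```

A higher score indicates that speakers are converging on shared terminology, which is the behaviour the cited study observes in models such as GPT-4.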