Dynamic semantics is a framework in logic and natural language semantics that treats the meaning of a sentence as its potential to update a context. In static semantics, knowing the meaning of a sentence amounts to knowing when it is true; in dynamic semantics, knowing the meaning of a sentence means knowing "the change it brings about in the information state of anyone who accepts the news conveyed by it."[1] In dynamic semantics, sentences are mapped to functions called context change potentials, which take an input context and return an output context. Dynamic semantics was originally developed by Irene Heim and Hans Kamp in 1981 to model anaphora, but has since been applied widely to phenomena including presupposition, plurals, questions, discourse relations, and modality.[2]
See also: Discourse representation theory and Donkey anaphora.
The first systems of dynamic semantics were the closely related File Change Semantics and discourse representation theory, developed simultaneously and independently by Irene Heim and Hans Kamp. These systems were intended to capture donkey anaphora, which resists an elegant compositional treatment in classic approaches to semantics such as Montague grammar.[2] [3] Donkey anaphora is exemplified by the infamous donkey sentences, first noticed by the medieval logician Walter Burley and brought to modern attention by Peter Geach.[4] [5]
Donkey sentence (relative clause): Every farmer who owns a donkey beats it.
Donkey sentence (conditional): If a farmer owns a donkey, he beats it.
To capture the empirically observed truth conditions of such sentences in first-order logic, one would need to translate the indefinite noun phrase "a donkey" as a universal quantifier scoping over the variable corresponding to the pronoun "it".
FOL translation of donkey sentence:
\forall x \forall y ((farmer(x) \land donkey(y) \land own(x,y)) \rightarrow beat(x,y))
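As a sanity check, the truth conditions given by this translation can be evaluated by brute force over a toy model. The domain, predicate extensions, and names below are illustrative assumptions, not part of the article:

```python
# Toy first-order model (illustrative): evaluate the FOL translation
# ∀x∀y((farmer(x) ∧ donkey(y) ∧ own(x,y)) → beat(x,y)) by enumeration.
farmers = {"f1", "f2"}
donkeys = {"d1", "d2"}
owns = {("f1", "d1"), ("f1", "d2")}          # f2 owns no donkey
beats = {("f1", "d1"), ("f1", "d2")}

# The conditional antecedent restricts the pairs that need checking:
# only farmer/donkey pairs standing in the "own" relation matter.
holds = all(
    (x, y) in beats
    for x in farmers
    for y in donkeys
    if (x, y) in owns
)
print(holds)  # True
```

On this model the sentence comes out true because every owned donkey is beaten by its owner; removing any pair from `beats` that remains in `owns` makes it false.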
While this translation captures (or approximates) the truth conditions of the natural language sentences, its relationship to the syntactic form of the sentence is puzzling in two ways. First, indefinites in non-donkey contexts normally express existential rather than universal quantification. Second, the syntactic position of the donkey pronoun would not normally allow it to be bound by the indefinite.
To explain these peculiarities, Heim and Kamp proposed that natural language indefinites are special in that they introduce a new discourse referent that remains available outside the syntactic scope of the operator that introduced it. To cash this idea out, each proposed a formal system that captures donkey anaphora because it validates Egli's theorem and its corollary.[6]
Egli's theorem:
(\exists x \varphi) \land \psi \Leftrightarrow \exists x (\varphi \land \psi)
Egli's corollary:
(\exists x \varphi \rightarrow \psi) \Leftrightarrow \forall x (\varphi \rightarrow \psi)
Update semantics is a framework within dynamic semantics that was developed by Frank Veltman.[1][7] In update semantics, each formula \varphi is mapped to a function [\varphi] that takes an input context C and returns the updated context C[\varphi]. Thus, C[\varphi] is the context that results from updating C with the information carried by \varphi. Systems of update semantics differ both in how they define contexts and in the update functions they assign to formulas.

The simplest update systems are intersective ones, which lift a static semantics into the dynamic framework. In such systems, an update with \varphi amounts to intersecting the input context with the static denotation [[\varphi]] of \varphi. That is, an update system is intersective if, for every context C and formula \varphi:

Intersectivity: C[\varphi] = C \cap [[\varphi]]
Intersective update was proposed by Robert Stalnaker in 1978 as a way of formalizing the speech act of assertion.[9][8] In Stalnaker's original system, a context (or context set) is defined as a set of possible worlds representing the information in the common ground of a conversation. For instance, the context C = \{w, v, u\} represents a scenario in which the information in the common ground is compatible with exactly three candidates for the actual world: w, v, and u. If [[\varphi]] = \{w, v\}, then updating C with \varphi yields the new context C[\varphi] = \{w, v\}. Thus, an assertion of \varphi is understood as a proposal to rule out the possibility that u is the actual world.
From a formal perspective, intersective update can be taken as a recipe for lifting one's preferred static semantics to dynamic semantics. For instance, if we take classical propositional semantics as our starting point, this recipe delivers the following intersective update semantics.[8]
C[P] = \{ w \in C \mid w(P) = 1 \}
C[\neg\varphi] = C - C[\varphi]
C[\varphi \land \psi] = C[\varphi] \cap C[\psi]
C[\varphi \lor \psi] = C[\varphi] \cup C[\psi]
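These clauses can be transcribed directly into a short interpreter. Below, worlds are modeled as the sets of atoms true at them and formulas as nested tuples; both representation choices are assumptions made for illustration:

```python
# Intersective update semantics for classical propositional logic.
# A world is a frozenset of the atoms true at it; a formula is a tuple.
def update(C, phi):
    op = phi[0]
    if op == "atom":                      # C[P] = {w ∈ C | w(P) = 1}
        return {w for w in C if phi[1] in w}
    if op == "not":                       # C[¬φ] = C − C[φ]
        return C - update(C, phi[1])
    if op == "and":                       # C[φ∧ψ] = C[φ] ∩ C[ψ]
        return update(C, phi[1]) & update(C, phi[2])
    if op == "or":                        # C[φ∨ψ] = C[φ] ∪ C[ψ]
        return update(C, phi[1]) | update(C, phi[2])
    raise ValueError(f"unknown connective: {op}")

# Stalnaker-style example: three worlds, differing on "rain" and "wind".
w = frozenset({"rain", "wind"})
v = frozenset({"rain"})
u = frozenset()
C = {w, v, u}
assert update(C, ("atom", "rain")) == {w, v}   # asserting "rain" rules out u
```

Each clause of the interpreter is a literal transcription of the corresponding equation, so the recursion bottoms out at atoms and the context can only ever shrink.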
The notion of intersectivity can be decomposed into the two properties known as eliminativity and distributivity. Eliminativity says that an update can only ever remove worlds from the context, never add them. Distributivity says that updating C with \varphi is equivalent to updating each singleton subset of C with \varphi and then taking the union of the results.

Eliminativity: C[\varphi] \subseteq C
Distributivity: C[\varphi] = \bigcup \{ \{w\}[\varphi] \mid w \in C \}
Intersectivity amounts to the conjunction of these two properties, as proven by Johan van Benthem.[8] [10]
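Both properties are easy to check mechanically for an intersective update. The toy context and denotation below are assumptions made for illustration:

```python
# Eliminativity and distributivity for an intersective update
# C[φ] = C ∩ [[φ]], checked on a three-world toy context.
denotation = {"w", "v"}                     # [[φ]], a set of worlds (assumed)

def upd(C):
    return C & denotation                   # intersective update

C = {"w", "v", "u"}

eliminative = upd(C) <= C                                      # C[φ] ⊆ C
distributive = upd(C) == set().union(*(upd({x}) for x in C))   # pointwise union
print(eliminative, distributive)  # True True
```

The converse direction of van Benthem's result can also be seen from this sketch: given any eliminative and distributive update, collecting the worlds that survive their own singleton updates yields a set [[\varphi]] for which the update is intersective.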
The framework of update semantics is more general than static semantics because it is not limited to intersective meanings. Nonintersective meanings are theoretically useful because they contribute different information depending on what information is already present in the context. If \varphi has an intersective meaning, then it contributes the information [[\varphi]] to any context it updates; if \varphi has a nonintersective meaning, then an update with it may contribute [[\varphi]] in some contexts but entirely different information in others.
Many natural language expressions have been argued to have nonintersective meanings. The nonintersectivity of epistemic modals can be seen in the infelicity of epistemic contradictions.[11] [8]
Epistemic contradiction: #It's raining and it might not be raining.
These sentences have been argued to be bona fide logical contradictions, unlike superficially similar examples such as Moore sentences, which can be given a pragmatic explanation.[12] [8]
Epistemic contradiction principle:
\varphi \land \Diamond\neg\varphi \models \bot
These sentences cannot be analysed as logical contradictions within purely intersective frameworks such as the relational semantics for modal logic. The epistemic contradiction principle holds only on the class of relational frames satisfying Rwv \Rightarrow (w = v), but such frames also validate the entailment from \Diamond\varphi to \varphi. A relational semantics validating epistemic contradictions would therefore wrongly predict that \Diamond\neg\varphi rules out the truth of \varphi. Update semantics avoids this problem by giving \Diamond a nonintersective meaning, on which \Diamond\varphi tests whether the input context can be consistently updated with \varphi:

C[\Diamond\varphi] = \begin{cases} C & \text{if } C[\varphi] \neq \varnothing \\ \varnothing & \text{otherwise} \end{cases}
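The test clause can be prototyped by extending the worlds-as-sets-of-atoms encoding used above, with conjunction given its standard sequential entry from update semantics (the encoding itself is an illustrative assumption):

```python
# Veltman-style test semantics for ◇, with worlds modeled as frozensets
# of true atoms. Conjunction is sequential: C[φ∧ψ] = C[φ][ψ].
def upd(C, phi):
    op = phi[0]
    if op == "atom":
        return {w for w in C if phi[1] in w}
    if op == "not":
        return C - upd(C, phi[1])
    if op == "and":                          # update with ψ after φ
        return upd(upd(C, phi[1]), phi[2])
    if op == "might":                        # C[◇φ] = C if C[φ] ≠ ∅, else ∅
        return C if upd(C, phi[1]) else set()
    raise ValueError(f"unknown connective: {op}")

rain = ("atom", "rain")
C = {frozenset({"rain"}), frozenset()}       # one rainy world, one dry world

# "It's raining and it might not be raining" trivializes every context:
assert upd(C, ("and", rain, ("might", ("not", rain)))) == set()
# the reversed order survives, so this ∧ is order-sensitive:
assert upd(C, ("and", ("might", ("not", rain)), rain)) == {frozenset({"rain"})}
```

The `might` clause is eliminative but not distributive: updating the two-world context with ◇¬rain returns both worlds, whereas updating each singleton separately and pooling the results keeps only the dry world. This is exactly the nonintersectivity the framework requires.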
On this semantics, \Diamond\varphi acts as a test: it returns the input context unchanged if that context can be consistently updated with \varphi, and returns the empty (absurd) context otherwise. As a result, an update with the epistemic contradiction \varphi \land \Diamond\neg\varphi always delivers the absurd context: updating with \varphi first removes every world where \varphi fails, so the subsequent test \Diamond\neg\varphi cannot succeed. Here \neg receives its intersective entry, while \land is interpreted as sequential update, with the right conjunct applied to the output of the left.