Dependence logic

Dependence logic is a logical formalism, created by Jouko Väänänen,[1] which adds dependence atoms to the language of first-order logic. A dependence atom is an expression of the form $=\!(t_1, \ldots, t_n)$, where $t_1, \ldots, t_n$ are terms, and it corresponds to the statement that the value of $t_n$ is functionally dependent on the values of $t_1, \ldots, t_{n-1}$.

Dependence logic is a logic of imperfect information, like branching quantifier logic or independence-friendly logic (IF logic): in other words, its game-theoretic semantics can be obtained from that of first-order logic by restricting the availability of information to the players, thus allowing for non-linearly ordered patterns of dependence and independence between variables. However, dependence logic differs from these logics in that it separates the notions of dependence and independence from the notion of quantification.

Syntax

The syntax of dependence logic is an extension of that of first-order logic. For a fixed signature $\sigma = (S_{\mathrm{func}}, S_{\mathrm{rel}}, \mathrm{ar})$, the set of all well-formed dependence logic formulas is defined according to the following rules:

Terms

Terms in dependence logic are defined precisely as in first-order logic.

Atomic formulas

There are three types of atomic formulas in dependence logic:

  1. A relational atom is an expression of the form $R t_1 \ldots t_n$, for any $n$-ary relation symbol $R$ in the signature and any $n$-tuple of terms $(t_1, \ldots, t_n)$;
  2. An equality atom is an expression of the form $t_1 = t_2$, for any two terms $t_1$ and $t_2$;
  3. A dependence atom is an expression of the form $=\!(t_1, \ldots, t_n)$, for any $n \in \mathbb{N}$ and any $n$-tuple of terms $(t_1, \ldots, t_n)$.

Nothing else is an atomic formula of dependence logic.

Relational and equality atoms are also called first-order atoms.

Complex formulas and sentences

For a fixed signature $\sigma$, the set of all formulas $\phi$ of dependence logic and their respective sets of free variables $\mathrm{Free}(\phi)$ are defined as follows:

  1. Any atomic formula $\phi$ is a formula, and $\mathrm{Free}(\phi)$ is the set of all variables occurring in it;
  2. If $\phi$ is a formula, so is $\neg\phi$, and $\mathrm{Free}(\neg\phi) = \mathrm{Free}(\phi)$;
  3. If $\phi$ and $\psi$ are formulas, so is $\phi \vee \psi$, and $\mathrm{Free}(\phi \vee \psi) = \mathrm{Free}(\phi) \cup \mathrm{Free}(\psi)$;
  4. If $\phi$ is a formula and $x$ is a variable, $\exists x\, \phi$ is also a formula, and $\mathrm{Free}(\exists x\, \phi) = \mathrm{Free}(\phi) \setminus \{x\}$.

Nothing is a dependence logic formula unless it can be obtained through a finite number of applications of these four rules.

A formula $\phi$ such that $\mathrm{Free}(\phi) = \emptyset$ is a sentence of dependence logic.
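
As an illustration of these formation rules, here is a minimal sketch of the syntax as Python dataclasses, together with the computation of $\mathrm{Free}(\phi)$; all class and function names are invented for this example and are not part of any standard library.

```python
from dataclasses import dataclass
from typing import Tuple, Union

# Terms: variables and applications of function symbols, as in first-order logic.
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Func:
    symbol: str
    args: Tuple["Term", ...]

Term = Union[Var, Func]

# Atomic formulas: relational atoms R t1...tn, equality atoms t1 = t2,
# and dependence atoms =(t1, ..., tn).
@dataclass(frozen=True)
class RelAtom:
    symbol: str
    args: Tuple[Term, ...]

@dataclass(frozen=True)
class EqAtom:
    left: Term
    right: Term

@dataclass(frozen=True)
class DepAtom:
    args: Tuple[Term, ...]

# Compound formulas: negation, disjunction, existential quantification.
@dataclass(frozen=True)
class Not:
    sub: "Formula"

@dataclass(frozen=True)
class Or:
    left: "Formula"
    right: "Formula"

@dataclass(frozen=True)
class Exists:
    var: str
    sub: "Formula"

Formula = Union[RelAtom, EqAtom, DepAtom, Not, Or, Exists]

def term_variables(t: Term) -> frozenset:
    """Set of variables occurring in a term."""
    if isinstance(t, Var):
        return frozenset({t.name})
    return frozenset().union(*(term_variables(s) for s in t.args))

def free_variables(phi: Formula) -> frozenset:
    """Free(phi), following the four formation rules above."""
    if isinstance(phi, (RelAtom, DepAtom)):
        return frozenset().union(*(term_variables(t) for t in phi.args))
    if isinstance(phi, EqAtom):
        return term_variables(phi.left) | term_variables(phi.right)
    if isinstance(phi, Not):
        return free_variables(phi.sub)
    if isinstance(phi, Or):
        return free_variables(phi.left) | free_variables(phi.right)
    if isinstance(phi, Exists):
        return free_variables(phi.sub) - {phi.var}
    raise TypeError(f"not a formula: {phi!r}")
```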

Conjunction and universal quantification

In the above presentation of the syntax of dependence logic, conjunction and universal quantification are not treated as primitive operators; rather, they are defined in terms of negation and, respectively, disjunction and existential quantification, by means of De Morgan's Laws.

Therefore, $\phi \wedge \psi$ is taken as a shorthand for $\neg(\neg\phi \vee \neg\psi)$, and $\forall x\, \phi$ is taken as a shorthand for $\neg(\exists x\, \neg\phi)$.
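
Continuing the syntax sketch above, these shorthands can be expressed as derived constructors (again purely illustrative):

```python
def And(phi, psi):
    """phi ∧ psi, defined as ¬(¬phi ∨ ¬psi)."""
    return Not(Or(Not(phi), Not(psi)))

def Forall(var, phi):
    """∀x phi, defined as ¬∃x ¬phi."""
    return Not(Exists(var, Not(phi)))
```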

Semantics

The team semantics for dependence logic is a variant of Wilfrid Hodges' compositional semantics for IF logic.[2] [3] There exist equivalent game-theoretic semantics for dependence logic, both in terms of imperfect information games and in terms of perfect information games.

Teams

Let $\mathcal{A} = (A, \sigma, I)$ be a first-order structure and let $V = \{v_1, \ldots, v_n\}$ be a finite set of variables. Then a team over $\mathcal{A}$ with domain $V$ is a set of assignments over $\mathcal{A}$ with domain $V$, that is, a set of functions from $V$ to $A$.

It may be helpful to visualize such a team as a database relation with attributes $v_1, \ldots, v_n$ and with only one data type, corresponding to the domain of the structure: for example, if the team consists of four assignments $\mu_1, \ldots, \mu_4$ with domain $\{v_1, v_2, v_3\}$, then one may represent it as the relation

$$\begin{array}{c|ccc} & v_1 & v_2 & v_3 \\ \hline \mu_1 & \mu_1(v_1) & \mu_1(v_2) & \mu_1(v_3) \\ \mu_2 & \mu_2(v_1) & \mu_2(v_2) & \mu_2(v_3) \\ \mu_3 & \mu_3(v_1) & \mu_3(v_2) & \mu_3(v_3) \\ \mu_4 & \mu_4(v_1) & \mu_4(v_2) & \mu_4(v_3) \end{array}$$
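
In code, this database-relation view is immediate: an assignment can be modelled as a mapping from variable names to domain elements, and a team as a finite set of such mappings. The sketch below (with invented names, and a list standing in for a set) fixes the representation used in the later examples.

```python
# An assignment maps variable names to elements of the structure's domain;
# a team is a finite set of assignments, modelled here as a list of dicts.
mu1 = {"v1": 0, "v2": 0, "v3": 1}
mu2 = {"v1": 0, "v2": 1, "v3": 1}
mu3 = {"v1": 1, "v2": 0, "v3": 0}
mu4 = {"v1": 1, "v2": 1, "v3": 0}

team = [mu1, mu2, mu3, mu4]                   # the four-row relation pictured above

attributes = set(team[0])                     # the team's domain: {"v1", "v2", "v3"}
values = {mu[v] for mu in team for v in mu}   # elements of the structure's domain in use
```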

Positive and negative satisfaction

Team semantics can be defined in terms of two relations $\mathcal{T}$ and $\mathcal{C}$ between structures, teams and formulas.

Given a structure $\mathcal{A}$, a team $X$ over it and a dependence logic formula $\phi$ whose free variables are contained in the domain of $X$, if $(\mathcal{A}, X, \phi) \in \mathcal{T}$ we say that $X$ is a trump for $\phi$ in $\mathcal{A}$, and we write $\mathcal{A} \models^{+}_{X} \phi$; analogously, if $(\mathcal{A}, X, \phi) \in \mathcal{C}$ we say that $X$ is a cotrump for $\phi$ in $\mathcal{A}$, and we write $\mathcal{A} \models^{-}_{X} \phi$.

If $\mathcal{A} \models^{+}_{X} \phi$ one also says that $\phi$ is positively satisfied by $X$ in $\mathcal{A}$, and if instead $\mathcal{A} \models^{-}_{X} \phi$ one says that $\phi$ is negatively satisfied by $X$ in $\mathcal{A}$.

The necessity of considering positive and negative satisfaction separately is a consequence of the fact that in dependence logic, as in the logic of branching quantifiers or in IF logic, the law of the excluded middle does not hold; alternatively, one may assume that all formulas are in negation normal form, using De Morgan's relations in order to define universal quantification and conjunction from existential quantification and disjunction respectively, and consider positive satisfaction alone.

Given a sentence $\phi$, we say that $\phi$ is true in $\mathcal{A}$ if and only if $\mathcal{A} \models^{+}_{\{\emptyset\}} \phi$, and we say that $\phi$ is false in $\mathcal{A}$ if and only if $\mathcal{A} \models^{-}_{\{\emptyset\}} \phi$.

Semantic rules

As for the case of Alfred Tarski's satisfaction relation for first-order formulas, the positive and negative satisfiability relations of the team semantics for dependence logic are defined by structural induction over the formulas of the language. Since the negation operator interchanges positive and negative satisfiability, the two inductions corresponding to $\models^{+}$ and $\models^{-}$ need to be performed simultaneously:

Positive satisfiability

* $\mathcal{A} \models^{+}_{X} R t_1 \ldots t_n$ if and only if
  1. $R$ is an $n$-ary relation symbol in the signature of $\mathcal{A}$;
  2. all variables occurring in the terms $t_1, \ldots, t_n$ are in the domain of $X$;
  3. for every assignment $\mu \in X$, the evaluation of the tuple $(t_1, \ldots, t_n)$ according to $\mu$ is in the interpretation of $R$ in $\mathcal{A}$.
* $\mathcal{A} \models^{+}_{X} t_1 = t_2$ if and only if
  1. all variables occurring in the terms $t_1$ and $t_2$ are in the domain of $X$;
  2. for every assignment $\mu \in X$, the evaluations of $t_1$ and $t_2$ according to $\mu$ are the same.
* $\mathcal{A} \models^{+}_{X} =\!(t_1, \ldots, t_n)$ if and only if any two assignments $\mu, \mu' \in X$ whose evaluations of the tuple $(t_1, \ldots, t_{n-1})$ coincide assign the same value to $t_n$.
* $\mathcal{A} \models^{+}_{X} \neg\phi$ if and only if $\mathcal{A} \models^{-}_{X} \phi$.
* $\mathcal{A} \models^{+}_{X} \phi \vee \psi$ if and only if there exist teams $Y$ and $Z$ such that $X = Y \cup Z$, $\mathcal{A} \models^{+}_{Y} \phi$ and $\mathcal{A} \models^{+}_{Z} \psi$.
* $\mathcal{A} \models^{+}_{X} \exists x\, \phi$ if and only if there exists a function $F$ from $X$ to the domain of $\mathcal{A}$ such that $\mathcal{A} \models^{+}_{X[F/x]} \phi$, where $X[F/x] = \{\mu[F(\mu)/x] : \mu \in X\}$.
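
Using the list-of-dicts team representation sketched earlier, the positive clauses can be prototyped directly. The helpers below are illustrative only (not from the source): terms are simplified to plain variable names, so evaluating a term under an assignment is a dictionary lookup; relations are given as sets of tuples; and the disjunction and existential clauses are decided by brute-force search, which is exponential in the size of the team.

```python
from itertools import product

def satisfies_relational_atom(interpretation, team, terms):
    """A |=+_X R t1...tn: every assignment sends (t1,...,tn) into R's interpretation."""
    return all(tuple(mu[t] for t in terms) in interpretation for mu in team)

def satisfies_equality_atom(team, t1, t2):
    """A |=+_X t1 = t2: t1 and t2 evaluate to the same element under every assignment."""
    return all(mu[t1] == mu[t2] for mu in team)

def satisfies_dependence_atom(team, terms):
    """A |=+_X =(t1,...,tn): assignments agreeing on t1,...,t(n-1) agree on tn."""
    *determining, determined = terms
    seen = {}
    for mu in team:
        key = tuple(mu[t] for t in determining)
        if key in seen and seen[key] != mu[determined]:
            return False
        seen[key] = mu[determined]
    return True

def satisfies_disjunction(check_phi, check_psi, team):
    """A |=+_X phi v psi: search for teams Y, Z with X = Y u Z, Y |=+ phi and Z |=+ psi.
    Each assignment is sent to Y, to Z, or to both."""
    for choice in product((0, 1, 2), repeat=len(team)):
        Y = [mu for mu, c in zip(team, choice) if c in (0, 2)]
        Z = [mu for mu, c in zip(team, choice) if c in (1, 2)]
        if check_phi(Y) and check_psi(Z):
            return True
    return False

def satisfies_exists(check_phi, team, var, domain):
    """A |=+_X exists x phi: search for a supplementing function F: X -> A with
    X[F/x] |=+ phi, where X[F/x] replaces x by F(mu) in each assignment mu."""
    for values in product(domain, repeat=len(team)):
        supplemented = [{**mu, var: v} for mu, v in zip(team, values)]
        if check_phi(supplemented):
            return True
    return False

# With the four-row team from the earlier sketch:
#   satisfies_dependence_atom(team, ["v1", "v3"])  -> True  (v3 is a function of v1)
#   satisfies_dependence_atom(team, ["v2", "v1"])  -> False (v2 = 0 allows both v1 = 0 and v1 = 1)
```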

Negative satisfiability

* $\mathcal{A} \models^{-}_{X} R t_1 \ldots t_n$ if and only if
  1. $R$ is an $n$-ary relation symbol in the signature of $\mathcal{A}$;
  2. all variables occurring in the terms $t_1, \ldots, t_n$ are in the domain of $X$;
  3. for every assignment $\mu \in X$, the evaluation of the tuple $(t_1, \ldots, t_n)$ according to $\mu$ is not in the interpretation of $R$ in $\mathcal{A}$.
* $\mathcal{A} \models^{-}_{X} t_1 = t_2$ if and only if
  1. all variables occurring in the terms $t_1$ and $t_2$ are in the domain of $X$;
  2. for every assignment $\mu \in X$, the evaluations of $t_1$ and $t_2$ according to $\mu$ are different.
* $\mathcal{A} \models^{-}_{X} =\!(t_1, \ldots, t_n)$ if and only if $X$ is the empty team.
* $\mathcal{A} \models^{-}_{X} \neg\phi$ if and only if $\mathcal{A} \models^{+}_{X} \phi$.
* $\mathcal{A} \models^{-}_{X} \phi \vee \psi$ if and only if $\mathcal{A} \models^{-}_{X} \phi$ and $\mathcal{A} \models^{-}_{X} \psi$.
* $\mathcal{A} \models^{-}_{X} \exists x\, \phi$ if and only if $\mathcal{A} \models^{-}_{X[A/x]} \phi$, where $X[A/x] = \{\mu[a/x] : \mu \in X, a \in A\}$ and $A$ is the domain of $\mathcal{A}$.

Dependence logic and other logics

Dependence logic and first-order logic

Dependence logic is a conservative extension of first-order logic:[4] in other words, for every first-order sentence $\phi$ and structure $\mathcal{A}$ we have that $\mathcal{A} \models^{+}_{\{\emptyset\}} \phi$ if and only if $\phi$ is true in $\mathcal{A}$ according to the usual first-order semantics. Furthermore, for any first-order formula $\phi$, $\mathcal{A} \models^{+}_{X} \phi$ if and only if all assignments $\mu \in X$ satisfy $\phi$ in $\mathcal{A}$ according to the usual first-order semantics.

However, dependence logic is strictly more expressive than first-order logic:[5] for example, the sentence

$$\exists z\, \forall x_1 \forall x_2 \exists y_1 \exists y_2\, \big( =\!(x_1, y_1) \wedge =\!(x_2, y_2) \wedge (x_1 = x_2 \leftrightarrow y_1 = y_2) \wedge y_1 \neq z \big)$$

is true in a model $\mathcal{A}$ if and only if the domain of this model is infinite, even though no first-order formula $\phi$ has this property.

Dependence logic and second-order logic

Every dependence logic sentence is equivalent to some sentence in the existential fragment of second-order logic,[6] that is, to some second-order sentence of the form

$$\exists R_1 \ldots \exists R_n\, \psi(R_1, \ldots, R_n)$$

where $\psi(R_1, \ldots, R_n)$ does not contain second-order quantifiers. Conversely, every second-order sentence of the above form is equivalent to some dependence logic sentence.[7]

As for open formulas, dependence logic corresponds to the downwards monotone fragment of existential second-order logic, in the sense that a nonempty class of teams is definable by a dependence logic formula if and only if the corresponding class of relations is downwards monotone and definable by an existential second-order formula.[8]

Dependence logic and branching quantifiers

Branching quantifiers are expressible in terms of dependence atoms: for example, the expression

$$(Q_H x_1, x_2, y_1, y_2)\,\phi(x_1, x_2, y_1, y_2) \equiv \begin{pmatrix} \forall x_1 \exists y_1 \\ \forall x_2 \exists y_2 \end{pmatrix} \phi(x_1, x_2, y_1, y_2)$$

is equivalent to the dependence logic sentence

$$\forall x_1 \exists y_1 \forall x_2 \exists y_2\, \big( =\!(x_1, y_1) \wedge =\!(x_2, y_2) \wedge \phi \big),$$

in the sense that the former expression is true in a model if and only if the latter expression is true.

Conversely, any dependence logic sentence is equivalent to some sentence in the logic of branching quantifiers, since all existential second-order sentences are expressible in branching quantifier logic.[9] [10]

Dependence logic and IF logic

Any dependence logic sentence is logically equivalent to some IF logic sentence, and vice versa.[11]

However, the issue is subtler when it comes to open formulas. Translations between IF logic and dependence logic formulas, and vice versa, exist as long as the domain of the team is fixed: in other words, for every set of variables $V = \{v_1, \ldots, v_n\}$ and every IF logic formula $\phi$ with free variables in $V$, there exists a dependence logic formula $\phi^D$ such that $\mathcal{A} \models^{+}_{X} \phi \Leftrightarrow \mathcal{A} \models^{+}_{X} \phi^D$ for all structures $\mathcal{A}$ and all teams $X$ with domain $V$; and conversely, for every dependence logic formula $\psi$ with free variables in $V$ there exists an IF logic formula $\psi^I$ such that $\mathcal{A} \models^{+}_{X} \psi \Leftrightarrow \mathcal{A} \models^{+}_{X} \psi^I$ for all structures $\mathcal{A}$ and all teams $X$ with domain $V$. These translations cannot be compositional.[12]

Properties

Dependence logic formulas are downwards closed: if $\mathcal{A} \models_{X} \phi$ and $Y \subseteq X$, then $\mathcal{A} \models_{Y} \phi$. Furthermore, the empty team (but not the team containing the empty assignment) satisfies all formulas of dependence logic, both positively and negatively.

The law of the excluded middle fails in dependence logic: for example, the formula $\exists y\, (=\!(y) \wedge y = x)$ is neither positively nor negatively satisfied by the team $X = \{(x : 0), (x : 1)\}$. Furthermore, disjunction is not idempotent and does not distribute over conjunction.[13]
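
Continuing the earlier sketches (the helper functions and the two-element domain below are from the illustrative code above, not from the source), the positive half of this failure can be checked mechanically; here the conjunction is evaluated as both conjuncts holding of the same team, in line with the De Morgan definition given earlier.

```python
# The team X = {(x:0), (x:1)} over the two-element domain {0, 1}.
X = [{"x": 0}, {"x": 1}]
domain = [0, 1]

# phi = exists y ( =(y) ∧ y = x ): a supplementing choice of y must be constant
# over the team (the atom =(y)) while also agreeing with x in every assignment.
def positively_satisfies_phi(team):
    return satisfies_exists(
        lambda t: satisfies_dependence_atom(t, ["y"]) and satisfies_equality_atom(t, "y", "x"),
        team, "y", domain,
    )

print(positively_satisfies_phi(X))           # False: no constant y can equal both 0 and 1
print(positively_satisfies_phi([{"x": 0}]))  # True on the subteam {(x:0)}: choose y = 0
```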

Both the compactness theorem and the Löwenheim–Skolem theorem hold for dependence logic. Craig's interpolation theorem also holds, but, due to the nature of negation in dependence logic, in a slightly modified formulation: if two dependence logic sentences $\phi$ and $\psi$ are contradictory, that is, it is never the case that both $\phi$ and $\psi$ hold in the same model, then there exists a first-order sentence $\theta$ in the common language of the two sentences such that $\phi$ implies $\theta$ and $\theta$ is contradictory with $\psi$.[14]

As for IF logic,[15] dependence logic can define its own truth operator:[16] more precisely, there exists a formula $\tau(x)$ such that for every sentence $\phi$ of dependence logic and all models $\mathcal{M}_\omega$ which satisfy Peano's axioms, if $'\phi'$ is the Gödel number of $\phi$ then $\mathcal{M}_\omega \models^{+}_{\{\emptyset\}} \phi$ if and only if $\mathcal{M}_\omega \models^{+}_{\{\emptyset\}} \tau('\phi')$.

This does not contradict Tarski's undefinability theorem, since the negation of dependence logic is not the usual contradictory one.

Complexity

As a consequence of Fagin's theorem, the properties of finite structures definable by a dependence logic sentence correspond exactly to NP properties. Furthermore, Durand and Kontinen showed that restricting the number of universal quantifiers or the arity of dependence atoms in sentences gives rise to hierarchy theorems with respect to expressive power.[17]

The inconsistency problem of dependence logic is semidecidable, and in fact equivalent to the inconsistency problem for first-order logic. However, the decision problem for dependence logic is non-arithmetical, and is in fact complete with respect to the $\Pi_2$ class of the Lévy hierarchy.[18]

Variants and extensions

Team logic

Team logic[19] extends dependence logic with a contradictory negation $\sim\!\phi$. Its expressive power is equivalent to that of full second-order logic.[20]

Modal dependence logic

The dependence atom, or a suitable variant thereof, can be added to the language of modal logic, thus obtaining modal dependence logic.[21] [22] [23]

Intuitionistic dependence logic

As it stands, dependence logic lacks an implication. The intuitionistic implication $\phi \to \psi$, whose name derives from the similarity between its definition and that of the implication of intuitionistic logic, can be defined as follows:[24]

$\mathcal{A} \models_{X} \phi \to \psi$ if and only if for all $Y \subseteq X$ such that $\mathcal{A} \models_{Y} \phi$ it holds that $\mathcal{A} \models_{Y} \psi$.
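
With teams modelled as in the earlier sketches, this defining clause amounts to a brute-force check over all subteams; the helper below is a hypothetical illustration, exponential in the team size.

```python
from itertools import combinations

def satisfies_intuitionistic_implication(check_phi, check_psi, team):
    """X |= phi -> psi: every subteam Y of X that satisfies phi also satisfies psi."""
    for size in range(len(team) + 1):
        for Y in map(list, combinations(team, size)):
            if check_phi(Y) and not check_psi(Y):
                return False
    return True
```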

Intuitionistic dependence logic, that is, dependence logic supplemented with the intuitionistic implication, is equivalent to second-order logic.[25]

Independence logic

Instead of dependence atoms, independence logic adds to the language of first-order logic independence atoms $\vec{t_1} \perp_{\vec{t_3}} \vec{t_2}$, where $\vec{t_1}$, $\vec{t_2}$ and $\vec{t_3}$ are tuples of terms. The semantics of these atoms is defined as follows:

$\mathcal{A} \models_{X} \vec{t_1} \perp_{\vec{t_3}} \vec{t_2}$ if and only if for all $s, s' \in X$ with $\vec{t_3}\langle s\rangle = \vec{t_3}\langle s'\rangle$ there exists $s'' \in X$ such that $\vec{t_3}\langle s''\rangle = \vec{t_3}\langle s\rangle$, $\vec{t_1}\langle s''\rangle = \vec{t_1}\langle s\rangle$ and $\vec{t_2}\langle s''\rangle = \vec{t_2}\langle s'\rangle$.

Independence logic corresponds to existential second-order logic, in the sense that a non-empty class of teams is definable by an independence logic formula if and only if the corresponding class of relations is definable by an existential second-order formula.[26] Therefore, on the level of open formulas, independence logic is strictly stronger in expressive power than dependence logic. However, on the level of sentences these logics are equivalent.[27]
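
The semantic clause for the independence atom above translates into the following check (a hypothetical helper under the same team representation as before, with term tuples taken to be tuples of variable names):

```python
def satisfies_independence_atom(team, t1, t2, t3):
    """X |= t1 ⊥_{t3} t2: for any two assignments s, s' agreeing on t3 there is an
    assignment s'' in the team with the same t3-value, the t1-value of s and the
    t2-value of s'."""
    val = lambda mu, ts: tuple(mu[t] for t in ts)
    for s in team:
        for s2 in team:
            if val(s, t3) != val(s2, t3):
                continue
            if not any(val(s3, t3) == val(s, t3)
                       and val(s3, t1) == val(s, t1)
                       and val(s3, t2) == val(s2, t2)
                       for s3 in team):
                return False
    return True
```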

Inclusion/exclusion logic

Inclusion/exclusion logic extends first-order logic with inclusion atoms $\vec{t_1} \subseteq \vec{t_2}$ and exclusion atoms $\vec{t_1} \mid \vec{t_2}$, where in both formulas $\vec{t_1}$ and $\vec{t_2}$ are term tuples of the same length. The semantics of these atoms is defined as follows:

* $\mathcal{A} \models_{X} \vec{t_1} \subseteq \vec{t_2}$ if and only if for all $s \in X$ there exists $s' \in X$ such that $\vec{t_1}\langle s\rangle = \vec{t_2}\langle s'\rangle$;
* $\mathcal{A} \models_{X} \vec{t_1} \mid \vec{t_2}$ if and only if for all $s, s' \in X$ it holds that $\vec{t_1}\langle s\rangle \neq \vec{t_2}\langle s'\rangle$.
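
Both clauses translate directly into code under the same hypothetical team representation used earlier:

```python
def satisfies_inclusion_atom(team, t1, t2):
    """X |= t1 ⊆ t2: every value taken by t1 in X is also taken by t2 in X."""
    val = lambda mu, ts: tuple(mu[t] for t in ts)
    t2_values = {val(s, t2) for s in team}
    return all(val(s, t1) in t2_values for s in team)

def satisfies_exclusion_atom(team, t1, t2):
    """X |= t1 | t2: no value taken by t1 in X is also taken by t2 in X."""
    val = lambda mu, ts: tuple(mu[t] for t in ts)
    t2_values = {val(s, t2) for s in team}
    return all(val(s, t1) not in t2_values for s in team)
```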

Inclusion/exclusion logic has the same expressive power as independence logic, already on the level of open formulas.[28] Inclusion logic and exclusion logic are obtained by adding only inclusion atoms or exclusion atoms to first-order logic, respectively. Inclusion logic sentences correspond in expressive power to greatest fixed-point logic sentences; hence inclusion logic captures (least) fixed-point logic on finite models, and PTIME over finite ordered models.[29] Exclusion logic in turn corresponds to dependence logic in expressive power.[30]

Generalized quantifiers

Another way of extending dependence logic is to add generalized quantifiers to its language. There has been work on dependence logic with monotone generalized quantifiers[31] and on dependence logic with a certain majority quantifier, the latter leading to a new descriptive complexity characterization of the counting hierarchy.[32]

Notes and References

  1. Väänänen 2007
  2. Hodges 1997
  3. Väänänen 2007, §3.2
  4. Väänänen 2007, §3.2
  5. Väänänen 2007, §4
  6. Väänänen 2007, §6.1
  7. Väänänen 2007, §6.3
  8. Kontinen and Väänänen 2009
  9. Enderton 1970
  10. Walkoe 1970
  11. Väänänen 2007, §3.6
  12. Kontinen and Väänänen 2009 bis
  13. Väänänen 2007, §3
  14. Väänänen 2007, §6.2
  15. Hintikka 2002
  16. Väänänen 2007, §6.4
  17. Durand and Kontinen
  18. Väänänen 2007, §7
  19. Väänänen 2007, §8
  20. Kontinen and Nurmi 2009
  21. Sevenster 2009
  22. Väänänen 2008
  23. Lohmann and Vollmer 2010
  24. Abramsky and Väänänen 2009
  25. Yang 2010
  26. Galliani 2012
  27. Grädel and Väänänen
  28. Galliani 2012
  29. Galliani and Hella 2013
  30. Galliani 2012
  31. Engström
  32. Durand, Ebbing, Kontinen, Vollmer 2011