In computer science, a computation is said to diverge if it does not terminate or terminates in an exceptional state.[1] Otherwise it is said to converge. In domains where computations are expected to be infinite, such as process calculi, a computation is said to diverge if it fails to be productive (i.e. to continue producing an action within a finite amount of time).
Various subfields of computer science use varying, but mathematically precise, definitions of what it means for a computation to converge or diverge.
In abstract rewriting, an abstract rewriting system is called convergent if it is both confluent and terminating.
The notation t ↓ n means that t reduces to normal form n in zero or more reductions, t↓ means t reduces to some normal form in zero or more reductions, and t↑ means t does not reduce to a normal form; the latter is impossible in a terminating rewriting system.
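For a convergent system, repeatedly applying a reduction step is a terminating algorithm for computing the unique normal form. The following Haskell sketch (an illustrative example, not from the article) normalizes terms of a small confluent and terminating system with the rules add(0, y) → y and add(succ(x), y) → succ(add(x, y)):

-- Terms of a small abstract rewriting system.
data Term = Zero | Succ Term | Add Term Term deriving Show

-- One reduction step; Nothing means the term is already a normal form.
step :: Term -> Maybe Term
step (Add Zero y)     = Just y                        -- add(0, y) → y
step (Add (Succ x) y) = Just (Succ (Add x y))         -- add(s(x), y) → s(add(x, y))
step (Add x y)        = (\x' -> Add x' y) <$> step x  -- reduce inside the left argument
step (Succ x)         = Succ <$> step x               -- reduce under succ
step Zero             = Nothing

-- t ↓ n: iterate step until no rule applies. Because the system is
-- terminating, this loop always halts; because it is confluent, the
-- result is the unique normal form of the input.
normalize :: Term -> Term
normalize t = maybe t normalize (step t)

main :: IO ()
main = print (normalize (Add (Succ Zero) (Succ Zero)))  -- Succ (Succ Zero)

In a non-terminating system the same loop could run forever (the case written t↑), which is why convergence requires termination as well as confluence.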
In the lambda calculus an expression is divergent if it has no normal form.
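A standard example of a divergent expression is the combinator Ω, whose only β-reduction step reproduces the term itself:

Ω = (λx. x x)(λx. x x) → (λx. x x)(λx. x x) = Ω

Every reduction sequence starting from Ω is therefore infinite, so Ω has no normal form.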
In denotational semantics, an object function f : A → B can be modelled as a mathematical function f : A ∪ {⊥} → B ∪ {⊥}, where ⊥ (bottom) indicates that the object function or its argument diverges.
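This lifting can be sketched in Haskell (an illustrative model, not the article's formalism) using Maybe, with Nothing standing for ⊥; the names lift and f below are hypothetical:

-- Maybe a models A ∪ {⊥}: Nothing plays the role of ⊥ (divergence).

-- Lift an object function to the bottom-extended domains: a diverging
-- argument forces a diverging result, and the function itself may
-- also produce ⊥.
lift :: (a -> Maybe b) -> Maybe a -> Maybe b
lift _ Nothing  = Nothing   -- the argument diverges, so the call diverges
lift f (Just x) = f x       -- otherwise apply the object function

-- Example object function: 100 divided by n, with ⊥ on division by zero.
f :: Int -> Maybe Int
f 0 = Nothing
f n = Just (100 `div` n)

main :: IO ()
main = do
  print (lift f (Just 4))  -- Just 25
  print (lift f (Just 0))  -- Nothing: f itself diverges at 0
  print (lift f Nothing)   -- Nothing: divergence of the argument propagates

This models the strict case, in which divergence of the argument always propagates; a non-strict function could instead map ⊥ to a defined result.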
In the calculus of communicating sequential processes (CSP), divergence is a drastic situation where a process performs an endless series of hidden actions. For example, consider the following process, written in CSP notation:
Clock = tick → Clock
The traces of this process are defined as:

traces(Clock) = {⟨⟩, ⟨tick⟩, ⟨tick, tick⟩, …} = {tick}*
Now consider the following process, which hides the tick event of Clock:

P = Clock \ tick

As P can do nothing other than perform hidden actions forever, it never engages in a visible event: it is equivalent to the process that does nothing but diverge, written div.
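The effect of hiding every visible event can be sketched with a toy trace model in Haskell (a rough approximation for illustration, not CSP's actual failures/divergences semantics): a process is modelled as a stream of events, and hiding filters events out of the observable trace.

data Event = Tick deriving (Eq, Show)

-- Clock = tick → Clock, modelled as the infinite stream of ticks.
clock :: [Event]
clock = Tick : clock

-- Hiding an event removes it from the observable trace.
hide :: Event -> [Event] -> [Event]
hide e = filter (/= e)

-- P = Clock \ tick: every action of Clock is hidden.
p :: [Event]
p = hide Tick clock

main :: IO ()
main = do
  print (take 3 clock)  -- [Tick,Tick,Tick]: Clock is productive
  -- print (take 1 p)   -- never returns: p performs hidden actions forever
  putStrLn "Demanding any observable event of p would loop forever."

Here clock is productive (each element of its trace arrives after finitely many steps), while p is not: filtering Tick out of an all-Tick stream never yields an element, matching the informal description of divergence as an endless series of hidden actions.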