Sample entropy

Sample entropy (SampEn) is a modification of approximate entropy (ApEn), used for assessing the complexity of physiological time-series signals and for diagnosing diseased states.[1] SampEn has two advantages over ApEn: data-length independence and a relatively trouble-free implementation. There is also a small computational difference: in ApEn, the comparison between the template vector (see below) and the rest of the vectors also includes a comparison with itself. This guarantees that the probabilities C_i^m(r) are never zero, so it is always possible to take a logarithm of the probabilities. Because these self-matches lower the ApEn values, the signals are interpreted to be more regular than they actually are. Self-matches are not included in SampEn. However, since SampEn makes direct use of the correlation integrals, it is not a real measure of information but an approximation. The foundations of SampEn and its differences from ApEn, as well as a step-by-step tutorial for its application, are available in [2].

There is also a multiscale version of SampEn, suggested by Costa and others.[3] SampEn can be used in biomedical and biomechanical research, for example to evaluate postural control.[4][5]

Definition

Given an embedding dimension m, a tolerance r and a number of data points N, SampEn is the negative natural logarithm of the probability that if two sets of simultaneous data points of length m have distance < r, then two sets of simultaneous data points of length m + 1 also have distance < r. It is denoted by SampEn(m, r, N), or by SampEn(m, r, τ, N) when the sampling time τ is included.

Now assume we have a time-series data set of length N, {x_1, x_2, x_3, ..., x_N}, sampled at a constant time interval τ. We define a template vector of length m such that

X_m(i) = {x_i, x_{i+1}, x_{i+2}, ..., x_{i+m-1}}

and take the distance function d[X_m(i), X_m(j)] (i ≠ j) to be the Chebyshev distance (though it could be any distance function, including the Euclidean distance). The sample entropy is then defined as

SampEn = -ln(A/B)

where

A = number of template vector pairs having d[X_{m+1}(i), X_{m+1}(j)] < r
B = number of template vector pairs having d[X_m(i), X_m(j)] < r

It is clear from the definition that A will always have a value smaller than or equal to B. Therefore, SampEn(m, r, τ) is always either zero or positive. A smaller value of SampEn indicates more self-similarity in the data set, or less noise.

Generally, we take the value of m to be 2 and the value of r to be 0.2 × std, where std denotes the standard deviation, which should be taken over a very large dataset. For instance, an r value of 6 ms is appropriate for sample entropy calculations of heart-rate intervals, since this corresponds to 0.2 × std for a very large population.
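As a concrete sketch of this parameter choice (the RR-interval values below are made up for illustration; `statistics` is the Python standard library):

```python
import statistics

# Hypothetical RR-interval series in milliseconds (illustrative values only).
rr_intervals = [812, 805, 790, 798, 820, 815, 801, 793, 808, 811]

m = 2                                     # embedding dimension, as suggested above
r = 0.2 * statistics.stdev(rr_intervals)  # tolerance: 0.2 x standard deviation
# In practice, the standard deviation should come from a very large dataset (see text).
```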

Multiscale SampEn

The definition above is a special case of multiscale SampEn with δ = 1, where δ is called the skipping parameter. In multiscale SampEn, template vectors are defined with a certain interval between their elements, specified by the value of δ. The modified template vector is defined as

X_{m,δ}(i) = {x_i, x_{i+δ}, x_{i+2δ}, ..., x_{i+(m-1)δ}}

and SampEn can be written as

SampEn(m, r, δ) = -ln(A_δ / B_δ)

where A_δ and B_δ are calculated as before.
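The δ-spaced template construction can be sketched as follows (a minimal illustration; the function name is ours, not standard):

```python
def construct_templates_delta(data, m, delta):
    # X_{m,delta}(i) = {x_i, x_{i+delta}, ..., x_{i+(m-1)*delta}}
    num_windows = len(data) - (m - 1) * delta
    return [data[i : i + (m - 1) * delta + 1 : delta] for i in range(num_windows)]


# delta = 2 skips every other sample; delta = 1 recovers the ordinary templates.
print(construct_templates_delta([1, 2, 3, 4, 5, 6], m=3, delta=2))  # [[1, 3, 5], [2, 4, 6]]
```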

Implementation

Sample entropy can be implemented easily in many different programming languages. Below is an example written in Python.

```python
from itertools import combinations
from math import log


def construct_templates(timeseries_data: list, m: int = 2):
    # Sliding windows of length m over the series.
    num_windows = len(timeseries_data) - m + 1
    return [timeseries_data[x : x + m] for x in range(0, num_windows)]


def get_matches(templates: list, r: float):
    # Count template pairs within tolerance r (self-matches are excluded).
    return len(
        list(filter(lambda x: is_match(x[0], x[1], r), combinations(templates, 2)))
    )


def is_match(template_1: list, template_2: list, r: float):
    # Chebyshev distance: every element-wise difference must be below r.
    return all([abs(x - y) < r for (x, y) in zip(template_1, template_2)])


def sample_entropy(timeseries_data: list, window_size: int, r: float):
    B = get_matches(construct_templates(timeseries_data, window_size), r)
    A = get_matches(construct_templates(timeseries_data, window_size + 1), r)
    return -log(A / B)  # assumes A > 0 and B > 0
```

An equivalent example in numerical Python.

```python
import numpy


def construct_templates(timeseries_data, m):
    num_windows = len(timeseries_data) - m + 1
    return numpy.array([timeseries_data[x : x + m] for x in range(0, num_windows)])


def get_matches(templates, r):
    return len(list(filter(lambda x: is_match(x[0], x[1], r), combinations(templates))))


def combinations(x):
    # All index pairs (i, j) with i < j, gathered into an array of template pairs.
    idx = numpy.stack(numpy.triu_indices(len(x), k=1), axis=-1)
    return x[idx]


def is_match(template_1, template_2, r):
    return numpy.all([abs(x - y) < r for (x, y) in zip(template_1, template_2)])


def sample_entropy(timeseries_data, window_size, r):
    B = get_matches(construct_templates(timeseries_data, window_size), r)
    A = get_matches(construct_templates(timeseries_data, window_size + 1), r)
    return -numpy.log(A / B)
```
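A quick, self-contained sanity check of the definition (the helper below recomputes A and B directly, and the input series is made up): a strongly repeating signal should yield a small SampEn.

```python
from itertools import combinations
from math import log


def sampen_direct(data, m, r):
    # Count template pairs within Chebyshev distance r for lengths m and m + 1.
    def count(length):
        templates = [data[i : i + length] for i in range(len(data) - length + 1)]
        return sum(
            1
            for a, b in combinations(templates, 2)
            if max(abs(x - y) for x, y in zip(a, b)) < r
        )

    return -log(count(m + 1) / count(m))  # -ln(A / B); assumes A > 0


regular = [1, 2, 1, 2, 1, 2, 1, 2, 1, 2]
print(sampen_direct(regular, m=2, r=0.5))  # ln(4/3) ≈ 0.288: low, i.e. very regular
```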


Notes and References

  1. Richman JS, Moorman JR (2000). "Physiological time-series analysis using approximate entropy and sample entropy". American Journal of Physiology. Heart and Circulatory Physiology. 278(6): H2039–49. doi:10.1152/ajpheart.2000.278.6.H2039. PMID 10843903.
  2. Delgado-Bonal A, Marshak A (June 2019). "Approximate Entropy and Sample Entropy: A Comprehensive Tutorial". Entropy. 21(6): 541. doi:10.3390/e21060541. PMID 33267255. PMC 7515030.
  3. Costa M, Goldberger A, Peng C-K (2005). "Multiscale entropy analysis of biological signals". Physical Review E. 71(2): 021906. doi:10.1103/PhysRevE.71.021906. PMID 15783351.
  4. Błażkiewicz M, Kędziorek J, Hadamus A (March 2021). "The Impact of Visual Input and Support Area Manipulation on Postural Control in Subjects after Osteoporotic Vertebral Fracture". Entropy. 23(3): 375. doi:10.3390/e23030375. PMID 33804770. PMC 8004071.
  5. Hadamus A, Białoszewski D, Błażkiewicz M, Kowalska AJ, Urbaniak E, Wydra KT, Wiaderna K, Boratyński R, Kobza A, Marczyński W (February 2021). "Assessment of the Effectiveness of Rehabilitation after Total Knee Replacement Surgery Using Sample Entropy and Classical Measures of Body Balance". Entropy. 23(2): 164. doi:10.3390/e23020164. PMID 33573057. PMC 7911395.