In probability theory and statistics, a collection of random variables is independent and identically distributed if each random variable has the same probability distribution as the others and all are mutually independent.[1] This property is usually abbreviated as i.i.d., iid, or IID. IID was first defined in statistics and finds application in different fields such as data mining and signal processing.
Statistics commonly deals with random samples. A random sample can be thought of as a set of objects that are chosen randomly. More formally, it is "a sequence of independent, identically distributed (IID) random data points."
In other words, the terms random sample and IID are synonymous. In statistics, "random sample" is the typical terminology, but in probability, it is more common to say "IID."
Independent and identically distributed random variables are often used as an assumption, which tends to simplify the underlying mathematics. In practical applications of statistical modeling, however, this assumption may or may not be realistic.[3]
The i.i.d. assumption is also used in the central limit theorem, which states that the probability distribution of the sum (or average) of i.i.d. variables with finite variance approaches a normal distribution.[4]
The i.i.d. assumption frequently arises in the context of sequences of random variables. Then, "independent and identically distributed" implies that an element in the sequence is independent of the random variables that came before it. In this way, an i.i.d. sequence is different from a Markov sequence, where the probability distribution for the nth random variable is a function of the previous random variable in the sequence (for a first-order Markov sequence). An i.i.d. sequence does not imply the probabilities for all elements of the sample space or event space must be the same.[5] For example, repeated throws of loaded dice will produce a sequence that is i.i.d., despite the outcomes being biased.
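This contrast can be illustrated with a short simulation (a minimal sketch; the bias vector and the transition matrix are arbitrary illustrative choices, not taken from this article): a loaded die produces an i.i.d. sequence because every roll uses the same biased distribution regardless of history, whereas a first-order Markov chain draws each element from a distribution that depends on the previous element.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Loaded die: biased but i.i.d. -- every roll uses the same distribution,
# independent of all previous rolls. (Bias values are illustrative only.)
faces = np.arange(1, 7)
bias = np.array([0.05, 0.05, 0.10, 0.10, 0.20, 0.50])
iid_sequence = rng.choice(faces, size=20, p=bias)

# First-order Markov chain on the same faces: the distribution of each
# element depends on the previous element via a transition matrix.
transition = np.full((6, 6), 0.1)
np.fill_diagonal(transition, 0.5)        # tends to repeat the previous face
markov_sequence = [rng.choice(faces, p=bias)]
for _ in range(19):
    prev = markov_sequence[-1] - 1       # index of the previous face
    markov_sequence.append(rng.choice(faces, p=transition[prev]))

print("i.i.d. loaded die:", iid_sequence)
print("Markov sequence:  ", np.array(markov_sequence))
</syntaxhighlight>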
In signal processing and image processing, the notion of transformation to i.i.d. implies two specifications, the "i.d." part and the "i." part (a brief sketch follows the list):
i.d. – The signal level must be balanced on the time axis.
i. – The signal spectrum must be flattened, i.e. transformed by filtering (such as deconvolution) to a white noise signal (i.e. a signal where all frequencies are equally present).
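A rough sketch of both steps on a synthetic signal is given below (assumptions: an AR(1) process with a slow drift as the input, linear detrending as the level balancing, and the inverse AR filter as the whitening/deconvolution step; a real pipeline would use a filter matched to the actual signal model).

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

# Synthetic correlated signal: an AR(1) process plus a slow drift (assumed example).
n = 4096
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.9 * x[t - 1] + rng.normal()
x += 0.001 * np.arange(n)

# "i.d." step: balance the signal level on the time axis (remove the drift).
trend = np.polyval(np.polyfit(np.arange(n), x, 1), np.arange(n))
x_detrended = x - trend

# "i." step: flatten the spectrum with a simple whitening (deconvolution)
# filter, here the inverse of the AR(1) model: y[t] = x[t] - 0.9 * x[t-1].
y = x_detrended[1:] - 0.9 * x_detrended[:-1]

def band_power(signal):
    """Mean power in the lower and upper halves of the spectrum."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    half = len(spec) // 2
    return spec[:half].mean(), spec[half:].mean()

print("before whitening (low, high band power):", band_power(x_detrended))
print("after whitening  (low, high band power):", band_power(y))
</syntaxhighlight>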
Suppose that the random variables X and Y are defined to assume values in I \subseteq \mathbb{R}. Let F_X(x) = \operatorname{P}(X \leq x) and F_Y(y) = \operatorname{P}(Y \leq y) be the cumulative distribution functions of X and Y, respectively, and denote their joint cumulative distribution function by F_{X,Y}(x,y) = \operatorname{P}(X \leq x \land Y \leq y).

Two random variables X and Y are identically distributed if and only if F_X(x) = F_Y(x) \;\forall x \in I.

Two random variables X and Y are independent if and only if F_{X,Y}(x,y) = F_X(x) \cdot F_Y(y) \;\forall x, y \in I.

Two random variables X and Y are i.i.d. if they are independent and identically distributed, i.e. if and only if both conditions above hold.

The definition extends naturally to more than two random variables. We say that n random variables X_1, \ldots, X_n are i.i.d. if they are mutually independent and identically distributed, i.e. if and only if

F_{X_1}(x) = F_{X_k}(x) \;\forall k \in \{1, \ldots, n\} \text{ and } \forall x \in I

F_{X_1,\ldots,X_n}(x_1,\ldots,x_n) = F_{X_1}(x_1) \cdot \ldots \cdot F_{X_n}(x_n) \;\forall x_1,\ldots,x_n \in I

where F_{X_1,\ldots,X_n}(x_1,\ldots,x_n) = \operatorname{P}(X_1 \leq x_1 \land \ldots \land X_n \leq x_n) denotes the joint cumulative distribution function of X_1, \ldots, X_n.
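These defining conditions can be checked empirically on simulated data. The following sketch (a minimal illustration; the standard normal distribution, the sample size, and the evaluation points are assumptions made for the example) estimates the marginal CDFs of two independent draws from the same distribution and confirms that the empirical joint CDF approximately equals the product of the marginals.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Two independent samples drawn from the same (standard normal) distribution.
X = rng.standard_normal(n)
Y = rng.standard_normal(n)

def ecdf(sample, t):
    """Empirical CDF of `sample` at t, i.e. an estimate of P(sample <= t)."""
    return np.mean(sample <= t)

for x0, y0 in [(-1.0, 0.5), (0.0, 0.0), (1.5, -0.3)]:
    # Identically distributed: F_X and F_Y agree (up to sampling noise).
    print(f"F_X({x0}) = {ecdf(X, x0):.3f}   F_Y({x0}) = {ecdf(Y, x0):.3f}")
    # Independent: the joint CDF factorizes into the product of the marginals.
    joint = np.mean((X <= x0) & (Y <= y0))
    print(f"F_XY({x0},{y0}) = {joint:.3f}   "
          f"F_X({x0})*F_Y({y0}) = {ecdf(X, x0) * ecdf(Y, y0):.3f}")
</syntaxhighlight>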
In probability theory, two events, A and B, are called independent if and only if P(A ∩ B) = P(A)P(B). In the following, P(AB) is short for P(A ∩ B).

Suppose there are two events of the experiment, A and B. If P(A) > 0, the conditional probability P(B | A) is defined. Generally, the occurrence of A has an effect on the probability of B; this is called conditional probability. Only when the occurrence of A has no effect on the occurrence of B does P(B | A) = P(B) hold.

Note: If P(A) > 0 and P(B) > 0, then A and B cannot be both mutually independent and mutually exclusive at the same time; that is, independent events must be able to occur together, while mutually exclusive events are necessarily dependent.

Suppose A, B, and C are three events. If P(AB) = P(A)P(B), P(BC) = P(B)P(C), P(AC) = P(A)P(C), and P(ABC) = P(A)P(B)P(C) are all satisfied, then the events A, B, and C are mutually independent.

A more general definition is: given n events A_1, A_2, \ldots, A_n, if the probability of the product (intersection) of any 2, 3, \ldots, n of these events equals the product of the probabilities of the individual events, then the events A_1, A_2, \ldots, A_n are mutually independent.
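The need for all four equalities in the three-event case can be seen in a classic two-coin illustration (a minimal sketch; the choice of events is a standard textbook example, not taken from this article): with A = "first toss is heads", B = "second toss is heads", and C = "both tosses agree", every pair of events satisfies the product rule, but the triple does not, so the events are pairwise independent without being mutually independent.

<syntaxhighlight lang="python">
from itertools import product
from fractions import Fraction

# Sample space: two fair coin tosses, each outcome with probability 1/4.
outcomes = list(product("HT", repeat=2))
p = Fraction(1, 4)

A = {o for o in outcomes if o[0] == "H"}     # first toss is heads
B = {o for o in outcomes if o[1] == "H"}     # second toss is heads
C = {o for o in outcomes if o[0] == o[1]}    # both tosses agree

def prob(event):
    return p * len(event)

print("pairwise product rule:",
      prob(A & B) == prob(A) * prob(B),
      prob(B & C) == prob(B) * prob(C),
      prob(A & C) == prob(A) * prob(C))              # all True
print("triple product rule:  ",
      prob(A & B & C) == prob(A) * prob(B) * prob(C))  # False
</syntaxhighlight>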
A sequence of outcomes of spins of a fair or unfair roulette wheel is i.i.d. One implication of this is that if the roulette ball lands on "red", for example, 20 times in a row, the next spin is no more or less likely to be "black" than on any other spin (see the gambler's fallacy).
Toss a coin 10 times and record how many times the coin lands on heads.
Such a sequence of i.i.d. trials with two possible outcomes is also called a Bernoulli process.
Roll a die 10 times and record how many times the result is 1.
Choose a card from a standard deck of cards containing 52 cards, then place the card back in the deck. Repeat this 52 times. Record the number of kings that appear.
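The examples above can be reproduced with a few lines of simulation (a minimal sketch; the random seed and the numeric encoding of the coin, die, and deck are arbitrary choices made for the example):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)

# Toss a fair coin 10 times and count heads (a Bernoulli process).
heads = rng.integers(0, 2, size=10).sum()

# Roll a fair die 10 times and count how many results are 1.
ones = np.sum(rng.integers(1, 7, size=10) == 1)

# Draw a card from a 52-card deck with replacement, 52 times, counting kings;
# here cards 0-3 stand for the four kings.
kings = np.sum(rng.integers(0, 52, size=52) < 4)

print(f"heads: {heads}/10, ones: {ones}/10, kings: {kings}/52")
</syntaxhighlight>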
Many results that were first proven under the assumption that the random variables are i.i.d. have been shown to be true even under a weaker distributional assumption.
See main article: Exchangeable random variables. The most general notion which shares the main properties of i.i.d. variables is that of exchangeable random variables, introduced by Bruno de Finetti. Exchangeability means that while variables may not be independent, future ones behave like past ones; formally, any value of a finite sequence is as likely as any permutation of those values, i.e. the joint probability distribution is invariant under the symmetric group.
This provides a useful generalization — for example, sampling without replacement is not independent, but is exchangeable.
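For instance, drawing two cards from a deck without replacement gives dependent but exchangeable draws, as the following sketch illustrates (a minimal simulation; the king-counting setup and the number of trials are assumed for the example): both draws have the same marginal probability of being a king, yet the joint probability of two kings differs from the product of the marginals.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)
trials = 100_000

# 52-card deck encoded as 1 = king (4 cards), 0 = any other card.
deck = np.array([1] * 4 + [0] * 48)

# For each trial pick two *distinct* positions in the deck, i.e. draw two
# cards without replacement (argsort of random keys gives a random ordering).
positions = np.argsort(rng.random((trials, 52)), axis=1)[:, :2]
first, second = deck[positions[:, 0]], deck[positions[:, 1]]

print("P(first is king): ", first.mean())                      # ~ 4/52
print("P(second is king):", second.mean())                     # ~ 4/52 (same marginal)
print("P(both are kings):", (first & second).mean())           # ~ (4/52)*(3/51)
print("product of marginals:", first.mean() * second.mean())   # ~ (4/52)^2, larger
</syntaxhighlight>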
See main article: Lévy process. In stochastic calculus, i.i.d. variables are thought of as a discrete time Lévy process: each variable gives how much one changes from one time to another. For example, a sequence of Bernoulli trials is interpreted as the Bernoulli process.
One may generalize this to include continuous time Lévy processes, and many Lévy processes can be seen as limits of i.i.d. variables; for instance, the Wiener process is the scaling limit of the simple random walk generated by a Bernoulli process.
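One way to see this limit (a minimal sketch under the standard scaling; the step and path counts are arbitrary choices) is to sum i.i.d. ±1 Bernoulli steps and rescale by the square root of the number of steps: the resulting value at time 1 is approximately standard normal, matching the marginal of the Wiener process at t = 1.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(5)

# i.i.d. +1/-1 Bernoulli steps; the partial sums form a simple random walk,
# i.e. a discrete-time Levy process.
n_steps, n_paths = 1_000, 10_000
steps = rng.choice([-1, 1], size=(n_paths, n_steps))

# Rescale by sqrt(n_steps): the walk at "time 1" is approximately N(0, 1),
# the marginal distribution of the standard Wiener process at t = 1.
walk_at_time_1 = steps.sum(axis=1) / np.sqrt(n_steps)

print("mean (should be near 0):", walk_at_time_1.mean())
print("variance (should be near 1):", walk_at_time_1.var())
</syntaxhighlight>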
Machine learning utilizes the vast amounts of data currently available to deliver faster and more accurate results.[6] To train machine learning models effectively, it is crucial to use historical data that is broadly generalizable. If the training data is not representative of the overall situation, the model's performance on new, unseen data may be inaccurate.
The i.i.d. (independent and identically distributed) hypothesis allows for a significant reduction in the number of individual cases required in the training sample.
This assumption also simplifies mathematical maximization. In optimization problems, the assumption of independent and identically distributed data reduces the likelihood function to a product of per-observation terms; due to the independence assumption, it can be expressed as:
l(\theta) = P(x_1, x_2, x_3, \ldots, x_n \mid \theta) = P(x_1 \mid \theta) P(x_2 \mid \theta) P(x_3 \mid \theta) \cdots P(x_n \mid \theta)
To maximize the probability of the observed data, the logarithm is taken and the result is maximized over the parameter \theta. Specifically, one computes:
\operatorname{arg\,max}_{\theta} \log(l(\theta))
\log(l(\theta)) = \log(P(x_1 \mid \theta)) + \log(P(x_2 \mid \theta)) + \log(P(x_3 \mid \theta)) + \ldots + \log(P(x_n \mid \theta))
Sums are easier to handle than long products: they are cheaper to compute and avoid the numerical underflow that results from multiplying many small probabilities, so this simplification enhances computational efficiency. In addition, taking the logarithm during maximization converts many exponential functions into linear functions of the parameter, which are easier to optimize.
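As a concrete instance (a minimal sketch; the Bernoulli model, the sample size, and the grid search over the parameter are illustrative assumptions, not part of the article), the code below maximizes the log-likelihood of i.i.d. Bernoulli observations: because the observations are i.i.d., the log-likelihood is a plain sum of per-observation terms, and the maximizer is close to the sample mean.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(6)

# i.i.d. Bernoulli(theta_true) observations (theta_true is an assumed value).
theta_true = 0.3
x = rng.random(1_000) < theta_true           # boolean sample

def log_likelihood(theta, x):
    # Because the observations are i.i.d., log l(theta) is simply the sum
    # of the individual log P(x_i | theta) terms.
    return np.sum(x * np.log(theta) + (~x) * np.log(1 - theta))

# Grid search for argmax_theta log l(theta).
thetas = np.linspace(0.01, 0.99, 99)
log_liks = np.array([log_likelihood(t, x) for t in thetas])
theta_hat = thetas[np.argmax(log_liks)]

print("MLE estimate:", theta_hat, " sample mean:", x.mean())
</syntaxhighlight>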
There are two main reasons why this hypothesis is practically useful, both connected to the central limit theorem: