Apriori[1] is an algorithm for frequent item set mining and association rule learning over relational databases. It proceeds by identifying the frequent individual items in the database and extending them to larger and larger item sets as long as those item sets appear sufficiently often in the database. The frequent item sets determined by Apriori can be used to determine association rules which highlight general trends in the database: this has applications in domains such as market basket analysis.
The Apriori algorithm was proposed by Agrawal and Srikant in 1994. Apriori is designed to operate on databases containing transactions (for example, collections of items bought by customers, or details of website visits or IP addresses[2]). Other algorithms are designed for finding association rules in data having no transactions (Winepi and Minepi), or having no timestamps (DNA sequencing). Each transaction is seen as a set of items (an itemset). Given a threshold C, the Apriori algorithm identifies the item sets which are subsets of at least C transactions in the database.
Apriori uses a "bottom up" approach, where frequent subsets are extended one item at a time (a step known as candidate generation), and groups of candidates are tested against the data. The algorithm terminates when no further successful extensions are found.
Apriori uses breadth-first search and a hash tree structure to count candidate item sets efficiently. It generates candidate item sets of length k from the frequent item sets of length k − 1, then prunes the candidates which have an infrequent subpattern. By the downward closure lemma, the resulting candidate set contains all frequent item sets of length k; the algorithm then scans the transaction database to determine which of the candidates are actually frequent.
The pseudocode for the algorithm is given below for a transaction database T and a support threshold of ε. Usual set-theoretic notation is employed, though note that T is a multiset. Ck is the candidate set for level k. At each step, the algorithm is assumed to generate the candidate sets from the large item sets of the preceding level, heeding the downward closure lemma. count[c] accesses a field of the data structure that represents candidate set c, which is initially assumed to be zero.
    Apriori(T, ε)
        L1 ← {large 1-itemsets}
        k ← 2
        while Lk−1 is not empty
            Ck ← Apriori_gen(Lk−1, k)
            for transactions t in T
                Dt ← {c ∈ Ck : c ⊆ t}
                for candidates c in Dt
                    count[c] ← count[c] + 1
            Lk ← {c ∈ Ck : count[c] ≥ ε}
            k ← k + 1
        return Union(Lk)

    Apriori_gen(L, k)
        result ← list()
        for all p ∈ L, q ∈ L where p1 = q1, p2 = q2, ..., pk−2 = qk−2 and pk−1 < qk−1
            c = p ∪ {qk−1}
            if u ∈ L for all u ⊆ c where |u| = k−1
                result.add(c)
        return result
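The pseudocode translates almost line for line into a runnable program. The following Python sketch is one way to do it; the names `apriori`, `apriori_gen`, and `min_support` are illustrative, and a plain dictionary of counts stands in for the hash tree described above:

```python
from itertools import combinations

def apriori_gen(frequent_prev, k):
    """Join step: merge two frequent (k-1)-itemsets that agree on their
    first k-2 items, then prune candidates that have an infrequent
    (k-1)-subset (the downward closure lemma)."""
    prev = sorted(frequent_prev)              # itemsets as sorted tuples
    prev_set = set(prev)
    candidates = []
    for i in range(len(prev)):
        for j in range(i + 1, len(prev)):
            p, q = prev[i], prev[j]
            if p[:k - 2] == q[:k - 2]:        # p and q share their first k-2 items
                c = p + (q[k - 2],)           # extend p with q's last item
                # keep c only if every (k-1)-subset is itself frequent
                if all(u in prev_set for u in combinations(c, k - 1)):
                    candidates.append(c)
    return candidates

def apriori(transactions, min_support):
    """Return every frequent itemset (as a sorted tuple) with its support."""
    counts = {}
    for t in transactions:                    # first scan: count individual items
        for item in t:
            counts[(item,)] = counts.get((item,), 0) + 1
    frequent = {c: n for c, n in counts.items() if n >= min_support}
    result = dict(frequent)
    k = 2
    while frequent:
        candidates = apriori_gen(frequent, k)
        counts = {c: 0 for c in candidates}
        for t in transactions:                # one full database scan per level
            ts = set(t)
            for c in candidates:
                if ts >= set(c):
                    counts[c] += 1
        frequent = {c: n for c, n in counts.items() if n >= min_support}
        result.update(frequent)
        k += 1
    return result
```

Running it on the transaction database of the second example below reproduces the supports derived there by hand:

```python
db = [{1, 2, 3, 4}, {1, 2, 4}, {1, 2}, {2, 3, 4}, {2, 3}, {3, 4}, {2, 4}]
print(apriori(db, min_support=3))
# {(1,): 3, (2,): 6, (3,): 4, (4,): 5, (1, 2): 3, (2, 3): 3, (2, 4): 4, (3, 4): 3}
```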
Consider the following database, where each row is a transaction and each cell is an individual item of the transaction:
alpha | beta | epsilon
alpha | beta | theta
alpha | beta | epsilon
alpha | beta | theta
The association rules that can be determined from this database are the following:

1. 100% of sets with alpha also contain beta
2. 50% of sets with alpha, beta also have epsilon
3. 50% of sets with alpha, beta also have theta
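These percentages are rule confidences: the confidence of a rule X → Y is the support of X ∪ Y divided by the support of X. A minimal Python check of the three rules above (the function name is illustrative):

```python
db = [{"alpha", "beta", "epsilon"},
      {"alpha", "beta", "theta"},
      {"alpha", "beta", "epsilon"},
      {"alpha", "beta", "theta"}]

def confidence(antecedent, consequent, transactions):
    """Support of (antecedent ∪ consequent) divided by support of antecedent."""
    a = set(antecedent)
    both = a | set(consequent)
    sup_a = sum(1 for t in transactions if a <= t)
    sup_both = sum(1 for t in transactions if both <= t)
    return sup_both / sup_a

print(confidence({"alpha"}, {"beta"}, db))             # 1.0 -> 100%
print(confidence({"alpha", "beta"}, {"epsilon"}, db))  # 0.5 -> 50%
print(confidence({"alpha", "beta"}, {"theta"}, db))    # 0.5 -> 50%
```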
We can also illustrate this through a more detailed example.
Assume that a large supermarket tracks sales data by stock-keeping unit (SKU) for each item: each item, such as "butter" or "bread", is identified by a numerical SKU. The supermarket has a database of transactions where each transaction is a set of SKUs that were bought together.
Let the database of transactions consist of the following itemsets:

{1,2,3,4}
{1,2,4}
{1,2}
{2,3,4}
{2,3}
{3,4}
{2,4}
We will use Apriori to determine the frequent item sets of this database. To do this, we will say that an item set is frequent if it appears in at least 3 transactions of the database: the value 3 is the support threshold. The first step of Apriori is to count up the number of occurrences, called the support, of each member item separately. By scanning the database for the first time, we obtain the following result:
Item | Support
---|---
{1} | 3
{2} | 6
{3} | 4
{4} | 5
All the itemsets of size 1 have a support of at least 3, so they are all frequent.
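This first scan is nothing more than a frequency count over every item of every transaction; it can be sketched in a few lines of Python with collections.Counter (the variable names are illustrative):

```python
from collections import Counter

db = [{1, 2, 3, 4}, {1, 2, 4}, {1, 2}, {2, 3, 4}, {2, 3}, {3, 4}, {2, 4}]

support = Counter(item for t in db for item in t)
print(support)            # Counter({2: 6, 4: 5, 3: 4, 1: 3})

frequent_items = {item for item, n in support.items() if n >= 3}
print(frequent_items)     # {1, 2, 3, 4} -- all size-1 itemsets are frequent
```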
The next step is to generate a list of all pairs of the frequent items.
For example, regarding the pair {1,2}: the transaction database above shows items 1 and 2 appearing together in three of the itemsets; therefore, we say that the itemset {1,2} has a support of three.
Item | Support
---|---
{1,2} | 3
{1,3} | 1
{1,4} | 2
{2,3} | 3
{2,4} | 4
{3,4} | 3
The pairs {1,2}, {2,3}, {2,4}, and {3,4} all meet or exceed the minimum support of 3, so they are frequent. The pairs {1,3} and {1,4} are not. Now, because {1,3} and {1,4} are not frequent, any larger set which contains {1,3} or {1,4} cannot be frequent. In this way, we can prune sets: we will now look for frequent triples in the database, but we can already exclude all the triples that contain one of these two pairs:
Item | Support
---|---
{2,3,4} | 2
In the example, there are no frequent triples. {2,3,4} is below the minimal threshold, and the other triples were excluded because they were supersets of pairs that were already below the threshold.
We have thus determined the frequent sets of items in the database, and illustrated how some items were not counted because one of their subsets was already known to be below the threshold.
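The pruning step of this walkthrough can be reproduced in a few lines: count all pairs of the frequent items, keep those meeting the threshold, and only form candidate triples whose pairs are all frequent. A sketch, continuing the illustrative db from above:

```python
from itertools import combinations

db = [{1, 2, 3, 4}, {1, 2, 4}, {1, 2}, {2, 3, 4}, {2, 3}, {3, 4}, {2, 4}]
min_support = 3

# Count the candidate pairs of the frequent items 1..4.
pair_counts = {c: sum(1 for t in db if set(c) <= t)
               for c in combinations([1, 2, 3, 4], 2)}
frequent_pairs = {c for c, n in pair_counts.items() if n >= min_support}
print(frequent_pairs)     # {(1, 2), (2, 3), (2, 4), (3, 4)}

# A triple is a candidate only if all three of its pairs are frequent,
# so every triple containing {1,3} or {1,4} is pruned without counting.
triples = [c for c in combinations([1, 2, 3, 4], 3)
           if all(p in frequent_pairs for p in combinations(c, 2))]
print(triples)            # [(2, 3, 4)] -- the only candidate triple
print(sum(1 for t in db if {2, 3, 4} <= t))   # 2, below the threshold
```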
Apriori, while historically significant, suffers from a number of inefficiencies or trade-offs, which have spawned other algorithms. Candidate generation produces large numbers of subsets (the algorithm attempts to load the candidate set with as many subsets as possible before each scan of the database). Bottom-up subset exploration (essentially a breadth-first traversal of the subset lattice) finds any maximal subset S only after all $2^{|S|}-1$ of its proper subsets have been considered.
The algorithm scans the database too many times, which reduces the overall performance. Because of this, the algorithm assumes that the database is permanently held in memory.
Also, both the time and space complexity of this algorithm are very high: $O\left(2^{|D|}\right)$, thus exponential, where $|D|$ is the horizontal width (the total number of items) present in the database. For example, a database with just 20 distinct items already admits $2^{20} \approx 10^6$ possible itemsets.
Later algorithms such as Max-Miner[3] try to identify the maximal frequent item sets without enumerating their subsets, and perform "jumps" in the search space rather than a purely bottom-up approach.