The Apriori algorithm is a basic, classical algorithm in association rule mining, but many tutorials present it through a wall of formulas that is hard for a beginner to follow. This article therefore walks through every step of the Apriori algorithm with a simple example.
The table below shows a transaction database D. Given a minimum support of 50% and a minimum confidence of 70%, we want to find the frequent itemsets and strong association rules in D.
| Tid | Itemset |
| --- | --- |
| 1 | Bread, milk, beer, diapers |
| 2 | Bread, milk, beer |
| 3 | Beer, diapers |
| 4 | Bread, milk, peanuts |
The steps for the Apriori algorithm are as follows:
(1) Generate the candidate frequent 1-itemsets C1 = {{bread}, {milk}, {beer}, {peanuts}, {diapers}}.
(2) Scan the transaction database D and compute the support of each itemset in C1. From D, the support counts are 3, 3, 3, 1, 2, and D contains 4 transactions in total, so the supports are 75%, 75%, 75%, 25%, 50%. With a minimum support of 50%, this yields the frequent 1-itemsets L1 = {{bread}, {milk}, {beer}, {diapers}}.
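Steps (1) and (2) can be sketched in Python. This is a minimal illustration, not a full implementation: the transactions are stored as `frozenset`s, and names such as `support`, `min_support`, `C1`, and `L1` are chosen here to mirror the example.

```python
# Transaction database D from the example table.
transactions = [
    frozenset({"bread", "milk", "beer", "diapers"}),
    frozenset({"bread", "milk", "beer"}),
    frozenset({"beer", "diapers"}),
    frozenset({"bread", "milk", "peanuts"}),
]
min_support = 0.5  # 50%, as stated in the example

def support(itemset, transactions):
    """Fraction of transactions that contain every item in `itemset`."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

# Step (1): candidate 1-itemsets C1.
items = {"bread", "milk", "beer", "peanuts", "diapers"}
C1 = [frozenset({item}) for item in items]

# Step (2): keep only candidates meeting the minimum support -> L1.
L1 = [c for c in C1 if support(c, transactions) >= min_support]
```

Here `L1` contains {bread}, {milk}, {beer}, and {diapers}; {peanuts} is dropped because its support is only 25%.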
(3) From L1, generate the candidate frequent 2-itemsets C2 = {{bread, milk}, {bread, beer}, {bread, diapers}, {milk, beer}, {milk, diapers}, {beer, diapers}}.
(4) Scan D and compute the support of each itemset in C2. The support counts are 3, 2, 1, 2, 1, 2 out of 4 transactions, so the supports are 75%, 50%, 25%, 50%, 25%, 50%. With a minimum support of 50%, the frequent 2-itemsets are L2 = {{bread, milk}, {bread, beer}, {milk, beer}, {beer, diapers}}.
(5) From L2, generate the candidate frequent 3-itemsets C3 = {{bread, milk, beer}, {bread, milk, diapers}, {bread, beer, diapers}, {milk, beer, diapers}}. The candidate {bread, milk, diapers} has a subset {milk, diapers} that is not in L2, so it can be pruned. Similarly, {bread, beer, diapers} and {milk, beer, diapers} can be pruned. This leaves C3 = {{bread, milk, beer}}.
(6) Scan D and compute the support of each itemset in C3. The single candidate {bread, milk, beer} appears in 2 of the 4 transactions, so its support is 50%. With a minimum support of 50%, the frequent 3-itemsets are L3 = {{bread, milk, beer}}.
(7) L = L1 ∪ L2 ∪ L3 = {{bread}, {milk}, {beer}, {diapers}, {bread, milk}, {bread, beer}, {milk, beer}, {beer, diapers}, {bread, milk, beer}}. Note that {peanuts} does not appear, since it was eliminated in step (2).
(8) For rule generation we only consider itemsets of length greater than 1. Take {bread, milk, beer} as an example: its non-empty proper subsets are {bread}, {milk}, {beer}, {bread, milk}, {bread, beer}, {milk, beer}. The confidences of the corresponding rules {bread} -> {milk, beer}, {milk} -> {bread, beer}, {beer} -> {bread, milk}, {bread, milk} -> {beer}, {bread, beer} -> {milk}, {milk, beer} -> {bread} are 67%, 67%, 67%, 67%, 100%, 100%, respectively. With a minimum confidence of 70%, {bread, beer} -> {milk} and {milk, beer} -> {bread} are strong association rules. In other words, customers who buy bread and beer also buy milk, and customers who buy milk and beer also buy bread.
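The rule-generation step (8) can also be sketched in Python. This is a minimal version for a single frequent itemset, assuming the same transaction list as before; the name `rules_from` is illustrative.

```python
from itertools import combinations

# Transaction database D from the example table.
transactions = [
    frozenset({"bread", "milk", "beer", "diapers"}),
    frozenset({"bread", "milk", "beer"}),
    frozenset({"beer", "diapers"}),
    frozenset({"bread", "milk", "peanuts"}),
]

def support(itemset):
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def rules_from(itemset, min_conf):
    """For each non-empty proper subset X of `itemset`, form the rule
    X -> (itemset - X) and keep it if its confidence
    support(itemset) / support(X) meets the threshold."""
    rules = []
    for r in range(1, len(itemset)):
        for antecedent in combinations(sorted(itemset), r):
            a = frozenset(antecedent)
            conf = support(itemset) / support(a)
            if conf >= min_conf:
                rules.append((a, itemset - a, conf))
    return rules

strong = rules_from(frozenset({"bread", "milk", "beer"}), 0.7)
```

Running this reproduces step (8): only {bread, beer} -> {milk} and {milk, beer} -> {bread} survive the 70% confidence threshold, each with confidence 100%.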
A simple example of the Apriori algorithm