
Association Analysis: Basic Concepts and Algorithms

© Tan,Steinbach, KumarIntroduction to Data Mining4/18/20041

Association Rule Mining

Given a set of transactions, find rules that will predict the occurrence of an item based on the occurrences of other items in the transaction

Market-Basket transactions
Example of Association Rules

{Diaper}  {Beer},
{Milk, Bread}  {Eggs,Coke},
{Beer, Bread}  {Milk},

Implication means co-occurrence, not causality!

© Tan,Steinbach, KumarIntroduction to Data Mining4/18/20042


TID Items
1 Bread, Milk
2 Bread, Diaper, Beer, Eggs
3 Milk, Diaper, Beer, Coke
4 Bread, Milk, Diaper, Beer
5 Bread, Milk, Diaper, Coke
Definition: Frequent Itemset

Itemset
A collection of one or more items
Example: {Milk, Bread, Diaper}
k-itemset
An itemset that contains k items
Support count (σ)
Frequency of occurrence of an itemset
E.g. σ({Milk, Bread, Diaper}) = 2
Support (s)
Fraction of transactions that contain an itemset
E.g. s({Milk, Bread, Diaper}) = 2/5
Frequent Itemset
An itemset whose support is greater than or equal to a minsup threshold

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 3

Definition: Association Rule

Association Rule
An implication expression of the form X → Y, where X and Y are itemsets
Example: {Milk, Diaper} → {Beer}

Rule Evaluation Metrics
Support (s)
Fraction of transactions that contain both X and Y
Confidence (c)
Measures how often items in Y appear in transactions that contain X

Example: {Milk, Diaper} → {Beer}

s = σ(Milk, Diaper, Beer) / |T| = 2/5 = 0.4
c = σ(Milk, Diaper, Beer) / σ(Milk, Diaper) = 2/3 = 0.67
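To make the two metrics concrete, here is a small Python sketch (not part of the original slides) that evaluates the rule {Milk, Diaper} → {Beer} on the five market-basket transactions above; the function and variable names are illustrative.

```python
# The five market-basket transactions from the slide.
transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]

def support_count(itemset, transactions):
    """sigma(X): number of transactions that contain every item of X."""
    return sum(1 for t in transactions if itemset <= t)

def rule_metrics(antecedent, consequent, transactions):
    """Support and confidence of the rule antecedent -> consequent."""
    both = support_count(antecedent | consequent, transactions)
    s = both / len(transactions)
    c = both / support_count(antecedent, transactions)
    return s, c

s, c = rule_metrics({"Milk", "Diaper"}, {"Beer"}, transactions)
print(f"s = {s:.2f}, c = {c:.2f}")   # s = 0.40, c = 0.67
```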
© Tan,Steinbach, KumarIntroduction to Data Mining4/18/20044
Association Rule Mining Task

Given a set of transactions T, the goal of association rule mining is to find all rules having
support ≥ minsup threshold
confidence ≥ minconf threshold

Brute-force approach:
List all possible association rules
Compute the support and confidence for each rule
Prune rules that fail the minsup and minconf
thresholds
⇒ Computationally prohibitive!

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 5

Mining Association Rules

Example of Rules:
{Milk, Diaper} → {Beer} (s=0.4, c=0.67)
{Milk, Beer} → {Diaper} (s=0.4, c=1.0)
{Diaper, Beer} → {Milk} (s=0.4, c=0.67)
{Beer} → {Milk, Diaper} (s=0.4, c=0.67)
{Diaper} → {Milk, Beer} (s=0.4, c=0.5)
{Milk} → {Diaper, Beer} (s=0.4, c=0.5)

Observations:
• All the above rules are binary partitions of the same itemset:
{Milk, Diaper, Beer}
• Rules originating from the same itemset have identical support but
can have different confidence
• Thus, we may decouple the support and confidence requirements
© Tan,Steinbach, KumarIntroduction to Data Mining4/18/20046
Mining Association Rules

Two-step approach:
Frequent Itemset Generation
Generate all itemsets whose support ≥ minsup

Rule Generation
Generate high confidence rules from each frequent itemset, where each rule is a binary partitioning of a frequent itemset

Frequent itemset generation is still computationally expensive

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 7

Frequent Itemset Generation


(Figure: the itemset lattice over items A, B, C, D, E, from the null set at the top, through all 1-, 2-, 3- and 4-itemsets, down to ABCDE at the bottom.)

Given d items, there are 2^d possible candidate itemsets

© Tan,Steinbach, KumarIntroduction to Data Mining4/18/20048


Frequent Itemset Generation
Brute-force approach:
Each itemset in the lattice is a candidate frequent itemset
Count the support of each candidate by scanning the database
(Figure: each of the N transactions in the database is matched against the list of M candidate itemsets; w is the maximum transaction width.)

Match each transaction against every candidate
Complexity ~ O(NMw) ⇒ expensive, since M = 2^d !!!
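As an illustration of why this is prohibitive, the following Python sketch (my own, not from the slides) spells out the brute-force O(NMw) counting loop: every one of the M = 2^d − 1 non-empty candidates is checked against every one of the N transactions.

```python
from itertools import combinations

def brute_force_frequent(transactions, items, minsup):
    """Enumerate all 2^d - 1 non-empty candidates (M) and scan the N
    transactions for each one: the O(NMw) approach from the slide."""
    n = len(transactions)
    frequent = {}
    for k in range(1, len(items) + 1):
        for candidate in combinations(sorted(items), k):      # M candidates
            cand = set(candidate)
            count = sum(1 for t in transactions if cand <= t)  # N scans each
            if count / n >= minsup:
                frequent[candidate] = count
    return frequent
```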
© Tan,Steinbach, KumarIntroduction to Data Mining4/18/20049

Computational Complexity
Given d unique items:
Total number of itemsets = 2^d
Total number of possible association rules:

R = Σ_{k=1}^{d−1} [ C(d, k) × Σ_{j=1}^{d−k} C(d−k, j) ] = 3^d − 2^(d+1) + 1

If d = 6, R = 602 rules
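The closed form can be checked numerically; the small sketch below (illustrative, not from the slides) counts the rules by summing over antecedent and consequent sizes and confirms R = 602 for d = 6.

```python
from math import comb

def num_rules(d):
    """Total number of association rules over d items: choose a k-item
    antecedent, then a non-empty consequent from the remaining d - k items."""
    return sum(comb(d, k) * sum(comb(d - k, j) for j in range(1, d - k + 1))
               for k in range(1, d))

d = 6
assert num_rules(d) == 3**d - 2**(d + 1) + 1 == 602
```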

© Tan,Steinbach, KumarIntroduction to Data Mining4/18/200410


Frequent Itemset Generation Strategies
Reduce the number of candidates (M)
Complete search: M = 2^d
Use pruning techniques to reduce M
Reduce the number of transactions (N)
Reduce size of N as the size of itemset increases
Used by DHP and vertical-based mining algorithms
Reduce the number of comparisons (NM)
Use efficient data structures to store the candidates or transactions
No need to match every candidate against every transaction

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 11

Reducing Number of Candidates


Apriori principle:
If an itemset is frequent, then all of its subsets must also be frequent

Apriori principle holds due to the following property of the support measure:
∀ X, Y: (X ⊆ Y) ⇒ s(X) ≥ s(Y)
Support of an itemset never exceeds the support of its subsets
This is known as the anti-monotone property of support
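A quick sanity check of the anti-monotone property on the market-basket data above (an illustrative sketch, not part of the slides):

```python
def support(itemset, transactions):
    """Fraction of transactions containing every item of the itemset."""
    return sum(1 for t in transactions if set(itemset) <= t) / len(transactions)

transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]

# X subset of Y implies s(X) >= s(Y): adding items can only lower support.
assert support({"Milk"}, transactions) >= support({"Milk", "Diaper"}, transactions)
assert support({"Milk", "Diaper"}, transactions) >= support({"Milk", "Diaper", "Beer"}, transactions)
```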

© Tan,Steinbach, KumarIntroduction to Data Mining4/18/200412


Illustrating Apriori Principle

(Figure: the itemset lattice for items A through E. Once an itemset, e.g. AB, is found to be infrequent, all of its supersets are pruned from the search space.)

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 13

Illustrating Apriori Principle

Items (1-itemsets):
Item    Count
Bread   4
Coke    2
Milk    4
Beer    3
Diaper  4
Eggs    1

Minimum Support = 3

Pairs (2-itemsets); no need to generate candidates involving Coke or Eggs:
Itemset         Count
{Bread,Milk}    3
{Bread,Beer}    2
{Bread,Diaper}  3
{Milk,Beer}     2
{Milk,Diaper}   3
{Beer,Diaper}   3

Triplets (3-itemsets):
Itemset               Count
{Bread,Milk,Diaper}   3

If every subset is considered: 6C1 + 6C2 + 6C3 = 6 + 15 + 20 = 41
With support-based pruning: 6 + 6 + 1 = 13

© Tan,Steinbach, KumarIntroduction to Data Mining4/18/200414


Apriori Algorithm

Method:

Let k=1
Generate frequent itemsets of length 1
Repeat until no new frequent itemsets are identified
Generate length (k+1) candidate itemsets from length k frequent itemsets
Prune candidate itemsets containing subsets of length k that are infrequent
Count the support of each candidate by scanning the DB
Eliminate candidates that are infrequent, leaving only those that are frequent
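The sketch below is one possible Python rendering of this method (illustrative only; names such as `apriori` and the tuple-based itemset representation are my choices). It generates candidates by merging frequent k-itemsets that share a (k−1)-prefix, prunes candidates with an infrequent subset, and counts support with a database scan.

```python
from itertools import combinations

def apriori(transactions, minsup):
    """Level-wise frequent itemset generation following the method above."""
    n = len(transactions)
    # k = 1: frequent single items, stored as sorted tuples
    items = sorted({item for t in transactions for item in t})
    freq = {(i,): sum(1 for t in transactions if i in t) for i in items}
    freq = {c: s for c, s in freq.items() if s / n >= minsup}
    all_frequent, k = dict(freq), 1
    while freq:
        # Candidate generation: merge frequent k-itemsets sharing a (k-1)-prefix
        prev = sorted(freq)
        candidates = set()
        for a, b in combinations(prev, 2):
            if a[:-1] == b[:-1]:
                cand = tuple(sorted(set(a) | set(b)))
                # Prune candidates that contain an infrequent k-subset
                if all(sub in freq for sub in combinations(cand, k)):
                    candidates.add(cand)
        # Count support of the surviving candidates by scanning the DB
        freq = {}
        for cand in candidates:
            count = sum(1 for t in transactions if set(cand) <= t)
            if count / n >= minsup:
                freq[cand] = count
        all_frequent.update(freq)
        k += 1
    return all_frequent
```

On the five-transaction example with minsup = 0.6 (support count 3) it finds the frequent itemsets from the earlier slide, e.g. {Bread, Milk, Diaper} with count 3.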

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 15

Reducing Number of Comparisons


Candidate counting:
Scan the database of transactions to determine the support of each candidate itemset
To reduce the number of comparisons, store the candidates in a hash structure
Instead of matching each transaction against every candidate, match it against candidates contained in the hashed buckets

(Figure: the N transactions are matched against a hash structure whose buckets hold the candidate k-itemsets.)
© Tan,Steinbach, KumarIntroduction to Data Mining4/18/200416
Generate Hash Tree
Suppose you have 15 candidate itemsets of length 3:
{1 4 5}, {1 2 4}, {4 5 7}, {1 2 5}, {4 5 8}, {1 5 9}, {1 3 6}, {2 3 4}, {5 6 7}, {3 4 5},
{3 5 6}, {3 5 7}, {6 8 9}, {3 6 7}, {3 6 8}
You need:
Hash function
Max leaf size: max number of itemsets stored in a leaf node (if the number of candidate itemsets exceeds the max leaf size, split the node)

(Figure: the resulting hash tree. The hash function sends items 1, 4, 7 to the left branch, items 2, 5, 8 to the middle branch, and items 3, 6, 9 to the right branch; the 15 candidate 3-itemsets are distributed over the leaf nodes.)
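A simplified sketch of how such a hash tree might be built (assumptions: dictionary-based nodes, a max leaf size of 3, and the slide's 1,4,7 / 2,5,8 / 3,6,9 bucketing, i.e. hashing on (item − 1) mod 3 at each depth):

```python
MAX_LEAF_SIZE = 3

def make_node():
    return {"children": {}, "itemsets": [], "is_leaf": True}

def bucket(item):
    # Hash function from the slide: 1,4,7 -> 0; 2,5,8 -> 1; 3,6,9 -> 2
    return (item - 1) % 3

def insert(node, itemset, depth=0):
    """Insert a sorted candidate itemset, hashing on the item at `depth`."""
    if node["is_leaf"]:
        node["itemsets"].append(itemset)
        if len(node["itemsets"]) > MAX_LEAF_SIZE and depth < len(itemset):
            # Split the leaf: rehash its itemsets on the item at this depth
            node["is_leaf"] = False
            for stored in node["itemsets"]:
                child = node["children"].setdefault(bucket(stored[depth]), make_node())
                insert(child, stored, depth + 1)
            node["itemsets"] = []
    else:
        child = node["children"].setdefault(bucket(itemset[depth]), make_node())
        insert(child, itemset, depth + 1)

candidates = [(1,4,5), (1,2,4), (4,5,7), (1,2,5), (4,5,8), (1,5,9), (1,3,6),
              (2,3,4), (5,6,7), (3,4,5), (3,5,6), (3,5,7), (6,8,9), (3,6,7), (3,6,8)]
root = make_node()
for c in candidates:
    insert(root, c)
```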
© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 17

Association Rule Discovery: Hash tree

(Figure: the candidate hash tree. Hashing on item 1, 4 or 7 follows the left branch of the root.)

© Tan,Steinbach, KumarIntroduction to Data Mining4/18/200418


Association Rule Discovery: Hash tree

(Figure: the candidate hash tree. Hashing on item 2, 5 or 8 follows the middle branch of the root.)

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 19

Association Rule Discovery: Hash tree

(Figure: the candidate hash tree. Hashing on item 3, 6 or 9 follows the right branch of the root.)

© Tan,Steinbach, KumarIntroduction to Data Mining4/18/200420


Subset Operation
Given a transaction t, what are the possible subsets of size 3?
(Figure: for transaction t = {1, 2, 3, 5, 6}, the 3-item subsets are enumerated level by level by fixing the first item (1, 2 or 3), then the second, giving {1 2 3}, {1 2 5}, {1 2 6}, {1 3 5}, {1 3 6}, {1 5 6}, {2 3 5}, {2 3 6}, {2 5 6}, {3 5 6}.)


© Tan,Steinbach, KumarIntroduction to Data Mining
4/18/2004 21

Subset Operation Using Hash Tree

(Figure: transaction {1 2 3 5 6} is partitioned at the root of the hash tree into 1 + {2 3 5 6}, 2 + {3 5 6}, and 3 + {5 6}; each partition is hashed on its first item (buckets 1,4,7 / 2,5,8 / 3,6,9) into the corresponding branch.)

© Tan,Steinbach, KumarIntroduction to Data Mining4/18/200422


Subset Operation Using Hash Tree

(Figure: at the next level, the partition 1 + {2 3 5 6} is split further into 12 + {3 5 6}, 13 + {5 6}, and 15 + {6}, each hashed on its second item and pushed down the tree.)

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 23

Subset Operation Using Hash Tree

(Figure: the recursion continues until leaf nodes are reached; the transaction is then compared only against the candidates stored in the visited leaves.)
Match transaction against 11 out of 15 candidates
© Tan,Steinbach, KumarIntroduction to Data Mining4/18/200424
Factors Affecting Complexity
Choice of minimum support threshold
lowering support threshold results in more frequent itemsets
this may increase number of candidates and max length of frequent itemsets
Dimensionality (number of items) of the data set
more space is needed to store support count of each item
if number of frequent items also increases, both computation and I/O costs may also increase
Size of database
since Apriori makes multiple passes, run time of algorithm may increase with number of transactions
Average transaction width
transaction width increases with denser data sets
this may increase the max length of frequent itemsets and traversals of the hash tree (the number of subsets in a transaction increases with its width)

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 25

Compact Representation of Frequent Itemsets

Some itemsets are redundant because they have the same support as their supersets
TID A1 A2 A3 A4 A5 A6 A7 A8 A9 A10 B1 B2 B3 B4 B5 B6 B7 B8 B9 B10 C1 C2 C3 C4 C5 C6 C7 C8 C9 C10
1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
3 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
4 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
5 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
6 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
7 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
8 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
9 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
10 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
11 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1
12 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1
13 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1
14 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1
15 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1

10
Number of frequent itemsets  3  
10

kk 1

Need a compact representation


© Tan,Steinbach, KumarIntroduction to Data Mining4/18/200426
Maximal Frequent Itemset
An itemset is maximal frequent if none of its immediate supersets is frequent.

(Figure: the itemset lattice for items A through E with a border separating the frequent from the infrequent itemsets; the maximal frequent itemsets are the frequent itemsets lying immediately along that border.)
© Tan,Steinbach, KumarIntroduction to Data Mining

4/18/2004 27

Closed Itemset

An itemset is closed if none of its immediate supersets has the same support as the itemset

TID  Items
1    {A,B}
2    {B,C,D}
3    {A,B,C,D}
4    {A,B,D}
5    {A,B,C,D}

Itemset  Support
{A}      4
{B}      5
{C}      3
{D}      4
{A,B}    4
{A,C}    2
{A,D}    3
{B,C}    3
{B,D}    4
{C,D}    3

Itemset    Support
{A,B,C}    2
{A,B,D}    3
{A,C,D}    2
{B,C,D}    3
{A,B,C,D}  2
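The definitions of closed and maximal itemsets can be checked mechanically on this toy database; the sketch below (illustrative, with a minsup count of 3 chosen arbitrarily for the maximal case) enumerates all itemsets and applies the two definitions directly.

```python
from itertools import combinations

# Toy database from the slide above.
transactions = [{"A","B"}, {"B","C","D"}, {"A","B","C","D"}, {"A","B","D"}, {"A","B","C","D"}]
items = sorted({i for t in transactions for i in t})

# Support count of every non-empty itemset.
support = {
    frozenset(c): sum(1 for t in transactions if set(c) <= t)
    for k in range(1, len(items) + 1)
    for c in combinations(items, k)
}

# Closed: no proper superset has the same support.
closed = {x for x, s in support.items()
          if not any(x < y and sy == s for y, sy in support.items())}

# Maximal frequent (for minsup count 3): no frequent proper superset.
minsup_count = 3
frequent = {x for x, s in support.items() if s >= minsup_count}
maximal = {x for x in frequent if not any(x < y for y in frequent)}

print(sorted(map(sorted, closed)))
print(sorted(map(sorted, maximal)))
```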

© Tan,Steinbach, KumarIntroduction to Data Mining4/18/200428


Maximal vs Closed Itemsets
Transaction Ids

TID  Items
1    ABC
2    ABCD
3    BCE
4    ACDE
5    DE

(Figure: the itemset lattice annotated with the IDs of the transactions that contain each itemset; itemsets contained in no transaction are marked as not supported.)

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 29

Maximal vs Closed Frequent Itemsets


Minimum support = 2

(Figure: the same lattice with the closed and maximal frequent itemsets highlighted; some itemsets are closed but not maximal, others are both closed and maximal. # Closed = 9, # Maximal = 4.)

© Tan,Steinbach, KumarIntroduction to Data Mining4/18/200430


Maximal vs Closed Itemsets

(Figure: Venn diagram showing that maximal frequent itemsets ⊆ closed frequent itemsets ⊆ frequent itemsets.)

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 31

Alternative Methods for Frequent Itemset Generation

Traversal of Itemset Lattice


– General-to-specific vs Specific-to-general
(Figure: the frequent itemset border within the lattice, traversed (a) general-to-specific, (b) specific-to-general, and (c) bidirectionally.)

© Tan,Steinbach, KumarIntroduction to Data Mining4/18/200432


Alternative Methods for Frequent Itemset Generation

Traversal of Itemset Lattice


– Equivalence classes

(Figure: the lattice partitioned into equivalence classes using (a) a prefix tree and (b) a suffix tree over the itemsets.)

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 33

Alternative Methods for Frequent Itemset Generation

Traversal of Itemset Lattice


– Breadth-first vs Depth-first

(Figure: (a) breadth-first and (b) depth-first traversal of the itemset lattice.)

© Tan,Steinbach, KumarIntroduction to Data Mining4/18/200434


Alternative Methods for Frequent Itemset Generation

Representation of Database
– horizontal vs vertical data layout
Horizontal Data Layout          Vertical Data Layout

TID  Items                      Item  TID-list
1    A,B,E                      A     1,4,5,6,7,8,9
2    B,C,D                      B     1,2,5,7,8,10
3    C,E                        C     2,3,4,5,8,9
4    A,C,D                      D     2,4,5,9
5    A,B,C,D                    E     1,3,6
6    A,E
7    A,B
8    A,B,C
9    A,C,D
10   B

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 35

FP-growth Algorithm

Use a compressed representation of the database using an FP-tree

Once an FP-tree has been constructed, it uses a recursive divide-and-conquer approach to mine the frequent itemsets

© Tan,Steinbach, KumarIntroduction to Data Mining4/18/200436


FP-tree construction
TID  Items
1    {A,B}
2    {B,C,D}
3    {A,C,D,E}
4    {A,D,E}
5    {A,B,C}
6    {A,B,C,D}
7    {B,C}
8    {A,B,C}
9    {A,B,D}
10   {B,C,E}

After reading TID=1: the tree is the single path null → A:1 → B:1.

After reading TID=2: a second path null → B:1 → C:1 → D:1 is added next to the existing one.
© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 37

FP-Tree Construction
Transaction Database (TIDs 1 to 10, as above)

(Figure: the completed FP-tree. The root (null) has children A:7 and B:3; transactions sharing a prefix share a path, e.g. null → A:7 → B:5 → C:3, with the remaining C, D and E nodes hanging off these paths with smaller counts. A header table keeps, for each item A through E, a pointer chain linking all nodes labelled with that item; these pointers are used to assist frequent itemset generation.)
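A minimal sketch of the construction step, assuming a fixed item order A to E as in the slide (the `FPNode` class and header-table layout are my own simplifications, and no support-based reordering or pruning is done):

```python
class FPNode:
    def __init__(self, item=None, parent=None):
        self.item, self.parent = item, parent
        self.count = 0
        self.children = {}            # item -> FPNode

def build_fp_tree(transactions, item_order):
    """Insert each transaction as a path, sharing common prefixes.
    `item_order` fixes the order of items along a path; `header` maps
    each item to the list of tree nodes carrying it (the header table)."""
    root = FPNode()
    header = {item: [] for item in item_order}
    for t in transactions:
        node = root
        for item in item_order:
            if item not in t:
                continue
            child = node.children.get(item)
            if child is None:
                child = FPNode(item, parent=node)
                node.children[item] = child
                header[item].append(child)   # header-table pointer
            child.count += 1
            node = child
    return root, header

transactions = [{"A","B"}, {"B","C","D"}, {"A","C","D","E"}, {"A","D","E"},
                {"A","B","C"}, {"A","B","C","D"}, {"B","C"}, {"A","B","C"},
                {"A","B","D"}, {"B","C","E"}]
root, header = build_fp_tree(transactions, item_order="ABCDE")
print({item: sum(n.count for n in nodes) for item, nodes in header.items()})
```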

© Tan,Steinbach, KumarIntroduction to Data Mining4/18/200438


FP-growth

Conditional Pattern base for D:
P = {(A:1,B:1,C:1), (A:1,B:1), (A:1,C:1), (A:1), (B:1,C:1)}

Recursively apply FP-growth on P

Frequent itemsets found (with sup > 1): AD, BD, CD, ACD, BCD

(Figure: the FP-tree paths ending in D, from which the conditional pattern base is read off.)

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 39

Tree Projection
Set enumeration tree:

(Figure: the set enumeration tree over items A through E, from null down to ABCDE. Possible extensions: E(A) = {B,C,D,E}; E(ABC) = {D,E}.)

© Tan,Steinbach, KumarIntroduction to Data Mining4/18/200440


Tree Projection

Items are listed in lexicographic order


Each node P stores the following information:
Itemset for node P
List of possible lexicographic extensions of P: E(P)
Pointer to projected database of its ancestor node
Bitvector containing information about which transactions in the projected database contain the itemset

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 41

Projected Database

Original Database:          Projected Database for node A:

TID  Items                  TID  Items
1    {A,B}                  1    {B}
2    {B,C,D}                2    {}
3    {A,C,D,E}              3    {C,D,E}
4    {A,D,E}                4    {D,E}
5    {A,B,C}                5    {B,C}
6    {A,B,C,D}              6    {B,C,D}
7    {B,C}                  7    {}
8    {A,B,C}                8    {B,C}
9    {A,B,D}                9    {B,D}
10   {B,C,E}                10   {}

For each transaction T that contains A, the projected transaction at node A is T ∩ E(A); transactions without A project to the empty set.
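A small sketch of the projection (illustrative; `E_A` is the extension set of node A from the earlier slide, and transactions without A are mapped to the empty set as in the table):

```python
# E(A): items that can extend node A.
E_A = {"B", "C", "D", "E"}

database = [{"A","B"}, {"B","C","D"}, {"A","C","D","E"}, {"A","D","E"},
            {"A","B","C"}, {"A","B","C","D"}, {"B","C"}, {"A","B","C"},
            {"A","B","D"}, {"B","C","E"}]

# Projected transaction at node A: T intersected with E(A), if A is present.
projected_A = [t & E_A if "A" in t else set() for t in database]
print(projected_A)
```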


© Tan,Steinbach, KumarIntroduction to Data Mining4/18/200442
ECLAT

For each item, store a list of transaction ids (tids)


Horizontal Data Layout          Vertical Data Layout (TID-lists)

TID  Items                      Item  TID-list
1    A,B,E                      A     1,4,5,6,7,8,9
2    B,C,D                      B     1,2,5,7,8,10
3    C,E                        C     2,3,4,5,8,9
4    A,C,D                      D     2,4,5,9
5    A,B,C,D                    E     1,3,6
6    A,E
7    A,B
8    A,B,C
9    A,C,D
10   B

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 43

ECLAT
Determine the support of any k-itemset by intersecting the tid-lists of two of its (k−1)-subsets.

A: 1,4,5,6,7,8,9
B: 1,2,5,7,8,10
A ∩ B → AB: 1,5,7,8
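A tid-list intersection can be written directly with set operations; the sketch below (illustrative, using the tid-lists from the vertical layout above) reproduces AB = {1, 5, 7, 8} with support count 4.

```python
from functools import reduce

# Vertical layout: tid-list per item (from the table above).
tidlists = {
    "A": {1, 4, 5, 6, 7, 8, 9},
    "B": {1, 2, 5, 7, 8, 10},
    "C": {2, 3, 4, 5, 8, 9},
    "D": {2, 4, 5, 9},
    "E": {1, 3, 6},
}

def tidlist(itemset, tidlists):
    """tid-list of a k-itemset = intersection of its items' tid-lists;
    ECLAT only ever intersects two (k-1)-subset lists, which is equivalent."""
    return reduce(lambda a, b: a & b, (tidlists[i] for i in itemset))

ab = tidlist("AB", tidlists)
print(sorted(ab), len(ab))   # [1, 5, 7, 8] -> support count 4
```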

3 traversal approaches:
– top-down, bottom-up and hybrid
Advantage: very fast support counting
Disadvantage: intermediate tid-lists may become too large for memory

© Tan,Steinbach, KumarIntroduction to Data Mining4/18/200444


Rule Generation

Given a frequent itemset L, find all non-empty subsets f ⊂ L such that f → L − f satisfies the minimum confidence requirement.

If {A,B,C,D} is a frequent itemset, the candidate rules are:

ABC → D, ABD → C, ACD → B, BCD → A,
A → BCD, B → ACD, C → ABD, D → ABC,
AB → CD, AC → BD, AD → BC, BC → AD,
BD → AC, CD → AB

If |L| = k, then there are 2^k − 2 candidate association rules (ignoring L → ∅ and ∅ → L)
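A short sketch that enumerates these candidate rules for any frequent itemset (illustrative; it simply takes every proper non-empty subset as the antecedent):

```python
from itertools import combinations

def candidate_rules(L):
    """All 2^k - 2 candidate rules f -> L - f from a frequent itemset L,
    excluding the empty antecedent and the empty consequent."""
    L = set(L)
    rules = []
    for r in range(1, len(L)):                 # antecedent sizes 1 .. k-1
        for antecedent in combinations(sorted(L), r):
            f = set(antecedent)
            rules.append((f, L - f))
    return rules

rules = candidate_rules("ABCD")
print(len(rules))   # 2**4 - 2 = 14
```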

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 45

Rule Generation
How to efficiently generate rules from frequent itemsets?
In general, confidence does not have an anti- monotone property
c(ABC → D) can be larger or smaller than c(AB → D)

But confidence of rules generated from the same itemset has an anti-monotone property
– e.g., L = {A,B,C,D}:
c(ABC → D) ≥ c(AB → CD) ≥ c(A → BCD)

Confidence is anti-monotone w.r.t. number of items on the RHS of the rule

© Tan,Steinbach, KumarIntroduction to Data Mining4/18/200446


Rule Generation for Apriori Algorithm

Lattice of rules:

(Figure: the lattice of rules generated from the frequent itemset {A,B,C,D}, from ABCD ⇒ {} at the top down to rules such as D ⇒ ABC at the bottom. The rule BCD ⇒ A is marked as a low-confidence rule, and all rules below it in the lattice, i.e. those whose consequent contains A, are pruned.)
© Tan,Steinbach, KumarIntroduction to Data Mining4/18/200447

Rule Generation for Apriori Algorithm


Candidate rule is generated by merging two rules that share the same prefix in the rule consequent

Example: join(CD ⇒ AB, BD ⇒ AC) would produce the candidate rule D ⇒ ABC

Prune rule D ⇒ ABC if its subset AD ⇒ BC does not have high confidence

© Tan,Steinbach, KumarIntroduction to Data Mining4/18/200448


Effect of Support Distribution

Many real data sets have skewed support distribution

Support distribution of a retail data set

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 49

Effect of Support Distribution


How to set the appropriate minsup threshold?
If minsup is set too high, we could miss itemsets involving interesting rare items (e.g., expensive products)

If minsup is set too low, it is computationally expensive and the number of itemsets is very large

Using a single minimum support threshold may not be effective

© Tan,Steinbach, KumarIntroduction to Data Mining4/18/200450


Multiple Minimum Support

How to apply multiple minimum supports?


MS(i): minimum support for item i
– e.g.: MS(Milk) = 5%, MS(Coke) = 3%, MS(Broccoli) = 0.1%, MS(Salmon) = 0.5%
MS({Milk, Broccoli}) = min(MS(Milk), MS(Broccoli)) = 0.1%

Challenge: Support is no longer anti-monotone


Suppose:Support(Milk, Coke) = 1.5% and
Support(Milk, Coke, Broccoli) = 0.5%

{Milk,Coke} is infrequent but {Milk,Coke,Broccoli} is frequent

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 51

Multiple Minimum Support


Item  MS(I)   Sup(I)
A     0.10%   0.25%
B     0.20%   0.26%
C     0.30%   0.29%
D     0.50%   0.05%
E     3%      4.20%

(Figure: the candidate 2- and 3-itemsets over items A through E generated under these item-specific thresholds.)
© Tan,Steinbach, KumarIntroduction to Data Mining4/18/200452
Multiple Minimum Support
Item  MS(I)   Sup(I)
A     0.10%   0.25%
B     0.20%   0.26%
C     0.30%   0.29%
D     0.50%   0.05%
E     3%      4.20%

(Figure: the same candidate lattice, with the itemsets affected by the item-specific thresholds highlighted.)

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 53

Multiple Minimum Support (Liu 1999)


Order the items according to their minimum support (in ascending order)
– e.g.: MS(Milk) = 5%, MS(Coke) = 3%, MS(Broccoli) = 0.1%, MS(Salmon) = 0.5%
Ordering: Broccoli, Salmon, Coke, Milk

Need to modify Apriori such that:


L1 : set of frequent items
F1 : set of items whose support is ≥ MS(1), where MS(1) is min_i( MS(i) )
C2 : candidate itemsets of size 2 is generated from F1 instead of L1

© Tan,Steinbach, KumarIntroduction to Data Mining4/18/200454


Multiple Minimum Support (Liu 1999)

Modifications to Apriori:
In traditional Apriori,
A candidate (k+1)-itemset is generated by merging two frequent itemsets of size k
The candidate is pruned if it contains any infrequent subsets of size k
Pruning step has to be modified:
Prune only if subset contains the first item
e.g.: Candidate={Broccoli, Coke, Milk} (ordered according to
minimum support)
{Broccoli, Coke} and {Broccoli, Milk} are frequent but
{Coke, Milk} is infrequent
– Candidate is not pruned because {Coke,Milk} does not contain the first item, i.e., Broccoli.

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 55

Pattern Evaluation
Association rule algorithms tend to produce too many rules
many of them are uninteresting or redundant
Redundant if {A,B,C}  {D} and {A,B}  {D} have same support & confidence

Interestingness measures can be used to prune/rank the derived patterns

In the original formulation of association rules, support & confidence are the only measures used

© Tan,Steinbach, KumarIntroduction to Data Mining4/18/200456


Application of Interestingness Measure
(Figure: the knowledge discovery pipeline. Data is selected and preprocessed, patterns are mined from the preprocessed data, and interestingness measures are applied during postprocessing to turn the mined patterns into knowledge.)

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 57

Computing Interestingness Measure

Given a rule X → Y, the information needed to compute rule interestingness can be obtained from a contingency table.

Contingency table for X → Y:
f11: support count of X and Y
f10: support count of X and not Y
f01: support count of Y and not X
f00: support count of neither X nor Y

            Y      not Y
X          f11     f10     f1+
not X      f01     f00     f0+
           f+1     f+0     |T|

Used to define various measures


support, confidence, lift, Gini, J-measure, etc.

© Tan,Steinbach, KumarIntroduction to Data Mining4/18/200458


Drawback of Confidence

          Coffee   not Coffee
Tea         15         5        20
not Tea     75         5        80
            90        10       100

Association Rule: Tea → Coffee

Confidence = P(Coffee|Tea) = 0.75, but P(Coffee) = 0.9

⇒ Although confidence is high, the rule is misleading
⇒ P(Coffee | not Tea) = 0.9375

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 59

Statistical Independence
Population of 1000 students
600 students know how to swim (S)
700 students know how to bike (B)
420 students know how to swim and bike (S,B)

– P(S ∧ B) = 420/1000 = 0.42
– P(S) × P(B) = 0.6 × 0.7 = 0.42

P(S ∧ B) = P(S) × P(B) ⇒ statistical independence
P(S ∧ B) > P(S) × P(B) ⇒ positively correlated
P(S ∧ B) < P(S) × P(B) ⇒ negatively correlated

© Tan,Steinbach, KumarIntroduction to Data Mining4/18/200460


Statistical-based Measures

Measures that take into account statistical dependence


Lift = P(Y|X) / P(Y)

Interest = P(X,Y) / ( P(X) P(Y) )

PS = P(X,Y) − P(X) P(Y)

φ-coefficient = ( P(X,Y) − P(X) P(Y) ) / √( P(X)[1 − P(X)] P(Y)[1 − P(Y)] )
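These measures are straightforward to compute from the f11, f10, f01, f00 counts of a contingency table; the sketch below (illustrative, not from the slides) evaluates them for the Tea/Coffee table and reproduces the lift of about 0.83 discussed on the next slide.

```python
from math import sqrt

def measures(f11, f10, f01, f00):
    """Lift/interest, PS, and phi-coefficient from a 2x2 contingency table."""
    n = f11 + f10 + f01 + f00
    p_xy, p_x, p_y = f11 / n, (f11 + f10) / n, (f11 + f01) / n
    lift = p_xy / (p_x * p_y)                  # equals P(Y|X) / P(Y)
    ps = p_xy - p_x * p_y
    phi = ps / sqrt(p_x * (1 - p_x) * p_y * (1 - p_y))
    return lift, ps, phi

# Tea -> Coffee table: f11 = 15, f10 = 5, f01 = 75, f00 = 5
print(measures(15, 5, 75, 5))
```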
© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 61

Example: Lift/Interest

          Coffee   not Coffee
Tea         15         5        20
not Tea     75         5        80
            90        10       100

Association Rule: Tea → Coffee

Confidence= P(Coffee|Tea) = 0.75 but P(Coffee) = 0.9


 Lift = 0.75/0.9= 0.8333 (< 1, therefore is negatively associated)

© Tan,Steinbach, KumarIntroduction to Data Mining4/18/200462


Drawback of Lift & Interest

Table 1:
         Y    not Y
X        10     0     10
not X     0    90     90
         10    90    100

Lift = 0.1 / ((0.1)(0.1)) = 10

Table 2:
         Y    not Y
X        90     0     90
not X     0    10     10
         90    10    100

Lift = 0.9 / ((0.9)(0.9)) = 1.11

Statistical independence:
If P(X,Y)=P(X)P(Y) => Lift = 1

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 63

There are lots of measures proposed in the literature

Some measures are good for certain applications, but not for others

What criteria should we use to determine whether a measure is good or bad?

What about Apriori- style support based pruning? How does it affect these measures?
Properties of A Good Measure

Piatetsky-Shapiro:
3 properties a good measure M must satisfy:
– M(A,B) = 0 if A and B are statistically independent

– M(A,B) increases monotonically with P(A,B) when P(A) and P(B) remain unchanged

– M(A,B) decreases monotonically with P(A) [or P(B)] when P(A,B) and P(B) [or P(A)] remain unchanged

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 65

Comparing Different Measures


10 examples of contingency tables:

Example   f11    f10    f01    f00
E1        8123    83    424   1370
E2        8330     2    622   1046
E3        9481    94    127    298
E4        3954  3080      5   2961
E5        2886  1363   1320   4431
E6        1500  2000    500   6000
E7        4000  2000   1000   3000
E8        4000  2000   2000   2000
E9        1720  7121      5   1154
E10         61  2483      4   7452

Rankings of the contingency tables using various measures:

© Tan,Steinbach, KumarIntroduction to Data Mining4/18/200466


Property under Variable Permutation

        B    not B              A    not A
A       p      q          B     p      r
not A   r      s          not B q      s

Does M(A,B) = M(B,A)?

Symmetric measures: support, lift, collective strength, cosine, Jaccard, etc.
Asymmetric measures: confidence, conviction, Laplace, J-measure, etc.

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 67

Property under Row/Column Scaling

Grade-Gender Example (Mosteller, 1968):

        Male  Female               Male  Female
High      2      3      5   High     4     30     34
Low       1      4      5   Low      2     40     42
          3      7     10            6     70     76

(The second table scales the Male column by 2x and the Female column by 10x.)
Mosteller:
Underlying association should be independent of the relative number of male and female students in the sampl

© Tan,Steinbach, KumarIntroduction to Data Mining4/18/200468


Property under Inversion Operation

(Figure: binary transaction vectors A through F over N transactions, shown as columns in panels (a), (b) and (c). The inversion operation flips every 0 to 1 and every 1 to 0 in a vector; the panels contrast vector pairs before and after inversion.)


© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 69

Example: -Coefficient
-coefficient is analogous to correlation coefficient for continuous variables

Y Y Y Y
X 60 10 70 X 20 10 30
X 10 20 30 X 10 60 70
70 30 100 30 70 100

 0.6  0.7  0.7  0.2  0.3 0.3


0.7  0.3 0.7  0.3 0.7  0.3 0.7  0.3
 0.5238  0.5238
 Coefficient is the same for both tables
© Tan,Steinbach, KumarIntroduction to Data Mining4/18/200470
Property under Null Addition

        B    not B             B    not B
A       p      q          A    p      q
not A   r      s          not A r    s + k

Invariant measures: support, cosine, Jaccard, etc.
Non-invariant measures: correlation, Gini, mutual information, odds ratio, etc.

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 71

Different Measures have Different Properties


Symbol  Measure              Range              P1   P2   P3   O1    O2   O3   O3'  O4
φ       Correlation          -1 … 0 … 1         Yes  Yes  Yes  Yes   No   Yes  Yes  No
λ       Lambda               0 … 1              Yes  No   No   Yes   No   No*  Yes  No
α       Odds ratio           0 … 1 … ∞          Yes* Yes  Yes  Yes   Yes  Yes* Yes  No
Q       Yule's Q             -1 … 0 … 1         Yes  Yes  Yes  Yes   Yes  Yes  Yes  No
Y       Yule's Y             -1 … 0 … 1         Yes  Yes  Yes  Yes   Yes  Yes  Yes  No
κ       Cohen's              -1 … 0 … 1         Yes  Yes  Yes  Yes   No   No   Yes  No
M       Mutual Information   0 … 1              Yes  Yes  Yes  Yes   No   No*  Yes  No
J       J-Measure            0 … 1              Yes  No   No   No    No   No   No   No
G       Gini Index           0 … 1              Yes  No   No   No    No   No*  Yes  No
s       Support              0 … 1              No   Yes  No   Yes   No   No   No   No
c       Confidence           0 … 1              No   Yes  No   Yes   No   No   No   Yes
L       Laplace              0 … 1              No   Yes  No   Yes   No   No   No   No
V       Conviction           0.5 … 1 … ∞        No   Yes  No   Yes** No   No   Yes  No
I       Interest             0 … 1 … ∞          Yes* Yes  Yes  Yes   No   No   No   No
IS      IS (cosine)          0 … 1              No   Yes  Yes  Yes   No   No   No   Yes
PS      Piatetsky-Shapiro's  -0.25 … 0 … 0.25   Yes  Yes  Yes  Yes   No   Yes  Yes  No
F       Certainty factor     -1 … 0 … 1         Yes  Yes  Yes  No    No   No   Yes  No
AV      Added value          0.5 … 1 … 1        Yes  Yes  Yes  No    No   No   No   No
S       Collective strength  0 … 1 … ∞          No   Yes  Yes  Yes   No   Yes* Yes  No
ζ       Jaccard              0 … 1              No   Yes  Yes  Yes   No   No   No   Yes
K       Klosgen's            (complex range)    Yes  Yes  Yes  No    No   No   No   No



© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 72


Support-based Pruning

Most of the association rule mining algorithms use support measure to prune

Study effect of support pruning on correlation of itemsets


Generate 10000 random contingency tables
Compute support and pairwise correlation for each table
Apply support-based pruning and examine the tables that are removed

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 73

Effect of Support-based Pruning


All Itempairs

(Figure: histogram of the pairwise correlation values of all item pairs in the 10,000 random contingency tables.)

© Tan,Steinbach, KumarIntroduction to Data Mining4/18/200474


Effect of Support-based Pruning
(Figure: correlation histograms of the item pairs with support < 0.01, support < 0.03, and support < 0.05, i.e. the pairs that would be removed by support-based pruning.)

Support-based pruning eliminates mostly negatively correlated itemsets

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 75

Effect of Support-based Pruning


Investigate how support-based pruning affects other measures

Steps:
Generate 10000 contingency tables
Rank each table according to the different measures
Compute the pair-wise correlation between the measures

© Tan,Steinbach, KumarIntroduction to Data Mining4/18/200476


Effect of Support-based Pruning

Without Support Pruning (All Pairs)

(Figure: correlation matrix of the 21 interestingness measures computed over all pairs of contingency tables, together with a scatter plot between the Correlation and Jaccard measures. All pairs: 40.14% of measure pairs have correlation > 0.85.)

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 77


Effect of Support-based Pruning


Red cells indicate correlation between the pair of measures > 0.85
Without pruning (all pairs): 40.14% of measure pairs have correlation > 0.85

0.5% ≤ support ≤ 50%

(Figure: correlation matrix of the 21 measures restricted to tables with 0.005 ≤ support ≤ 0.500, with a scatter plot between the Correlation and Jaccard measures. 61.45% of measure pairs have correlation > 0.85.)

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 78
Effect of Support-based Pruning
0.5% ≤ support ≤ 30%

(Figure: correlation matrix of the 21 measures restricted to tables with 0.005 ≤ support ≤ 0.300, with a scatter plot between the Correlation and Jaccard measures.)

76.42% of measure pairs have correlation > 0.85

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 79

© Tan,Steinbach, KumarIntroduction to Data Mining4/18/200480


Interestingness via Unexpectedness
Need to model expectation of users (domain knowledge)

(Figure legend: "+" marks a pattern expected to be frequent and "-" a pattern expected to be infrequent; paired with whether the pattern is actually found to be frequent or infrequent, the agreeing combinations (+/+ and -/-) are expected patterns and the disagreeing combinations (+/- and -/+) are unexpected patterns.)

Need to combine expectation of users with evidence from data (i.e., extracted patterns)
© Tan,Steinbach, KumarIntroduction to Data Mining4/18/200481

Interestingness via Unexpectedness

Web Data (Cooley et al 2001)


Domain knowledge in the form of site structure
Given an itemset F = {X1, X2, …, Xk} (Xi : Web pages)
L: number of links connecting the pages
lfactor = L / (k  k-1)
cfactor = 1 (if graph is connected), 0 (disconnected graph)
Structure evidence = cfactor  lfactor

– Usage evidence  P( X I X I ... I X ) 1 2 k

P( X  X  ...  X ) 12 k

– Use Dempster-Shafer theory to combine domain knowledge and evidence from data
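A rough sketch of how the structure and usage evidence could be computed (all names and the example numbers are hypothetical; the Dempster-Shafer combination step is not shown):

```python
def structure_evidence(num_links, k, connected):
    """cfactor x lfactor for an itemset of k pages with num_links links
    among them, following the formulas above."""
    lfactor = num_links / (k * (k - 1))
    cfactor = 1 if connected else 0
    return cfactor * lfactor

def usage_evidence(pages, sessions):
    """P(X1 ^ ... ^ Xk) / P(X1 v ... v Xk), estimated from web sessions
    (each session is a set of visited pages)."""
    pages = set(pages)
    all_pages = sum(1 for s in sessions if pages <= s)
    any_page = sum(1 for s in sessions if pages & s)
    return all_pages / any_page if any_page else 0.0

# Hypothetical example: 3 pages, 4 links among them, connected site graph.
print(structure_evidence(num_links=4, k=3, connected=True))
```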

© Tan,Steinbach, KumarIntroduction to Data Mining4/18/200482
