Unit 5: Clustering

5. Cluster Analysis (9 hours)

5.1 Basics and Algorithms
5.2 K-means Clustering
5.3 Hierarchical Clustering
5.4 DBSCAN Clustering
5.5 Issues: Evaluation, Scalability, Comparison
5.1 Basics and Algorithms

What is Cluster Analysis?
n Cluster: a collection of data objects
n similar (or related) to one another within the same group
n dissimilar (or unrelated) to the objects in other groups
n Cluster analysis (or clustering, data segmentation, …)
n Finding similarities between data according to the characteristics found in the data and grouping similar data objects into clusters
n Unsupervised learning: no predefined classes (i.e., learning by observations vs. learning by examples: supervised)
n Typical applications
n As a stand-alone tool to get insight into data distribution
n As a preprocessing step for other algorithms
n Cluster analysis is a multivariate method which aims to classify a sample of subjects (or objects), on the basis of a set of measured variables, into a number of different groups such that similar subjects are placed in the same group.
n An example where this might be used is in the field of psychiatry, where the characterization of patients on the basis of clusters of symptoms can be useful in the identification of an appropriate form of therapy.
n In marketing, it may be useful to identify distinct groups of potential customers so that, for example, advertising can be appropriately targeted.
WARNING ABOUT CLUSTER ANALYSIS
n Cluster analysis has no mechanism for differentiating between relevant and irrelevant variables.
n Therefore the choice of variables included in a cluster analysis must be underpinned by conceptual considerations.
n This is very important because the clusters formed can be very dependent on the variables included.
What is Clustering in Data Mining?
Clustering is a process of partitioning a set of data (or objects) into a set of meaningful sub-classes, called clusters.
It helps users understand the natural grouping or structure in a data set.

n Cluster:
n a collection of data objects that are “similar” to one another and thus can be treated collectively as one group
n but as a collection, they are sufficiently different from other groups
Applications of Cluster Analysis
n Data reduction
n Summarization: preprocessing for regression, PCA, classification, and association analysis
n Compression: image processing: vector quantization
n Hypothesis generation and testing
n Prediction based on groups
n Cluster & find characteristics/patterns for each group
n Finding K-nearest neighbors
n Localizing search to one or a small number of clusters
n Outlier detection: outliers are often viewed as those “far away” from any cluster
Clustering for Data Understanding and Applications
n Biology: taxonomy of living things: kingdom, phylum, class, order, family, genus and species
n Information retrieval: document clustering
n Land use: identification of areas of similar land use in an earth observation database
n Marketing: help marketers discover distinct groups in their customer bases, and then use this knowledge to develop targeted marketing programs
n City planning: identifying groups of houses according to their house type, value, and geographical location
n Earthquake studies: observed earthquake epicenters should be clustered along continental faults
n Climate: understanding earth climate, finding patterns in atmospheric and ocean data
n Economic science: market research
Clustering as a Preprocessing Tool (Utility)
n Summarization:
n Preprocessing for regression, PCA, classification, and association analysis
n Compression:
n Image processing: vector quantization
n Finding K-nearest neighbors
n Localizing search to one or a small number of clusters
n Outlier detection
n Outliers are often viewed as those “far away” from any cluster
Basic Steps to Develop a Clustering Task
n Feature selection / preprocessing
n Select information relevant to the task of interest
n Aim for minimal information redundancy
n May need to normalize/standardize the data
n Distance/similarity measure
n Similarity of two feature vectors
n Clustering criterion
n Expressed via a cost function or some rules
n Clustering algorithms
n Choice of algorithms
n Validation of the results
n Interpretation of the results with applications
Distance or Similarity Measures
n Common distance measures (sketched in code below):
n Manhattan distance: dist(X, Y) = Σ_i |x_i − y_i|
n Euclidean distance: dist(X, Y) = sqrt( Σ_i (x_i − y_i)² )
n Cosine similarity: sim(X, Y) = Σ_i (x_i · y_i) / ( sqrt(Σ_i x_i²) · sqrt(Σ_i y_i²) ), with dist(X, Y) = 1 − sim(X, Y)
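The following short Python sketch (illustrative only; NumPy is assumed and the two vectors are made-up examples, not from the slides) computes the three measures defined above.

```python
# Minimal sketch of the Manhattan, Euclidean, and cosine measures above.
import numpy as np

def manhattan(x, y):
    # sum of absolute coordinate differences
    return np.abs(x - y).sum()

def euclidean(x, y):
    # square root of the sum of squared coordinate differences
    return np.sqrt(((x - y) ** 2).sum())

def cosine_similarity(x, y):
    # dot product divided by the product of the vector norms
    return (x * y).sum() / (np.linalg.norm(x) * np.linalg.norm(y))

x = np.array([2.0, 10.0])
y = np.array([5.0, 8.0])
print(manhattan(x, y))                # 5.0
print(euclidean(x, y))                # ~3.606
print(1 - cosine_similarity(x, y))    # cosine distance = 1 - similarity
```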
Quality: What Is Good Clustering?
n A good clustering method will produce high-quality clusters:
n high intra-class similarity: cohesive within clusters
n low inter-class similarity: distinctive between clusters
n The quality of a clustering method depends on
n the similarity measure used,
n its implementation, and
n its ability to discover some or all of the hidden patterns
Approaches to Cluster Analysis
n There are a number of different methods that can be used to carry out a cluster analysis; these methods can be classified as follows:
n Non-hierarchical methods
- Partitioning approach (k-means)
- Density-based approach (DBSCAN)
n Hierarchical methods
- Agglomerative methods, in which subjects start in their own separate clusters and the closest clusters are merged step by step until all subjects are in one cluster.
- Divisive methods, in which all subjects start in the same cluster and the above strategy is applied in reverse until every subject is in a separate cluster.
Major Clustering Approaches
n Partitioning approach:
n Construct various partitions and then evaluate them by some criterion, e.g., minimizing the sum of squared errors
n Typical methods: k-means, k-medoids, CLARANS
n Density-based approach:
n Based on connectivity and density functions
n Typical methods: DBSCAN, OPTICS, DenClue
n Hierarchical approach:
n Create a hierarchical decomposition of the set of data (or objects) using some criterion
n Typical methods: DIANA, AGNES, BIRCH, CHAMELEON
n Model-based approach:
n A model is hypothesized for each of the clusters, and the method tries to find the best fit of the data to the given model
n Typical methods: EM, SOM, COBWEB
n Grid-based approach:
n Based on a multiple-level granularity structure
n Typical methods: STING, WaveCluster, CLIQUE
Measure the Quality of Clustering
n Dissimilarity/similarity metric
n Similarity is expressed in terms of a distance function, typically a metric: d(i, j)
n The definitions of distance functions are usually rather different for interval-scaled, boolean, categorical, ordinal, ratio, and vector variables
n Weights should be associated with different variables based on applications and data semantics
n Quality of clustering:
n There is usually a separate “quality” function that measures the “goodness” of a cluster
n It is hard to define “similar enough” or “good enough”
n The answer is typically highly subjective
Considerations for Cluster Analysis
n Partitioning criteria
n Single level vs. hierarchical partitioning (often, multi-level hierarchical partitioning is desirable)
n Separation of clusters
n Exclusive (e.g., one customer belongs to only one region) vs. non-exclusive (e.g., one document may belong to more than one class)
n Similarity measure
n Distance-based (e.g., Euclidean, road network, vector) vs. connectivity-based (e.g., density or contiguity)
n Clustering space
n Full space (often when low dimensional) vs. subspaces (often in high-dimensional clustering)
Requirements and Challenges
n Scalability
n Clustering all the data instead of only samples
n Ability to deal with different types of attributes
n Numerical, binary, categorical, ordinal, linked, and mixtures of these
n Constraint-based clustering
n Users may give inputs on constraints
n Use domain knowledge to determine input parameters
n Interpretability and usability
n Others
n Discovery of clusters with arbitrary shape
n Ability to deal with noisy data
n Incremental clustering and insensitivity to input order
n High dimensionality
5.2 K-means Clustering

Partitioning Algorithms: Basic Concept
n Partitioning method: partition a database D of n objects into a set of k clusters, such that the sum of squared distances is minimized (where ci is the centroid or medoid of cluster Ci):

E = Σ_{i=1..k} Σ_{p ∈ Ci} (p − ci)²

n Given k, find a partition of k clusters that optimizes the chosen partitioning criterion
n Global optimum: exhaustively enumerate all partitions
n Heuristic methods: k-means and k-medoids algorithms
n k-means (MacQueen ’67, Lloyd ’57/’82): each cluster is represented by the center of the cluster
n k-medoids or PAM (Partitioning Around Medoids) (Kaufman & Rousseeuw ’87): each cluster is represented by one of the objects in the cluster
The K-Means Clustering Method
n Given k, the k-means algorithm is implemented in four steps (sketched in code below):
1. Partition objects into k nonempty subsets
2. Compute seed points as the centroids of the clusters of the current partitioning (the centroid is the center, i.e., mean point, of the cluster)
3. Assign each object to the cluster with the nearest seed point
4. Go back to Step 2; stop when the assignment does not change
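A minimal from-scratch sketch of the four steps (not the slides' own code; NumPy is assumed and the sample points are the Exercise 1 data used later):

```python
# Illustrative Lloyd-style k-means sketch following the four steps above.
import numpy as np

def kmeans(X, k, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # Step 1: start from k arbitrary seed points (here: k random data points)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(max_iter):
        # Step 3: assign each object to the cluster with the nearest seed point
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        assignment = dists.argmin(axis=1)
        # Step 2: recompute each centroid as the mean point of its cluster
        new_centroids = np.array([
            X[assignment == j].mean(axis=0) if np.any(assignment == j) else centroids[j]
            for j in range(k)
        ])
        # Step 4: stop when the centroids (and hence assignments) no longer change
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    sse = ((X - centroids[assignment]) ** 2).sum()  # the criterion E defined earlier
    return assignment, centroids, sse

X = np.array([[2, 10], [2, 5], [8, 4], [5, 8], [7, 5], [6, 4], [1, 2], [4, 9]], dtype=float)
labels, centers, sse = kmeans(X, k=3)
print(labels, centers, sse)
```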
An Example of K-Means Clustering (K = 2)

[Figure: the data set is arbitrarily partitioned into k groups; the cluster centroids are updated and objects reassigned to the nearest centroid, looping until assignments no longer change.]

n Partition objects into k nonempty subsets
n Repeat
n Compute the centroid (i.e., the mean point) of each partition
n Assign each object to the cluster of its nearest centroid
n Until no change
Exercise 1: K-means clustering
Use the k-means algorithm and Euclidean distance to cluster the following 8 examples into 3 clusters:
A1=(2,10), A2=(2,5), A3=(8,4), A4=(5,8), A5=(7,5), A6=(6,4), A7=(1,2), A8=(4,9).
The distance matrix based on the Euclidean distance is given below.

Suppose that the initial seeds (centers of each cluster) are A1, A4 and A7.
Run the k-means algorithm for 1 epoch only. At the end of this epoch show:
a) The new clusters (i.e., the examples belonging to each cluster)
b) The centers of the new clusters
Solution:
a) d(a,b) denotes the Euclidean distance between a and b. It is obtained directly from the distance matrix or calculated as follows:
d(a,b) = sqrt((xb − xa)² + (yb − ya)²)
seed1 = A1 = (2,10), seed2 = A4 = (5,8), seed3 = A7 = (1,2)
epoch 1:
new clusters: 1: {A1}, 2: {A3, A4, A5, A6, A8}, 3: {A2, A7}

b) centers of the new clusters:
C1 = (2, 10), C2 = ((8+5+7+6+4)/5, (4+8+5+4+9)/5) = (6, 6), C3 = ((2+1)/2, (5+2)/2) = (1.5, 3.5)
c) Draw a 10 by 10 space with all the 8 points and show the clusters after the first epoch and the new centroids.

d) How many more iterations are needed to converge? Draw the result for each epoch. (A code sketch that checks epoch 1 and runs to convergence follows.)
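The single epoch above can be checked with a short script (a sketch, not part of the exercise; NumPy assumed). Running the loop until the centers stop moving also answers part (d), though the plots still have to be drawn by hand.

```python
# Sketch: k-means epochs on the Exercise 1 data with seeds A1, A4, A7.
import numpy as np

points = {'A1': (2, 10), 'A2': (2, 5), 'A3': (8, 4), 'A4': (5, 8),
          'A5': (7, 5), 'A6': (6, 4), 'A7': (1, 2), 'A8': (4, 9)}
X = np.array(list(points.values()), dtype=float)
names = list(points)
centers = X[[0, 3, 6]]          # seeds A1, A4, A7

for epoch in range(10):         # a few epochs are enough for this tiny data set
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)   # nearest center for every point
    new_centers = np.array([X[labels == j].mean(axis=0) for j in range(3)])
    print(f"epoch {epoch + 1}:",
          [[n for n, l in zip(names, labels) if l == j] for j in range(3)],
          new_centers.round(2))
    if np.allclose(new_centers, centers):
        break                   # converged: the centers no longer move
    centers = new_centers
```

After epoch 1 this prints the clusters {A1}, {A3, A4, A5, A6, A8}, {A2, A7} and the centers (2, 10), (6, 6), (1.5, 3.5), matching the solution above.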
Example
As a simple illustration of the k-means algorithm, consider the following data set consisting of the scores of two variables on each of seven individuals. Find 2 clusters from this data (a library-based sketch follows).

Subject   A     B
1         1.0   1.0
2         1.5   2.0
3         3.0   4.0
4         5.0   7.0
5         3.5   5.0
6         4.5   5.0
7         3.5   4.5
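A minimal sketch using scikit-learn's KMeans (assuming the library is installed) rather than hand calculation:

```python
# Sketch: clustering the seven subjects with scikit-learn's KMeans.
import numpy as np
from sklearn.cluster import KMeans

scores = np.array([[1.0, 1.0], [1.5, 2.0], [3.0, 4.0], [5.0, 7.0],
                   [3.5, 5.0], [4.5, 5.0], [3.5, 4.5]])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(scores)
print(km.labels_)           # cluster label for each of the seven subjects
print(km.cluster_centers_)  # the two cluster mean vectors
```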
K-Means Example: Document Clustering

Document-term matrix:

      T1  T2  T3  T4  T5
D1     0   3   3   0   2
D2     4   1   0   1   2
D3     0   4   0   0   2
D4     0   3   0   3   3
D5     0   1   3   0   1
D6     2   2   0   0   4
D7     1   0   3   2   0
D8     3   1   0   0   2

Initial arbitrary assignment (k = 3): C1 = {D1, D2}, C2 = {D3, D4}, C3 = {D5, D6}

Cluster centroids (mean of the documents in each cluster):

      T1   T2   T3   T4   T5
C1   4/2  4/2  3/2  1/2  4/2
C2   0/2  7/2  0/2  3/2  5/2
C3   2/2  3/2  3/2  0/2  5/2

Now compute the similarity (or distance) of each item to each cluster, resulting in a cluster-document similarity matrix (here we use the dot product as the similarity measure).

      D1    D2    D3    D4    D5    D6    D7    D8
C1  29/2  29/2  24/2  27/2  17/2  32/2  15/2  24/2
C2  31/2  20/2  38/2  45/2  12/2  34/2   6/2  17/2
C3  28/2  21/2  22/2  24/2  17/2  30/2  11/2  19/2
Example (continued)

For each document, reallocate the document to the cluster to which it has the highest similarity in the matrix above. After the reallocation we have the following new clusters. Note that the previously unassigned D7 and D8 have been assigned, and that D1 and D6 have been reallocated from their original assignment.

C1 = {D2, D7, D8}, C2 = {D1, D3, D4, D6}, C3 = {D5}

This is the end of the first iteration (i.e., the first reallocation). Next, we repeat the process for another reallocation.
Example (continued)

Now compute the new cluster centroids from the original document-term matrix, using C1 = {D2, D7, D8}, C2 = {D1, D3, D4, D6}, C3 = {D5}:

      T1   T2   T3   T4   T5
C1   8/3  2/3  3/3  3/3  4/3
C2   2/4 12/4  3/4  3/4 11/4
C3   0/1  1/1  3/1  0/1  1/1

This leads to a new cluster-document similarity matrix. Again, the items are reallocated to the clusters with the highest similarity (a code sketch of this reallocation pass follows).

       D1     D2     D3     D4     D5     D6     D7     D8
C1   7.67  15.01   5.34   9.00   5.00  12.00   7.67  11.34
C2  16.75  11.25  17.50  19.50   8.00   6.68   4.25  10.00
C3  14.00   3.00   6.00   6.00  11.00   9.34   9.00   3.00

New assignment: C1 = {D2, D6, D8}, C2 = {D1, D3, D4}, C3 = {D5, D7}

Note: this process is now repeated with the new clusters. However, the next iteration in this example will show no change to the clusters, thus terminating the algorithm.
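A minimal sketch of one centroid-and-reallocation pass for this example (NumPy assumed; note that ties, such as D5 in the first pass, may be broken differently than in the worked example above):

```python
# Sketch of one reallocation pass of the document-clustering example.
import numpy as np

D = np.array([[0, 3, 3, 0, 2],   # D1
              [4, 1, 0, 1, 2],   # D2
              [0, 4, 0, 0, 2],   # D3
              [0, 3, 0, 3, 3],   # D4
              [0, 1, 3, 0, 1],   # D5
              [2, 2, 0, 0, 4],   # D6
              [1, 0, 3, 2, 0],   # D7
              [3, 1, 0, 0, 2]])  # D8

clusters = [[0, 1], [2, 3], [4, 5]]       # C1={D1,D2}, C2={D3,D4}, C3={D5,D6}
centroids = np.array([D[c].mean(axis=0) for c in clusters])
sim = centroids @ D.T                     # dot-product cluster-document similarity matrix
print(sim)                                # rows = C1..C3, columns = D1..D8
new_labels = sim.argmax(axis=0)           # each document goes to its most similar cluster
print(new_labels)                         # 0-based cluster index for D1..D8
```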
K-Means Algorithm
n Strengths of k-means:
n Relatively efficient: O(tkn), where n is the number of objects, k is the number of clusters, and t is the number of iterations. Normally, k, t << n
n Often terminates at a local optimum
n Weaknesses of k-means:
n Applicable only when a mean is defined; what about categorical data?
n Need to specify k, the number of clusters, in advance
n Unable to handle noisy data and outliers well
n Variations of k-means usually differ in:
n Selection of the initial k means
n Distance or similarity measures used
n Strategies to calculate cluster means
Comments on the K-Means Method
n Strength: efficient: O(tkn), where n is # objects, k is # clusters, and t is # iterations. Normally, k, t << n.
n Comparison: PAM: O(k(n−k)²), CLARA: O(ks² + k(n−k))
n Comment: often terminates at a local optimum.
n Weaknesses
n Applicable only to objects in a continuous n-dimensional space
n Use the k-modes method for categorical data
n In comparison, k-medoids can be applied to a wide range of data
n Need to specify k, the number of clusters, in advance (there are ways to automatically determine the best k; see Hastie et al., 2009)
n Sensitive to noisy data and outliers
n Not suitable for discovering clusters with non-convex shapes
Variations of the K-Means Method
n Most variants of k-means differ in
n Selection of the initial k means
n Dissimilarity calculations
n Strategies to calculate cluster means
n Handling categorical data: k-modes
n Replacing means of clusters with modes
n Using new dissimilarity measures to deal with categorical objects
n Using a frequency-based method to update modes of clusters
n A mixture of categorical and numerical data: the k-prototype method
What Is the Problem of the K-Means Method?
n The k-means algorithm is sensitive to outliers!
n Since an object with an extremely large value may substantially distort the distribution of the data
n K-medoids: instead of taking the mean value of the objects in a cluster as a reference point, a medoid can be used, which is the most centrally located object in a cluster

[Figure: two scatter plots contrasting a mean-based reference point, which is pulled toward an outlier, with a medoid, which remains a central object of the cluster.]
PAM: A Typical K-Medoids Algorithm (K = 2)

[Figure: one PAM iteration on a 2-D data set with K = 2, comparing configurations with total cost 20 and total cost 26; a swap with a randomly selected non-medoid is kept only if it reduces the total cost.]

n Arbitrarily choose k objects as the initial medoids
n Assign each remaining object to the nearest medoid
n Repeat (do loop)
n Randomly select a non-medoid object, O_random
n Compute the total cost of swapping a medoid with O_random
n If the quality is improved, perform the swap
n Until no change
The K-Medoids Clustering Method
n K-medoids clustering: find representative objects (medoids) in clusters
n PAM (Partitioning Around Medoids) (Kaufmann & Rousseeuw, 1987); a swap-based sketch follows
n Starts from an initial set of medoids and iteratively replaces one of the medoids by one of the non-medoids if it improves the total distance of the resulting clustering
n PAM works effectively for small data sets, but does not scale well to large data sets (due to its computational complexity)
n Efficiency improvements on PAM
n CLARA (Kaufmann & Rousseeuw, 1990): PAM on samples
n CLARANS (Ng & Han, 1994): randomized re-sampling
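A minimal PAM-style sketch (illustrative only, not the original 1987 implementation; NumPy assumed). It evaluates every medoid/non-medoid swap naively and keeps a swap only when the total distance drops.

```python
# Minimal PAM-style k-medoids sketch: cost = sum of distances to the nearest medoid.
import numpy as np

def pam(X, k, seed=0, max_iter=100):
    n = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    rng = np.random.default_rng(seed)
    medoids = list(rng.choice(n, size=k, replace=False))          # arbitrary initial medoids

    def cost(meds):
        return dist[:, meds].min(axis=1).sum()

    best = cost(medoids)
    for _ in range(max_iter):
        improved = False
        for i in range(k):                      # try swapping each medoid ...
            for o in range(n):                  # ... with each non-medoid object
                if o in medoids:
                    continue
                trial = medoids.copy()
                trial[i] = o
                c = cost(trial)
                if c < best:                    # keep the swap only if total cost drops
                    best, medoids, improved = c, trial, True
        if not improved:
            break                               # no swap improves the clustering
    labels = dist[:, medoids].argmin(axis=1)
    return np.array(medoids), labels, best
```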
A Disk Version of k-means
n k-means can be implemented with data on disk
n In each iteration, it scans the database once
n The centroids are computed incrementally
n It can be used to cluster large datasets that do not fit in main memory
n We need to control the number of iterations
n In practice, a limit is set (e.g., < 50 iterations)
n There are better algorithms that scale up for large data sets, e.g., BIRCH
BIRCH
n Designed for very large data sets
n Time and memory are limited
n Incremental and dynamic clustering of incoming objects
n Only one scan of the data is necessary
n Does not need the whole data set in advance
n Two key phases:
n Scan the database to build an in-memory tree
n Apply a clustering algorithm to cluster the leaf nodes
5.3 Hierarchical Clustering

Hierarchical Clustering
n Uses a distance matrix as the clustering criterion. This method does not require the number of clusters k as an input, but needs a termination condition

[Figure: agglomerative clustering (AGNES) proceeds from Step 0 to Step 4, merging a and b into ab, d and e into de, then c with de into cde, and finally ab with cde into abcde; divisive clustering (DIANA) runs the same steps in reverse, from abcde back to the individual objects.]
Hierarchical Clustering Algorithms
• Two main types of hierarchical clustering
– Agglomerative:
• Start with the points as individual clusters
• At each step, merge the closest pair of clusters until only one cluster (or k clusters) is left
– Divisive:
• Start with one, all-inclusive cluster
• At each step, split a cluster until each cluster contains a single point (or there are k clusters)
• Traditional hierarchical algorithms use a similarity or distance matrix
– Merge or split one cluster at a time
AGNES (Agglomerative Nesting)
n Introduced in Kaufmann and Rousseeuw (1990); a library-based sketch follows
n Implemented in statistical packages, e.g., S-Plus
n Uses the single-link method and the dissimilarity matrix
n Merges the nodes that have the least dissimilarity
n Continues in a non-descending fashion
n Eventually all nodes belong to the same cluster

[Figure: three scatter plots showing clusters being progressively merged by AGNES.]
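A minimal sketch of single-link agglomerative clustering using SciPy (library assumed installed); `method='single'` corresponds to AGNES's least-dissimilarity merging, and the sample points reuse the Exercise 1 data.

```python
# Sketch: single-link agglomerative clustering with SciPy.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster, dendrogram

X = np.array([[2, 10], [2, 5], [8, 4], [5, 8], [7, 5], [6, 4], [1, 2], [4, 9]], dtype=float)
Z = linkage(X, method='single', metric='euclidean')  # merge least-dissimilar clusters first
labels = fcluster(Z, t=3, criterion='maxclust')      # cut the merge tree into 3 clusters
print(labels)
# dendrogram(Z)  # with matplotlib available, this draws the merge tree
```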
Hierarchical Agglomerative Clustering: Example

[Figure: six points shown as nested clusters, with the corresponding dendrogram (leaves ordered 3, 6, 4, 1, 2, 5) showing the order and heights at which the clusters are merged.]
Dendrogram: Shows How Clusters are Merged
n Decompose the data objects into several levels of nested partitionings (a tree of clusters), called a dendrogram
n A clustering of the data objects is obtained by cutting the dendrogram at the desired level; each connected component then forms a cluster
DIANA (Divisive Analysis)
n Introduced in Kaufmann and Rousseeuw (1990)
n Implemented in statistical analysis packages, e.g., S-Plus
n Inverse order of AGNES
n Eventually each node forms a cluster on its own

[Figure: three scatter plots showing one cluster being progressively split by DIANA.]
Distance between Clusters
n Single link: smallest distance between an element in one cluster and an element in the other, i.e., dist(Ki, Kj) = min(tip, tjq)
n Complete link: largest distance between an element in one cluster and an element in the other, i.e., dist(Ki, Kj) = max(tip, tjq)
n Average: average distance between an element in one cluster and an element in the other, i.e., dist(Ki, Kj) = avg(tip, tjq)
n Centroid: distance between the centroids of two clusters, i.e., dist(Ki, Kj) = dist(Ci, Cj)
n Medoid: distance between the medoids of two clusters, i.e., dist(Ki, Kj) = dist(Mi, Mj)
n Medoid: a chosen, centrally located object in the cluster
Distance Between Two Clusters
NOTE: these are distances between clusters, not points
n The basic procedure varies based on the method used to determine inter-cluster distances or similarities
n Different methods result in different variants of the algorithm
n Single link
n Complete link
n Average link
n Ward’s method
n Etc.
Single Link Method
n The distance between two clusters is the distance between the two closest data points in the two clusters, one data point from each cluster
n It can find arbitrarily shaped clusters, but
n it may cause the undesirable “chain effect” due to noisy points

[Figure: two natural clusters are split into two.]
Distance between two clusters
n The single-link distance between clusters Ci and Cj is the minimum distance between any object in Ci and any object in Cj
n The distance is defined by the two most similar objects

D_sl(Ci, Cj) = min{ d(x, y) | x ∈ Ci, y ∈ Cj }
Complete Link Method
n The distance between two clusters is the distance between the two furthest data points in the two clusters
n It is sensitive to outliers because they are far away
Distance between two clusters
n The complete-link distance between clusters Ci and Cj is the maximum distance between any object in Ci and any object in Cj
n The distance is defined by the two least similar objects

D_cl(Ci, Cj) = max{ d(x, y) | x ∈ Ci, y ∈ Cj }
Average link and centroid methods
n Average link: a compromise between
n the sensitivity of complete-link clustering to outliers, and
n the tendency of single-link clustering to form long chains that do not correspond to the intuitive notion of clusters as compact, spherical objects
n In this method, the distance between two clusters is the average of all pair-wise distances between the data points in the two clusters (the three linkage distances are sketched in code below)
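A minimal sketch computing the single-link, complete-link, and average-link distances between two clusters, matching the definitions above (SciPy assumed; the two clusters are made-up examples).

```python
# Sketch: the three inter-cluster distances defined above, for two made-up clusters.
import numpy as np
from scipy.spatial.distance import cdist

Ci = np.array([[1.0, 1.0], [1.5, 2.0], [3.0, 4.0]])
Cj = np.array([[5.0, 7.0], [4.5, 5.0], [3.5, 4.5]])

pairwise = cdist(Ci, Cj)            # all pairwise distances between the two clusters
print(pairwise.min())               # single link: two most similar objects
print(pairwise.max())               # complete link: two least similar objects
print(pairwise.mean())              # average link: mean of all pairwise distances
```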
Extensions to Hierarchical Clustering
n Major weaknesses of agglomerative clustering methods
n Can never undo what was done previously
n Do not scale well: time complexity of at least O(n²), where n is the number of total objects
n Integration of hierarchical & distance-based clustering
n BIRCH (1996): uses a CF-tree and incrementally adjusts the quality of sub-clusters
n CHAMELEON (1999): hierarchical clustering using dynamic modeling
5.4 DBSCAN Clustering

Density-Based Methods
n Partitioning and hierarchical methods are designed to find spherical-shaped clusters
n They have difficulty finding clusters of arbitrary shape, such as “S”-shaped and oval clusters
n To find clusters of arbitrary shape, we can alternatively model clusters as dense regions in the data space, separated by sparse regions
n This is the main strategy behind density-based clustering methods, which can discover clusters of nonspherical shape
Density-Based Clustering Methods
n Clustering based on density (a local cluster criterion), such as density-connected points
n Major features:
n Discover clusters of arbitrary shape
n Handle noise
n One scan
n Need density parameters as a termination condition
n Several interesting studies:
n DBSCAN: Ester, et al. (KDD’96)
n OPTICS: Ankerst, et al. (SIGMOD’99)
n DENCLUE: Hinneburg & Keim (KDD’98)
n CLIQUE: Agrawal, et al. (SIGMOD’98) (more grid-based)
n “How can we find dense regions in density-based clustering?”
n The density of an object o can be measured by the number of objects close to o
n Core objects: objects that have dense neighborhoods
n “How does DBSCAN quantify the neighborhood of an object?”
n A user-specified parameter ε > 0 is used to specify the radius of the neighborhood we consider for every object
n The ε-neighborhood of an object o is the space within a radius ε centered at o
n Given a set, D, of objects, we can identify all core objects with respect to the given parameters, ε and MinPts
n The clustering task is then reduced to using core objects and their neighborhoods to form dense regions, where the dense regions are clusters
n For a core object q and an object p, we say that p is directly density-reachable from q (with respect to ε and MinPts) if p is within the ε-neighborhood of q
n Clearly, an object p is directly density-reachable from another object q if and only if q is a core object and p is in the ε-neighborhood of q
n Using the directly density-reachable relation, a core object can “bring” all objects from its ε-neighborhood into a dense region
n To connect core objects as well as their neighbors in a dense region, DBSCAN uses the notion of density-connectedness
n Two objects p1, p2 ∈ D are density-connected with respect to ε and MinPts if there is an object q ∈ D such that both p1 and p2 are density-reachable from q with respect to ε and MinPts
n Unlike density-reachability, density-connectedness is an equivalence relation
n It is easy to show that, for objects o1, o2, and o3, if o1 and o2 are density-connected, and o2 and o3 are density-connected, then so are o1 and o3
Density-Based Clustering: Basic Concepts
n Two parameters:
n Eps: maximum radius of the neighbourhood
n MinPts: minimum number of points in an Eps-neighbourhood of that point
n NEps(p): {q belongs to D | dist(p, q) ≤ Eps}
n Directly density-reachable: a point p is directly density-reachable from a point q w.r.t. Eps, MinPts if
n p belongs to NEps(q)
n core point condition: |NEps(q)| ≥ MinPts

[Figure: q is a core point and p lies in its Eps-neighbourhood, with MinPts = 5 and Eps = 1 cm.]
Density-Reachable and Density-Connected
n Density-reachable:
n A point p is density-reachable from a point q w.r.t. Eps, MinPts if there is a chain of points p1, …, pn, with p1 = q and pn = p, such that pi+1 is directly density-reachable from pi
n Density-connected:
n A point p is density-connected to a point q w.r.t. Eps, MinPts if there is a point o such that both p and q are density-reachable from o w.r.t. Eps and MinPts
DBSCAN: Density-Based Spatial Clustering of Applications with Noise
n Relies on a density-based notion of cluster: a cluster is defined as a maximal set of density-connected points
n Discovers clusters of arbitrary shape in spatial databases with noise

[Figure: core, border, and outlier (noise) points for Eps = 1 cm and MinPts = 5.]
DBSCAN: The Algorithm
n Arbitrarily select a point p
n Retrieve all points density-reachable from p w.r.t. Eps and MinPts
n If p is a core point, a cluster is formed
n If p is a border point, no points are density-reachable from p, and DBSCAN visits the next point of the database
n Continue the process until all of the points have been processed (a library-based sketch follows)
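A minimal sketch using scikit-learn's DBSCAN (library assumed installed); `eps` and `min_samples` correspond to Eps and MinPts above, and label −1 marks noise points. The two-moons data set is an arbitrary example of non-spherical clusters.

```python
# Sketch: DBSCAN on non-spherical clusters with scikit-learn.
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)  # two non-spherical clusters
labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)        # eps ~ Eps, min_samples ~ MinPts
print(set(labels))   # cluster ids; -1 denotes noise (outlier) points
```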
5.5 Issues: Evaluation, Scalability, Comparison
Determine the Number of Clusters
n Empirical method
n # of clusters ≈ √(n/2) for a dataset of n points
n Elbow method (sketched in code below)
n Use the turning point in the curve of the sum of within-cluster variance w.r.t. the # of clusters
n Cross-validation method
n Divide a given data set into m parts
n Use m − 1 parts to obtain a clustering model
n Use the remaining part to test the quality of the clustering
n E.g., for each point in the test set, find the closest centroid, and use the sum of squared distances between all points in the test set and their closest centroids to measure how well the model fits the test set
n For any k > 0, repeat it m times, compare the overall quality measure w.r.t. different k’s, and find the # of clusters that fits the data the best
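A minimal sketch of the elbow method using scikit-learn (assumed installed); `inertia_` is KMeans's within-cluster sum of squared distances, and the synthetic blob data is an arbitrary example.

```python
# Sketch of the elbow method: within-cluster SSE against k.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)   # synthetic data with 4 groups
sse = []
for k in range(1, 10):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    sse.append(km.inertia_)   # sum of squared distances of points to their closest centroid
print(list(zip(range(1, 10), np.round(sse, 1))))
# The "elbow" (turning point) in this curve suggests a reasonable number of clusters.
```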
Measuring Clustering Quality
n Two methods: extrinsic vs. intrinsic
n Extrinsic: supervised, i.e., the ground truth is available
n Compare a clustering against the ground truth using a clustering quality measure
n Ex.: BCubed precision and recall metrics
n Intrinsic: unsupervised, i.e., the ground truth is unavailable
n Evaluate the goodness of a clustering by considering how well the clusters are separated and how compact the clusters are
n Ex.: silhouette coefficient (sketched in code below)
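A minimal sketch of intrinsic evaluation with the silhouette coefficient (scikit-learn assumed installed; the blob data is an arbitrary example).

```python
# Sketch: intrinsic evaluation of a clustering with the silhouette coefficient.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=500, centers=3, random_state=0)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(silhouette_score(X, labels))   # closer to 1: compact, well-separated clusters
```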
Measuring Clustering Quality: Extrinsic Methods
n Clustering quality measure: Q(C, Cg), for a clustering C given the ground truth Cg
n Q is good if it satisfies the following four essential criteria
n Cluster homogeneity: the purer, the better
n Cluster completeness: objects belonging to the same category in the ground truth should be assigned to the same cluster
n Rag bag: putting a heterogeneous object into a pure cluster should be penalized more than putting it into a rag bag (i.e., a “miscellaneous” or “other” category)
n Small cluster preservation: splitting a small category into pieces is more harmful than splitting a large category into pieces
Summary
n Cluster analysis groups objects based on their similarity and has wide applications
n Measures of similarity can be computed for various types of data
n Clustering algorithms can be categorized into partitioning methods, hierarchical methods, density-based methods, grid-based methods, and model-based methods
n K-means and k-medoids algorithms are popular partitioning-based clustering algorithms
n BIRCH and CHAMELEON are interesting hierarchical clustering algorithms, and there are also probabilistic hierarchical clustering algorithms
n DBSCAN, OPTICS, and DENCLUE are interesting density-based algorithms
n STING and CLIQUE are grid-based methods, where CLIQUE is also a subspace clustering algorithm
n The quality of clustering results can be evaluated in various ways