### Intuition

Partitioning methods (such as k-means and PAM) and hierarchical clustering are good at finding spherical or convex clusters. In other words, they are suitable only for compact, well-separated clusters. They are also severely affected by the presence of noise and outliers in the data.

Real-life data may contain irregularities, such as:

• Clusters can be of arbitrary shape, such as those shown in the figure below.
• Data may contain noise.

The figure below shows a data set containing nonconvex clusters and outliers/noise. Given such data, the k-means algorithm has difficulty identifying clusters with these arbitrary shapes.
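To make this concrete, here is a small sketch (assuming scikit-learn is installed; the dataset and parameters are illustrative) showing k-means mislabeling two interleaving half-moon clusters:

```python
# k-means can only form convex clusters, so it cuts each half-moon
# roughly in half instead of separating the two moons.
from sklearn.datasets import make_moons
from sklearn.cluster import KMeans

X, y_true = make_moons(n_samples=300, noise=0.05, random_state=0)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Compare k-means labels against true moon membership,
# accounting for the arbitrary 0/1 label permutation.
agreement = (labels == y_true).mean()
accuracy = max(agreement, 1 - agreement)
print(f"k-means agreement with true moons: {accuracy:.2f}")
```

The agreement stays well below 1.0 because the moon tails interleave, which no convex partition can separate.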

### The DBSCAN algorithm requires two parameters

1. eps: Defines the neighborhood around a data point: if the distance between two points is at most eps, they are considered neighbors. If eps is chosen too small, a large part of the data will be considered outliers. If it is chosen too large, clusters will merge and the majority of the data points will end up in the same cluster. One way to choose eps is to use the k-distance graph.
2. MinPts: The minimum number of neighbors (data points) within the eps radius. The larger the dataset, the larger the value of MinPts that should be chosen. As a general rule, MinPts can be derived from the number of dimensions D in the dataset as MinPts >= D + 1, and it should be at least 3.
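The k-distance graph mentioned above can be sketched as follows (assuming scikit-learn; the dataset and MinPts value are illustrative). One plots the sorted distance from each point to its k-th nearest neighbor and reads eps off the "elbow" of the curve; here we just compute the sorted distances:

```python
# Sketch of the k-distance method for choosing eps.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neighbors import NearestNeighbors

X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)

min_pts = 4  # illustrative choice, consistent with MinPts >= D + 1 for D = 2

# Distance from each point to its k-th nearest neighbor (k = min_pts;
# column 0 is the point itself at distance 0, so we take the last column).
nn = NearestNeighbors(n_neighbors=min_pts).fit(X)
distances, _ = nn.kneighbors(X)
k_distances = np.sort(distances[:, -1])

# Plotting k_distances reveals an elbow; the eps at that elbow separates
# points in dense regions (flat part) from outliers (steep part).
print("median k-distance:", round(float(np.median(k_distances)), 3))
```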

### In this algorithm, there are three types of data points

• Core point: A point that has at least MinPts points (including itself) within eps.
• Border point: A point that has fewer than MinPts points within eps but lies in the neighborhood of a core point.
• Noise or outlier: A point that is neither a core point nor a border point.
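These three definitions can be sketched directly in a few lines of NumPy (the points, eps, and MinPts values below are illustrative):

```python
# Classify points as core, border, or noise under the definitions above.
import numpy as np

def classify_points(X, eps, min_pts):
    # Pairwise Euclidean distances between all points.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    neighbors = d <= eps                 # each point's eps-neighborhood
    counts = neighbors.sum(axis=1)       # neighborhood size (includes self)
    core = counts >= min_pts
    # Border: not core, but within eps of at least one core point.
    border = ~core & (neighbors & core[None, :]).any(axis=1)
    noise = ~core & ~border
    return core, border, noise

X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.2, 0.1],
              [5.0, 5.0]])              # last point is far from the rest
core, border, noise = classify_points(X, eps=0.25, min_pts=4)
print(core, border, noise)
# → the first four points are core; the far point is noise
```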

#### The DBSCAN algorithm can be abstracted into the following steps

• Find all points within eps of each point, and identify the core points: those with at least MinPts neighbors.
• For each core point that is not already assigned to a cluster, create a new cluster.
• Recursively find all of its density-connected points and assign them to the same cluster as the core point.
• Points a and b are said to be density-connected if there exists a point c that has at least MinPts points in its eps-neighborhood and from which both a and b can be reached through a chain of core points, each within eps of the next. This is a chaining process: if b is a neighbor of c, c is a neighbor of d, and d is a neighbor of a, and each link in the chain is a core point, then b and a end up in the same cluster.
• Iterate through the remaining unvisited points in the dataset; points that do not belong to any cluster are noise.
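The steps above can be sketched as a minimal, unoptimized implementation (pure NumPy; the example data, eps, and MinPts are illustrative, and real libraries such as scikit-learn's `sklearn.cluster.DBSCAN` use spatial indexes for the neighbor search):

```python
import numpy as np

NOISE = -1

def dbscan(X, eps, min_pts):
    n = len(X)
    # Step 1: find each point's eps-neighborhood (brute-force distances).
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    neighbors = [np.flatnonzero(d[i] <= eps) for i in range(n)]
    labels = np.full(n, NOISE)
    cluster = 0
    for i in range(n):
        # Skip points already assigned, and non-core points.
        if labels[i] != NOISE or len(neighbors[i]) < min_pts:
            continue
        # Step 2: start a new cluster from this unassigned core point.
        labels[i] = cluster
        queue = list(neighbors[i])
        # Step 3: chain through density-connected points.
        while queue:
            j = queue.pop()
            if labels[j] == NOISE:
                labels[j] = cluster
                if len(neighbors[j]) >= min_pts:  # j is core: keep chaining
                    queue.extend(neighbors[j])
        cluster += 1
    # Points still labeled NOISE were never reached: they are outliers.
    return labels

# Two well-separated Gaussian blobs plus one distant outlier.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.2, (50, 2)),
               rng.normal(5.0, 0.2, (50, 2)),
               [[10.0, 10.0]]])
labels = dbscan(X, eps=0.6, min_pts=4)
print("labels found:", sorted(set(labels.tolist())))
```

Border points are absorbed into the first cluster that reaches them but never extend the chain themselves, which is exactly the asymmetry between core and border points described above.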