Authors
Nicholas Monath, Kumar Avinava Dubey, Guru Guruganesh, Manzil Zaheer, Amr Ahmed, Andrew McCallum, Gokhan Mergen, Marc Najork, Mert Terzihan, Bryon Tjanaka, Yuan Wang, Yuchen Wu
Publication date
2021/8/14
Book
Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining
Pages
1245-1255
Description
The applicability of agglomerative clustering, for inferring both hierarchical and flat clusterings, is limited by its scalability. Existing scalable hierarchical clustering methods sacrifice quality for speed and often lead to over-merging of clusters. In this paper, we present a scalable, agglomerative method for hierarchical clustering that does not sacrifice quality and scales to billions of data points. We perform a detailed theoretical analysis, showing that under mild separability conditions our algorithm can not only recover the optimal flat partition but also provide a two-approximation to the non-parametric DP-Means objective. This introduces a novel application of hierarchical clustering as an approximation algorithm for the non-parametric clustering objective. We additionally relate our algorithm to the classic hierarchical agglomerative clustering method. We perform extensive empirical experiments in both hierarchical and flat …
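For orientation only, the sketch below is a minimal Python illustration of the two standard ingredients the description refers to: textbook bottom-up agglomerative merging (here with centroid linkage) and the non-parametric DP-Means objective that the paper's hierarchy is shown to two-approximate under separability conditions. This is not the paper's scalable algorithm; the names naive_agglomerative, dp_means_cost, stop_dist, and lam are illustrative choices, not from the paper.

import numpy as np

def dp_means_cost(X, labels, centers, lam):
    # DP-Means objective: total within-cluster squared distance
    # plus a penalty of `lam` for every cluster that is used.
    cost = 0.0
    for c, mu in enumerate(centers):
        cost += np.sum((X[labels == c] - mu) ** 2)
    return cost + lam * len(centers)

def naive_agglomerative(X, stop_dist):
    # Textbook bottom-up clustering: repeatedly merge the closest pair
    # of clusters (by centroid distance) until the closest remaining
    # pair is farther apart than `stop_dist`. Quadratic per round,
    # for illustration only; the paper targets billions of points.
    clusters = [[i] for i in range(len(X))]
    while len(clusters) > 1:
        centroids = [X[c].mean(axis=0) for c in clusters]
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = np.linalg.norm(centroids[i] - centroids[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        if best[0] > stop_dist:
            break
        _, i, j = best
        clusters[i] += clusters[j]
        del clusters[j]
    return clusters

# Toy usage: two well-separated groups of points.
X = np.array([[0.0, 0.0], [0.1, 0.1], [5.0, 5.0], [5.1, 4.9]])
flat = naive_agglomerative(X, stop_dist=1.0)   # -> [[0, 1], [2, 3]]
labels = np.empty(len(X), dtype=int)
for c, members in enumerate(flat):
    labels[members] = c
centers = [X[c].mean(axis=0) for c in flat]
print(dp_means_cost(X, labels, centers, lam=1.0))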
Total citations
2021: 5, 2022: 10, 2023: 14, 2024: 13
Scholar articles
N Monath, KA Dubey, G Guruganesh, M Zaheer… - Proceedings of the 27th ACM SIGKDD Conference on …, 2021
N Monath, A Dubey, G Guruganesh, M Zaheer… - arXiv preprint arXiv:2010.11821, 2020