Authors
Dor Kedem, Stephen Tyree, Fei Sha, Gert Lanckriet, Kilian Q Weinberger
Publication date
2012
Journal
Advances in neural information processing systems
Volume
25
Description
In this paper, we introduce two novel metric learning algorithms, χ2-LMNN and GB-LMNN, which are explicitly designed to be non-linear and easy to use. The two approaches achieve this goal in fundamentally different ways: χ2-LMNN inherits the computational benefits of a linear mapping from linear metric learning, but uses a non-linear χ2-distance to explicitly capture similarities within histogram data sets; GB-LMNN applies gradient boosting to learn non-linear mappings directly in function space and takes advantage of this approach's robustness, speed, parallelizability, and insensitivity to its single additional hyper-parameter. On various benchmark data sets, we demonstrate that these methods not only match the current state-of-the-art in terms of kNN classification error but, in the case of χ2-LMNN, obtain the best results in 19 out of 20 learning settings.
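As a rough illustration of the χ2-distance mentioned above, the following minimal NumPy sketch computes the standard χ2 histogram distance and uses it for a toy 1-NN query. It is not the paper's χ2-LMNN method, which additionally learns a histogram-to-histogram linear mapping; the data and the eps smoothing constant are illustrative assumptions.

import numpy as np

def chi2_distance(x, y, eps=1e-12):
    # Symmetric chi-squared distance between two histograms
    # (non-negative vectors that sum to 1); eps avoids division by zero.
    return 0.5 * np.sum((x - y) ** 2 / (x + y + eps))

# Toy usage: 1-NN classification of histogram features (made-up data).
train = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.3, 0.6],
                  [0.6, 0.3, 0.1]])
labels = np.array([0, 1, 0])
query = np.array([0.65, 0.25, 0.10])

dists = np.array([chi2_distance(query, t) for t in train])
print("distances:", dists)
print("1-NN label:", labels[np.argmin(dists)])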
Total citations
Per-year citation chart (2012–2024) not reproduced.
Scholar articles
D Kedem, S Tyree, F Sha, G Lanckriet, KQ Weinberger - Advances in neural information processing systems, 2012