TY - GEN
T1 - Metric learning from relative comparisons by minimizing squared residual
AU - Liu, Eric Yi
AU - Guo, Zhishan
AU - Zhang, Xiang
AU - Jojic, Vladimir
AU - Wang, Wei
PY - 2012
Y1 - 2012
N2 - Recent studies [1]-[5] have suggested using constraints in the form of relative distance comparisons to represent domain knowledge: d(a, b) < d(c, d), where d(·) is the distance function and a, b, c, d are data objects. Such constraints are readily available in many problems where pairwise constraints do not arise naturally. In this paper, we consider the problem of learning a Mahalanobis distance metric from supervision in the form of relative distance comparisons. We propose a simple yet effective algorithm that minimizes a convex objective function corresponding to the sum of squared residuals of the constraints. We also extend our model and algorithm to promote sparsity in the learned metric matrix. Experimental results suggest that our method consistently outperforms existing methods in terms of clustering accuracy. Furthermore, the sparsity extension leads to more stable estimation when the dimensionality is high and only a small amount of supervision is given.
UR - https://www.scopus.com/pages/publications/84874034396
U2 - 10.1109/ICDM.2012.38
DO - 10.1109/ICDM.2012.38
M3 - Conference contribution
AN - SCOPUS:84874034396
SN - 978-0-7695-4905-7
T3 - Proceedings - IEEE International Conference on Data Mining, ICDM
SP - 978
EP - 983
BT - Proceedings - 12th IEEE International Conference on Data Mining, ICDM 2012
T2 - 12th IEEE International Conference on Data Mining, ICDM 2012
Y2 - 10 December 2012 through 13 December 2012
ER -