Learning Gradient Boosted Multi-label Classification Rules
Type of publication: Inproceedings
Citation: rapp20boomer
Booktitle: Machine Learning and Knowledge Discovery in Databases (ECML-PKDD)
Series: Lecture Notes in Computer Science
Volume: 12459
Year: 2020
Pages: 124--140
Publisher: Springer
URL: https://link.springer.com/chapter/10.1007/978-3-030-67664-3_8
DOI: https://doi.org/10.1007/978-3-030-67664-3_8
Abstract: In multi-label classification, where the evaluation of predictions is less straightforward than in single-label classification, various meaningful, though different, loss functions have been proposed. Ideally, the learning algorithm should be customizable towards a specific choice of the performance measure. Modern implementations of boosting, most prominently gradient boosted decision trees, appear to be appealing from this point of view. However, they are mostly limited to single-label classification, and hence not amenable to multi-label losses unless these are label-wise decomposable. In this work, we develop a generalization of the gradient boosting framework to multi-output problems and propose an algorithm for learning multi-label classification rules that is able to minimize decomposable as well as non-decomposable loss functions. Using the well-known Hamming loss and subset 0/1 loss as representatives, we analyze the abilities and limitations of our approach on synthetic data and evaluate its predictive performance on multi-label benchmarks.
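To make the distinction drawn in the abstract concrete, the following sketch (illustrative only, not code from the paper) contrasts the two representative losses: Hamming loss decomposes label-wise, whereas subset 0/1 loss penalizes the prediction as a whole and is therefore not label-wise decomposable.

```python
import numpy as np

def hamming_loss(y_true, y_pred):
    # Fraction of individual label assignments that are wrong;
    # decomposes into a sum of per-label errors.
    return float(np.mean(y_true != y_pred))

def subset_zero_one_loss(y_true, y_pred):
    # Fraction of instances whose entire label vector is wrong;
    # depends on the joint prediction, so it is non-decomposable.
    return float(np.mean(np.any(y_true != y_pred, axis=1)))

# Two instances with three labels each; one label of the first
# instance is predicted incorrectly.
y_true = np.array([[1, 0, 1],
                   [0, 1, 1]])
y_pred = np.array([[1, 0, 0],
                   [0, 1, 1]])

print(hamming_loss(y_true, y_pred))          # 1 of 6 labels wrong -> 1/6
print(subset_zero_one_loss(y_true, y_pred))  # 1 of 2 vectors wrong -> 0.5
```

A single wrong label costs only 1/6 under Hamming loss but invalidates the whole instance under subset 0/1 loss, which is why a learner tailored to one measure need not perform well under the other.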
Keywords: Gradient boosting, Multi-label classification, Rule learning
Authors: Rapp, Michael
Loza Mencía, Eneldo
Fürnkranz, Johannes
Nguyen, Vu-Linh
Hüllermeier, Eyke
Editors: Hutter, Frank
Kersting, Kristian
Lijffijt, Jefrey
Valera, Isabel