Preference Learning (PL-09)
ECML/PKDD-09 Workshop
11 September 2009, Bled, Slovenia
Background
The topic of preferences has recently attracted considerable attention in Artificial Intelligence (AI) research, notably in fields such as agents, non-monotonic reasoning, constraint satisfaction, planning, and qualitative decision theory. Preferences provide a means for specifying desires in a declarative way, which is a point of critical importance for AI. Drawing on past research on knowledge representation and reasoning, AI offers qualitative and symbolic methods for treating preferences that usefully complement existing approaches from other fields, such as decision theory and economic utility theory. The acquisition of preferences, however, is not always an easy task. Therefore, not only are modeling languages and representation formalisms needed, but also methods for the automatic learning, discovery, and adaptation of preferences.
Methods for learning preference models and predicting preferences are among the recent research trends in fields such as machine learning and knowledge discovery. Approaches relevant to this area range from learning special types of preference models, such as lexicographic orders, through collaborative filtering techniques for recommender systems and ranking techniques for information retrieval, to generalizations of classification problems such as label ranking. Like other complex learning tasks that have recently entered the stage, preference learning deviates strongly from the standard problems of classification and regression. It is particularly challenging because it involves predicting complex structures, such as weak or partial order relations, rather than single values. Moreover, training input will not, as is usually the case, be offered in the form of complete examples, but may comprise more general types of information, such as relative preferences or different kinds of indirect feedback and implicit preference information.
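To make this contrast with standard supervised learning concrete, the following minimal sketch (an illustration on synthetic data, not part of the workshop material) shows one well-known reduction: pairwise preferences of the form "object a is preferred to object b" are turned into a binary classification problem on feature differences, so that the learned linear model doubles as a utility function for ranking new objects. The dataset, feature dimensionality, and use of scikit-learn are assumptions made purely for the example.

```python
# Sketch: learning a utility function from relative preferences via the
# classic pairwise transformation (synthetic data; illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hidden "true" utility, used only to generate synthetic preference data.
true_w = rng.normal(size=5)
objects = rng.normal(size=(100, 5))
utilities = objects @ true_w

# Training input consists of relative preferences (i preferred over j),
# not complete labeled examples as in ordinary classification.
pairs = [(i, j) for i, j in rng.integers(0, 100, size=(200, 2)) if i != j]
X = np.array([objects[i] - objects[j] for i, j in pairs])
y = np.array([1 if utilities[i] > utilities[j] else 0 for i, j in pairs])

# A linear classifier on difference vectors yields a utility model:
# score(x) = w . x, and a is ranked above b iff w . (x_a - x_b) > 0.
clf = LogisticRegression().fit(X, y)
predicted_order = np.argsort(-(objects @ clf.coef_.ravel()))
```

Note that the classifier's output on a difference vector induces a total order over objects; recovering genuinely partial or intransitive preference structures, as discussed in several of the workshop contributions, requires going beyond this simple linear reduction.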
Topics of Interest
This workshop is a follow-up to PL-08, the first workshop on Preference Learning, which was successfully organized as part of ECML/PKDD-2008 in Antwerp. It aims to provide a forum for the discussion of recent advances in the use of machine learning and data mining methods for problems related to the learning and discovery of preferences, and to offer an opportunity for researchers and practitioners to identify new and promising research directions. Topics of interest include, but are not limited to:
- quantitative and qualitative approaches to modeling preferences as well as different forms of feedback and training data;
- learning utility functions and related regression problems;
- preference mining and preference elicitation;
- learning relational preference models;
- embedding of other types of learning problems in the preference learning framework (such as label ranking, ordinal classification, or hierarchical classification);
- comparison of different preference learning paradigms (e.g., "big bang" approaches that use a single model vs. modular approaches that decompose the learning of preference models into subproblems);
- ranking problems, such as learning to rank objects or to aggregate rankings;
- scalability and efficiency of preference learning algorithms;
- methods for special application fields, such as web search, information retrieval, electronic commerce, games, personalization, or recommender systems;
- connections to other research fields, such as decision theory, operations research, and social choice theory.
Program
The program below contains download links to the individual papers. You can also download the entire workshop proceedings as a single PDF file.
11:00 - 11:10 | Welcome |
11:10 - 11:50 | Invited Presentation: Toshihiro Kamishima: Object Ranking (Slides, Video) |
11:50 - 12:20 | Tapio Pahikkala, Willem Waegeman, Evgeni Tsivtsivadze, Tapio Salakoski, Bernard De Baets: From ranking to intransitive preference learning: rock-paper-scissors and beyond (Slides, Video) |
12:20 - 12:50 | Evgeni Tsivtsivadze, Botond Cseke, and Tom Heskes: Kernel Principal Component Ranking: Robust Ranking on Noisy Data (Slides, Video) |
12:50 - 14:20 | Lunch |
14:20 - 14:50 | Hsuan-Tien Lin and Ling Li: Combining Ordinal Preferences by Boosting (Video) |
14:50 - 15:20 | Krzysztof Dembczynski and Wojciech Kotlowski: Decision Rule-based Algorithm for Ordinal Classification based on Rank Loss Minimization (Slides, Video) |
15:20 - 15:40 | Coffee break |
15:40 - 16:10 | Richard Booth, Yann Chevaleyre, Jérôme Lang, Jérôme Mengin, and Chattrakul Sombattheera: Learning various classes of models of lexicographic orderings (Slides, Video) |
16:10 - 16:30 | Grigorios Tsoumakas, Eneldo Loza Mencía, Ioannis Katakis, Sang-Hyeun Park, and Johannes Fürnkranz: On the combination of two decompositive multi-label classification methods (Slides, Video) |
16:30 - 16:50 | Weiwei Cheng and Eyke Hüllermeier: Label Ranking with Partial Abstention using Ensemble Learning (Video) |
16:50 - 17:10 | Break |
17:10 - 17:40 | Tomas Kliegr: UTA-NM: Explaining Stated Preferences with Additive Non-Monotonic Utility Functions (Slides, Video) |
17:40 - 18:10 | Marco de Gemmis, Leo Iaquinta, Pasquale Lops, Cataldo Musto, Fedelucio Narducci, and Giovanni Semeraro: Preference Learning in Recommender Systems (Video) |
Organizers
- Eyke Hüllermeier (Universität Marburg)
- Johannes Fürnkranz (TU Darmstadt)