Background
The topic of preferences has recently attracted considerable attention in Artificial Intelligence (AI) research, notably in fields such as agents, non-monotonic reasoning, constraint satisfaction, planning, and qualitative decision theory. Preferences provide a means for specifying desires in a declarative way, which is of critical importance for AI. Drawing on past research in knowledge representation and reasoning, AI offers qualitative and symbolic methods for handling preferences that complement existing approaches from other fields, such as decision theory and economic utility theory. The acquisition of preferences, however, is not always an easy task. Therefore, not only are modeling languages and representation formalisms needed, but also methods for the automatic learning, discovery, and adaptation of preferences. It is hence hardly surprising that methods for learning and predicting preference models from explicit or implicit preference information and feedback are among the recent research trends in machine learning and related areas.
The goal of the proposed workshop is, on the one hand, to continue a series of successful workshops (PL-08, PL-09, PL-10), but, more importantly, also to expand their scope by drawing the attention of a broader AI audience. In particular, we seek to identify new problems and applications of preference learning in areas such as natural language processing, game playing, decision making, and planning. Indeed, we believe that there is a strong potential for preference learning techniques in these areas which has not yet been fully explored.
Topics of Interest
Topics of interest include, but are not limited to:
- applications of preference learning in all areas of Artificial Intelligence
- interaction of preference learning with reasoning and decision making
- preference learning methods for special application fields
- practical preference learning algorithms and techniques
- theoretical contributions that are of practical interest
We particularly solicit descriptions of challenge problems, i.e., problems in Artificial Intelligence for which preference learning methods could be useful. Descriptions of challenge problems will be given limited space in the proceedings (2-4 pages, depending on the format) and a presentation slot in the workshop that encourages interaction with the audience (e.g., a 3-minute spotlight talk and, if feasible, a poster presentation).
Proceedings
Individual papers are linked in the program below; a PDF file with the entire proceedings can be downloaded here.
Program
Long presentations are allotted 20 minutes (15+5), including discussion. Short presentations are allotted 15 minutes (10+5). In addition, we have reserved 10 minutes per session for a final discussion.
08.30-08.35: Opening of Workshop
08.35-10.00: Model-Based Preference Learning
- M. Grbovic, N. Djuric, and S. Vucetic. Learning from Pairwise Preference Data using Gaussian Mixture Model
- M. Bräuning and E. Hüllermeier. Learning Conditional Lexicographic Preference Trees
- D. Bigot, H. Fargier, J. Mengin, B. Zanuttini. Using and Learning GAI-Decompositions for Representing Ordinal Rankings
- Short Paper: A. Eckhardt, T. Kliegr. Preprocessing Algorithm for Handling Non-Monotone Attributes in the UTA method
10.00-10.20: Coffee Break
10.20-11.25: Preference Learning in Recommender Systems
- E. Castillejo, A. Almeida, and D. Lopez-de-Ipina. Alleviating Cold-User Start Problem with Users' Social Network Data in Recommendation Systems
- F. Aiolli. A Preliminary Study on a Recommender System for the Million Songs Dataset Challenge
- Short Paper: L. Marin, A. Moreno, D. Isern. Preference Function Learning over Numeric and Multi-valued Categorical Attributes
11.25-11.35: Short Break
11.35-13.00: Object and Instance Ranking
- E. Tsivtsivadze, K. Hofmann, T. Heskes. Large Scale Co-Regularized Ranking
- R. Busa-Fekete, G. Szarvas, T. Elteto, B. Kegl. An apple-to-apple comparison of Learning-to-rank algorithms in terms of Normalized Discounted Cumulative Gain
- D. Meunier, Y. Deguchi, R. Akrour, E. Suzuki, M. Schoenauer, M. Sebag. Direct Value Learning: a Preference-based Approach to Reinforcement Learning
- Short Paper: C. Wirth and J. Fürnkranz. First Steps Towards Learning from Game Annotations
Organizers
- Johannes Fürnkranz (TU Darmstadt)
- Eyke Hüllermeier (Universität Marburg)
Related Events
There will be several events related to preferences and preference learning at ECAI-12:
- The organizers of this workshop will also hold a Tutorial on Preference Learning.
- There will be a Workshop on Advances in Preference Handling at ECAI. We aim to coordinate the programs of both workshops.