Preference Learning (PL-10)

ECML/PKDD-10 Tutorial and Workshop

24 September 2010, Barcelona, Spain


Background

The topic of preferences has recently attracted considerable attention in artificial intelligence in general and in machine learning in particular, where preference learning has emerged as a new, interdisciplinary research field with close connections to related areas such as operations research, social choice, and decision theory. Roughly speaking, preference learning is concerned with methods for learning preference models from explicit or implicit preference information, typically in order to predict the preferences of an individual or a group of individuals. Approaches relevant to this area range from learning special types of preference models, such as lexicographic orders, through "learning to rank" for information retrieval, to collaborative filtering techniques for recommender systems.

Format

This joint tutorial/workshop is a follow-up to two previous ECML/PKDD workshops (PL-08, PL-09). It will be held in Barcelona on the last day of ECML/PKDD 2010, right before ACM Recommender Systems 2010, as a one-day session with a tutorial part in the morning and paper presentations in the afternoon.

The event aims to provide a forum for discussing recent advances in the use of machine learning and data mining methods for problems related to the learning and discovery of preferences, and to offer researchers and practitioners an opportunity to identify promising new research directions.

Tutorial

The primary goal of this tutorial is to survey the field of preference learning in its current stage of development. The presentation will focus on a systematic overview of different types of preference learning problems, methods and algorithms to tackle these problems, and metrics for evaluating the performance of preference models induced from data.

We will cover the following topics:

  1. Introduction
  2. Preference Learning Tasks
    • Object Ranking
    • Instance Ranking
    • Label Ranking
  3. Loss Functions for Ranking and Preference Learning (see the first sketch after this outline)
    • ranking errors (Spearman, Kendall's tau, ...)
    • multipartite ranking measures (AUC, C-index, ...)
    • information retrieval measures (precision@k, NDCG, ...)
  4. Preference Learning Techniques (see the second sketch after this outline)
    • learning utility functions
    • learning preference relations
    • model-based preference learning
    • local aggregation of preferences
  5. Complexity of Preference Learning
    • training complexity
    • prediction complexity
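
To make the loss functions in item 3 concrete, here is a minimal, self-contained Python sketch of some of the measures listed above: Kendall's tau and Spearman's rank correlation for comparing two rankings, and precision@k and NDCG for information retrieval settings. The function and variable names are illustrative assumptions, not code from the tutorial material.

    import numpy as np

    def kendall_tau(r1, r2):
        # Kendall's tau: (concordant - discordant) over all n*(n-1)/2 item
        # pairs; r1 and r2 give the rank positions of the same n items.
        n = len(r1)
        concordant = discordant = 0
        for i in range(n):
            for j in range(i + 1, n):
                s = (r1[i] - r1[j]) * (r2[i] - r2[j])
                if s > 0:
                    concordant += 1
                elif s < 0:
                    discordant += 1
        return (concordant - discordant) / (n * (n - 1) / 2)

    def spearman(r1, r2):
        # Spearman's rho from the sum of squared rank differences.
        r1, r2 = np.asarray(r1), np.asarray(r2)
        n = len(r1)
        return 1 - 6 * np.sum((r1 - r2) ** 2) / (n * (n ** 2 - 1))

    def precision_at_k(ranked_items, relevant, k):
        # Fraction of the top-k predicted items that are relevant.
        return len(set(ranked_items[:k]) & set(relevant)) / k

    def ndcg_at_k(gains, k):
        # NDCG: discounted cumulative gain of the predicted order,
        # normalized by the DCG of the ideal (descending) order.
        # `gains` are graded relevance scores in predicted rank order.
        dcg = lambda g: sum(gi / np.log2(i + 2) for i, gi in enumerate(g[:k]))
        return dcg(gains) / dcg(sorted(gains, reverse=True))

    # Example: two rankings of five items (rank positions 1..5).
    a, b = [1, 2, 3, 4, 5], [2, 1, 3, 5, 4]
    print(kendall_tau(a, b))  # 0.6 (8 concordant, 2 discordant pairs)
    print(spearman(a, b))     # 0.8 (sum of squared rank differences = 4)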

This outline essentially follows the introductory chapter of a forthcoming book on preference learning.
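
As a complement to item 4, the following sketch illustrates the first technique on that list, learning a (linear) utility function, via the standard reduction of pairwise preference data to binary classification on difference vectors. The data-generating setup (w_true, items, pairs) and all names are assumptions made purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed setup: items are feature vectors, and a hidden linear utility
    # u(x) = w_true . x generates pairwise preferences "x_i preferred to x_j".
    w_true = np.array([2.0, -1.0, 0.5])
    items = rng.normal(size=(100, 3))

    pairs = []
    for _ in range(300):
        i, j = rng.choice(len(items), size=2, replace=False)
        # Orient each sampled pair so that the first item is the preferred one.
        pairs.append((i, j) if items[i] @ w_true > items[j] @ w_true else (j, i))

    # Reduction to binary classification: a consistent utility w must satisfy
    # w . (x_i - x_j) > 0 whenever x_i is preferred to x_j.
    X = np.array([items[i] - items[j] for i, j in pairs])

    # Minimize the logistic loss log(1 + exp(-w . x)) on the difference
    # vectors by plain gradient descent.
    w = np.zeros(3)
    for _ in range(500):
        grad = -(X / (1 + np.exp(X @ w))[:, None]).mean(axis=0)
        w -= 0.5 * grad

    # The learned utility induces a ranking over all items; check how many
    # of the observed preferences it reproduces.
    correct = np.mean([(items[i] - items[j]) @ w > 0 for i, j in pairs])
    print(f"training pairs correctly ordered: {correct:.2f}")

Learning a preference relation directly, the second technique in item 4, would instead fit a binary predictor on pairs of items without assuming an underlying utility function.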

Workshop

Topics of interest include, but are not limited to:

  • quantitative and qualitative approaches to modeling preferences as well as different forms of feedback and training data;
  • learning utility functions and related regression problems;
  • preference mining and preference elicitation;
  • learning relational preference models;
  • embedding of other types of learning problems in the preference learning framework (such as label ranking, ordinal classification, or hierarchical classification);
  • comparison of different preference learning paradigms (e.g., "big bang" approaches that use a single model vs. modular approaches that decompose the learning of preference models into subproblems);
  • ranking problems, such as learning to rank objects or to aggregate rankings;
  • scalability and efficiency of preference learning algorithms;
  • methods for special application fields, such as web search, information retrieval, electronic commerce, games, personalization, or recommender systems;
  • connections to other research fields, such as decision theory, operations research, and social choice theory.

As the workshop addresses a relatively recent research topic, we also encourage submissions presenting preliminary results and discussing open problems. Correspondingly, two types of contributions will be solicited: short communications (short talks) and full papers (long talks) reporting on mature research results.

Program

10.30-12.00: Preference Learning Tutorial [All Slides]

  1. Preference Learning Tasks [Slides]
  2. Loss Functions [Slides]
  3. Preference Learning Techniques [Slides]
  4. Complexity [Slides]
  5. Conclusions [Slides]

Each presentation is allotted 20 minutes (15 for the talk plus 5 for discussion). In addition, 10 minutes per session are reserved for a final discussion.

12.15-13.45: Preference Learning Algorithms

15.00-16.30: Preference Learning in Recommender Systems

17.00-18.10: Rule-Based Preference Learning

18.10-18.30: Final Discussion

Organizers