Preference Learning: Problems and Applications in AI (PL-12)

ECAI-12 Workshop, Montpellier

Tuesday, August 28th, 2012

Background

The topic of preferences has recently attracted considerable attention in Artificial Intelligence (AI) research, notably in fields such as agents, non-monotonic reasoning, constraint satisfaction, planning, and qualitative decision theory. Preferences provide a means for specifying desires in a declarative way, which is a point of critical importance for AI. Drawing on past research on knowledge representation and reasoning, AI offers qualitative and symbolic methods for treating preferences that can reasonably complement existing approaches from other fields, such as decision theory and economic utility theory. Needless to say, however, the acquisition of preferences is not always an easy task. Therefore, not only are modeling languages and representation formalisms needed, but also methods for the automatic learning, discovery, and adaptation of preferences. It is hence hardly surprising that methods for learning and predicting preference models from explicit or implicit preference information and feedback are among the current research trends in machine learning and related areas.

The goal of the proposed workshop is on the one hand to continue a series of successful workshops (PL-08, PL-09, PL-10), but, more importantly, also to expand the scope by drawing the attention of a broader AI audience. In particular, we seek to identify new problems and applications of preference learning in areas such as natural language processing, game playing, decision making, and planning. Indeed, we believe that there is a strong potential for preference learning techniques in these areas, which has not yet been fully explored.

Topics of Interest

Topics of interest include, but are not limited to:

  • applications of preference learning in all areas of Artificial Intelligence
  • interaction of preference learning with reasoning and decision making
  • preference learning methods for special application fields
  • practical preference learning algorithms and techniques
  • theoretical contributions that are of practical interest

We particularly solicit descriptions of challenge problems, i.e., problems in Artificial Intelligence for which preference learning methods are useful. Descriptions of challenge problems will be given limited space in the proceedings (2-4 pages, depending on the format) and a presentation in the workshop that encourages interaction with the audience (e.g., a 3-minute spotlight talk and a poster presentation, if feasible).

Proceedings

Individual papers are linked in the program below; a PDF file with the entire proceedings can be downloaded here.

Program

Long presentations are allotted 20 minutes (15+5) including discussion. Short presentations are allotted 15 minutes (10+5). In addition, we have reserved 10 minutes per session for final discussions.

08.30-08.35: Opening of Workshop

08.35-10.00: Model-Based Preference Learning

10.00-10.20: Coffee Break

10.20-11.25: Preference Learning in Recommender Systems

11.25-11.35: Short Break

11.35-13.00: Object and Instance Ranking

Organizers

Related Events

There will be several events related to preferences and preference learning at ECAI-12: