Experimentation with fairness-aware recommendation using librec-auto

The field of machine learning fairness has developed some well-understood metrics, methodologies, and data sets for experimenting with and developing classification algorithms with an eye to their fairness properties. However, equivalent research is lacking in the area of personalized recommender systems, even though such systems are crucial components of the applications through which millions of individuals daily access news, shopping, social connections, and employment opportunities. This 180-minute hands-on tutorial will introduce participants to concepts in fairness-aware recommendation (as distinct from classification-oriented systems) and to metrics and methodologies for evaluating recommendation fairness. The tutorial will introduce LibRec, a well-developed platform for recommender systems evaluation, and fairness-aware extensions to it. Participants will also gain hands-on experience conducting fairness-aware recommendation experiments with LibRec using the librec-auto scripting platform, and will learn the steps required to configure their own experiments, incorporate their own data sets, and design their own algorithms and metrics.

Tutorial plan

  • Introduction to recommender systems and fairness-aware recommendation.
  • Installation and configuration of LibRec and librec-auto.
  • Configuring and running sample recommendation experiments.
  • Consumer and provider fairness. Fairness metrics for recommendation.
  • Fairness-aware recommendation algorithms.
  • Implementing a fairness-aware re-ranker.
  • Visualizing experimental results.
  • Questions and individual exploration.
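To give a flavor of the re-ranking step in the plan above, a greedy fairness-aware re-ranker can be sketched in a few lines of plain Python. This is an illustrative assumption, not librec-auto's actual implementation: the item ids, the protected-group set, and the deficit-based boosting strategy below are all invented for the example.

```python
# Illustrative greedy fairness-aware re-ranker (a sketch only, not
# librec-auto's actual algorithm): at each list position, trade off an
# item's relevance score against how far the protected provider group's
# share of the list so far falls below a target share.

def rerank(candidates, protected, k, target_share=0.5, lam=0.1):
    """candidates: list of (item_id, score) pairs, sorted by score descending.
    protected: set of item_ids belonging to the protected provider group.
    lam: strength of the fairness boost (lam=0 reduces to pure relevance).
    Returns the top-k item ids after fairness-aware re-ranking."""
    remaining = list(candidates)
    result = []
    n_protected = 0
    while remaining and len(result) < k:
        # How far below the target is the protected share at the next slot?
        deficit = target_share - (n_protected / (len(result) + 1))

        def adjusted(item):
            item_id, score = item
            boost = lam * max(deficit, 0.0) if item_id in protected else 0.0
            return score + boost

        best = max(remaining, key=adjusted)
        remaining.remove(best)
        result.append(best[0])
        if best[0] in protected:
            n_protected += 1
    return result


# Hypothetical candidate list: "p1" and "p2" come from the protected group.
cands = [("a", 0.9), ("b", 0.8), ("p1", 0.79), ("c", 0.5), ("p2", 0.45)]
print(rerank(cands, protected={"p1", "p2"}, k=4, lam=0.5))
# → ['p1', 'a', 'b', 'p2']
```

With `lam=0` the re-ranker returns the pure relevance ordering; raising `lam` promotes protected-group items when they are underrepresented, which is the general shape of the re-ranking approaches covered in the tutorial.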


Presenters

  • Professor Robin Burke is Professor and Chair of the Department of Information Science at the University of Colorado, Boulder. He has been conducting recommender systems research since the mid-1990s, and his current research focuses on fairness in recommendation, including current support from the National Science Foundation in this area. Professor Burke has prior experience presenting conference tutorials to a wide variety of audiences, including “Robust Recommendation” (RecSys 2008), “Mining Diverse Texts for Location and Sentiment” with John Shanahan and Ana Lucic (Digital Humanities and Computer Science 2017), and “Fairness and Discrimination in Recommendation and Retrieval” with Michael Ekstrand and Fernando Diaz (RecSys 2019). He has led the librec-auto project since its inception.


  • Masoud Mansoury is a PhD student at the Eindhoven University of Technology under the direction of Mykola Pechenizkiy. His research interests include fairness-aware recommender systems, machine learning, and social network analysis. Masoud has been one of the core contributors to the librec-auto project and presented on the project, most recently at the 1st Interdisciplinary Workshop on Algorithm Selection and Meta-Learning in Information Retrieval (AMIR 2019).


  • Nasim Sonboli is a PhD student in the Department of Information Science at the University of Colorado, Boulder, and a member of That Recommender Systems Lab. She works on the fairness, accountability, and transparency of recommendation algorithms and has several publications at the ACM FAT* Conference and the ACM Recommender Systems Conference. She is one of the core contributors to the librec-auto and LibRec projects, especially in the area of fairness-aware metrics and algorithms.

Target audience

The aim of this tutorial is to support researchers who have an interest in fairness and recommendation by offering:

  • An introduction to the unique and complex aspects of fairness in the recommender systems context,
  • An introduction to methodologies for offline recommender systems evaluation,
  • Hands-on experience designing and configuring fairness-aware recommender systems experiments using a state-of-the-art tool.

Participants will leave the tutorial with the ability to conduct fairness-aware recommendation experiments on their own data sets, and, for the more technically inclined, with the foundations necessary to conduct experimental research in fairness-aware recommendation, including implementing and evaluating fairness-aware recommendation algorithms and creating new fairness metrics for such experiments.

Instructions for participants

To run the sample experiments during the tutorial session, participants will need:

  • a machine running Linux, macOS, or Windows
  • a Java runtime, version 1.8 or later
  • Anaconda with Python 3.6 or later
  • the ability to install new Python libraries
  • a text editor
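A quick way to verify these prerequisites before the session is a short Python check. This is a convenience sketch only: it checks the Python version and that a `java` executable is on the PATH, not the Java version itself, and the function name is our own.

```python
import shutil
import sys


def check_prereqs():
    """Report whether the tutorial prerequisites appear to be met.

    A convenience sketch, not part of librec-auto itself: checks that
    Python is at least 3.6 and that a `java` executable is on the PATH.
    """
    return {
        "python>=3.6": sys.version_info >= (3, 6),
        "java on PATH": shutil.which("java") is not None,
    }


if __name__ == "__main__":
    for name, ok in check_prereqs().items():
        print(f"{name}: {'OK' if ok else 'MISSING'}")
```

Once the checks pass, librec-auto itself can be installed from PyPI (at the time of writing, `pip install librec-auto`); see the project documentation for the current installation steps.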

