Participatory Approaches to Machine Learning

ICML 2020 Workshop (July 17)

Because of the SARS-CoV-2/COVID-19 pandemic, the workshop will take place virtually. The exact format will be based on the ICML conference guidelines.

ICML Virtual Site:
Discord server:
Twitter hashtag: #PAML2020
Please participate in our poster and breakout sessions on our Discord server


The designers of a machine learning (ML) system typically have far more power over the system than the individuals who are ultimately impacted by it and its decisions. Recommender platforms shape users’ preferences; individuals classified by a model often have no means to contest its decisions; and the data required by supervised ML systems demands that the privacy and labour of many yield to the design choices of a few.

The fields of algorithmic fairness and human-centered ML often focus on centralized solutions, granting increasing power to system designers and operators and less to users and affected populations. In response to the growing social-science critique of the power imbalance present in the research, design, and deployment of ML systems, we wish to consider a new set of technical formulations for the ML community on the subject of more democratic, cooperative, and participatory ML systems.

Our workshop aims to explore methods that, by design, enable and encourage the perspectives of those impacted by an ML system to shape the system and its decisions. By involving affected populations in shaping the goals of the overall system, we hope to move beyond just tools for enabling human participation and progress towards a redesign of power dynamics in ML systems.


We invite work that makes progress on the workshop themes, including but not limited to:


The central part of our workshop is the livestream on the ICML virtual site. The livestream will broadcast invited talks and panels, as well as the latest announcements and possible schedule changes. The interactive discussions with other participants—poster and breakout sessions—will happen on our Discord server.

Opening Remarks · Live talk (livestream)
1:00 PM - 1:15 PM UTC · Organizing committee
AI’s Contradiction: King’s Radical Revolution in Values · Live talk (livestream)
1:15 PM - 1:45 PM UTC · Tawana Petty
What does it mean for ML to be trustworthy? · Talk (livestream)
1:45 PM - 2:15 PM UTC · Nicolas Papernot
Turning the tables on Facebook: How we audit Facebook using their own marketing tools · Talk (livestream)
2:15 PM - 2:45 PM UTC · Piotr Sapieżyński
Poster Session 1 · Poster session (Discord)
2:45 PM - 3:30 PM UTC
Breakout Sessions 1 / Break · Breakout discussions (Discord)
3:30 PM - 4:15 PM UTC
Panel 1 · Panel (livestream)
4:15 PM - 5:00 PM UTC · Tawana Petty, Nicolas Papernot, Piotr Sapieżyński, Aleksandra Korolova, Deborah Raji (moderator)
Affected Community Perspectives on Algorithmic Decision-Making in Child Welfare Services · Talk (livestream)
5:00 PM - 5:30 PM UTC · Alexandra Chouldechova
Actionable Recourse in Machine Learning · Talk (livestream)
5:30 PM - 6:00 PM UTC · Berk Ustun
Beyond Fairness and Ethics: Towards Agency and Shifting Power · Talk (livestream)
6:00 PM - 6:30 PM UTC · Jamelle Watson-Daniels
Panel 2 · Panel (livestream)
6:30 PM - 7:15 PM UTC · Berk Ustun, Alexandra Chouldechova, Jamelle Watson-Daniels, Deborah Raji (moderator)
Poster Session 2 · Poster session (Discord)
7:15 PM - 8:00 PM UTC
Breakout Sessions 2 · Breakout discussions (Discord)
8:00 PM - 8:45 PM UTC

Poster Sessions

Each contributed paper has its own audio/video channel on our Discord server. During the poster session, participants can discuss the paper with its authors in the respective channel. Before joining a channel, please familiarize yourself with the paper and watch the paper’s short introduction video.

Poster Session 1 (2:45 PM - 3:30 PM UTC)

Poster Session 2 (7:15 PM - 8:00 PM UTC)

Breakout Sessions

On our Discord server, you can chat with other workshop participants at any time in thematic text or audio/video channels; we will add channels during the day if topics come up. In addition to this, as an experiment, we will have thematic discussion sessions guided by facilitators:

Breakout Session 1 (3:30 PM - 4:15 PM UTC)

Breakout Session 2 (8:00 PM - 8:45 PM UTC)

Contributed Papers

The contributed papers include both novel research papers and papers that have already been published in other venues (marked as “Encore”).

Qualitative frameworks, methods, and analyses

More than a label: machine-assisted data interpretation · Case study
Maja Trębacz (University of Cambridge); Luke Church (University of Cambridge)
What are you optimizing for? Aligning Recommender Systems with Human Values · Position paper
Jonathan Stray (Partnership on AI); Steven Adler (Partnership on AI); Dylan Hadfield-Menell (UC Berkeley)
Participatory Problem Formulation for Fairer Machine Learning Through Community Based System Dynamics · Position paper · Encore: ICLR 2020 ML-IRL
Donald Martin, Jr. (Google); Vinodkumar Prabhakaran (Google); Jill Kuhlberg (System Stars); Andrew Smart (Google); William Isaac (DeepMind)
Keeping Designers in the Loop: Communicating Inherent Algorithmic Trade-offs Across Multiple Objectives · Encore: DIS 2020
Bowen Yu (University of Minnesota); Ye Yuan (University of Minnesota); Loren Terveen (University of Minnesota); Steven Wu (University of Minnesota); Jodi Forlizzi (CMU); Haiyi Zhu (Carnegie Mellon University)
Measuring Non-Expert Comprehension of Machine Learning Fairness Metrics · Encore: ICML 2020
Debjani Saha (University of Maryland); Candice Schumann (University of Maryland); Duncan C McElfresh (University of Maryland); John P Dickerson (University of Maryland); Michelle Mazurek (University of Maryland); Michael Tschantz (International Computer Science Institute)

Algorithmic approaches and quantitative methods

Interpretable Privacy for Deep Learning Inference · Position paper
Fatemehsadat Mireshghallah (UC San Diego); Mohammadkazem Taram (UC San Diego); Ali Jalali; Ahmed Taha Elthakeb (UC San Diego); Dean Tullsen (UC San Diego); Hadi Esmaeilzadeh (UC San Diego)
Metric-Free Individual Fairness in Online Learning
Yahav Bechavod (Hebrew University of Jerusalem); Christopher Jung (University of Pennsylvania); Steven Wu (University of Minnesota)
Designing Recommender Systems with Reachability in Mind
Sarah Dean (UC Berkeley); Mihaela Curmei (UC Berkeley); Benjamin Recht (UC Berkeley)
Heuristic-Based Weak Learning for Automated Decision-Making · Case study
Ryan Steed (Carnegie Mellon University); Benjamin Williams (George Washington University)
Deconstructing the Filter Bubble: User Decision-Making and Recommender Systems
Guy Aridor (Columbia University); Duarte Goncalves (Columbia University); Shan Sikdar (Everquote)
Fairness, Equality, and Power in Algorithmic Decision-Making
Maximilian Kasy (University of Oxford); Rediet Abebe (Harvard University)
Iterative Interactive Reward Learning
Pallavi Koppol (Carnegie Mellon University); Henny Admoni (Carnegie Mellon University); Reid Simmons (Carnegie Mellon University)
Recourse for Humans
Kaivalya Rawal (Harvard University); Himabindu Lakkaraju (Harvard University)
Designing Fairly Fair Classifiers Via Economic Fairness Notions · Encore: WWW 2020
Safwan Hossain (University of Toronto); Nisarg Shah (University of Toronto); Andjela Mladenovic (Independent Researcher)


Contestable City Algorithms · Case study
Kars Alfrink (Delft University of Technology); Thijs Turel (Amsterdam Institute for Advanced Metropolitan Solutions); Ianus Keller (Delft University of Technology); Neelke Doorn (Delft University of Technology); Gerd Kortuem (Delft University of Technology)
Preference Elicitation and Aggregation to Aid with Patient Triage during the COVID-19 Pandemic · Case study
Caroline M Johnston (University of Southern California); Simon Blessenohl (University of Southern California); Phebe Vayanos (University of Southern California)
Soliciting Stakeholders’ Fairness Notions in Child Maltreatment Predictive Systems · Position paper
Hao-Fei Cheng (University of Minnesota); Paige Bullock (Kenyon College); Alexandra Chouldechova (CMU); Steven Wu (University of Minnesota); Haiyi Zhu (Carnegie Mellon University)
Can Algorithms Help Support Participatory Housing? · Position paper
Rediet Abebe (Harvard University); Daniel Wu (Immuta)
Adapting a kidney exchange algorithm to align with human values · Encore: Artificial Intelligence Journal, 2020
Rachel Freedman (UC Berkeley); Jana Schaich Borg (Duke University); Walter Sinnott-Armstrong (Duke University); John P. Dickerson (UMD); Vincent Conitzer (Duke University)
Co-designing a real-time classroom orchestration tool to support teacher–AI complementarity · Case study · Encore: Journal of Learning Analytics, 2019
Kenneth Holstein (Carnegie Mellon University); Bruce M. McLaren (Carnegie Mellon University); Vincent Aleven (Carnegie Mellon University)
Keeping Community in the Loop: Understanding Wikipedia Stakeholder Values for Machine Learning-Based Systems · Case study · Encore: CHI 2020
C. Estelle Smith (University of Minnesota); Bowen Yu (University of Minnesota); Anjali Srivastava (University of Minnesota); Aaron Halfaker (Wikimedia Foundation); L. Terveen (University of Minnesota); Haiyi Zhu (Carnegie Mellon University)

Critical examinations

Is Machine Learning Speaking my Language? A Critical Look at the NLP-Pipeline Across 8 Human Languages · Case study
Esma Wali (Clarkson University); Yan Chen (Clarkson University); Christopher M Mahoney (Clarkson University); Thomas G Middleton (Clarkson University); Marzieh Babaeianjelodar (Clarkson University); Mariama Njie (Iona College); Jeanna Neefe Matthews (Clarkson University)
Participation is Not a Design Fix for Machine Learning · Position paper
Mona Sloane (NYU); Emanuel Moss (CUNY Graduate Center); Olaitan Awomolo (Temple University); Laura Forlano (IIT)
What If I Don't Like Any Of The Choices? The Limits of Preference Elicitation for Participatory Algorithm Design · Position paper
Samantha Robertson (UC Berkeley); Niloufar Salehi (UC Berkeley)
Bringing the People Back In: Contesting Benchmark Machine Learning Datasets · Position paper
Emily Denton (Google); Alex Hanna (Google); Razvan Amironesei (USF); Andrew Smart (Google); Hilary Nicole (Google); Morgan Scheuerman (Google)
The Hidden Assumptions Behind Counterfactual Explanations and Principal Reasons · Encore: FAccT 2020
Solon Barocas (Cornell University); Andrew Selbst (UCLA School of Law); Manish Raghavan (Cornell)


A Review of the UK-ICO’s Draft Guidance on the AI Auditing Framework
Emre Kazim (University College London); Adriano Koshiyama (University College London)

Invited Speakers

Call for Participation

We would like to experiment with breakout discussions to encourage group conversations around shared topics of interest. Please fill out this form if you are interested in joining or organizing such a discussion.

Call for Papers

The workshop will include contributed papers. All accepted papers will be allocated either a virtual poster presentation or a virtual talk slot. We will not publish proceedings, but will optionally link the papers and talk recordings on the workshop website.

We invite submissions in two tracks:

Submissions are currently closed.

For any questions, please email participatory-ml-organizers at


Organizing Committee

Funding assistance

We are grateful to Open Philanthropy for providing funding assistance for the workshop. If you would like to participate in the workshop and require funding, please email participatory-ml-organizers at