Because of the SARS-CoV-2/COVID-19 pandemic, the workshop will take place virtually. The exact format will be based on the ICML conference guidelines.
The designers of a machine learning (ML) system typically have far more power over the system than the individuals who are ultimately impacted by it and its decisions. Recommender platforms shape users’ preferences; the individuals classified by a model often have no means to contest a decision; and the data required by supervised ML systems necessitates that the privacy and labour of many yield to the design choices of a few.
The fields of algorithmic fairness and human-centered ML often focus on centralized solutions, lending increasing power to system designers and operators, and less to users and affected populations. In response to the growing social-science critique of the power imbalance present in the research, design, and deployment of ML systems, we wish to consider a new set of technical formulations for the ML community on the subject of more democratic, cooperative, and participatory ML systems.
Our workshop aims to explore methods that, by design, enable and encourage the perspectives of those impacted by an ML system to shape the system and its decisions. By involving affected populations in shaping the goals of the overall system, we hope to move beyond just tools for enabling human participation and progress towards a redesign of power dynamics in ML systems.
We invite work that makes progress on the workshop themes, including but not limited to:
Richer interactive ML: Incorporating richer user feedback into ML systems; limitations of current methods in fully capturing a user’s ‘preferences’ (Schnabel, Bennett, and Joachims 2018; Yang et al. 2019).
Mechanism design or computational social choice and learning: Designs that incorporate preference elicitation in allocative ML systems, such as adaptive experimentation or learning-based algorithmic governance (Narita 2019; M. K. Lee et al. 2019; Kahng et al. 2019).
Collective and participatory design applied to community involvement in ML: Qualitative and quantitative methods and frameworks aimed at giving a voice to communities affected by ML systems. Tools for collective training processes and collective design of ML systems (Katell et al. 2020; Brown et al. 2019; Halfaker and Geiger 2019; Patton et al. 2020).
Contestation: Technological methods and tools for analyzing, protesting, or contesting the outcomes of ML systems in the absence of centralized cooperation (Kulynych & Overdorf et al. 2020).
Audits of systemic injustice: Analyses or audits of amplifiers of systemic injustice in decision systems (Arnold et al. 2020; Pierson et al. 2020; Obermeyer et al. 2019; Ali & Sapieżyński et al. 2019; Raji and Buolamwini 2019; Buolamwini and Gebru 2018).
The workshop will include contributed papers. All accepted papers will be allocated either a virtual poster presentation or a virtual talk slot. We will not publish proceedings, but papers and talk recordings may optionally be linked from this page.
We invite submissions in two tracks:
Research Track. Full papers, works-in-progress, position papers, and case studies. We expect these submissions to introduce novel ideas or results.
Papers should be at most 4 pages long (excluding references, acknowledgements, and appendices), formatted using the ICML submission template, and anonymized.
Encore Track. Papers that have already been accepted at other venues.
There are no format requirements for this track. Papers must have been accepted at a recognized archival conference or journal, and must be submitted by one of the paper’s authors.
Submission website: CMT
For any questions, please email us at
We are grateful to Open Philanthropy for providing funding for the workshop. We plan to use the funding to cover workshop registration fees for participants who need it.