The Yeoh Tiong Lay Centre for Politics, Philosophy & Law
The London Medical Imaging & AI Centre for Value Based Healthcare and the Sowerby Philosophy & Medicine Project are very pleased to announce our jointly organized Special Legal-Themed Panel Discussion on Stereotyping & Medical AI, which will form the 5th instalment of the Philosophy & Medicine Project’s Stereotyping & Medical AI online summer colloquium series!

UPDATE: We are delighted to announce that our panel discussion will be chaired by Robin Carpenter, Senior Research Data Governance Manager at the London Medical Imaging & AI Centre for Value Based Healthcare!

Special Legal-Themed Panel Discussion on Stereotyping and Medical AI

Jointly Organized by the Yeoh Tiong Lay Centre for Politics, Philosophy & Law, The London Medical Imaging & AI Centre for Value Based Healthcare & the Philosophy & Medicine Project

Panellists:

Dr. Jonathan Gingerich (KCL)

Lecturer in the Philosophy of Law at the Yeoh Tiong Lay Centre for Politics, Philosophy & Law

Dr. Reuben Binns (Oxford)

Associate Professor of Human Centred Computing

Prof. Georgi Gardiner (Tennessee)

Associate Professor of Philosophy

Prof. David Papineau (KCL)

Professor of Philosophy of Science

Chair:

Robin Carpenter (The London Medical Imaging & AI Centre for Value Based Healthcare)

Senior Research Data Governance Manager

When: Thursday 29th of July, 5pm BST

REGISTER and find out more about the event here

If we understand “statistical stereotyping” as forming beliefs about individuals on the basis of statistical generalizations about the groups to which they belong, can it be legally problematic to statistically stereotype patients in medicine, whether those beliefs are formed by medical AI/artificial agents or by medical professionals? In this Special Legal-Themed Panel Discussion, we’ll hear from experts in law, computer science, and philosophy on this and related questions about the legal aspects of stereotyping in medicine by both human and artificial agents.

* If you are unable to attend these colloquia live, please feel free to register for our events in order to be notified once recordings become available! You can also subscribe to the Philosophy & Medicine Project’s newsletter here, or follow us on Twitter or Facebook. Follow the YTL Centre at King’s on Twitter here and the London Medical Imaging & AI Centre for Value Based Healthcare here. Previous colloquia will also be posted to the Philosophy & Medicine Project’s YouTube channel.

About the Stereotyping and Medical AI Summer Colloquium Series

The aim of this fortnightly colloquium series on Stereotyping and Medical AI is to explore philosophical and in particular ethical and epistemological issues around stereotyping in medicine, with a specific focus on the use of artificial intelligence in health contexts. We are particularly interested in whether medical AI that uses statistical data to generate predictions about individual patients can be said to “stereotype” patients, and whether we should draw the same ethical and epistemic conclusions about stereotyping by artificial agents as we do about stereotyping by human agents, i.e., medical professionals.  
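To make the kind of inference at issue concrete, here is a minimal, purely illustrative sketch in Python. It uses entirely hypothetical records and group labels, and simply assigns an individual patient the base rate observed in their group: a statistical generalization about a group applied to a single person.

    # Purely illustrative sketch: a toy "model" that predicts an individual
    # patient's risk solely from the base rate observed in their group.
    # All records and group labels here are hypothetical.
    from collections import defaultdict

    # Hypothetical training records: (group_label, had_condition)
    records = [
        ("group_a", True), ("group_a", False), ("group_a", True),
        ("group_b", False), ("group_b", False), ("group_b", True),
    ]

    # Estimate a base rate per group from the records.
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, outcome in records:
        counts[group][0] += int(outcome)
        counts[group][1] += 1

    base_rates = {g: pos / total for g, (pos, total) in counts.items()}

    def predict_risk(patient_group: str) -> float:
        """Assign the group's base rate to the individual patient --
        a prediction made purely in virtue of group membership."""
        return base_rates[patient_group]

    print(predict_risk("group_a"))  # about 0.67 for this hypothetical data

Whether forming beliefs about an individual patient in this way amounts to stereotyping, and what follows ethically and epistemically if it does, are among the questions the series takes up.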

Other questions we are interested in exploring as part of this series include but are not limited to the following: 

  • How should we understand “stereotyping” in medical contexts? 
  • What is the relationship between stereotyping and bias, including algorithmic bias, and how should we understand “bias” in different contexts? 
  • Why does stereotyping in medicine often seem less morally or epistemically problematic than stereotyping in other domains, such as the legal, criminal, financial, and educational domains? Might beliefs about biological racial realism in the medical context explain this asymmetry? 
  • When and why might it be wrong for medical professionals to stereotype their patients? And when and why might it be wrong for medical AI, i.e. artificial agents, to stereotype patients? 
  • How do (medical) AI beliefs relate to the beliefs of human agents, particularly with respect to agents’ moral responsibility for their beliefs? 
  • Can non-evidential or non-truth-related considerations be relevant with respect to what beliefs medical professionals or medical AI ought to hold? Is there moral or pragmatic encroachment on AI beliefs or on the beliefs of medical professionals? 
  • What are the potential consequences of patients or doctors being stereotyped by doctors or by medical AI? Can patients, for example, be doxastically wronged by doctors or AI in virtue of being stereotyped by them? 

We will be tackling these topics through a series of online colloquia hosted by the Sowerby Philosophy and Medicine Project at King’s College London. The colloquium series will feature a variety of contributors from across the disciplinary spectrum. We hope to ensure a discursive format, with time set aside for discussion and audience Q&A. This event is open to the public and all are very welcome.

Our working line-up for the remainder of this summer series is as follows, with a few additional speakers and details to be confirmed:

June 17            Professor Erin Beeghly (Utah), “Stereotyping and Prejudice: The Problem of Statistical Stereotyping” 

July 1               Dr. Kathleen Creel (HAI, EIS, Stanford), “Let’s Ask the Patient: Stereotypes, Personalization, and Risk in Medical AI” (recording linked)

July 15             Dr. Annette Zimmermann (York, Harvard), “Structural Injustice, Doxastic Negligence, and Medical AI” 

July 22             Dr. William McNeill (Southampton), “Neural Networks and Explanatory Opacity” (recording linked)

July 29             Special Legal-Themed Panel Discussion: Dr. Jonathan Gingerich (KCL), Dr. Reuben Binns (Oxford), Prof. Georgi Gardiner (Tennessee), Prof. David Papineau (KCL), Chair: Robin Carpenter (The London Medical Imaging & AI Centre for Value Based Healthcare) (link to register)

August 12        Professor Zoë Johnson King (USC) & Professor Boris Babic (Toronto), “Algorithmic Fairness and Resentment”

August 26        Speakers TBC

September 2    Dr. Geoff Keeling (HAI, LCFI, Google)

September 9    Professor Rima Basu (Claremont McKenna)   

All best wishes, and we very much hope you can join us! 

The Organizers (Dr. Jonathan Gingerich, Robin Carpenter, Professor Elselijn Kingma, Dr. Winnie Ma, and Eveliina Ilola)