Vlog with Prof. Talia Gillis (Columbia) on the Fairness of Machine-Assisted Human Decisions
In this episode of the CLE's vlog & podcast series, Prof. Talia Gillis (Columbia) and Prof. Alexander Stremitzer (ETH Zurich) discuss Gillis' study on how properties of machine predictions affect the resulting human decisions.
When machine-learning algorithms are deployed in high-stakes decisions, we want to ensure that they lead to fair and equitable outcomes. However, many machine predictions are deployed to assist in decisions where a human decision-maker retains ultimate authority.
In their study, On the Fairness of Machine-Assisted Human Decisions, Talia Gillis (Columbia), Bryce McLaughlin (Stanford), and Jann Spiess (Stanford) show in a formal model that including a biased human decision-maker can reverse common relationships between the structure of the algorithm and the quality of the resulting decisions. Specifically, they document that excluding information about protected groups from the prediction may fail to reduce disparities. More broadly, their results demonstrate that any study of critical properties of complex decision systems, such as the fairness of machine-assisted human decisions, should go beyond analyzing the underlying algorithmic predictions in isolation.
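To make the intuition concrete, here is a toy simulation, not the paper's formal model, in which all parameters are illustrative. It sketches one channel through which excluding the protected attribute from the machine prediction can fail to reduce disparities in the final decisions: a biased human discounts one group's score regardless of how the prediction was built.

```python
# Toy simulation (illustrative only; not the authors' model).
import numpy as np

rng = np.random.default_rng(0)
n = 500_000

g = rng.integers(0, 2, n)                      # protected group indicator (0/1)
skill = rng.normal(0.0, 1.0, n)                # latent qualification, identical across groups
x = skill - 0.2 * g + rng.normal(0.0, 1.0, n)  # observed feature, slightly underrates group 1

# Two machine predictions of skill:
pred_aware = 0.5 * (x + 0.2 * g)   # uses group membership to correct the measurement gap
pred_blind = 0.5 * x               # excludes group membership, so the gap stays in the score

def human_decision(pred, g, bias=0.4, cutoff=0.0):
    """Stylized biased decision-maker: discounts group 1's score, then applies a cutoff."""
    return (pred - bias * g) > cutoff

for name, pred in [("aware", pred_aware), ("blind", pred_blind)]:
    d = human_decision(pred, g)
    gap = d[g == 0].mean() - d[g == 1].mean()
    print(f"{name:5s} prediction: approval-rate gap (group 0 minus group 1) = {gap:.3f}")
```

In this stylized setup the disparity in final decisions is driven by the human's discounting, so hiding the protected attribute from the prediction does not close the gap (here it even widens it slightly), which mirrors the broader point that fairness properties must be evaluated for the whole decision system, not the prediction alone.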
In the episode, Prof. Gillis and Prof. Stremitzer discuss the study and its implications.
- Watch the video on YouTube here
- Listen to the podcast version here