Gal Yona
How Fair Can We Be
Computer Science PhD candidate at the Weizmann Institute of Science
Bio
Gal Yona is a Computer Science PhD candidate at the Weizmann Institute of Science. Her research is aimed at making machine learning methods more reliable in human-facing applications, with a focus on defining and promoting fairness and non-discrimination. Before her PhD, Gal worked as a data scientist at the digital forensics company Cellebrite.
Abstract
Machine learning is increasingly used to drive predictions and inform consequential decisions about individuals; examples range from estimating a felon’s recidivism risk to determining whether a patient is a good candidate for a medical treatment. There is, however, a growing concern that these tools may inadvertently (or not) discriminate against individuals or groups.
In this talk, I will give an overview of some of the recent attempts at formally defining when a machine learning procedure is unfair and providing algorithms that provably mitigate such unfairness. My focus in this talk will be on subgroup fairness, a particular type of guarantee that significantly strengthens existing fairness notions by asking that they hold with respect to a rich collection of (possibly intersecting) subgroups of individuals.
I will give some intuition for the theory behind this approach and present the results of our recent collaboration with the Clalit Research Institute, demonstrating that this approach can be made practical on real medical risk prediction tasks.
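To make the notion of subgroup fairness concrete, here is a minimal sketch of what a subgroup audit looks like: a per-group statistic (here, a calibration gap between mean predicted score and observed outcome rate) is checked on every intersection of two attributes, rather than on each attribute marginally. Everything in it is illustrative: the synthetic data, the attribute names, the choice of calibration as the audited statistic, and the minimum-subgroup-size cutoff are assumptions for the example, not details from the talk.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

# Illustrative synthetic data: two hypothetical protected attributes,
# binary outcomes, and model scores loosely correlated with the outcomes.
n = 10_000
gender = rng.integers(0, 2, n)       # hypothetical binary attribute
age_group = rng.integers(0, 3, n)    # hypothetical three-bucket attribute
y_true = rng.integers(0, 2, n)       # ground-truth outcomes
y_score = np.clip(y_true * 0.6 + rng.normal(0.3, 0.2, n), 0.0, 1.0)

def calibration_gap(scores, labels):
    """Absolute gap between mean predicted score and observed base rate."""
    return abs(scores.mean() - labels.mean())

# Subgroup fairness asks the guarantee to hold on every (sufficiently
# large) intersection of attribute values, not just on each marginal group.
for g, a in product(range(2), range(3)):
    mask = (gender == g) & (age_group == a)
    if mask.sum() < 100:  # skip statistically tiny subgroups (arbitrary cutoff)
        continue
    gap = calibration_gap(y_score[mask], y_true[mask])
    print(f"gender={g}, age_group={a}, n={mask.sum():5d}, calibration gap={gap:.3f}")
```

A model can pass this check for "gender" and for "age group" separately while failing badly on a specific intersection; auditing the intersections directly is what distinguishes subgroup fairness from coarser group-fairness notions.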
Planned Agenda

| Time | Session |
| --- | --- |
| 8:45 | Reception |
| 9:30 | Opening words by Shir Meir Lador, Data Science leader at Intuit |
| 9:45 | Yael Karov - AI For Assisting in Task Completion |
| 10:15 | Ofra Amir - Agent Strategy Summarization: Describing Agent Behavior to People |
| 10:45 | Break |
| 11:00 | Lightning talks |
| 12:30 | Lunch & Poster session |
| 13:30 | Roundtable session & Poster session |
| 14:30 | Roundtable closure |
| 14:45 | Gal Yona - How Fair Can We Be |
| 15:15 | Daphna Weissglas - Turning Data Science Into Precision Medicine Empowering Millions |
| 15:45 | Closing remarks |