Bio
Adi is a Data Scientist at PayPal, where she develops ML models for fraud detection that affect real-time decisions and influence millions of users. She is thrilled to take an active part in using ML explainability to improve models’ performance and the overall modeling lifecycle.
Adi has an MSc in Computer Science from the Weizmann Institute, with a thesis on unsupervised algorithms for microbiome data research, and a BSc in Computer Science and Computational Biology from the Hebrew University. She also volunteers at the Public Knowledge Workshop, applying data science to improve public transportation in Israel.
Abstract
As our antifraud ML models perform better and better in production, we realize that we want much more than a lone prediction score. We want to be able to explain WHY the model made its decision. In our case at PayPal, this explanation must be actionable: it should serve us, the data scientists, in debugging and improving the model, and should also help us investigate model misses in collaboration with risk experts.
I am going to share our journey to develop a method that extracts actionable explanations for single predictions. I will describe our definition of a useful explanation, the drawbacks we found in using SHAP value outputs as-is, and the enlightening approach of counterfactual explanations. I will present how combining all of the above enabled us to push our ML solutions a few steps further, and share tips from our explainability experience that should be relevant to every data scientist.
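The abstract refers to SHAP values for explaining single predictions. As background, here is a minimal, hedged sketch of the underlying idea: the exact Shapley attribution of each feature, computed by averaging its marginal contribution over all feature orderings, with "absent" features set to a baseline. The model, feature names, and baseline values below are illustrative assumptions, not the talk's actual fraud model; production systems use approximations such as the `shap` library rather than this brute-force enumeration.

```python
from itertools import permutations

# Toy scoring function standing in for a fraud model (illustrative only).
def model(amount, velocity, account_age):
    return 0.5 * amount + 2.0 * velocity - 0.1 * account_age

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for the single prediction f(*x).

    A feature that is "absent" from a coalition is replaced by its
    baseline value. Cost is O(n!), so this is only viable for tiny n.
    """
    n = len(x)
    phi = [0.0] * n
    orders = list(permutations(range(n)))
    for order in orders:
        current = list(baseline)       # start from the all-baseline input
        prev = f(*current)
        for i in order:                # reveal features one at a time
            current[i] = x[i]
            new = f(*current)
            phi[i] += new - prev       # marginal contribution of feature i
            prev = new
    return [p / len(orders) for p in phi]

x = [100.0, 3.0, 365.0]        # the transaction to explain
baseline = [50.0, 1.0, 400.0]  # assumed "typical" feature values
phi = shapley_values(model, x, baseline)
print(phi)                     # per-feature attributions, approx [25.0, 4.0, 3.5]
```

By the efficiency property, the attributions sum to `model(*x) - model(*baseline)`, which is what makes them usable as a per-prediction explanation.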
Planned Agenda
| Time | Session |
|---|---|
| 8:45 | Reception |
| 9:30 | Opening words by Shir Meir Lador, Data Science leader at Intuit |
| 9:45 | Yael Karov - AI For Assisting in Task Completion |
| 10:15 | Ofra Amir - Agent Strategy Summarization: Describing Agent Behavior to People |
| 10:45 | Break |
| 11:00 | Lightning talks |
| 12:30 | Lunch & Poster session |
| 13:30 | Roundtable session & Poster session |
| 14:30 | Roundtable closure |
| 14:45 | Gal Yona - How Fair Can We Be |
| 15:15 | Daphna Weissglas - Turning Data Science Into Precision Medicine Empowering Millions |
| 15:45 | Closing remarks |