Adi Watzman

Practical Explainability Beyond Buzzwords

PayPal

Bio

Adi is a Data Scientist at PayPal, where she develops ML models for fraud detection that affect real-time decisions and influence millions of users. She is thrilled to take an active part in using ML explainability to improve models’ performance and the overall modeling lifecycle.

Adi has an MSc in Computer Science from the Weizmann Institute, with a thesis on unsupervised algorithms for microbiome data research, and a BSc in Computer Science and Computational Biology from the Hebrew University. She also volunteers at the Public Knowledge Workshop, applying data science to improve public transportation in Israel.

Abstract

As our antifraud ML models perform better and better in production, we understand that we want much more than the prediction score alone. We want to be able to explain WHY the model has made its decision. In our case at PayPal, this explanation must be actionable: it should serve us, the data scientists, in debugging and improving the model, and it should also help us investigate model misses in collaboration with risk experts.

I am going to share our journey to develop a method that extracts actionable explanations for single predictions. I will describe our definition of a useful explanation, the drawbacks we found in using SHAP values as-is, and the enlightening approach of counterfactual explanations. I will present how combining all of the above enabled us to push our ML solutions a few steps further, and share tips from our explainability experience that should be relevant to every data scientist.
