Talia Tron & Miriam Manevitz

Practical AI Ethics – From Words to Actions

Intuit

Bio

Talia is a data scientist and innovation catalyst at Intuit, a leading fintech company, where she leads the work on explainable AI. She holds a PhD in computational neuroscience from the Hebrew University, where she developed automatic tools for analyzing facial expressions and motor behavior in schizophrenia. Before joining Intuit, she worked as a data scientist at Microsoft in the security and education domains.

Miriam is a data scientist at Intuit working on fraud detection. She holds a B.Sc. in computer science and computational biology and an M.Sc. in theoretical computer science, both from the Hebrew University. Her thesis deals with connections between machine learning and differential privacy. Before joining Intuit, she worked as a data scientist at Cisco in the security domain.

Abstract

“Concern for man and his fate must always form the chief interest of all technical endeavors. Never forget this in the midst of your diagrams and equations.” – Albert Einstein. In recent years, ML algorithms have increasingly been assisting in, or even autonomously making, complex decisions of the utmost significance in fields including medicine, transportation, and financial services. There is therefore an urgent need to take a step back, consider the far-reaching impact of these systems, evaluate their potential for harm, and actively take the necessary precautions.

 

Many companies and governments worldwide are beginning to wake up. In the European Union, the High-Level Expert Group on AI (AI HLEG) has published guidelines listing seven key requirements that AI systems should meet in order to be deemed trustworthy. Leading global companies such as IBM, Microsoft, and Google are defining and sharing their ethical principles and policies to set new standards and ensure the responsible use of AI capabilities. In Israel, the national sub-committee for AI ethics regulation recently published an extensive report providing guidelines for identifying and handling ethical risks across the phases of model development: research and design, performance evaluation, deployment, and monitoring. One of its key recommendations is extensive ethics training for AI developers, who would be held accountable for criminal negligence should they fail to consider the ethical implications of their work.

 

In this roundtable, we will dive into the various aspects of ethical-AI principles and hold a practical discussion on how they can be applied to support the responsible development, deployment, and operation of machine learning systems. We will start by raising questions about the moral obligation of AI developers (that’s us!) and discuss specific scenarios in which an algorithm’s predictions, or their end outcomes, caused harm. These include harm of allocation, when a system allocates or withholds an opportunity or resource for certain groups (for example, the COMPAS algorithm, which wrongly labeled black defendants as having a higher risk of reoffending than white defendants); harm of representation, the ways in which individuals may be represented differently in a feature space even before a model is trained (for example, word embeddings encode gender bias, and in 2015 a Google image search for the term “CEO” surfaced only white men); and harm to basic human rights such as privacy and autonomy (e.g., the Netflix Prize data deanonymization, Cambridge Analytica). We will discuss practices, frameworks, and state-of-the-art technological solutions for avoiding these potential harms, and brainstorm about the challenges and opportunities of incorporating them into our day-to-day work.
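To ground the discussion, here is a minimal, illustrative Python sketch of one such practice: measuring the demographic parity gap, a simple screen for potential harms of allocation. This is a sketch under assumptions rather than a prescribed implementation; the function name and the synthetic data are invented here for illustration.

    import numpy as np

    def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
        """Absolute difference in positive-decision rates between two groups.

        y_pred: binary decisions (0/1) produced by a model.
        group:  binary membership (0/1) in a protected group.
        A gap near 0 suggests similar treatment across groups; a large gap
        flags a potential harm of allocation worth investigating.
        """
        rate_a = y_pred[group == 0].mean()
        rate_b = y_pred[group == 1].mean()
        return abs(rate_a - rate_b)

    # Illustrative usage on synthetic, deliberately biased decisions:
    rng = np.random.default_rng(0)
    group = rng.integers(0, 2, size=1_000)
    # Group 1 receives positive decisions less often by construction.
    y_pred = (rng.random(1_000) < np.where(group == 0, 0.6, 0.4)).astype(int)
    print(f"Demographic parity gap: {demographic_parity_gap(y_pred, group):.3f}")

Demographic parity is only one of several (often mutually incompatible) fairness criteria; deciding which criterion fits a given product decision is exactly the kind of question the roundtable is meant to surface.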


Discussion Points

  • The moral obligation of AI developers
  • Harms of allocation and representation – lessons from real-world cases such as COMPAS and biased word embeddings
  • Harms to basic human rights such as privacy and autonomy
  • Practices, frameworks, and technological solutions for responsible AI, and how to bring them into day-to-day work
