Session Title: Data Stewardship In An AI-Driven Ecosystem: InterpretML, FairLearn, WhiteNoise
Speaker: Alicia Moniz
Abstract: At the core of Microsoft’s AI are the principles of fairness, reliability & safety, privacy & security, inclusiveness, transparency, and accountability. As AI capabilities and adoption increase, it is important that we also adopt tools that enable us to practice AI responsibly.
Responsible ML provides us with tools to ensure that, as practitioners, we:
Understand machine learning models – Are we able to interpret and explain model behavior? Are we able to assess and mitigate model unfairness?
Protect people and their data – Are we actively working to prevent data exposure with differential privacy?
Control the end-to-end machine learning process – Are we documenting the machine learning life cycle?
Announced at Build this year were multiple Responsible ML open source packages. Because these tools are freely available, every machine learning developer can incorporate Responsible ML into the development cycle of their AI projects.
InterpretML – An open source package that enables developers to understand their models’ behavior and the reasons behind individual predictions.
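InterpretML’s glass-box models (such as the Explainable Boosting Machine) are additive, so each feature’s contribution to an individual prediction can be read off directly. As a dependency-free sketch of that idea, the following uses a hypothetical linear model with illustrative weights, not InterpretML’s own API:

```python
import numpy as np

# A hypothetical glass-box linear model: score = w . x + b.
# Illustrative weights and sample values, not a trained model.
feature_names = ["age", "income", "tenure"]
w = np.array([0.8, -0.5, 0.3])   # learned weights (illustrative)
b = 0.1
x = np.array([1.2, 0.4, 2.0])    # one standardized sample

contributions = w * x            # each feature's additive share of the score
score = contributions.sum() + b

# Rank features by how strongly they pushed this one prediction.
ranked = sorted(zip(feature_names, contributions),
                key=lambda kv: abs(kv[1]), reverse=True)
for name, c in ranked:
    print(f"{name:>7}: {c:+.2f}")
print(f"  score: {score:+.2f}")
```

Glass-box explanations like this answer “why did the model predict this for this person?” without a separate post-hoc explainer.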
Fairlearn – A Python package that enables developers to assess and mitigate observed unfairness in their models.
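One fairness metric Fairlearn reports is the demographic parity difference: the gap in selection rates between groups defined by a sensitive attribute. A minimal sketch, computed by hand on illustrative toy data rather than with Fairlearn’s own `MetricFrame`:

```python
import numpy as np

# Toy model outputs and a sensitive attribute (illustrative data only).
y_pred    = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # positive/negative calls
sensitive = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

# Selection rate (share of positive predictions) per group.
rates = {g: y_pred[sensitive == g].mean() for g in np.unique(sensitive)}

# Demographic parity difference: gap between best- and worst-treated group.
dp_diff = max(rates.values()) - min(rates.values())
print(rates, dp_diff)
```

A large gap flags that the model selects one group far more often than another, which is the starting point for Fairlearn’s mitigation algorithms.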
WhiteNoise – An open source library that enables developers to review and validate the differential privacy of their data sets and analyses. Also included are data-access components that let data consumers dynamically inject ‘noise’ directly into the results of their queries.
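The core mechanism behind noisy query results is adding calibrated random noise, e.g. the Laplace mechanism for counts. A minimal sketch of the idea in plain NumPy (function and parameter names are illustrative, not WhiteNoise’s API):

```python
import numpy as np

rng = np.random.default_rng(0)

def private_count(values, epsilon):
    """Differentially private count via the Laplace mechanism (sketch)."""
    true_count = len(values)
    # Sensitivity of a count is 1: adding or removing one person
    # changes the count by at most 1, so noise scale is 1/epsilon.
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

ages_over_40 = [44, 51, 63, 47, 58]
print(private_count(ages_over_40, epsilon=0.5))
```

Smaller `epsilon` means more noise and stronger privacy; the noisy answer is still unbiased, so aggregate analyses remain useful.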
Datasheets for Models – A Python SDK that enables developers to document the assets within a model, enabling easier access to model metadata.
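The substance of a datasheet is structured metadata recorded alongside the model. As a hand-rolled sketch with illustrative field names (the SDK defines its own schema), a minimal datasheet could be serialized like this:

```python
import json

# A minimal, hand-rolled model datasheet (illustrative fields only).
datasheet = {
    "model_name": "churn-classifier",
    "version": "1.0",
    "training_data": "customer-accounts-2021Q1",
    "intended_use": "Rank accounts for retention outreach",
    "known_limitations": "Not validated for accounts under 90 days old",
    "fairness_review": "Selection rates compared across region groups",
}

print(json.dumps(datasheet, indent=2))
```

Capturing intended use and known limitations at training time is what makes the life cycle documentable and auditable later.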
It is important that we design sustainable AI systems with ethics in mind. Join us for an overview and demo of these packages!
300+ sessions are now available on-demand from Data Platform Summit 2021 & 2020 at no cost.