43 minutes | Mar 5, 2020

156  |  Visualizing Fairness in Machine Learning with Yongsu Ahn and Alex Cabrera

  In this episode we talk with PhD students Yongsu Ahn and Alex Cabrera about two separate data visualization systems they developed to help people analyze machine learning models for potential biases. The systems, called FairSight and FairVis, have slightly different goals: FairSight focuses on models that generate rankings (e.g., in school admissions), while FairVis focuses more on comparing fairness metrics. With them we explore the world of “machine bias,” trying to understand what it is and how visualization can play a role in its detection and mitigation.

[Our podcast is fully listener-supported. That’s why you don’t have to listen to ads! Please consider becoming a supporter on Patreon or sending us a one-time donation through Paypal. And thank you!]

Enjoy the show!

Links:

- Alex Cabrera
- Yongsu Ahn
- FairSight
- FairVis
- Google: “Attacking Discrimination with Smarter Machine Learning”
- Nicky Case: “Parable of Polygons”

https://datastori.es/wp-content/uploads/2020/02/157_FairML.mp4