Go hands-on with interactive AI visualizations
Artificial Intelligence systems can recognize our voices, forecast the weather, and help decide who gets a loan. Given the increasing ubiquity of AI, it's important that everyone can understand more about how these systems work.
Like any system or technology, AI doesn't always get it right. And understanding why AI systems break is often hard for people who aren't experts in the field: research results tend to be shared in dense papers filled with formulas.
Of course, people who haven’t studied AI still need to be able to ask critical questions about these systems. To help support these kinds of discussions, we’ve created AI Explorables, a series of interactive explanations of key AI concepts. They’re specifically geared toward non-experts (even though we think and hope that experts will also find them interesting and thought-provoking).
The first two Explorables walk you through assessing whether an AI system is fair and unbiased. Measuring Fairness weighs the trade-offs involved in building a machine that diagnoses a disease, and lets you try tuning it to be fairer.
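If you'd like a feel for that trade-off in code, here is a minimal sketch in the spirit of Measuring Fairness. It is not the Explorable's actual model: the cohort, base rates, score distribution, and thresholds are all invented for illustration. It sweeps a classifier's decision threshold and reports the false negative and false positive rates separately for two groups.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cohort: two groups with different base rates of the disease.
# All numbers here are illustrative assumptions, not data from the Explorable.
n = 1000
group = rng.integers(0, 2, n)                              # group 0 or 1
sick = rng.random(n) < np.where(group == 0, 0.30, 0.15)    # true labels
# A noisy "model score": higher for sick people, but imperfect.
score = np.clip(0.4 * sick + rng.normal(0.35, 0.18, n), 0.0, 1.0)

def error_rates(threshold, mask):
    """False negative and false positive rates for one group at a threshold."""
    pred = score[mask] >= threshold
    truth = sick[mask]
    fnr = np.mean(~pred[truth])     # sick people the model misses
    fpr = np.mean(pred[~truth])     # healthy people incorrectly flagged
    return fnr, fpr

# Sweeping the threshold shows the trade-off: lowering it catches more sick
# people (lower FNR) but flags more healthy ones (higher FPR), and the two
# groups are affected differently.
for t in (0.40, 0.50, 0.60):
    for g in (0, 1):
        fnr, fpr = error_rates(t, group == g)
        print(f"threshold={t:.2f}  group={g}:  FNR={fnr:.2f}  FPR={fpr:.2f}")
```

In general, a single threshold produces different error rates for the two groups when their base rates differ; that tension is exactly what the Explorable lets you explore interactively.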
In another Explorable, called Hidden Bias, we examine a system that predicts students' grades. Biased by the data it learned from, the system predicts lower grades for women. Trying to fix this by hiding gender from the system doesn't always work (and, in some cases, can actually increase its bias).
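Here is a tiny sketch of why hiding the attribute isn't enough. The data is synthetic and the feature names and coefficients are our own assumptions, not the Explorable's: when the training labels are biased and another feature acts as a proxy for gender, a model fitted without the gender column still reproduces the gap.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: grades don't actually depend on gender, but the
# historical labels are biased downward for women, and "proxy" stands in
# for any feature correlated with gender. All numbers are illustrative.
n = 2000
gender = rng.integers(0, 2, n)                 # 1 = woman
ability = rng.normal(0.0, 1.0, n)              # independent of gender
proxy = gender + rng.normal(0.0, 0.3, n)       # strongly correlated with gender
biased_label = 70 + 10 * ability - 5 * gender + rng.normal(0.0, 2.0, n)

def fit_predict(X, y):
    """Ordinary least squares with an intercept; returns in-sample predictions."""
    X1 = np.column_stack([np.ones(len(y)), X])
    w, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return X1 @ w

# Train WITHOUT the gender column. The model recovers the bias via the proxy.
pred = fit_predict(np.column_stack([ability, proxy]), biased_label)
gap = pred[gender == 1].mean() - pred[gender == 0].mean()
print(f"Predicted grade gap (women minus men), gender hidden: {gap:.1f}")
# Prints a substantially negative gap: the proxy lets the model reconstruct
# most of the bias even though gender was never given to it.
```

The Explorable walks through this interactively, including the surprising cases where hiding the attribute makes things worse.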
In the coming months, we plan to share more Explorables on other fairness issues (how do feedback loops affect the biases of an AI system?), on interpretability (why did the AI system decide to do that?), and on privacy (what does privacy mean in the context of an AI system?).
People and AI Research (PAIR) is committed to making machine learning more participatory, and we believe that Explorables will help expand the conversation around machine learning and make it more inclusive. You can find more updates about Explorables and our other work at the (new) PAIR Medium channel.