Behind the AI Curtain - Designing for Trust in Machine Learning Products

Whether you're launching a voice assistant or predictive recommendations, designing for trust matters more than ever for the user experience of products powered by machine learning and AI.

Case Study

Abstract

When startups first launch, they can make headlines by applying cutting-edge artificial intelligence (AI) - but convincing users to trust that AI is often another story. Many teams also lack a process for integrating future AI development into their product roadmaps.

This session covers three key principles for how design and data science teams can work together to build greater user trust. A case study of a design and data science team partnering to redesign predictive analytics scores powered by machine learning will illustrate those principles in practice. Along the way, we'll draw on lessons from data visualization, user research, and content strategy, and see why each is especially relevant to designing machine learning products.

Audience background

How will they benefit?
The audience members who would benefit most from this session are content strategists, product managers, UX designers, and developers who work with data scientists or expect to work on machine learning products soon. Ideally, attendees have some familiarity with how machine learning is used in popular consumer apps (e.g. Netflix recommendations).

Benefits of participating

1. Learn how to create an effective process for designers, developers, and data scientists to collaborate
2. Learn three key principles for building greater trust in machine learning
3. See how those principles were put into practice through a case study

Materials provided

Presentation
Case study materials

Process

I'll start with a brief presentation, then introduce the case study. Next, we'll divide attendees into small working groups to work through a few exercises specific to the case study. Finally, groups will present back to the room on what they learned from prototyping and testing a voice assistant.

Detailed timetable

00:00 - 00:20: presentation
00:20 - 00:25: introduce the case study, divide attendees into small working groups
00:25 - 01:00: teams work through a few exercises specific to the case study
01:00 - 01:15: groups present back on what they learned

Outputs

I'll send follow-up content after the workshop on key lessons learned and resources for learning more:
1. Less is often more when it comes to visualizing content
2. When testing content, make sure you're asking the right questions
3. Writing well matters a lot, especially for conversational UI products (e.g. voice assistants). Tactically, you can design conversations with the same design tools you already use; you just have to adapt your process.

History

Past speaking engagements include:
- StarsConf: "Bias, Uncovered" workshop and "Behind the AI Curtain" talk in Santiago, Chile
- PAPIs International Conference on Predictive Applications and APIs: "Behind the AI Curtain" talk in Boston, MA
- Midwest UX: "Behind the AI Curtain" talk in Cincinnati, OH
- SeleniumConf: "Zero to Test: How to Run Your First Beta Testing Program" talk in Berlin, Germany
- Grace Hopper 2017: "The Art & Science of Product Management" panel in Orlando, FL
- Refresh DC: "Zero to Test: How to Run Your First Beta Testing Program" talk in Washington, DC
- Scenic City Summit: "Behind the AI Curtain: Designing for User Trust in Data Science" talk in Chattanooga, TN
- UXDC 2017: "Behind the AI Curtain: Designing for User Trust in Data Science" talk in Washington, DC
- InnovatorsBox: "Beyond the Box: Government" panel in Washington, DC
- ProductTank 2017: "Gathering and Prioritizing Customer Feedback" panel in Washington, DC
- ULL Conf 2016: "Bias, Uncovered" workshop in Killarney, Ireland
- AIGA DotGovDesign 2016: "Design Sprints for the Real World" talk in Washington, DC

Presenters

1. Crystal Yan