Design
February 22, 2022

Designing for failure, not success with AI/ML

Many companies are now designing AI systems that aim to help people. These can be digital assistants like Apple’s Siri and Google’s Assistant, chatbots on a bank’s website, or even the chatty characters in video games.

These are designed to do things for us — give us advice, recommend products, get us appointments with doctors, and so on. But how can we make sure these AI products don’t make mistakes that will end up costing users?

One useful heuristic is to focus on avoiding errors by the people using the product rather than errors by the AI itself. It’s a simple approach called error avoidance. So how can we use error avoidance to design better AI products?

If you’re designing software for people, it’s a good idea to think about how they’ll be using it. We tend to focus on the success of the AI, but that’s not necessarily the same as a success for the people using it.

Consider Google search: A good search engine will return high-quality results, but a bad interface can make it tricky to find those results. You could end up scrolling through pages and pages of search results — or clicking on advertisements — before you find something useful.

Image: Google speech-to-text search results page

From an AI point of view, a good search engine is one that sends users to relevant websites as fast as possible. But from the user’s perspective, that might not be ideal. What if the website you’re sent to is full of malware? Designing for users means designing around their mistakes and misconceptions. There are two main ways of doing this:

  1. Provide guidance: Sometimes people simply get confused about what they want. In this case, we can show them how to fix their search terms or narrow down their options so they don’t waste time looking at irrelevant information.
  2. Redirect them: Other times, people may enter search terms that aren’t completely wrong but still aren’t very useful, so we can point them toward better alternatives. (A small sketch of both approaches follows this list.)
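
To make the two approaches concrete, here is a minimal sketch in Python. Everything in it, the vocabulary, the typo cutoff, the related-terms map, and the `handle_query` function itself, is a made-up illustration rather than how any real search engine works.

```python
# A toy sketch of "guide" vs. "redirect" for search queries.
# KNOWN_TERMS, RELATED_TERMS, and the cutoff are illustrative assumptions.
import difflib

KNOWN_TERMS = ["laptop", "tablet", "charger", "headphones"]
RELATED_TERMS = {"netbook": ["laptop", "tablet"]}  # hypothetical mapping

def handle_query(query: str) -> str:
    for token in query.lower().split():
        if token in KNOWN_TERMS:
            return f"search: {query}"            # nothing to fix, just search
        # Guide: the term looks like a typo, so suggest a correction.
        close = difflib.get_close_matches(token, KNOWN_TERMS, n=1, cutoff=0.8)
        if close:
            return f"Did you mean '{close[0]}'?"
        # Redirect: the term is valid but not useful here, so offer alternatives.
        if token in RELATED_TERMS:
            return f"Try searching for: {', '.join(RELATED_TERMS[token])}"
    return f"search: {query}"                     # fall back to a normal search

print(handle_query("latop"))    # -> Did you mean 'laptop'?
print(handle_query("netbook"))  # -> Try searching for: laptop, tablet
```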

If you design an algorithm to work autonomously, it’s easy to make mistakes and create something that is extremely difficult to use. However, if you design an algorithm with the intention of making it easy for a human to step in and correct any mistakes, the whole system becomes more reliable.
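
One common way to make that correction step explicit is a confidence threshold: the system only acts on its own when it is fairly sure, and hands everything else to a person. The sketch below shows one way such a hand-off could look; the model interface, the 0.9 threshold, and the `ReviewQueue` are all hypothetical.

```python
# A minimal human-in-the-loop sketch: low-confidence predictions are routed
# to a person instead of being acted on automatically. All names here are
# illustrative assumptions.
class ReviewQueue:
    def __init__(self):
        self.items = []

    def submit(self, item, prediction, confidence):
        # A human reviewer can later accept, correct, or reject the prediction.
        self.items.append((item, prediction, confidence))

def decide(item, model, queue, threshold=0.9):
    prediction, confidence = model(item)   # assumed interface: returns (label, score)
    if confidence >= threshold:
        return prediction                  # confident enough to act automatically
    queue.submit(item, prediction, confidence)
    return None                            # defer to a human instead of guessing
```

The exact threshold matters less than the existence of the second path: there is always a place where a person can step in and correct the system.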

Our goal should not be to design AI to succeed; it should be to design AI to avoid errors by the people who work with it.

AI is designed for humans, not for machines. It will become part of our daily lives — from shopping online to using a digital bank service — so if we’re going to design it, we need to make sure it works for everyone.

Related reading: Designing the User Experience of ML Products (Three Principles: Expectations, Errors, and Trust), towardsdatascience.com

Here are some things I feel that we should consider when designing AI:

It should be inclusive.

People from all walks of life should be able to use it without having special training. For example, if you’re designing an app for children with special needs, make sure it doesn’t rely on verbal input.

It should be accessible.

If people can’t see or hear your AI, they won’t use it. For example, make sure text-to-speech voices are gender-neutral and sound natural; don’t assume everyone wants their phone to talk to them like Siri or Alexa.

It should be transparent.

People should know when they are interacting with an AI and have some idea of why it behaves the way it does.

It’s all about optimization.

AI systems are optimized by humans to interact with people in ways that maximize their value to those people. The best example of this is an autonomous vehicle. An autonomous vehicle’s objective function is to minimize the number of accidents and fatalities involving its passengers, so the safety of the passengers is prioritized over that of others involved in accidents: pedestrians, bikers, even animals.

Image: an autonomous Tesla car (GIF)
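
Written out as code, an objective function of that kind is just a weighted cost that a planner tries to minimize. The sketch below is purely illustrative; the terms, the weights, and even the idea that such weights are set this explicitly are assumptions, and choosing them is an ethical design decision, not a technical detail.

```python
# A purely illustrative objective function for choosing between trajectories.
# The harm probabilities and weights are made-up placeholders.
def trajectory_cost(p_passenger_harm, p_pedestrian_harm, p_property_damage,
                    w_passenger=10.0, w_pedestrian=8.0, w_property=1.0):
    # Lower cost = preferred trajectory; the weights encode whose safety
    # the system prioritizes.
    return (w_passenger * p_passenger_harm
            + w_pedestrian * p_pedestrian_harm
            + w_property * p_property_damage)
```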

Treating AI as a component in a system, rather than the whole, allows you to design it to account for human error and misunderstanding. There are three ways that you can do this:

  1. Accept that people will make mistakes
  2. Guide people towards best practices
  3. Give people tools to recover from mistakes (see the sketch after this list)
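
The third point is the one most often skipped. One way to bake recovery in, sketched below with hypothetical names, is to make every automated action reversible by recording an inverse alongside it.

```python
# A minimal sketch of "give people tools to recover": every automated action
# is recorded together with an inverse, so a person can undo it later.
class UndoableActions:
    def __init__(self):
        self._history = []

    def perform(self, action, undo):
        action()                     # do the thing (e.g. auto-archive an email)
        self._history.append(undo)   # remember how to reverse it

    def undo_last(self):
        if self._history:
            self._history.pop()()    # run the most recent inverse action
```

An “Undo” button backed by something like this often does more for user trust than a few extra points of model accuracy.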

The goal of designing for AI is not to eliminate errors, but to reduce errors on the part of the people working with the system. This isn’t just about preventing mistakes through good design practices. It’s also about anticipating how people will respond when things go wrong.

This is one of the most important principles in the paper “Human-Centred Machine Learning” (PDF).

Humans make mistakes, especially when working with machines. So instead of designing an AI system that relies on human-like intelligence, such as a human-like understanding of language, it’s better to design for errors and misunderstandings.

So the first step to designing an intelligent system is:

Keep it simple. Don’t overthink it, don’t over-engineer it.

The design process becomes complex once you require that your system be both smart and safe (as we saw with self-driving cars). That said, one way to find out how your system handles errors is to test it against adversarial scenarios, repeatedly. Just as humans offer different points of attack and failure, so does the design.
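
In practice that can be as simple as a test suite of hostile or malformed inputs that the system must survive without crashing or behaving unsafely. The sketch below reuses the hypothetical `handle_query` function from the earlier search example; the module name, inputs, and assertions are only examples of the kind of adversarial cases worth covering.

```python
# Adversarial-style tests: feed malformed or hostile inputs and assert that
# the system degrades gracefully. `handle_query` is the hypothetical function
# from the earlier sketch.
import pytest
from search_guidance import handle_query  # assumed module name

ADVERSARIAL_INPUTS = [
    "",                     # empty query
    " " * 1000,             # whitespace flood
    "💣" * 50,              # unexpected characters
    "DROP TABLE users;",    # injection-shaped input
]

@pytest.mark.parametrize("query", ADVERSARIAL_INPUTS)
def test_degrades_gracefully(query):
    result = handle_query(query)
    assert isinstance(result, str)    # it always gives some answer
    assert len(result) < 2000         # and never echoes back something unbounded
```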

In order to make the best use of the technology, it is important that we design our systems with failure in mind. An AI/ML-based product requires a certain level of trust. The users trust the product to perform certain functions, and that trust should be supported by design. At its very core, user experience (UX) design is based on trust: previously, we’ve trusted products to suit our needs; now we are trusting systems.

For further reading on designing for AI/ML, see the resources linked above.

Hi, I’m Sid, a UX/UI designer, blogger and mentor to designers. I work with teams who are passionate about improving the end-user experience. Currently, I am working on AI/ML solutions at PI.EXCHANGE where our goal is to make AI accessible to everyone.

You can learn more about me on Twitter and LinkedIn.