By Wojciech Gryc on March 25, 2021
Building products that use machine learning or artificial intelligence comes with significant challenges. It’s a different process than building traditional products.
AI-driven products are not deterministic – they make mistakes, and they behave differently in seemingly similar situations, which is something users are not typically comfortable with. They might also make recommendations that a user disagrees with or didn’t expect. Not only is this a risk for the user – they might choose to ignore all the AI features as a result – but it could lead to experiences that make the user decide against using the product again.
In this article, we explore three major types of ML-driven products and provide five design considerations for ML product managers.
There are three types of ML-driven products: those that do magic, those that do analysis, and those that automate. It’s critical to know the difference between the three.
Those that do magic are ML-driven products that lack clear requirements and whose product owners simply say, “And then we apply AI and it gives us a solution”. These products assume that the ML will generate magical, special insights that no human can generate. Product managers building these products assume they’ll simply pass data to a system and get insights, ideas, and results.
This usually means the product team isn't familiar with what AI or ML is capable of and is hoping for a miracle. If this is your product, stop right now and don't build it. This is where AI products fail miserably, because the problem is too vague and ill-defined to have a realistic solution. This is the perfect recipe for vaporware.
Next, we have products that do analysis. These are products that help people make decisions, but not products that automate or manage things. This is similar to statistical software where you put data in, and you get analysis and results out. With these products, it’s still up to the user to make a decision or interpret an insight.
Examples of such products include...
Analysis products come with their own design challenges. Since they are meant to aid a human in decision-making or insights generation, they need to have a way to communicate why they are making certain recommendations. They need to enable the user to explore analysis and dive into the details of recommendations to address concerns or nuances of their questions. In other words: explaining results and enabling exploration are critical components of an analysis product.
The last bucket of ML-driven products focuses on automating experiences without any human intervention or decision-making. The AI works on its own to power specific product features or experiences. The end result is a user experience that doesn’t require user input or decision-making, and one that leads to an optimal experience for the user.
Automation approaches don’t require explainability to the extent analysis approaches do. Oftentimes explainability makes a self-sufficient product experience better, but it isn’t required to enable the primary experience. In other words, the benefit is secondary: an improved user experience, faster decision-making, or another incremental benefit over a non-ML approach. What’s critical here, however, is that we are improving on a process that could technically be done without ML – it would just be a less-than-ideal experience.
If you’re building an ML-driven product, regardless of it being an analysis product or automation product, there are a few things to consider.
1. Is the product trying to do magic?
Returning to the start of our article, a clear warning sign with ML-driven products is if they are expected to do something we can’t explain or can’t describe. A common example here is when ML products are expected to “generate insights” when data is “passed to them”, but when no one can explain what a great insight could look like or what it would elucidate.
A more nuanced example of “magic” is unrealistic expectations: the product is expected to be 100% accurate, to never make errors, or to produce results that will never be questioned. Similarly, if your algorithms need to be significantly better than whatever is the state of the art today, and you don’t have staff committed to doing research, then this could be a severe case of inflated expectations.
2. How does the product learn?
ML systems are able to learn over time and improve as a result. Your product needs to have feedback loops built into it so that it can improve over time. If you can’t point to where this happens in your product, then you’re not taking advantage of the data being collected and are likely not using AI.
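A feedback loop, at its simplest, pairs each prediction the product makes with the user’s eventual confirmation or correction, so that fresh labeled examples accumulate for retraining. The sketch below is a minimal, hypothetical illustration of that idea – the class and field names are assumptions for this example, not part of any particular product or library.

```python
from dataclasses import dataclass, field


@dataclass
class FeedbackLoop:
    """Hypothetical sketch: pair each prediction with eventual user
    feedback so the model can later be retrained on fresh labels."""
    records: list = field(default_factory=list)

    def log_prediction(self, features, prediction):
        # Record what the model saw and what it predicted; the true
        # label is unknown until the user responds.
        record_id = len(self.records)
        self.records.append(
            {"features": features, "prediction": prediction, "label": None}
        )
        return record_id

    def log_feedback(self, record_id, true_label):
        # The user confirmed or corrected the prediction.
        self.records[record_id]["label"] = true_label

    def training_examples(self):
        # Only records with user feedback are usable for retraining.
        return [
            (r["features"], r["label"])
            for r in self.records
            if r["label"] is not None
        ]
```

The key design point is that predictions without feedback are logged but excluded from retraining, so the loop only learns from outcomes the user actually verified.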
3. What is the cost of an error?
In the case of fraud detection, the cost can be quite high... But it’s low enough that we trust algorithms to do it. In the case of medical diagnostics, the cost of error is high enough that humans are still required to review any final decisions or recommendations. Knowing the costs associated with an error is critical in determining how much you are willing to “outsource” to an intelligent algorithm.
It’s also important to understand which errors are costly and which are not. For example, a false positive (i.e., an error where we incorrectly say ‘yes’) in medical diagnostics could lead to further tests, so this could be an acceptable error. A false negative (i.e., where we incorrectly say ‘no’) could mean a person with a disease is classified as healthy and the disease goes undetected. In this case, a false negative error is more important to avoid than a false positive error.
4. What is the minimum algorithmic performance?
Related to the above, it’s important to know how this cost/benefit structure impacts your ability to use or trust an algorithm. The algorithmic threshold for product recommenders is low (we don’t buy most of the products we are recommended!), but that’s because the cost of error is also low (it costs almost nothing to make a recommendation) so it’s fine to automate this. The cost of error is unacceptable with medical diagnostics, so we still don’t have a good way to outsource this fully.
As you design your ML-driven product, ask yourself how well your algorithms need to perform – and whether it’s realistic to actually build a model that does as well as you need it to perform.
We often meet product teams working on ML-driven products that require such a level of accuracy or model performance that it’s actually impossible to delegate the product experience to AI and ML algorithms. You want to avoid this mistake at all costs.
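One way to reason about this cost/benefit threshold is a simple break-even check: automation is viable only if the value of each automated decision exceeds the expected cost of the errors the model will make. This is a rough sketch with made-up numbers for the recommender and diagnostics examples above; the function name and parameters are assumptions for illustration.

```python
def automation_viable(model_accuracy, cost_per_error, value_per_decision):
    """Rough break-even test: is the expected value of an automated
    decision still positive after accounting for expected error cost?"""
    expected_cost = (1 - model_accuracy) * cost_per_error
    return value_per_decision > expected_cost


# Product recommender (illustrative): low accuracy, but errors are
# nearly free, so automation still pays off.
recommender_ok = automation_viable(
    model_accuracy=0.10, cost_per_error=0.01, value_per_decision=0.05
)

# Medical diagnosis (illustrative): high accuracy, but each error is
# so costly that full automation doesn't clear the bar.
diagnosis_ok = automation_viable(
    model_accuracy=0.95, cost_per_error=1_000_000, value_per_decision=500
)
```

The point of the sketch is that the viability threshold is set jointly by accuracy and error cost: a 10%-accurate recommender can clear it while a 95%-accurate diagnostic model cannot.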
5. How does the algorithm get communicated to the end user?
While UX in AI is a complex subfield, a key question for any ML product manager is how the algorithm’s decisions get communicated, and whether they need to be explained to the user.
To use our earlier examples, something like a fraud detection model typically doesn’t need to explain itself as the user has little recourse during a transaction anyway; they will have to call a bank or customer service phone line either way. The same goes for product recommendations, where the cost of error is so low that users rarely care.
Contrast this to an analytics product where an algorithm tells you that gender is correlated with fraud – this needs to be explained, and a user won’t be able to do much without an explanation and clear illustrations of why the algorithm reached this conclusion.
Algorithmic decisions come with scrutiny that human decisions often avoid. Users interacting with a system and getting an unexpected or negative result will want to know what happened or why. The UX manager or product manager on the team needs to consider how these results will get communicated or clarified to users.
This is by no means a comprehensive set of requirements, but a good starting point to think about when building ML-driven products. Such products are exciting as they represent a completely new way of designing and building things. They come with their own challenges but are well worth the effort, given the promise they hold.