Discover the benefits of interpretable machine learning


Tech companies have been rapidly developing machine learning models and algorithms in recent years. Those familiar with this technology probably remember a time when, for example, bank staff and loan officers were the ones who ultimately decided whether you were approved for a loan. Nowadays, models are trained to handle such decisions at massive scale.

It is important to understand how a given model or algorithm works and why it makes certain predictions. The first chapter of Interpretable Machine Learning with Python, written by data scientist Serg Masís, discusses interpretable ML, or the ability to interpret ML models and find meaning in the patterns they learn.

The importance of interpretability and explainability in ML

To show that this is not just theory, the chapter goes on to describe use cases where interpretability is not only applicable but necessary. For example, a climate model can teach a meteorologist a great deal if it is easy to interpret and can serve as a source of scientific knowledge. In another scenario, involving an autonomous vehicle, the algorithm may have points of failure. It must therefore be debuggable so that developers can address those failures. Only then can it be considered reliable and safe.

This chapter clarifies that interpretability and explainability in ML are related concepts, but explainability goes further: it requires that the inner workings of a model can be explained in user-friendly terms.
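To make the distinction concrete, here is a minimal sketch in Python, the language the book uses. It is not an excerpt from the book: the feature names, data-generating rule, and loan-approval framing are invented for illustration. It fits a logistic regression, an intrinsically interpretable model, so its reasoning can be read directly from its coefficients.

```python
# Minimal sketch, not from the book: an intrinsically interpretable
# loan-approval model. Feature names and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Hypothetical applicant features: income (in $1000s), debt-to-income
# ratio, and years employed.
feature_names = ["income", "debt_ratio", "years_employed"]
X = np.column_stack([
    rng.normal(60, 15, n),     # income
    rng.uniform(0.0, 1.0, n),  # debt ratio
    rng.integers(0, 30, n),    # years employed
])

# Synthetic approval labels from a noisy linear rule, just so there is
# something to fit.
y = (0.05 * X[:, 0] - 4.0 * X[:, 1] + 0.1 * X[:, 2]
     + rng.normal(0.0, 1.0, n)) > 1.0

model = LogisticRegression(max_iter=1000).fit(X, y)

# Interpretation: each coefficient is the change in the log-odds of
# approval per unit increase in that feature, something a loan officer
# can sanity-check against domain knowledge.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>15}: {coef:+.3f}")
```

A black-box model such as a deep neural network would not reveal its reasoning this way; making such models understandable after the fact is where explainability comes in.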


Interpretable ML is good for business

These concepts add value and practical benefits when companies apply them. For starters, interpretability can lead to better decision-making, because when a model is tested in the real world, those who developed it can observe its strengths and weaknesses. The chapter gives a plausible example in which a self-driving car mistakes snow for pavement and crashes into a cliff. Knowing exactly why the car's algorithm mistook snow for a road can lead to improvements, as developers can change the algorithm's assumptions to avoid similar dangerous situations.

Businesses also want to retain public trust and maintain a good reputation. For a relevant example, the chapter uses Facebook’s model for maximizing digital ad revenue, which has inadvertently shown users offensive content or misinformation in recent years. The solution would be for Facebook to examine why its model displays this content so often, and then commit to reducing it. Interpretability plays a crucial role here.

In the next chapter, Masís expresses his belief that interpretable ML will lead to more reliable and trustworthy ML models and algorithms, which will in turn enable companies to gain public trust and become more profitable.

Click here to download chapter 1.
