UC Berkeley researchers present ‘imodels’: a Python package for fitting interpretable machine learning models


Recent developments in machine learning have produced increasingly complex predictive models, usually at the expense of interpretability. Interpretability is frequently required, especially in high-stakes applications in medicine, biology, and political science. Interpretable models also facilitate other tasks, including detecting errors, incorporating domain knowledge, and speeding up inference.

Despite recent breakthroughs in formulating and fitting interpretable models, implementations are often difficult to locate, use, and compare. imodels fills this gap by providing a unified interface and implementation for a wide range of state-of-the-art interpretable modeling techniques, particularly rule-based methods. In short, imodels is a Python package for concise, transparent, and accurate predictive modeling. It offers users a simple way to fit and use state-of-the-art interpretable models, all compatible with scikit-learn (Pedregosa et al., 2011). These models can frequently replace black-box models while improving interpretability and computational efficiency, without compromising predictive accuracy.

What’s new in the field of interpretability?

Interpretable models have a structure that makes them easy to inspect and understand. The figure below illustrates four possible forms an interpretable model in the imodels package can take.

There are many approaches to fitting models of each of these forms, each prioritizing different objectives. Greedy techniques, such as CART, emphasize efficiency, while global optimization methods may focus on finding the smallest possible model. The imodels package implements RuleFit, Bayesian Rule Lists, FIGS, Optimal Rule Lists, and various other approaches.

Source: https://bair.berkeley.edu/blog/2022/02/02/imodels/

How to use imodels?

imodels is easy to use. It is simple to install (pip install imodels) and can then be used in the same way as other scikit-learn models: call the fit and predict methods to fit a classifier or regressor and make predictions, as sketched below.

Source: https://bair.berkeley.edu/blog/2022/02/02/imodels/
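As a hedged illustration of that workflow, the sketch below fits one of the package's rule-based estimators on synthetic data. The choice of RuleFitClassifier, the synthetic dataset, and the printed output are illustrative assumptions, not the exact example from the blog post.

```python
# Minimal sketch of the scikit-learn-style imodels workflow described above.
# Assumes `pip install imodels`; RuleFitClassifier and the synthetic data are
# illustrative choices -- imodels estimators share the same fit/predict API.
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

from imodels import RuleFitClassifier

# Toy binary-classification data standing in for a real tabular dataset.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RuleFitClassifier()        # rule-based, scikit-learn compatible
model.fit(X_train, y_train)        # same fit(...) call as any sklearn estimator
preds = model.predict(X_test)      # hard class predictions
probs = model.predict_proba(X_test)[:, 1]  # predicted probabilities

print("test AUC:", round(roc_auc_score(y_test, probs), 3))
print(model)  # many imodels models print their learned rules when displayed
```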

An example of interpretable modeling

As a concrete example of interpretable modeling, consider the diabetes classification dataset, which records eight risk factors and uses them to predict the onset of diabetes within the next five years. After fitting several models, it was found that excellent test performance could be achieved with just a few rules.

For example, the figure below illustrates a model fitted using the FIGS approach that, despite being extremely simple, obtains a test AUC of 0.820. In this model each factor contributes independently of the others, and the contributions of the three key features are summed to produce a risk score for developing diabetes (higher means riskier). Unlike a black-box model, this one is simple to understand, quick to compute, and easy to apply when making predictions. A sketch of fitting such a compact model appears below.

Source: https://bair.berkeley.edu/blog/2022/02/02/imodels/
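The following sketch shows how a compact FIGS model of this kind might be fit. The breast-cancer data, the max_rules budget of 4, and the printed metrics are assumptions for illustration, not the exact experiment from the blog post.

```python
# Sketch of fitting a small FIGS model with an explicit complexity budget.
# The breast-cancer data is only a stand-in for the diabetes dataset discussed
# above, and max_rules=4 is an assumed budget, not the blog's setting.
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

from imodels import FIGSClassifier

data = load_breast_cancer()
X, y, feature_names = data.data, data.target, list(data.feature_names)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# FIGS fits a sum of small trees; capping the total number of rules keeps
# the fitted model small enough to read in full.
model = FIGSClassifier(max_rules=4)
model.fit(X_train, y_train, feature_names=feature_names)

probs = model.predict_proba(X_test)[:, 1]
print("test AUC:", round(roc_auc_score(y_test, probs), 3))
print(model)  # displays the handful of learned splits and their contributions
```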

Conclusion

Overall, interpretable modeling offers a viable alternative to traditional black-box modeling, and in many circumstances can provide significant gains in efficiency and transparency without sacrificing performance.

Article: https://joss.theoj.org/papers/10.21105/joss.03192.pdf

GitHub: https://github.com/csinva/imodels

Reference: https://bair.berkeley.edu/blog/2022/02/02/imodels/
