
Title: Simpler Machine Learning Models for a Complicated World

Abstract:

While the trend in machine learning has been toward ever more complicated (black-box) models, such models have shown no performance advantage on many real-world datasets, and they are more difficult to troubleshoot and use. For these datasets, simpler models (sometimes small enough to fit on an index card) can be just as accurate. However, designing interpretable models for practical applications is challenging for at least two reasons: 1) Many people do not believe that simple models can be as accurate as complex black-box models, so even persuading someone to try interpretable machine learning can be a challenge. 2) Transparent models have transparent flaws. In other words, when a simple and accurate model is found, it may not align with domain expertise and may need to be altered, leading to an "interaction bottleneck" in which domain experts must interact with machine learning algorithms.

In this talk, I will present a new paradigm for machine learning that gives us insight into the existence of simpler models for a large class of real-world problems and resolves the interaction bottleneck. In this paradigm, machine learning algorithms do not focus on finding a single optimal model; instead, they capture the full collection of good (i.e., low-loss) models, which we call "the Rashomon set." Finding Rashomon sets is extremely difficult computationally, but the benefits are massive. I will present TreeFARMS, the first algorithm for finding Rashomon sets for a nontrivial function class (sparse decision trees). TreeFARMS, along with its user interface TimberTrek, mitigates the interaction bottleneck for users. TreeFARMS also allows users to incorporate constraints (such as fairness constraints) easily.
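For readers unfamiliar with the term, one common formalization (paraphrasing the Semenova, Rudin, and Parr papers referenced below; the notation here is illustrative rather than a quote from the talk) is

    R(ε, f*, F) = { f ∈ F : L(f) ≤ L(f*) + ε },

that is, the Rashomon set contains every model f in a class F whose loss L(f) is within a tolerance ε of the loss of a best-in-class model f*.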

I will also present a "path," that is, a mathematical explanation, for the existence of simpler yet accurate models and the circumstances under which they arise. In particular, problems where the outcome is uncertain tend to admit large Rashomon sets and simpler models. Hence, the Rashomon set can shed light on the existence of simpler models for many real-world high-stakes decisions. This conclusion has significant policy implications, as it undermines the main reason for using black box models for decisions that deeply affect people's lives.

I will conclude the talk by providing an overview of applications of interpretable machine learning within my lab, including applications to neurology, materials science, mammography, visualization of genetic data, the study of how cannabis affects the immune system of HIV patients, heart monitoring with wearable devices, and music generation.

This is joint work with my colleagues Margo Seltzer and Ron Parr, as well as our exceptional students Chudi Zhong, Lesia Semenova, Jiachang Liu, Rui Xin, Zhi Chen, and Harry Chen. It builds upon the work of many past students and collaborators over the last decade.

Rui Xin, Chudi Zhong, Zhi Chen, Takuya Takagi, Margo Seltzer, Cynthia Rudin. Exploring the Whole Rashomon Set of Sparse Decision Trees. NeurIPS (oral), 2022.

https://arxiv.org/abs/2209.08040

Zijie J. Wang, Chudi Zhong, Rui Xin, Takuya Takagi, Zhi Chen, Duen Horng Chau, Cynthia Rudin, Margo Seltzer. TimberTrek: Exploring and Curating Sparse Decision Trees with Interactive Visualization. IEEE VIS, 2022.

https://poloclub.github.io/timbertrek/

Lesia Semenova, Cynthia Rudin, and Ron Parr. On the Existence of Simpler Machine Learning Models. ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT), 2022.

https://arxiv.org/abs/1908.01755

Lesia Semenova, Harry Chen, Ronald Parr, Cynthia Rudin. A Path to Simpler Models Starts With Noise. NeurIPS, 2023.

https://arxiv.org/abs/2310.19726

Bio:

Cynthia Rudin is the Earl D. McLean, Jr. Professor of Computer Science, Electrical and Computer Engineering, Statistical Science, Mathematics, and Biostatistics & Bioinformatics at Duke University. She directs the Interpretable Machine Learning Lab, whose goal is to design predictive models that people can understand. Her lab applies machine learning in many areas, such as healthcare, criminal justice, and energy reliability.

Prof. Rudin holds an undergraduate degree from the University at Buffalo and a PhD from Princeton University. She is the recipient of the 2022 Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity from the Association for the Advancement of Artificial Intelligence. She is also a three-time winner of the INFORMS Innovative Applications in Analytics Award and a 2022 Guggenheim Fellow. She received the 2023 INFORMS Best Data Mining Paper Award and second prize in the 2023 Bell Labs Prize Competition. She is a fellow of the American Statistical Association, the Institute of Mathematical Statistics, and the Association for the Advancement of Artificial Intelligence. She is a member of the National Academies of Sciences, Engineering, and Medicine committee on Facial Recognition Technology and of the US National AI Advisory Committee Subcommittee on AI and Law Enforcement (NAIAC-LE). Her work and opinions have been featured in news outlets including The New York Times, The Washington Post, The Wall Street Journal, The Boston Globe, Businessweek, NPR, The Hill, CNN, and the Raleigh News & Observer.


Contributions to and/or sponsorship of any event do not constitute departmental or institutional endorsement of the specific program, speakers, or views presented.