AI/ML Evaluation Frameworks

How can we ensure that a machine learning algorithm is doing what we want it to do, at the level of performance we expect? We develop new methodologies for the rigorous and safe evaluation of machine learning in both randomized and observational settings, under minimal assumptions.

Experimental Evaluation of Individualized Treatment Rules

K. Imai, M. L. Li

Journal of the American Statistical Association

2023

Statistical Performance Guarantee for Subgroup Identification with Generic Machine Learning

K. Imai, M. L. Li

R&R at Biometrika

2023

Statistical Inference for Heterogeneous Treatment Effects in Randomized Experiments

K. Imai, M. L. Li

Journal of Business and Economic Statistics

2024

Pricing for Heterogeneous Products: Analytics for Ticket Reselling

M. Alley, M. Biggs, R. Hariss, C. Herrmann, M. L. Li, G. Perakis

Manufacturing & Service Operations Management

2019

Robust Inference for Machine Learning with Observational Data

D. Bertsimas, K. Imai, M. L. Li

R&R at Journal of Machine Learning Research

2022

©2025 by Michael Lingzhi Li

Contact

mili at hbs dot edu (Academic)

michaelliling2 at gmail dot com (Personal)

Technology & Operations Management,
Harvard Business School

Morgan Hall, Soldiers Field
Boston, MA 02163