“Synthetic Combinations: A Causal Inference Framework for Combinatorial Interventions,” to appear in Proceedings of the 37th Conference on Neural Information Processing Systems (NeurIPS, 2023).
(with Abhineet Agarwal and Anish Agarwal)
We study how to learn the effects of combinations of treatments across heterogeneous units. We propose a new estimator that exploits latent similarity between units' potential outcomes (an approximate factor structure) together with restrictions on how treatments interact: for example, two treatment combinations with large overlap should have similar effects. We give conditions on the design under which the estimator is consistent and show that they are satisfied by a particular class of randomized experimental designs. Both in theory and in simulations, this yields a substantial reduction in error relative to existing methods.
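A rough sketch of the kind of structure involved (the notation here is illustrative, not the paper's): writing $y_n(\mathcal{A})$ for the potential outcome of unit $n$ under the treatment combination $\mathcal{A}$, an approximate factor model posits
$$ y_n(\mathcal{A}) \;\approx\; \langle u_n, v_{\mathcal{A}} \rangle, \qquad u_n,\, v_{\mathcal{A}} \in \mathbb{R}^r, $$
with small rank $r$, and the interaction restriction can be read as asking that $v_{\mathcal{A}}$ and $v_{\mathcal{B}}$ be close whenever the combinations $\mathcal{A}$ and $\mathcal{B}$ overlap heavily.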
“Can Calibration and Equal Error Rates Be Reconciled?” Proceedings of the 2nd Symposium on Foundations of Responsible Computing (FORC, 2021).
(with Claire Lazar Reich)
We revisit a foundational result in algorithmic fairness concerning the incompatibility of equal error rates and group-wise calibration. Instead of treating these criteria as irreconcilable, we consider the information design problem of providing a calibrated risk score to a rational decision maker so that the resulting decisions have equal error rates across groups. We exactly characterize when such scores can be constructed, and devise a simple post-processing algorithm (based on convex programming) that maximizes the precision of the risk score under the necessary constraints. Finally, we successfully apply the post-processing to the COMPAS pre-trial risk assessment algorithm.
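To give a sense of why convex programming is the right tool here, the following is a minimal toy sketch, not the paper's construction (which produces a calibrated, coarsened score rather than a group-dependent decision rule): equal-error-rate constraints are linear in group-conditional acceptance probabilities, so maximizing accuracy subject to them is a small linear program. All data and names below are hypothetical.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_bins = 10                      # discretized risk-score bins
groups = [0, 1]

# Hypothetical inputs per group: distribution of the score bin given the
# true label, P(bin | y=1) and P(bin | y=0), together with base rates P(y=1).
p_bin_pos = {g: rng.dirichlet(np.ones(n_bins)) for g in groups}
p_bin_neg = {g: rng.dirichlet(np.ones(n_bins)) for g in groups}
base_rate = {0: 0.40, 1: 0.25}
group_weight = {0: 0.5, 1: 0.5}

# Post-processing variables: probability of taking the positive action
# in each score bin, allowed to depend on the group.
accept = {g: cp.Variable(n_bins, nonneg=True) for g in groups}

# True and false positive rates are linear in the acceptance probabilities.
tpr = {g: p_bin_pos[g] @ accept[g] for g in groups}
fpr = {g: p_bin_neg[g] @ accept[g] for g in groups}

# Expected accuracy of the induced decisions, averaged over groups.
accuracy = sum(
    group_weight[g]
    * (base_rate[g] * tpr[g] + (1 - base_rate[g]) * (1 - fpr[g]))
    for g in groups
)

constraints = [accept[g] <= 1 for g in groups]
# Equal error rates across groups: equal false positive and false negative rates.
constraints += [tpr[0] == tpr[1], fpr[0] == fpr[1]]

problem = cp.Problem(cp.Maximize(accuracy), constraints)
problem.solve()
print("accuracy:", round(problem.value, 3),
      "TPR:", round(float(tpr[0].value), 3),
      "FPR:", round(float(fpr[0].value), 3))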
“Localization, Convexity and Star Aggregation,” Proceedings of the 35th Conference on Neural Information Processing Systems (NeurIPS, 2021).
We study the method of offset Rademacher complexities, a powerful tool for deriving statistical guarantees for high-dimensional and non-parametric estimators. Using novel geometric arguments, we extend the technique to settings where the model class is non-convex and the loss is not strongly convex. As applications, we provide a sharp analysis of non-parametric logistic regression and of regression with the p-loss.
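For reference, the central quantity (in the notation standard in this literature) is the offset Rademacher complexity of a function class $\mathcal{F}$ with offset parameter $c > 0$,
$$ \mathfrak{R}_n^{\mathrm{off}}(\mathcal{F}; c) \;=\; \mathbb{E}\, \sup_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^{n} \Big[ \epsilon_i\, f(x_i) \;-\; c\, f(x_i)^2 \Big], $$
where $\epsilon_1, \dots, \epsilon_n$ are i.i.d. Rademacher signs; the negative quadratic offset term supplies localization automatically, which is what makes the resulting bounds sharp.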