Testing Regression Monotonicity in Econometric Models (Job Market Paper)
Monotonicity is a key qualitative prediction of a wide array of economic models derived via robust comparative statics. It is therefore important to have effective and practical econometric methods for testing this prediction in empirical analysis. This paper develops a general nonparametric framework for testing monotonicity of a regression function. Using this framework, a broad class of new tests is introduced, giving an empirical researcher substantial flexibility to incorporate ex ante information she might have. The paper also develops new methods for simulating critical values, which combine a bootstrap procedure with new selection algorithms. These methods yield tests that have correct asymptotic size and are asymptotically nonconservative. It is also shown how to obtain an adaptive rate-optimal test that achieves the best attainable rate of uniform consistency against models whose regression function has a Lipschitz-continuous first-order derivative and that automatically adapts to the unknown smoothness of the regression function. Simulations show that the power of the new tests in many cases significantly exceeds that of some prior tests, e.g. that of Ghosal, Sen, and van der Vaart (2000). An application of the developed procedures to the dataset of Ellison and Ellison (2011) finds some evidence of strategic entry deterrence in the pharmaceutical industry, where incumbents may use strategic investment to deter generic entry when their patents expire.
Adaptive Test of Conditional Moment Inequalities
In this paper, I construct a new test of conditional moment inequalities based on studentized kernel estimates of the moment functions. The test automatically adapts to the unknown smoothness of the moment functions, has uniformly correct asymptotic size, and is rate optimal against certain classes of alternatives. Some existing tests have nontrivial power against n^{-1/2}-local alternatives of a certain type, whereas my method allows for nontrivial testing only against (n/log n)^{-1/2}-local alternatives of this type. There exist, however, large classes of sequences of well-behaved alternatives against which the test developed in this paper is consistent and those existing tests are not.
IV Quantile Regression for Group-Level Treatments (with Brad Larsen and Christopher Palmer)
We propose a simple approach for estimating distributional effects of a group-level treatment when there are unobservable group-level components that may be correlated with the treatment. Standard quantile regression techniques are inconsistent in this setting. We use techniques from the empirical process literature to show the consistency of our estimator and derive its asymptotic distribution. Simulations confirm the superiority of this grouped instrumental variables quantile regression estimator over standard quantile regression. We illustrate the estimation approach with examples from industrial organization, labor, and urban economics.
Central Limit Theorems and Multiplier Bootstrap When p is Much Larger than n (with Victor Chernozhukov and Kengo Kato)
We derive a central limit theorem for the maximum of a sum of high-dimensional random vectors. Specifically, we establish conditions under which the distribution of the maximum is approximated by that of the maximum of a sum of Gaussian random vectors with the same covariance matrices as the original vectors. The key innovation of this result is that it applies even when the dimension of the random vectors (p) is large compared to the sample size (n); in fact, p can be much larger than n. We also show that the distribution of the maximum of a sum of random vectors with unknown covariance matrices can be consistently estimated by the distribution of the maximum of a sum of the conditional Gaussian random vectors obtained by multiplying the original vectors by i.i.d. Gaussian multipliers. This is the multiplier bootstrap procedure. Here too, p can be large or even much larger than n. These distributional approximations, either Gaussian or conditional Gaussian, yield a high-quality approximation to the distribution of the original maximum, often with approximation error decreasing polynomially in the sample size, and hence are of interest in many applications. We demonstrate how our central limit theorem and the multiplier bootstrap can be used for high-dimensional estimation, multiple hypothesis testing, and adaptive specification testing. All these results contain non-asymptotic bounds on approximation errors.
Gaussian Approximation of Suprema of Empirical Processes (with Victor Chernozhukov and Kengo Kato)
We develop a new direct approach to approximating suprema of general empirical processes by a sequence of suprema of Gaussian processes, without taking the route of approximating the empirical processes themselves in the sup-norm. We prove an abstract approximation theorem that is applicable to a wide variety of problems, primarily in statistics. In particular, the bound in the main approximation theorem is non-asymptotic and the theorem does not require uniform boundedness of the class of functions. The proof of the approximation theorem builds on a new coupling inequality for maxima of sums of random vectors, the proof of which depends on an effective use of Stein's method for normal approximation, and on new empirical process techniques. We study applications of this approximation theorem to local empirical processes and series estimation in nonparametric regression where the classes of functions change with the sample size and are not of Donsker type. Importantly, in those examples our new technique proves the Gaussian approximation for supremum-type statistics under notably weak regularity conditions, especially concerning the bandwidth and the number of series functions.
Comparison and Anti-Concentration Bounds for Maxima of Gaussian Random Vectors (with Victor Chernozhukov and Kengo Kato)
Slepian and Sudakov-Fernique type inequalities, which compare expectations of maxima of Gaussian random vectors under certain restrictions on the covariance matrices, play an important role in probability theory, especially in empirical process and extreme value theories. Here we give explicit comparisons of expectations of smooth functions and distribution functions of maxima of Gaussian random vectors without any restriction on the covariance matrices. We also establish an anti-concentration inequality for maxima of Gaussian random vectors, which yields a useful upper bound on the Lévy concentration function for the maximum of (not necessarily independent) Gaussian random variables. The bound is universal and applies to vectors with arbitrary covariance matrices. This anti-concentration inequality plays a crucial role in establishing bounds on the Kolmogorov distance between maxima of Gaussian random vectors. These results have immediate applications in mathematical statistics. As an example application, we establish a conditional multiplier central limit theorem for maxima of sums of independent random vectors where the dimension of the vectors is possibly much larger than the sample size.
Anti-Concentration of Gaussian Processes and Honest Adaptive Confidence Bands (with Victor Chernozhukov and Kengo Kato)
Modern construction of uniform confidence bands for nonparametric densities (and other functions) often relies on the Smirnov-Bickel-Rosenblatt (SBR) condition; see e.g. Giné and Nickl (2010). This condition requires the existence of a limit distribution of an extreme value type for a supremum of a studentized empirical process (equivalently, for a supremum of a Gaussian process with an equivalent covariance kernel). The principal contribution of this paper is to remove the need for the SBR condition. We show that a weaker sufficient condition is an anti-concentration inequality for the supremum of the approximating Gaussian process, and we derive such an inequality under weak assumptions. Our new result shows that the supremum does not concentrate too fast around its expected value. We then apply this result to derive a Gaussian bootstrap procedure for constructing honest and adaptive confidence bands for nonparametric density estimators, completely avoiding the need for the SBR condition. An essential advantage of our approach is that it applies even in those cases where the limit distribution does not exist (or is unknown). Furthermore, our approach provides an approximation to the exact finite sample distribution with an error that converges to zero at a fast, polynomial speed (with respect to the sample size). In sharp contrast, the Smirnov-Bickel-Rosenblatt approach provides an approximation with an error that converges to zero at a slow, logarithmic speed.