This paper explores the use of heuristics among highly trained physicians diagnosing heart disease in the emergency department, a common task with life-or-death consequences. Using data from a large private-payer claims database, I find compelling evidence of heuristic thinking in this setting: patients arriving in the emergency department just after their 40th birthday are roughly 10% more likely to be tested for, and 20% more likely to be diagnosed with, ischemic heart disease (IHD) than patients arriving just before this date, despite the fact that the incidence of heart disease increases smoothly with age. Moreover, I show that this shock to diagnostic intensity has meaningful implications for patient health: it reduces the number of missed IHD diagnoses among patients arriving in the emergency department just after their 40th birthday, thereby preventing future heart attacks. I then develop a model that ties this behavior to an existing literature on representativeness heuristics, and discuss the implications of this class of heuristics for diagnostic decision making.
Selected Work in Progress
Worth the Price of Admission? Evidence from Emergency Department Admissions
Emergency department (ED) physicians are responsible for roughly 50% of all hospital admissions and are therefore important gatekeepers for expensive, high-intensity inpatient care. Prior work has documented large variation in admission rates across EDs within the US, much of which cannot be explained by patient characteristics. Far fewer studies have examined physician-level variation in admission rates within a given ED, which is also substantial. Much of this variation is likely due to how each physician operates within the "gray areas" of medicine, but there is little literature on the value of these marginal hospital admissions from the ED, despite the high costs associated with inpatient treatment. Using data from a large Boston-area hospital, I find that even after controlling for relevant patient characteristics, attending physicians with above-average admission rates are roughly 20% more likely to admit a patient than attending physicians with below-average admission rates. I then use this variation in physicians' admission rates to recover estimates of the costs and benefits associated with marginal inpatient care, and explore the extent to which physicians' characteristics predict their propensity to admit patients.
Fight the [Statistical] Power: Efficient Treatment Effect Estimation under Imperfect Compliance
Experimental research designs must often allow for participants' imperfect compliance with their randomized treatment assignment. That is, some in the control group might obtain the treatment of interest, while some in the treatment group might not. Instrumental variables estimation sidesteps this problem and yields a consistent estimate of the average treatment effect for "compliers," the group of individuals whose treatment take-up is determined by their randomized treatment assignment. Although non-compliance does not bias this treatment effect estimate, it nonetheless comes at a cost in the form of reduced precision. Low compliance rates therefore leave many studies underpowered, weakening the conclusions that can be drawn from them. While individual compliers cannot be directly identified, in this paper I propose a framework for assessing individuals' likelihood of compliance using their baseline (i.e. pre-randomization) characteristics, and harnessing this information to improve the precision of treatment effect estimates. Using publicly available data from the Oregon Health Insurance Experiment, I find that I am able to reduce the standard errors found in Taubman et al. (2014) by roughly 40%.
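The precision cost of non-compliance is easy to see in a stylized Monte Carlo sketch. The snippet below (not from the paper; all parameter values are illustrative) simulates an RCT with one-sided non-compliance and computes the Wald (instrumental variables) estimator: the intent-to-treat effect on the outcome divided by the first-stage effect of assignment on take-up. Rerunning the simulation at a high and a low compliance rate shows the estimator's sampling variability growing roughly in inverse proportion to compliance.

```python
import numpy as np

rng = np.random.default_rng(0)

def wald_estimate(compliance, n=20_000, effect=2.0):
    """Simulate one RCT with one-sided non-compliance and return the
    Wald (IV) estimate of the complier average treatment effect.
    Parameter values are illustrative assumptions, not from the paper."""
    z = rng.integers(0, 2, n)                 # randomized assignment
    complier = rng.random(n) < compliance     # latent complier status
    d = z * complier                          # actual treatment take-up
    y = effect * d + rng.normal(0, 1, n)      # outcome
    itt = y[z == 1].mean() - y[z == 0].mean()             # ITT effect
    first_stage = d[z == 1].mean() - d[z == 0].mean()     # compliance rate
    return itt / first_stage

for c in (0.9, 0.3):
    draws = [wald_estimate(c) for _ in range(200)]
    print(f"compliance={c:.1f}: sd of IV estimate = {np.std(draws):.3f}")
```

Both settings recover the true complier effect on average, but the low-compliance design needs roughly (0.9/0.3)² ≈ 9 times the sample to match the high-compliance design's precision, which is the gap that compliance-based precision improvements aim to close.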
Incorporating Compliance Prediction into RCT Design
In this paper, I propose a new methodological framework for selecting participants for randomized controlled trials (RCTs) with imperfect compliance. It is known that the statistical power of an experiment increases with the compliance rate among participants. Therefore, selecting participants with above-average likelihoods of compliance can provide an experiment with more power, or the same level of power with fewer participants. I demonstrate how data from prior experiments (or quasi-experiments) can allow researchers to systematically predict potential participants' likelihood of compliance, using either regression models or machine learning algorithms. I then discuss the potential benefits and drawbacks of incorporating compliance prediction into RCT design, and describe the scenarios in which this method is likely to be most beneficial. Using publicly available data from the Oregon Health Insurance Experiment, I empirically demonstrate the feasibility of these methods.
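The screening logic can be sketched in a few lines on simulated data (everything below is a hypothetical illustration, not the paper's actual specification or data). A compliance model is fit on a "prior" experiment where baseline covariates and realized compliance are observed, here a logistic regression estimated by plain gradient ascent, and then used to score a new candidate pool, enrolling only the half with the highest predicted compliance.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Prior" experiment: baseline covariates and observed compliance
# (data simulated for illustration; coefficients are arbitrary)
n = 5_000
x = rng.normal(size=(n, 2))                     # baseline characteristics
logit = 0.5 + 1.5 * x[:, 0] - 1.0 * x[:, 1]     # true compliance index
complied = rng.random(n) < 1 / (1 + np.exp(-logit))

# Fit a logistic model of compliance via simple gradient ascent
X = np.column_stack([np.ones(n), x])
beta = np.zeros(3)
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 0.1 * X.T @ (complied - p) / n      # average-gradient step

# New candidate pool: screen on predicted compliance
m = 5_000
x_new = rng.normal(size=(m, 2))
scores = np.column_stack([np.ones(m), x_new]) @ beta
selected = scores > np.median(scores)           # enroll the top half

true_logit = 0.5 + 1.5 * x_new[:, 0] - 1.0 * x_new[:, 1]
would_comply = rng.random(m) < 1 / (1 + np.exp(-true_logit))
print("compliance, unscreened pool:  ", round(would_comply.mean(), 3))
print("compliance, screened sample:  ", round(would_comply[selected].mean(), 3))
```

The screened sample exhibits a markedly higher compliance rate than the unscreened pool, which, by the power argument above, buys more precision per enrolled participant. In practice any off-the-shelf classifier could replace the hand-rolled logistic fit.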