The article by Seidenwurm et al in this issue of the AJNR (page 426) addresses questions that radiologists who perform MR imaging face daily: Does a patient have intraorbital metal that contraindicates an MR examination? Which patients should be screened radiographically? When is radiographic screening cost-effective? Because these are common dilemmas, the impact of any recommendations based on this study is potentially important. It is therefore imperative that a cost-effectiveness analysis (CEA) be rigorous and complete.
Guidelines have indeed been developed for conducting and evaluating CEAs (1). We reviewed Seidenwurm et al's article based on 10 points that all readers should consider when evaluating such studies (Table, page 246).
Cost-effectiveness can be determined by comparing the resources consumed by a given strategy (the costs) with the improvement in health that results from that strategy (the consequences). The consequences are measured in units most relevant to the strategy under study. This results in ratios such as “dollars per year of life gained.”
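For example, if one strategy consumes $60,000 more in resources than its alternative and yields 2 additional years of life (figures invented purely for illustration), the ratio is

$$\frac{\$60{,}000}{2\ \text{life years}} = \$30{,}000\ \text{per year of life gained}.$$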
Most researchers think that, in general, quality of life should be incorporated into these analyses. Seidenwurm et al chose to use utilities, which are the most widely accepted measures of quality of life. Utilities are preferences, held by an individual or by society, for a particular health state, and they yield a “quality weighting” factor. The denominator of a cost-utility analysis is therefore quality-adjusted life years (QALYs), rather than simply life years.
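As a hypothetical illustration of this weighting (all figures invented): 10 years of life lived at a utility of 0.8 count as

$$10\ \text{years} \times 0.8 = 8\ \text{QALYs},$$

so an intervention that secures those years for $60,000 costs $6,000 per unadjusted life year but $7,500 per QALY gained.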
This is the type of analysis that Seidenwurm et al have performed. The first and most important step in the design of any study is the formulation of a focused, answerable question. This question must specify the alternatives being compared as well as the viewpoint of the analysis. Seidenwurm et al do an admirable job by clearly stating that the purpose of their study was to compare the cost-effectiveness of clinical versus radiologic screening for orbital foreign bodies. They describe the clinical screening in adequate detail, but they could have provided more information about the radiologic screening, such as the number of views obtained. To their credit, the authors unambiguously declare that the analysis is from the societal viewpoint. This viewpoint takes into account the widest possible range of costs and consequences and is most appropriate for policy decision making.
Seidenwurm et al are somewhat unconventional in their identification of costs and consequences and in how they organize their economic model. Economists generally categorize costs as direct and indirect. The costs of organizing and operating a service are the direct costs, and they include health professionals' time, supplies, equipment, power, capital costs, and out-of-pocket expenses for the patient. Time lost from work is an indirect cost. In their classic paper on cost-effectiveness analysis, Weinstein and Stason (2) present the following equation for determining the net health-care costs of an intervention:

$$\Delta C = \Delta C_{Rx} + \Delta C_{SE} - \Delta C_{Morb} + \Delta C_{Rx\Delta LE}$$

where $\Delta C_{Rx}$ includes all direct medical and health-care costs, $\Delta C_{SE}$ is the cost associated with adverse effects of the intervention, $\Delta C_{Morb}$ is the savings due to prevention or alleviation of disease, and $\Delta C_{Rx\Delta LE}$ is the cost of treating diseases that would not have occurred if the patient had not lived longer because of the intervention. Because length of life is probably not affected significantly by the intervention (orbital screening), $\Delta C_{Rx\Delta LE}$ can be ignored. Similarly, there are probably no adverse effects of orbital screening, so this term can be ignored as well. $\Delta C_{Morb}$ needs to be estimated because this is the cost benefit of screening. The authors account for this with their variables A and M, both of which they assume to be $0 in their base case. Although one can question this base-case assumption, their sensitivity analysis demonstrated that these were not influential variables. The authors ignore direct out-of-pocket costs, and while it is difficult to be certain that these are insignificant, assuming they are negligible is conservative.
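In code, this bookkeeping reduces to a few additions. The sketch below is a minimal Python rendering under the stated assumptions; the zeroed terms and the $173 base-case charge come from the article, while the function and parameter names are ours:

```python
def net_health_care_cost(c_rx, c_se, c_morb, c_rx_dle):
    """Weinstein-Stason net cost: direct treatment costs, plus costs
    of side effects, minus savings from averted morbidity, plus costs
    of disease during any added years of life."""
    return c_rx + c_se - c_morb + c_rx_dle

# Orbital screening: no adverse effects, no effect on life span, and
# morbidity savings (the authors' A and M) assumed $0 in the base
# case, so only the screening charge itself remains.
net_cost = net_health_care_cost(
    c_rx=173.0,    # base-case screening charge per patient
    c_se=0.0,      # adverse effects of radiography assumed negligible
    c_morb=0.0,    # base-case assumption: A = M = $0
    c_rx_dle=0.0,  # life span unaffected by screening
)
print(f"Net health-care cost per patient screened: ${net_cost:.2f}")
```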
Seidenwurm et al chose to measure consequences in terms of QALYs, which is the most appropriate measure for a cost-utility analysis. Drummond (1) identifies two other categories of consequences: 1) changes in functional status (physical, social, and emotional functioning); and 2) changes in future resource use. Neither of these categories is addressed in the article by Seidenwurm et al, but the same is true of many economic analyses.
The authors state that the cost of screening was “culled from the medical literature on screening for orbital foreign body, Medicare fee schedules for various examinations, and usual, customary and reasonable charges fee schedules for various examinations.” Using Medicare fee schedules is probably appropriate in this setting, because they reflect a resource-based relative-value scale. Nevertheless, the authors remain vague as to how exactly they arrived at their base-case estimate of $173, an amount they indicate represents the charge for the examination rather than a true cost. Numerous authors have emphasized why it is important to distinguish between costs and charges, a recent example being an editorial by Picus in Radiology (3). Seidenwurm et al state in their discussion that the Medicare allowable fee for a single-view screening examination is $25. How do they account for the difference between this amount and their base case? They state that $25 does not cover the costs of radiography. This may be true, but it needs justification. After all, their analysis demonstrated that the cost of radiographic screening was a critical variable, and if the cost were as low as $25, then screening might be cost-effective.
Using QALYs as a metric for consequences implies accounting for preferences, on either the individual or the societal level, for given health states. The authors estimate the degree of disability from monocular blindness using two separate sources. Both of these, however, probably use functional-status rather than preference-based measures, and thus are not true utilities. Nonetheless, their base-case estimate of 0.24 for the utility of monocular blindness is probably quite conservative, in that it magnifies the QALY loss attributed to blindness and thereby favors screening. We recently studied a cohort of 142 patients who completed a time trade-off for monocular blindness; the mean utility was 0.82.
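For reference, a time trade-off elicits the number of years x in perfect health that a respondent would accept in place of t years in the health state; the utility is the ratio u = x/t. With illustrative numbers consistent with our observed mean:

$$u = \frac{x}{t} = \frac{16.4\ \text{years in perfect health}}{20\ \text{years with monocular blindness}} = 0.82.$$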
It is impossible to tell from the authors' methods exactly how various types of disability were converted to QALYs. The authors include in their cost-effectiveness equation the variable “D,” which is the degree of disability associated with injury. They use disability rating guides to assess disability due to ocular injury, but do not provide essential details. QALYs describe a preference for a given health state, and not just the functional status within that health state.
The alternative to radiologic screening, clinical screening, is reasonably well described in their methods. Enough detail is supplied that a different provider could carry out the clinical screening. The radiologic screening is less thoroughly described, with no details provided as to whether one or more views were obtained, or whether the costs assumed digital or film-screen systems.
One of the most compelling aspects of the article is the last paragraph of the discussion section, in which the authors describe their experience using the proposed screening protocol. Although limited to a single practice, this experience is a true measure of effectiveness (how a protocol performs in real life).
The authors appropriately use a range of discount rates for costs in their sensitivity analysis. They do not, however, discount consequences. Whether to discount health consequences is somewhat controversial, but most published guidelines recommend discounting them at the same rate as costs.
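For completeness, discounting at rate r converts a cost C incurred t years in the future into its present value:

$$PV = \frac{C}{(1+r)^{t}}, \qquad \text{eg,}\ \frac{\$1{,}000}{(1.05)^{10}} \approx \$614.$$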
With respect to costs, the authors account for both the costs of radiologic screening, for which they use charges as a proxy, and the costs of clinical screening, which they argue are negligible. They also look at the incremental improvement in the detection of ocular foreign bodies and thus the incremental improvement in QALY of radiologic versus clinical screening.
Sensitivity analysis is a method to determine the degree of uncertainty associated with economic analyses. It is in many ways the equivalent of defining confidence intervals. A sensitivity analysis is performed by varying the value of a particular variable across a range of clinically relevant values. If large changes in the value of this variable do not substantially affect the cost-utility ratio, then the confidence in the original results is high. If certain variables do greatly affect the ratio, then greater precision is needed in defining the value of these variables. A one-way sensitivity analysis varies one variable at a time. Two-way and greater sensitivity analyses can be done, although the difficulty of interpreting the analysis increases as the number of variables increases.
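To make the procedure concrete, the following Python sketch runs a one-way sensitivity analysis on the cost of screening in a deliberately simplified stand-in for the authors' model; every parameter value is hypothetical except the $25 and $173 figures quoted from the article.

```python
def cost_per_qaly(screen_cost, prevalence, detection_gain, qaly_loss):
    """Toy cost-utility model (not the authors' equation): dollars
    spent per QALY gained by radiographic over clinical screening.

    prevalence:     probability a patient harbors a foreign body
    detection_gain: incremental probability that radiography detects a
                    foreign body that clinical screening would miss
    qaly_loss:      QALYs lost per missed foreign body
    """
    incremental_qalys = prevalence * detection_gain * qaly_loss
    return screen_cost / incremental_qalys

# One-way sensitivity analysis: vary one variable (here, the cost of
# screening) across a relevant range while all other parameters stay
# at their hypothetical base-case values.
for screen_cost in (25, 50, 100, 173, 250):
    ratio = cost_per_qaly(screen_cost, prevalence=0.001,
                          detection_gain=0.5, qaly_loss=5.0)
    print(f"screening cost ${screen_cost:>3}: ${ratio:,.0f} per QALY gained")
```

A two-way analysis would simply nest a second loop, for example over prevalence, which is how a preselection threshold such as the 2.5% figure discussed below could be explored.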
The authors performed multiple one-way sensitivity analyses and determined that the cost of screening, expected life span, and prevalence of foreign bodies were all critical variables. This means that their model is not robust across a realistic range of values for these variables. The authors discount the importance of the cost of screening, asserting that the point at which screening becomes cost-effective ($25) is so low as to be unrealistic. As we have stated, they need to justify that costs are significantly greater than $25. Similarly, if patients can be preselected to increase the prevalence of foreign bodies to 2.5%, then screening becomes cost-effective.
In their discussion, the authors touch on aspects of the analysis that required them to make critical assumptions, such as the average length of life, or the utility associated with blindness. One aspect of the decision-making process the authors do not address, but which may be the most critical variable, is the question of liability and the legal costs associated with ocular injury. This is an indirect cost, and therefore is not accounted for in their analysis. The fear of litigation, however, may be the driving force in current screening protocols.
As Drummond (1) states, the “… intent in offering a checklist is not to create hypercritical users who will be satisfied only by superlative studies…[but rather to] help users of economic evaluations to identify quickly the strengths and weaknesses of studies.” Although Seidenwurm et al fall somewhat short of the rigorous and complete standard set by Drummond, they have made a commendable effort, and their conclusions are probably correct.