Case Conference: When ‘3-for-5’ Is Not Enough
In our previous columns, we discussed the utility of the ‘3-for-5’ heuristic in the scenario in which a patient did not have at least 3 of the 5 cardinal features of the diagnosis ascribed to them; in that case, the diagnosis needs to be reconsidered.1,2 Here, we consider the converse scenario: the 3-for-5 rule is satisfied. Does this imply that the proposed diagnosis is correct? Unfortunately, the answer cannot be known without considering the specificity of the cardinal features for the proposed diagnosis and the pretest probability of the disease. If the cardinal features are relatively nonspecific and the condition is rare, the 3-for-5 heuristic cannot reliably establish the diagnosis. To appreciate this limitation, we take a brief foray into epidemiology. We will review the basic concepts of specificity, sensitivity, and pretest probability using an informal, intuitive approach and describe the relevance of these concepts to the 3-for-5 rule.
Sensitivity and Specificity
Imagine the Isle of Health, if you will, populated entirely by healthy people. In contrast, on the Isle of Sickness, everyone is afflicted with a morbid illness ‘M.’ If we have a test that is 99% sensitive for ‘M,’ how many people on the Isle of Sickness will test positive for ‘M’? Bearing in mind that sensitivity is the percentage of patients who do have the disease and test positive, and that everyone on the Isle of Sickness has the condition ‘M,’ the answer is that on average 99 out of 100 people will have a positive test result (ie, ‘true positives’), and 1 out of 100 will have a negative test result (ie, a ‘false negative’). What about the Isle of Health? How many people there would test positive for ‘M’ with our 99%-sensitive test? If you cannot answer this question, you are correct. There is no way to know the answer because we do not know how specific our test is for ‘M.’ Specificity is the percentage of patients who do not have the disease and test negative. If the test is 90% specific for ‘M,’ then on average 90 out of 100 people on the Isle of Health will have a negative test result (ie, ‘true negatives’), but 10 out of 100 will have a positive result (ie, ‘false positives’). Considering that we know everyone on the Isle of Health is healthy, anyone with a positive test result is a false positive.
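The isle arithmetic above can be sketched in a few lines of code. This is a minimal illustration of our own (the function `apply_test` is invented for the example, not part of the column):

```python
def apply_test(population, has_disease, sensitivity, specificity):
    """Expected (positive, negative) test-result counts for a uniform group."""
    if has_disease:
        # Sensitivity applies only to people who have the disease.
        true_pos = round(population * sensitivity)
        return true_pos, population - true_pos      # positives, false negatives
    # Specificity applies only to people who do not have the disease.
    true_neg = round(population * specificity)
    return population - true_neg, true_neg          # false positives, negatives

# Isle of Sickness: everyone has 'M'; the test is 99% sensitive.
tp, fn = apply_test(100, True, sensitivity=0.99, specificity=0.90)   # 99, 1

# Isle of Health: nobody has 'M'; the test is 90% specific.
fp, tn = apply_test(100, False, sensitivity=0.99, specificity=0.90)  # 10, 90
```

Note that the two calls draw on disjoint populations: the first uses only the sensitivity, the second only the specificity, mirroring the point that the two properties are computed in nonoverlapping subgroups.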
These isles illustrate a few key points about diagnostic tests. First, sensitivity is calculated in the subset of people who have the disease (ie, the Isle of Sickness), whereas specificity is calculated in the subset of people without the disease (ie, the Isle of Health). Thus, sensitivity and specificity refer to 2 nonoverlapping subpopulations and are completely independent of each other. Sensitivity and specificity are a bit like Tweedledum and Tweedledee: it is hard to remember which is which. The commonly used mnemonics of SeNsitivity rules OUT (SNOUT) and SPecificity rules IN (SPIN) are only helpful with the understanding that for a highly sensitive test, a negative result rules out the disease, whereas for a highly specific test, a positive result rules in the disease. The more suggestive, but less catchy, version of this mnemonic is SN-N➜OUT and SP-P➜IN.3 Perhaps a simpler way to remember which is which is that sensitivity is calculated using the sample of people who are ‘sensitive’ because they are sick.
Screening Tests
A 100%-sensitive test makes a perfect screening test because it will identify all cases of the condition being tested for. A screening test, however, is not optimized for specificity and thus may capture many noncases (ie, false positives) in the diagnostic net. We informally refer to these wrongly flagged cases as the ‘diagnostic bycatch.’ A 100%-specific test is a perfect diagnostic test because all positive test results will reflect actual cases of the condition. The downside of a highly specific diagnostic test is that it is usually not very sensitive: counterintuitively, many people who do have the disease will still test negative (ie, false negatives). Because it is unusual for a test to be both highly sensitive and highly specific, we often need to deploy a 2-test strategy to achieve optimal diagnostic accuracy. We start with a highly sensitive screening test to flag all the people whose positive result indicates they may have the disease. Next, we follow up with a highly specific diagnostic test for everyone who had a positive result on the screening test. The highly specific diagnostic test is used to exclude the false positives identified by the highly sensitive screening test.
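To see how the 2-test strategy trims the diagnostic bycatch, consider this sketch. The function and all numbers are illustrative choices of our own (a 1% prevalence cohort with a sensitive screen followed by a specific confirmatory test), not figures from the column:

```python
def two_test_strategy(n_sick, n_healthy,
                      screen_sens, screen_spec,
                      confirm_sens, confirm_spec):
    """Expected confirmed cases and surviving false positives (rounded)."""
    # Stage 1: a sensitive screen flags nearly all cases plus some bycatch.
    flagged_sick = n_sick * screen_sens
    flagged_healthy = n_healthy * (1 - screen_spec)      # false positives
    # Stage 2: only flagged people receive the specific confirmatory test.
    confirmed = flagged_sick * confirm_sens
    surviving_fp = flagged_healthy * (1 - confirm_spec)
    return round(confirmed), round(surviving_fp)

# 10,000 people, 1% prevalence; a 99%-sensitive/90%-specific screen,
# then a 90%-sensitive/99%-specific confirmatory test.
confirmed, false_alarms = two_test_strategy(
    n_sick=100, n_healthy=9_900,
    screen_sens=0.99, screen_spec=0.90,
    confirm_sens=0.90, confirm_spec=0.99)
# confirmed = 89 true cases; false_alarms = 10 (down from ~990 after stage 1)
```

The screen alone would leave roughly 990 false positives against 99 true cases; adding the specific confirmatory test cuts the false positives to about 10 at the cost of missing a handful of true cases.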
Disease Prevalence and Pretest Probability
The practical utility of a test depends not only on its intrinsic properties of sensitivity and specificity but also on the prevalence, in the population being evaluated, of the disease we are testing for. The test for ‘M’ is 99% sensitive and 90% specific. It is a very good test (real-life tests are usually not nearly as good), but it is quite useless on the Isle of Health, where it will only yield false positives. In other words, a test for a disease should only be applied to a population in which there is at least a reasonable possibility of the disease. To get a feel for how test utility depends on the prevalence, or pretest probability, of the disease, different prevalences can be evaluated with a diagnostic test calculator freely available online. A 99%-sensitive, 90%-specific test applied to a population in which the disease prevalence is 0.1% would yield roughly 1 true positive for every 100 false positives. The same test applied to a population in which the disease prevalence is 20% will yield 198 true positives for every 80 false positives.
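The prevalence arithmetic can be checked directly. Below is a sketch of our own (`expected_counts` is an invented helper) applying the 99%-sensitive, 90%-specific test to a cohort of 1,000 people:

```python
def expected_counts(cohort, prevalence, sensitivity, specificity):
    """Expected true-positive and false-positive counts (rounded)."""
    sick = cohort * prevalence
    healthy = cohort - sick
    true_pos = sick * sensitivity            # cases the test catches
    false_pos = healthy * (1 - specificity)  # healthy people flagged anyway
    return round(true_pos), round(false_pos)

# Rare disease (0.1% prevalence): positives are overwhelmingly false.
tp_rare, fp_rare = expected_counts(1_000, 0.001, 0.99, 0.90)     # 1, 100

# Common disease (20% prevalence): most positives are real.
tp_common, fp_common = expected_counts(1_000, 0.20, 0.99, 0.90)  # 198, 80
```

The first call recovers about 1 true positive per 100 false positives; the second, 198 true positives per 80 false positives. The test is identical in both calls; only the pretest probability changes.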
Using the 3-for-5 Heuristic as a Screening Test
The 3-for-5 heuristic can be conceptualized as a screening test because, overwhelmingly, patients with the disease will have at least 3 of the 5 most common features of that disease. However, the 3-for-5 rule is not necessarily specific, nor does it take into account pretest probability. Thus, clinical acumen is required to know when to apply the rule and how to interpret the results. We illustrate how the issues of specificity, sensitivity, and pretest probability play out in real life using 2 instructive examples from our practice.
Discussion
Headache awakening a patient from sleep; relentless progression of pain; continuous vomiting; and, most alarmingly, persistent neurologic symptoms are all ‘red flags’ that should prompt an emergent search for secondary causes. Although migraine, a much more common cause of headache in a child than secondary causes, can cause severe pain and vomiting and is worse with motion, it would be dangerous to assume that these nonspecific features are sufficient to diagnose migraine. These features are typical but not diagnostic of migraine, and they may be present with headaches of far more ominous etiology. In the person whose case is presented here, the presence of ‘red flags’ requires that intracranial pathology be excluded as the first step. The head CT shown in the accompanying video demonstrated a large intracranial hematoma due to a ruptured thalamic arteriovenous malformation (AVM). This case illustrates the danger of zeroing in on a specific diagnosis before considering a wider differential and the clinical context. This is especially problematic in the context of severe-at-onset headache, which can be the presentation of several neurologic emergencies, including subarachnoid hemorrhage, cervical arterial dissection, pituitary apoplexy, hypertensive brain hemorrhage, reversible cerebral vasoconstriction syndrome (RCVS), and venous sinus thrombosis, as well as ruptured brain AVM.
Conclusions
When the 3-for-5 rule is not satisfied, the diagnosis is unlikely, and the clinician should consider alternatives. When the 3-for-5 heuristic is satisfied, the diagnosis is possible but not certain, especially when the characteristic features are nonspecific and the disease is rare. Clinicians should then confirm their diagnostic suspicion with specific clinical findings or tests.