Review I

1. Give examples of algorithmic, pattern recognition, hypothetical deductive, and exhaustive approaches to diagnosis

Diagnostic strategies:

Pattern recognition or “Type 1 thinking”: rapid recognition of patterns, rashes, or classic symptoms and signs of disease.

Hypothetico-deductive or “Type 2 thinking”: like a detective, ask a series of questions and systematically narrow the differential; the goal is to rule out all but one diagnosis.

Algorithmic: order some tests and use an algorithm (e.g., a flowchart or protocol) to evaluate the results.

Exhaustive: collect the complete history, physical, and battery of tests, then sift through all of the data; thorough but inefficient.

2. Be able to interpret a threshold diagram and to draw threshold diagram given clinical scenario, including understanding what pretest probability is

Key points for understanding the diagram: the probability of disease ranges from 0% to 100%. Below the test threshold, disease is “ruled out” (at least for now). Above the treatment threshold, disease is “ruled in” and treatment should begin. Between the two thresholds, we need more information.

Pretest probability depends on the clinical context. Screening: screening for breast cancer in the general population, where disease is rare (say 10 in 10,000, or 0.1%). Surveillance: surveillance of women diagnosed with breast cancer in the past two years, where recurrent or contralateral disease is much more common (say 1 in 10, or 10%).
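As a worked illustration, here is a minimal sketch of the threshold logic in Python; the threshold values (5% and 50%) are hypothetical, chosen only so the screening and surveillance examples land in different zones:

```python
def threshold_decision(p_disease, test_threshold=0.05, treatment_threshold=0.50):
    """Classify a (pretest) probability of disease against the threshold model."""
    if p_disease < test_threshold:
        return "rule out: no testing or treatment for now"
    if p_disease > treatment_threshold:
        return "rule in: start treatment"
    return "between thresholds: gather more information (order a test)"

# Screening context: breast cancer in the general population, 10 in 10,000
print(threshold_decision(10 / 10_000))  # -> rule out
# Surveillance context: recurrence risk of 1 in 10
print(threshold_decision(0.10))         # -> between thresholds, test further
```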

3. Be able to frame a clinical question using PICO format

Population; Intervention; Comparison; Outcome.

4. Distinguish patient oriented outcomes from disease oriented outcomes

Patient Oriented Evidence: things patients care about, such as improved symptoms, reduced duration of illness, reduced morbidity and mortality, and lower cost.

Disease Oriented Evidence: things a scientist might care about, such as improved blood pressure, blood sugar, or flow rate. But physiologic measures may not translate into helping a patient live a longer or better life.

5. Be able to abstract data regarding test accuracy (FP, FN, TP, TN) from a table or text in a diagnostic test study. Given these data, be able to create a 2 x 2 table (or an “n x 2” table for a polytomous test), and be able to calculate: Sensitivity; Specificity; Positive and negative predictive value; Pretest and Post-test probability; Likelihood ratios

Sensitivity = TP / (TP + FN) = a / (a+c)

Specificity = TN / (FP + TN) = d / (b+d)

Post-test probability of disease given a positive test = positive predictive value (PV+): PV+ = TP / (TP + FP) = a / (a + b)

Post-test probability of no disease given a negative test = negative predictive value (PV-): PV- = TN / (TN + FN) = d / (c + d)

Likelihood ratio (LR): the ratio of the probability that an individual with disease has a given test result to the probability that an individual without disease has that result.

Positive likelihood ratio (LR+): LR+ = sensitivity / (100 – specificity). Example: if 80% of those with disease test positive and 20% of those without disease test positive, LR+ = 80 / 20 = 4. “How well does a positive test help us rule in disease?”

Negative likelihood ratio (LR-): LR- = (100 – sensitivity) / specificity. Example: if 9% of those with disease test negative and 90% of those without disease test negative, LR- = 9 / 90 = 0.1. “How well does a negative test help us rule out disease?”
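Pulling these formulas together, here is a small Python sketch; the 2 x 2 counts are hypothetical, and post-test probability is computed through the odds form of Bayes’ theorem (post-test odds = pretest odds × LR):

```python
def test_accuracy(tp, fp, fn, tn):
    """Standard accuracy metrics from the counts in a 2 x 2 table."""
    sens = tp / (tp + fn)        # a / (a + c)
    spec = tn / (fp + tn)        # d / (b + d)
    ppv = tp / (tp + fp)         # PV+ = a / (a + b)
    npv = tn / (tn + fn)         # PV- = d / (c + d)
    lr_pos = sens / (1 - spec)   # LR+
    lr_neg = (1 - sens) / spec   # LR-
    return sens, spec, ppv, npv, lr_pos, lr_neg

def post_test_probability(pretest_p, lr):
    """Convert pretest probability to post-test probability via odds."""
    pretest_odds = pretest_p / (1 - pretest_p)
    post_odds = pretest_odds * lr
    return post_odds / (1 + post_odds)

# Hypothetical counts: 80 TP, 20 FP, 20 FN, 80 TN
sens, spec, ppv, npv, lr_pos, lr_neg = test_accuracy(80, 20, 20, 80)
print(f"Se={sens:.2f}, Sp={spec:.2f}, LR+={lr_pos:.1f}, LR-={lr_neg:.2f}")

# With a 10% pretest probability, a positive test (LR+ = 4) gives:
print(f"post-test probability = {post_test_probability(0.10, lr_pos):.2f}")  # ~0.31
```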

6. In a study description, be able to identify threats to validity such as verification bias (partial and differential), spectrum bias, and failure to mask. Also be able to distinguish diagnostic cohort from diagnostic case-control designs.

Verification bias: there is no verification bias when all patients get the same reference standard. Partial verification bias: only a random sample of those with a negative index test gets the same reference standard as those with a positive test. Differential verification bias: some patients get one reference standard and others get another.

Spectrum bias: arises from a diagnostic case-control design, which compares previously confirmed cases with healthy controls; this makes the test look more accurate than it actually is. To avoid it, use a diagnostic cohort design that enrolls a reasonable spectrum of patients suspected of having the disease in question, rather than the perfectly healthy and the very sick.

Masking (blinding): are the persons interpreting the index test blinded to the reference standard result? Are the persons interpreting the reference standard blinded to the index test result?

7. Given the methods section of a study evaluating the accuracy of a diagnostic test, be able to assess its quality based on study design, masking, and the presence or absence of verification bias (none, partial, or differential).

Similar to above: apply the criteria from objective 6 (cohort vs. case-control design, masking, and none/partial/differential verification bias).

8. Given a clinical dataset, be able to draw a receiver operating characteristic curve (sensitivity vs 1 – specificity).

Plot sensitivity on the y-axis against 1 – specificity on the x-axis, computing both at every candidate cutoff of the test.
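A minimal sketch of how the points might be computed by hand from a small, hypothetical dataset of test values and disease status, sweeping every observed value as a cutoff (matplotlib is assumed for plotting):

```python
import matplotlib.pyplot as plt

# Hypothetical dataset: a continuous test value and true disease status
values = [1, 2, 3, 3, 4, 5, 6, 7, 8, 9]
disease = [0, 0, 0, 1, 0, 1, 1, 0, 1, 1]

def roc_points(values, disease):
    """(1 - specificity, sensitivity) at each cutoff; 'positive' means value >= cutoff."""
    points = []
    for cut in sorted(set(values)) + [max(values) + 1]:
        tp = sum(v >= cut and d for v, d in zip(values, disease))
        fn = sum(v < cut and d for v, d in zip(values, disease))
        fp = sum(v >= cut and not d for v, d in zip(values, disease))
        tn = sum(v < cut and not d for v, d in zip(values, disease))
        points.append((fp / (fp + tn), tp / (tp + fn)))
    return sorted(points)

xs, ys = zip(*roc_points(values, disease))
plt.plot(xs, ys, marker="o")
plt.plot([0, 1], [0, 1], linestyle="--")  # chance diagonal (AUC = 0.5)
plt.xlabel("1 - specificity")
plt.ylabel("sensitivity")
plt.show()
```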

9. Understand approaches to development and validation of clinical decision rules (i.e. multivariate, point scores). Given some clinical data, suggest a decision rule with 3 risk categories

“Clinical decision rules” are also called “clinical prediction rules.” They start from a multivariate analysis with disease (yes/no) as the dependent variable and the 10–20 clinical variables as independent (predictor) variables. Point scores make a multivariate analysis more transparent and practical. Begin with the multivariate model, then create an additive point score based on either: counting (i.e., 1 point per clinical finding or risk factor; example: the Strep Score), or assigning points based on the $\beta$ coefficients, which measure how strongly each sign or symptom is associated with the diagnosis.
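A toy sketch of a counting-based point score with three risk categories; the findings and cut points below are invented for illustration, loosely in the spirit of a strep-type score:

```python
def point_score(fever, no_cough, tender_nodes, exudate):
    """Hypothetical additive score: 1 point per clinical finding present."""
    return sum([fever, no_cough, tender_nodes, exudate])

def risk_category(score):
    """Map the score to three risk groups (cut points are illustrative)."""
    if score <= 1:
        return "low risk"
    if score <= 3:
        return "moderate risk"
    return "high risk"

print(risk_category(point_score(fever=True, no_cough=False,
                                tender_nodes=True, exudate=False)))  # moderate risk
```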

10. Understand how low, moderate and high risk groups created by a clinical decision rule fit into the threshold model of decision-making.

Do the risk groups created by the CDR correspond to clinically useful test and treatment thresholds? The low-risk group is most useful when its disease likelihood falls below the rule-out (test) threshold, so no further action is needed. The high-risk group should have a disease likelihood at or above the rule-in (treatment) threshold, so treatment can be started. Further testing is needed only for moderate-risk patients.
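Continuing the hypothetical sketches above (reusing the threshold_decision function from the threshold section), one can check where each risk group lands relative to the thresholds; the group-level disease probabilities here are invented:

```python
# Hypothetical probability of disease within each risk group
group_probability = {"low risk": 0.02, "moderate risk": 0.20, "high risk": 0.60}

for group, p in group_probability.items():
    print(f"{group}: {threshold_decision(p)}")
# low risk -> rule out (0.02 < 0.05 test threshold)
# moderate risk -> between thresholds (0.05 <= 0.20 <= 0.50)
# high risk -> rule in (0.60 > 0.50 treatment threshold)
```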

11. Understand basic steps in a diagnostic meta-analysis, and what makes a good one.

Key elements in a systematic review or meta-analysis

  1. A focused clinical question

  2. An exhaustive search for studies

  3. Clear inclusion/exclusion criteria (e.g., by setting, population, and/or symptoms)

  4. Evaluation of study quality: QUADAS-2 is most often recommended; check with your target journal or journals to see their preference

  5. Careful abstraction of data

  6. Evaluation for homogeneity (are the study results similar enough to pool?)

  7. Meta-analysis if possible

12. Be able to interpret ROC curves, both for a single study and a summary ROC curve from a diagnostic meta-analysis.

  1. Sensitivity is on y axis, 1 – specificity on x-axis (100% - specificity if using percent)

  2. Area under the ROC curve is generally between 0.5 (useless) and 1.0 (perfect)

  3. Area is proportional to ability of a test to discriminate diseased from non-diseased. It is an important tool for comparing tests.

  4. It is useful for identifying the best cutoff to define an abnormal test

  5. Slope of a line from the origin to a point on the curve = y/x = Se / (100 – Sp)

  6. That slope is also the positive likelihood ratio (LR+) at that cutoff!

  7. The best cutoff is often chosen as the point closest to the upper left corner, or by Youden’s J: the point that maximizes sensitivity + specificity – 1 (see the sketch below)
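As a companion to the ROC sketch earlier, here is a short Python sketch computing the trapezoidal area under the curve and Youden’s J from a list of ROC points; the points themselves are hypothetical:

```python
def auc_trapezoid(points):
    """Area under the ROC curve by the trapezoid rule.
    points: (1 - specificity, sensitivity) pairs spanning (0, 0) to (1, 1)."""
    pts = sorted(points)
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

def youden_point(points):
    """Point maximizing Youden's J = Se + Sp - 1, i.e. y - x on the ROC plot."""
    return max(points, key=lambda p: p[1] - p[0])

pts = [(0.0, 0.0), (0.1, 0.6), (0.3, 0.85), (0.6, 0.9), (1.0, 1.0)]
print(f"AUC = {auc_trapezoid(pts):.2f}")                    # about 0.82
print("Best cutoff point (1-Sp, Se):", youden_point(pts))   # (0.3, 0.85)
```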
