Sensitivity and specificity

Positive and negative predictive values 

Authored by Dr James Newcombe and endorsed by Dr Stephen Braye 


March 2021

There continues to be a high volume of requests for detailed information on SARS-CoV-2 diagnostic testing provided by NSW Health Pathology.

The global COVID-19 pandemic has seen a sharp rise in interest in the processes and techniques used in laboratories, particularly around assurances of access to timely, reliable diagnostic results.

This document aims to provide factual information about some of the common but less well-known terms used in COVID-19 diagnostic testing, namely sensitivity, specificity, and positive and negative predictive values. This document will be reviewed and updated by NSWHP’s Incident Management Team as needed.

What is different about COVID-19 diagnostic analysis and reporting?

As NSW Health Pathology introduces, performs, and interprets new tests for SARS-CoV-2, we are in real time defining the sensitivity and specificity of new assays.

For most of us in pathology practice this is a new experience. As scientists and clinicians, we are more used to having pre-defined sensitivity and specificity values of tests and then applying these in different clinical scenarios to come up with positive and negative predictive values.

The rate of asymptomatic infections and the varied ways in which symptomatic disease presents have made it difficult to define the sensitivity and specificity of COVID-19 tests.

What do sensitivity and specificity mean, and how important are they?

Sensitivity and specificity play vital roles in evaluating the accuracy and reliability of a diagnostic test, both prior to implementation and as part of ongoing quality assurance. These elements co-exist and together inform the strengths and shortcomings of a test. They are considered the gold standard measures of a diagnostic test's validity; tests with high sensitivity and high specificity are ideal.

The following definitions are expressed in relation to the role they play in validation and implementation of new COVID-19 diagnostic tests.  

Sensitivity

Sensitivity relates to how well a test can detect the presence of COVID-19 disease (the percentage of true positive results in patients who have the disease).

Higher test sensitivity means the test is more likely to return a positive result in an infected patient, and that there is a lower rate of false negative results.

Specificity

Specificity relates to how well a test can confirm the absence of COVID-19 infection. It indicates the percentage of true negative results in patients who don’t have the disease.

Higher specificity will mean a lower rate of false positive results for the test.

Table 1 illustrates the Sensitivity, Specificity, Positive Predictive Value and Negative Predictive Value concepts.

Table 1 from https://step1.medbullets.com/stats/101006/testing-and-screening
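For readers who want to see the arithmetic behind Table 1, the short sketch below calculates all four quantities from the cells of a 2x2 table. It is an illustration only; the counts are invented and do not come from any NSW Health Pathology assay.

# Illustrative sketch only (not an NSWHP tool): the four quantities from
# Table 1, calculated from the cells of a 2x2 table. The counts are invented.
true_positives = 90    # test positive, disease present
false_positives = 5    # test positive, disease absent
false_negatives = 10   # test negative, disease present
true_negatives = 95    # test negative, disease absent

sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)
ppv = true_positives / (true_positives + false_positives)
npv = true_negatives / (true_negatives + false_negatives)

print(f"Sensitivity {sensitivity:.1%}, Specificity {specificity:.1%}")
print(f"PPV {ppv:.1%}, NPV {npv:.1%}")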

Positive predictive value

The Positive Predictive Value (PPV) addresses the questions: how likely is a positive test result to reflect the presence of disease, and does a positive test result mean the patient has the disease? Without clear indicators of the sensitivity and specificity of a COVID-19 test, it is difficult to define the likelihood that a positive test result for a particular patient actually represents true infection (the PPV of the test).

Negative predictive value

The Negative Predictive Value (NPV) addresses the questions: how likely is a negative test result to reflect an absence of the disease, and does a negative test result mean a particular patient does not have the disease (the NPV of the test)?

The positive and negative predictive values are influenced by the sensitivity and specificity; the disease incidence in the community; and the pre-test probability for a patient.

Pre-test probability

What is the likelihood that a patient will have the disease for which we are testing? This will obviously vary depending upon the presence or absence of appropriate symptoms, so a person with symptoms of a disease is more likely to have the disease confirmed by testing than a person with no symptoms at all. In the context of COVID-19, a person with the full constellation of COVID symptoms is more likely to register a positive test result than another person with identical demographic features but no symptoms. This is the pre-test probability, which greatly impacts both positive and negative predictive values.

Pre-test probability is based on how many new diagnoses of COVID-19 there are in the community at any one time – disease incidence – as well as an estimate of the clinical likelihood this specific patient has COVID-19, based on their symptoms, signs, other test results and personal contact (or lack thereof) with other cases of COVID-19.

Summary of key points and definitions:

  • Sensitivity: the likelihood of a positive test result in a patient who has the disease
  • Specificity: the likelihood of a negative test result in a patient who does not have the disease
  • Positive Predictive Value: the likelihood that a positive test result actually reflects the presence of the disease
  • Negative Predictive Value: the likelihood that a negative test result actually reflects an absence of the disease
  • Incidence: the number of new cases of the disease in the community
  • Pre-test probability: the likelihood the patient has the disease before testing has been conducted (see the sketch below).
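
As an illustration of how these quantities combine, given a test's sensitivity and specificity and a patient's pre-test probability, the positive and negative predictive values can be calculated directly. The sketch below is a minimal example, assuming the 95% sensitivity and 98% specificity used in the worked examples that follow; it is not an NSW Health Pathology calculator.

# Minimal illustration: predictive values from sensitivity, specificity and
# pre-test probability. Not an NSWHP calculator.
def predictive_values(sensitivity, specificity, pretest_probability):
    p = pretest_probability
    true_pos = sensitivity * p                # infected and test positive
    false_neg = (1 - sensitivity) * p         # infected but test negative
    true_neg = specificity * (1 - p)          # not infected and test negative
    false_pos = (1 - specificity) * (1 - p)   # not infected but test positive
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# Hypothetical test with 95% sensitivity and 98% specificity, 50% pre-test probability.
ppv, npv = predictive_values(0.95, 0.98, 0.50)
print(f"PPV {ppv:.1%}, NPV {npv:.1%}")   # PPV 97.9%, NPV 95.1% (compare Table 2 below)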

The two tables below show how two different disease incidence scenarios, and therefore two different pre-test probabilities, affect the PPV and NPV of a test with the same sensitivity and specificity.

Table 2:  the disease incidence in this community is 50% – 1000 patients truly have COVID-19 and 1000 do not. This 50% disease incidence may, for instance, reflect a catastrophic COVID-19 situation in an overseas community.

Table 3: the disease incidence in another community is 1%, perhaps more akin to what Sydney experienced during its ‘second wave’ in July-August 2020.

In these hypothetical testing environments, the nominated sensitivity of the COVID-19 PCR test is 95% and the nominated specificity is 98%. These are reasonable estimates of the performance of the assays currently in use.

Table 2

New Test   | Gold Standard Test +ve | Gold Standard Test -ve | Predictive Values
Test +ve   | 950                    | 20                     | 950/970 = 97.9% (PPV)
Test -ve   | 50                     | 980                    | 980/1030 = 95.1% (NPV)
Total      | 1000                   | 1000                   |

Table 2: Predictive Values of a Test with 95% sensitivity and 98% specificity, with a pre-test probability of 50%

If the COVID-19 PCR is positive in the setting of 50% pre-test probability, there is a 97.9% chance that the patient has the infection (positive predictive value).

If the PCR is negative, however, there is a slightly lower, 95.1%, chance the patient does not have the infection (negative predictive value). In this scenario, the negative predictive value is lower than the positive predictive value because the sensitivity of the test is lower than its specificity.

Table 3

New Test   | Gold Standard Test +ve | Gold Standard Test -ve | Predictive Values
Test +ve   | 950                    | 1980                   | 950/2930 = 32.4% (PPV)
Test -ve   | 50                     | 97020                  | 97020/97070 = 99.9% (NPV)
Total      | 1000                   | 99000                  |

Table 3: Predictive Values of a Test with 95% sensitivity and 98% specificity, with a pre-test probability of 1%

Under the 1% pre-test probability scenario, using exactly the same test, a positive result (for instance, COVID-19 PCR) means there is only a 32.4% chance the patient has the infection (positive predictive value), while a negative result gives a 99.9% chance the patient is not infected (negative predictive value).

In low disease incidence situations such as this, a specificity of 98%, even though it may seem high, becomes problematic, as 67.6% of all positive results are actually false positives.
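
A quick check of the Table 3 arithmetic, sketched below for the same hypothetical population of 100,000 people, reproduces these figures.

# Quick check of the Table 3 arithmetic: 1% pre-test probability in a
# population of 100,000 (1000 infected, 99000 not infected). Illustrative only.
true_positives = 0.95 * 1000       # 950, from 95% sensitivity
false_negatives = 1000 - true_positives
false_positives = 0.02 * 99000     # 1980, from 98% specificity (2% false positive rate)
true_negatives = 99000 - false_positives

ppv = true_positives / (true_positives + false_positives)
npv = true_negatives / (true_negatives + false_negatives)
false_positive_share = false_positives / (true_positives + false_positives)
print(f"PPV {ppv:.1%}, NPV {npv:.1%}, false positives among all positives {false_positive_share:.1%}")
# PPV 32.4%, NPV 99.9%, false positives 67.6%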

Thus, paradoxically, specificity becomes relatively more important than sensitivity when the pre-test probability of the disease being tested is low. The response to this problem in NSW has been to repeat positive COVID-19 PCR tests on an alternative assay with different targets to improve the overall specificity of COVID-19 testing.

This approach, called algorithmic or ‘reflex’ testing, is a long-standing solution to improve the specificity of laboratory tests. The higher specificity of reflex testing reduces the number of false positive results, particularly in a low disease incidence population, and has been used successfully for many years with, for example, hepatitis C and HIV serology.
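
As a rough illustration of why confirming positives on a second assay helps, the sketch below treats the two assays as statistically independent. That is a simplifying assumption (assays targeting different genes are not perfectly independent in practice, and the figures used are not the measured performance of any particular assay pair), but it shows the direction and scale of the effect.

# Rough illustration of reflex (confirmatory) testing, assuming the two assays
# are independent. A simplification, not a validated model of any real assay pair.
sens1, spec1 = 0.95, 0.98   # first-line assay
sens2, spec2 = 0.95, 0.98   # confirmatory assay with different targets

# Report a result as positive only if BOTH assays are positive.
combined_sens = sens1 * sens2                   # 90.25%: sensitivity falls somewhat
combined_spec = 1 - (1 - spec1) * (1 - spec2)   # 99.96%: specificity rises sharply

p = 0.01   # 1% pre-test probability, as in Table 3
ppv = (combined_sens * p) / (combined_sens * p + (1 - combined_spec) * (1 - p))
print(f"Combined specificity {combined_spec:.2%}, PPV at 1% pre-test probability {ppv:.1%}")
# Roughly 99.96% and 95.8%, compared with 98% and 32.4% for a single assay.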

This discussion is further complicated when comparing different kinds of tests: for example, SARS-CoV-2 PCR versus SARS-CoV-2 serology (which may include IgG, IgM and IgA) versus virus isolation, as well as different disease scenarios, e.g. symptomatic vs asymptomatic infection and acute versus past infection.