
  • Concordance, sensitivity, specificity

    Hello,

    I have a very small data set with results from two diagnostic tests. One is an old test that serves as the gold standard, and we want to assess how good the new test is at diagnosing the same disease. Should I calculate the specificity, sensitivity, NPV, PPV, AUC, or concordance? There are so many statistics to choose from.


    ----------------------- copy starting from the next line -----------------------
    Code:
    * Example generated by -dataex-. For more info, type help dataex
    clear
    input byte(id oldtest newtest)
    1 1 1
    2 1 1
    3 1 0
    4 0 0
    5 1 1
    6 0 0
    7 0 0
    8 0 1
    9 1 1
    end
    ------------------ copy up to and including the previous line ------------------



    So far, I have used this code in Stata but I am not sure why it says "True D defined as newtest ~= 0 ", shouldn't the disease be defined by 1?

    diagtest oldtest newtest

    [Attachment: Screenshot 2023-02-23 at 6.22.56 PM.png — -diagtest- output]




  • #2
    So far, I have used this code in Stata but I am not sure why it says "True D defined as newtest ~= 0 ", shouldn't the disease be defined by 1?
    What -diagtest- is telling you is just repeating the way Stata handles all logical conditions: 0 is false and non-zero is true. The warning is apt in the sense that if your variable newtest contains values other than 0 and 1 (or has any missing values), then you should be aware that all of the non-zero values (including missings) are treated as if they were 1.
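To see what that rule implies, here is a small Python sketch (illustrative only, not Stata; Python's truthiness happens to follow the same non-zero-is-true convention, and the large number standing in for Stata's missing value "." is an assumption for demonstration):

```python
# Stata treats 0 as false and any non-zero value as true; missing values
# are stored internally as very large numbers, so they also count as true.
# The list below mimics a variable containing 0/1 plus stray values:
values = [0, 1, 2, -1, 8.99e307]   # 8.99e307 stands in for Stata's missing "."
as_positive = [v != 0 for v in values]
print(as_positive)  # [False, True, True, True, True]
```

This is why the warning matters: any value other than 0, including missing, would silently be counted as a positive result.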

    Should I calculate the specificity, sensitivity, NPV, PPV, AUC or concordance?
    These statistics are answers to different questions. So what is the question you wish to answer about your test?

    Sensitivity and specificity are measures of test accuracy conditional on the actual disease state. Importantly, they are inherent properties of the test itself and are not dependent on prevalence of disease.
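As an illustration of these definitions (in Python rather than Stata, and purely a hand check, not a substitute for -diagtest-), here are the sensitivity and specificity of newtest against oldtest, using the nine observations from the -dataex- listing above and treating oldtest as the true disease state:

```python
# 2x2 cross-tabulation of the posted data, oldtest taken as gold standard.
old = [1, 1, 1, 0, 1, 0, 0, 0, 1]   # gold standard (disease state)
new = [1, 1, 0, 0, 1, 0, 0, 1, 1]   # new test result

tp = sum(o == 1 and n == 1 for o, n in zip(old, new))  # true positives
fn = sum(o == 1 and n == 0 for o, n in zip(old, new))  # false negatives
tn = sum(o == 0 and n == 0 for o, n in zip(old, new))  # true negatives
fp = sum(o == 0 and n == 1 for o, n in zip(old, new))  # false positives

sensitivity = tp / (tp + fn)   # P(test positive | disease present)
specificity = tn / (tn + fp)   # P(test negative | disease absent)
print(sensitivity, specificity)  # 0.8 0.75
```

With only nine observations, of course, the confidence intervals around these point estimates will be very wide.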

    NPV and PPV are measures of test accuracy conditional on the test result. They depend on the prevalence of disease, so results from one study cannot be expected to generalize to other contexts where disease prevalence is different.
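A quick Python sketch of this prevalence dependence: plugging the sample's sensitivity (0.8) and specificity (0.75) into Bayes' rule, with the prevalence values below chosen purely for illustration:

```python
# PPV from sensitivity, specificity, and prevalence via Bayes' rule:
# P(D+ | T+) = sens*prev / (sens*prev + (1-spec)*(1-prev))
def ppv(sens, spec, prev):
    return (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))

for prev in (0.05, 0.25, 0.50):
    print(prev, round(ppv(0.8, 0.75, prev), 3))
# 0.05 -> 0.144;  0.25 -> 0.516;  0.50 -> 0.762
```

The same test looks far less useful in a low-prevalence setting, which is exactly why PPV and NPV from one study do not transfer to a population with a different prevalence.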

    The AUC is a measure of discrimination, the ability to distinguish cases from non-cases. It is not a measure of accuracy per se. The AUC is not really useful for tests with dichotomous results. It is better applied to tests with continuous score results. You can think of the AUC as follows: from the population randomly sample one person with the disease and one person without it. The AUC is the probability that the person with the disease has the higher score. The AUC is independent of disease prevalence and it is also independent of the cutoff used to categorize a result as positive.
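That pairwise interpretation can be computed directly. A Python sketch (again illustrative, using the posted data with oldtest as the disease state; ties are counted as 1/2, the usual convention):

```python
# AUC as P(randomly chosen diseased person has the higher score than a
# randomly chosen non-diseased person), ties counted as 1/2.
old = [1, 1, 1, 0, 1, 0, 0, 0, 1]   # disease state (gold standard)
new = [1, 1, 0, 0, 1, 0, 0, 1, 1]   # test "score" (binary here)

diseased = [n for o, n in zip(old, new) if o == 1]
healthy  = [n for o, n in zip(old, new) if o == 0]

pairs = [(d, h) for d in diseased for h in healthy]
auc = sum(1.0 if d > h else 0.5 if d == h else 0.0 for d, h in pairs) / len(pairs)
print(auc)  # 0.775
```

For a dichotomous test like this one, the AUC collapses to (sensitivity + specificity)/2 = (0.8 + 0.75)/2 = 0.775, which is one way of seeing why the AUC adds little beyond sensitivity and specificity when the result is binary.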

    There are actually a few different statistics that go by the name concordance. They are all similar, however: most are simple transformations of the AUC, and some are equal to it outright.

    Comment


    • #3
      Hi Clyde,

      Thank you for the thorough explanation of the statistics. My research question is whether the new test can diagnose the disease as accurately as the old test. The new test is being introduced in a low-resource area; it is cheaper and more readily available, but it will only be adopted if its accuracy is acceptable as well. Which statistics would you recommend?

      Comment


      • #4
        And your definition of acceptable accuracy is...?

        Comment
