
  • Interrater agreement

    I have a random selection of participants from a study where two raters are rating the same 8 binary categories.

    The dataset:

    Code:
    record_id    inter_blinies    inter_pef    inter_cons    inter_ptx    inter_ef    inter_rv    inter_pe    inter_ivcci    intra_blinies    intra_pef    intra_cons    intra_ptx    intra_ef    intra_rv    intra_pe    intra_ivcci
    6    0    0    0    0    1    1    0    0    0    1    1    0    1    0    0    0
    47    0    1    1    0    0    0    0    1    1    1    1    0    1    0    0    0
    55    1    0    0    0    0    0    0    1    1    0    0    0    1    0    0    0
    71    0    1    1    0    0    0    0    1    1    1    1    0    0    0    0    0
    75    0    0    0    0    1    0    0    1    1    0    1    0    1    0    0    0
    84    0    0    0    0    1    0    0    1    0    0    0    0    0    0    0    0
    89    1    1    1    0    0    0    1    1    1    1    0    0    1    0    1    0
    90    0    1    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    93    1    1    0    0    1    0    0    1    1    0    0    0    0    0    0    0
    100    1    1    1    0    1    0    0    1    1    1    0    0    1    0    0    0
    108    0    0    0    0    0    0    0    1    0    0    0    0    0    0    0    0
    123    1    1    1    0    1    0    0    1    1    1    1    0    0    0    0    1
    124    0    1    1    0    0    0    0    1    1    1    1    0    0    0    0    0
    129    0    0    0    0    1    0    0    1    1    0    0    0    1    0    0    1
    132    1    1    1    0    1    0    0    0    1    1    1    0    1    0    0    0
    158    1    0    0    0    0    0    0    1    1    0    0    0    1    0    0    1
    163    1    0    0    0    1    0    0    0    1    0    0    0    1    0    0    0
    168    1    0    0    0    0    0    0    1    1    0    0    0    0    0    0    0
    179    0    1    1    0    1    0    0    1    0    1    0    0    0    0    0    0
    180    1    0    0    0    0    0    0    1    1    1    1    0    0    0    0    1
    185    0    1    0    0    1    0    0    1    1    1    1    0    1    0    0    1
    188    0    0    0    0    0    0    0    1    0    0    0    0    0    0    0    1
    193    0    0    0    0    0    0    0    1    0    0    0    0    0    0    0    1
    197    1    1    1    0    1    0    0    0    1    1    1    0    1    0    0    0
    205    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    I need to assess the agreement between the two raters in each category, i.e., each inter_* variable has to be compared with the corresponding intra_* variable. If I use the kappa/kap command on each pair, I get a kappa statistic for each category, but I want to report a single number for the overall agreement.
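
    For reference, the per-category analysis is simply something like this, one kap call per pair of variables on the wide data:

    Code:
    * one kappa per category
    foreach v in blinies pef cons ptx ef rv pe ivcci {
        display _n "Category: `v'"
        kap inter_`v' intra_`v'
    }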

    Can anyone help me?

    Best regards
    Michael
    Stata 17.0

  • #2
    Code:
    * one row per participant per category; the two raters' ratings
    * end up in the variables inter_ and intra_
    reshape long inter_ intra_, i(record_id) j(response) string
    
    * pooled two-rater kappa across all eight categories
    kap inter_ intra_
    Note: The kappa statistic itself will be OK here. But the Z-score and p-values probably are not because you are no longer working with independent observations: they are nested within record_id.
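
    If you also need valid inference for that pooled kappa, one option could be a cluster bootstrap that resamples whole participants (record_id) rather than individual category ratings, so the nesting is respected. A minimal sketch, assuming the reshaped data from above (the number of replications and the seed are arbitrary):

    Code:
    * cluster bootstrap of the pooled kappa, resampling participants
    bootstrap kappa = r(kappa), cluster(record_id) reps(1000) seed(12345): kap inter_ intra_
    The point estimate is just the kappa computed on the full sample; the standard error and confidence interval now come from resampling participants.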



    • #3
      Dear Clyde,

      Thank you very much. It works perfectly!

      Best regards
      Michael



      • #4
        Maybe this is also of interest to you: https://journals.sagepub.com/doi/10....867X1801800408
        Best wishes

        (Stata 16.1 MP)
