
  • Intraclass correlation coefficient

    Hello,

    I am running into some problems calculating an intraclass correlation coefficient. I want to find the intra-rater reliability using an absolute agreement two-way mixed effects model. Here is an excerpt of my data:

    Code:
    * Example generated by -dataex-. For more info, type help dataex
    clear
    input byte(house test1 test2)
    1 13  1
    2 29 15
    3 35 36
    4 13 12
    5 36 36
    6  5  9
    7  1  1
    8 13 13
    end
    I have been using the following command:
    Code:
    kappaetc test1-test2, icc(random) id(house)
    In the output, I only get the ICC for inter-rater reliability, not intra-rater reliability. When I look at the stored results, there is a missing value for r(icc_a). Is this an issue because of my small sample size, or something else? Any help would be appreciated.
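
    In case it is useful, this is how I am checking the stored results (kappaetc here is the user-written command from SSC; return list is official Stata):

    Code:
    * rerun the command quietly and list what it stores in r()
    quietly kappaetc test1-test2, icc(random) id(house)
    return list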

    Thanks!

  • #2
    How many observations do you have for each house?



    • #3
      As George points out, you need repeated measurements per subject and rater; a hypothetical layout is sketched after this list. The example data could be interpreted in two ways:

      1. test1 and test2 represent two raters (tests, instruments) that rate 8 houses once. You cannot calculate intra-rater reliability here because there are no repeated measures.

      2. test1 and test2 represent the same rater (test, instrument) that rates 8 houses repeatedly. You could technically calculate intra-rater reliability here. However, with a single rater, you cannot generalize results beyond that specific rater. Also, for a single rater, a model assuming random sampling of raters makes no sense.
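
      For illustration, here is a hypothetical long-form layout (the numbers are made up purely to show the structure) in which both inter- and intra-rater reliability would be estimable: several raters, each rating each house on more than one occasion.

      Code:
      * hypothetical design: 2 raters, each rating each house on 2 occasions
      clear
      input byte(house rater occasion score)
      1 1 1 13
      1 1 2 12
      1 2 1 14
      1 2 2 15
      2 1 1 29
      2 1 2 28
      2 2 1 30
      2 2 2 27
      end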



      • #4
        Here is some clarification of my study: a survey was administered to 8 different households, and the results are the scores for test1. The next day, each household was re-surveyed, and that score is labeled test2. So there are 8 raters (houses) that rate the subject twice. Theoretically, people's responses should be the same on both days, but I want to measure how much their responses vary between days. Is my data in the wrong format, or do I need to use a different test for this type of analysis?



        • #5
          Let's fix the terminology. Which subject did the households (which typically consist of multiple human beings) rate? How exactly did they rate it? I think you might be confusing the rater and the subject. Say you take an IQ test. Then you are not the rater. You do not rate the IQ test. You are the subject. The IQ test is the rater (instrument).



          • #6
            Correlation doesn't seem to be the right approach; you have 2 observations per household.

            A difference would be more informative, and actually computable.

            Code:
            * day-to-day difference, summarized by household
            g diff = test1-test2
            tabstat test1 test2 diff , by(house)
            If you wanted to test for outliers (here, quite obvious), then maybe bootstrap an empirical distribution across the households.
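
            A minimal sketch of that bootstrap idea (diff comes from the code above; the reps() and seed() values are arbitrary):

            Code:
            * resample households to build an empirical distribution of the mean difference
            bootstrap mdiff=r(mean), reps(1000) seed(12345): summarize diff
            estat bootstrap, percentile
            With only 8 households the percentile intervals will be coarse, but the resampled distribution makes unusual households easy to spot.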



            • #7


              daniel klein I was confused; this makes much more sense now, thanks. So my survey would be the rater, the participants answering the survey would be the subjects, and the scores on each day would be the ratings. Since there is technically only one rater (every participant receives the exact same survey on both days), does that mean I cannot use this test?



              • #8
                Originally posted by Audrey Safir:
                Since there is technically only one rater (every participant receives the exact same survey on both days), does that mean I cannot use this test?
                With your data, you could

                Code:
                * one-way model; with a single rater, interpret as test-retest (intra-rater) reliability
                kappaetc test1 test2 , icc(oneway)
                and interpret the result as intra-rater reliability. My second point from #3 applies.
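
                As a cross-check (a sketch using official Stata's icc command on the dataex example from #1), the same one-way model can be fit after reshaping to long form:

                Code:
                * one row per household per administration; one-way ICC with house as target
                reshape long test, i(house) j(day)
                icc test house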



                • #9
                  daniel klein Thank you!

