
  • #31
    Originally posted by daniel klein
    kappaetc usually gives you the observed agreement along with five chance-corrected agreement coefficients. Please show the specific output you are referring to and describe what you do not understand about it.
    Please refer to post #20 in this thread, in which the "Scott/Fleiss' Kappa" row consistently gives me -0.0185 for each of the 9 health outcomes, whereas the "percent agreement" row gives me different values. However, the value I wanted to obtain was Fleiss' kappa.



    • #32
      Type

      Code:
      matlist r(prop_e)
      after calling kappaetc for each of the 9 health outcomes. This command displays the matrix r(prop_e), which contains the expected agreement (or chance agreement). You will find that as the observed agreement changes, so does the expected agreement. Plugging both into the formula in #9 gives Fleiss' kappa coefficient, which is (nearly) identical for any possible combination of observed and expected agreement when there is only one subject. That is just how Fleiss' kappa is designed. I feel we are pretty much back to square one, where I am telling you that chance-corrected agreement coefficients are not well suited to analyse single subjects.
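
      For illustration, here is the chance correction done by hand. The two scalar values below are made-up placeholders, not output from kappaetc; type return list after kappaetc to see which results are actually stored.

      Code:
      * chance correction done by hand; the two scalars are
      * made-up placeholder values, not taken from kappaetc output
      scalar p_o = 0.80
      scalar p_e = 0.75
      display "kappa = " (p_o - p_e) / (1 - p_e)
      With only one subject, the observed and expected agreement move together in lockstep, so this quantity barely changes no matter which categories the raters pick.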



      • #33
        Let me add one additional thought, which I hope helps you understand the problem. The key idea behind chance-corrected agreement coefficients is to remove chance agreement from the observed agreement. To do so, we have to measure/estimate "chance agreement". Kappa estimates chance agreement from the frequencies with which the raters use the rating categories. If you have only one subject, then each rater uses exactly one of the rating categories (once). Thus, there is no variation in any rater's frequencies of category use.
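
        To make this concrete, here is a hypothetical single-subject example (the numbers are invented for illustration) with 5 raters and 3 categories, computed with the textbook Fleiss formulas rather than with kappaetc itself:

        Code:
        * hypothetical single subject, 5 raters, ratings: 1, 1, 1, 2, 3
        * observed agreement, (sum of n_k^2 - r) / (r*(r-1)) = .30
        display "p_o = " (3^2 + 1^2 + 1^2 - 5) / (5*4)
        * chance agreement from category-use proportions 3/5, 1/5, 1/5 = .44
        display "p_e = " (3/5)^2 + (1/5)^2 + (1/5)^2
        * Fleiss' kappa = (.30 - .44) / (1 - .44) = -.25
        display "kappa = " (.30 - .44) / (1 - .44)
        Try any other non-unanimous set of ratings for this one subject: p_o and p_e both change, but kappa always works out to -1/(r-1) = -.25, because the category proportions are completely determined by that single subject's ratings. (kappaetc's exact estimator may differ in small details, hence the "(nearly)" identical values in #32.)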

        In contrast, Brennan and Prediger (1981) assume that raters pick the rating categories with equal probability and define "chance agreement" accordingly. In this case, we can estimate chance agreement even before any subject is ever rated (technically, you should specify the number of possible rating categories if not all of them are observed in the data). As a result, the chance agreement for Brennan and Prediger's coefficient is constant across subjects, and their coefficient changes whenever the observed agreement changes. Perhaps this concept of chance agreement better suits analysis with only one subject?
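
        Continuing the hypothetical example above, Brennan and Prediger's coefficient fixes the chance agreement at 1/q for q possible categories, so only the observed agreement drives the result:

        Code:
        * same hypothetical subject (ratings 1, 1, 1, 2, 3), q = 3 categories
        * Brennan and Prediger fix chance agreement at 1/q = .3333
        display "p_e = " 1/3
        * Brennan-Prediger coefficient = (.30 - .3333) / (1 - .3333) = -.05
        display "B-P = " (.30 - 1/3) / (1 - 1/3)
        Unlike kappa, this coefficient would vary across your nine outcomes whenever the observed agreement varies.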


        Brennan, R. L., and D. J. Prediger. 1981. Coefficient kappa: Some uses, misuses, and alternatives. Educational and Psychological Measurement 41: 687-699.
        Last edited by daniel klein; 27 Jun 2020, 02:41.

