
  • Creating a scale with low Cronbach's alpha?

    Hi!

    This might be a very basic question for those of you with more statistical experience than me, and it is not really technical in nature, but I hope it is okay that I post it anyway.

    I have run a survey with (among others) 3 questions that are supposed to measure creative self-efficacy. The plan was to make a scale of these three items, but when analysing the data the three items get a very low Cronbach's alpha (0.2396), which signals they should not be combined. But why is it that we want internal consistency in a scale? After thinking about it, I would think that the whole point of asking three different questions and summing the answer scores is that someone rating high on one item and lower on others just means the scores balance out and the overall result is more accurate. Sort of like triangulation. I understand that the alpha score is valuable if one expects the respondent to score consistently, but if you, like in this case, attack the problem from three different angles to get a more complete picture, does one still seek internal consistency?

    In my head this doesn't really make sense, but I'm very new to the game here. Any help clarifying the issue would be much appreciated!


    If it is of any interest these are the questions and the alpha table:
    1. I am good at generating new ideas to solve problems (generateIdeas)
    2. I have good imagination (fantasy)
    3. I am more inventive than most of my colleagues (creativeRelative)
                                                                   average
                                     item-test     item-rest     interitem
    Item         | Obs   Sign      correlation   correlation    covariance       alpha
    -------------+-------------------------------------------------------------------
    generateId~s | 161     +          0.5360        0.1202        .0954969      0.1965
    creativeR~ve | 161     +          0.6931        0.0866        .1085792      0.3086
    fantasy      | 161     +          0.6550        0.1952        .0106755      0.0268
    -------------+-------------------------------------------------------------------
    Test scale   |                                                .0715839      0.2396

    (PS: I have also tried to run the omega analysis but it cannot be computed on my data)
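
    For reference, a table like the one above is what Stata's alpha command with the item option produces; a minimal sketch, assuming the three items are stored under the variable names given in parentheses above:

        * Cronbach's alpha with per-item diagnostics (variable names taken from above)
        alpha generateIdeas fantasy creativeRelative, item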

  • #2
    The idea is that there is only one underlying (often referred to as latent) variable: creative self-efficacy. So every person in your dataset has one value on that variable, but we cannot measure it directly. Instead, we measure it by asking three different questions. Since all three are supposed to measure the same thing, they should be correlated. In fact, it is the part that all three variables have in common that is our estimate of the latent variable, while the remainders are the estimates of measurement error.
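
    To see why the correlations matter for alpha: with k items, Cronbach's alpha (in its covariance form) is

        alpha = (k * cbar) / (vbar + (k - 1) * cbar)

    where cbar is the average interitem covariance and vbar is the average item variance. If the items hardly covary, cbar is close to zero and, with only three items, alpha will necessarily be low.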

    The interitem correlations seem suspiciously low for those three variables. My first check would be to see whether there are values that should have been coded as missing. For example, -99 is often used to denote a missing value, but if you don't tell Stata that, Stata will treat -99 as a regular number, which can radically change the correlations.
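
    A quick way to check for such codes, and to declare them as missing, would be something along these lines (a sketch, using the variable names from your post):

        * look for implausible values such as -99 in the three items
        summarize generateIdeas fantasy creativeRelative
        tabulate generateIdeas, missing
        * declare -99 (if present) as missing, then re-check the correlations and alpha
        mvdecode generateIdeas fantasy creativeRelative, mv(-99)
        pwcorr generateIdeas fantasy creativeRelative, obs
        alpha generateIdeas fantasy creativeRelative, item
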
    ---------------------------------
    Maarten L. Buis
    University of Konstanz
    Department of history and sociology
    box 40
    78457 Konstanz
    Germany
    http://www.maartenbuis.nl
    ---------------------------------



    • #3
      Thanks for your quick and thorough answer!

      There was 1 missing observation, but it was the same for all variables and deleting it didn't have any effect. But I agree with you that the correlation is suspiciously low. My reasoning around this would be that you can still feel that you are good at coming up with ideas even if your colleagues are even better. In this case the study was done in a highly creative environment, so it sort of makes sense. Most of the respondents answer quite high on both fantasy and idea generation, but the variable forcing them to see themselves in relation to their colleagues creates a more nuanced picture.

      Regarding the reasoning behind the scale: couldn't a latent variable be made up of different and sometimes conflicting traits? For clarity, let's take an example where we "know" the answers to a greater degree:
      Say I am trying to make a scale to measure how good a person is at driving and I have the questions: "I always follow the driving rules", "I have a lot of driving experience" and "I can read traffic well". It would be very possible to score high on experience and reading traffic and low on following the rules. In my mind, any combination of high and low on these three items seems possible without affecting whether they measure the same thing. Of course the best driver would be the one scoring top on all three. But, at least as it seems to me, an okay driver could be made up of any combination of the three variables resulting in an okay score?
      In my dataset I can see that the fantasy variable is generating some "disturbance": 72% of the respondents who say they disagree with having good imagination also say they are good at generating new ideas. This seems reasonable because you could have a vivid imagination that is not necessarily directed towards practical problem solving, and vice versa. But couldn't one argue that yes, you can be creative by having good imagination OR by being good at generating new (perhaps practical) ideas, but that the ultimate "creative", the combination that would generate the most novel solutions, would need a combination of both?
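
      A pattern like this shows up in a simple cross-tabulation, for example (a sketch, using the same variable names as above):

        * row percentages: imagination (fantasy) against idea generation (generateIdeas)
        tabulate fantasy generateIdeas, row
        pwcorr generateIdeas fantasy creativeRelative, obs sig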



      • #4
        Originally posted by Aleksander Erichsen:
        Regarding the reasoning behind the scale: couldn't a latent variable be made up of different and sometimes conflicting traits?
        Generally speaking: no. Usually, when we speak of a "variable", we mean one dimension or trait.

        Originally posted by Aleksander Erichsen:
        Say I am trying to make a scale to measure how good a person is at driving and I have the questions: "I always follow the driving rules", "I have a lot of driving experience" and "I can read traffic well". [...] In my mind, any combination of high and low on these three items seems possible without affecting whether they measure the same thing?
        Perhaps. But the differences between high and low scores would then not be attributable to measurement error and would thus violate the basic assumption of the measurement theories that underlie Cronbach's alpha. Instead, the differences (or "conflicts") that you observe arise precisely because you are not measuring the same thing -- you are measuring three different things (dimensions, traits).

        From a substantive point of view, a good driver might indeed need to score high on different traits -- but that depends on our idea of what a "good driver" is. What you propose is close to so-called formative measurement models. Edwards (2011) points out some serious problems with formative measurement to which I have not read convincing answers.


        Edwards JR. The Fallacy of Formative Measurement. Organizational Research Methods. 2011;14(2):370-388.
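
        If one does commit to the reflective interpretation, a one-factor measurement model can be fit in Stata along these lines (a sketch; the three item variables are taken from the original post and the latent trait name CSE is just a placeholder):

            * reflective measurement model: one latent trait assumed to cause all three items
            * note: with only three indicators this model is just-identified
            sem (CSE -> generateIdeas fantasy creativeRelative)
            * redisplay with standardized loadings
            sem, standardized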



        • #5
          Note that "sets of questions/items" can be looked at in different ways; you might want to read Feinstein, AR (1999), "Multi-item 'Instruments' vs Virginia Apgar's Principles of Clinimetrics," Archives of Internal Medicine, 159: 125-8; if you want more on this, see, in the same journal, the letters (1999) 159: 1816-1817.



          • #6
            A Stata-focused presentation on the different ways to interpret the relationship between the latent variable and the observed items is here: http://maartenbuis.nl/presentations/...indicators.pdf
            ---------------------------------
            Maarten L. Buis
            University of Konstanz
            Department of history and sociology
            box 40
            78457 Konstanz
            Germany
            http://www.maartenbuis.nl
            ---------------------------------



            • #7
              Thank you so much everyone!

              Daniel, your point about reflective vs formative is actually a very interesting topic in relation to creativity. Thank you for pointing this out to me. I guess this boils down to what creativity really is. Is creativity a specific skill, or is it just the label society has chosen for people exhibiting a certain combination of skills? If it is the latter, this leads to what you are also pointing at: it depends on our idea of what creativity is. I guess this is quite an interesting discussion to have in the research I am doing.

              Also thank you to Rich and Maarten for reading suggestions. I'll jump right on them!
