  • Why is my CFA model not converging?

    Dear all,

    We're trying to run a confirmatory factor analysis on a motivational measure with a total of 44 items, which we expect can be reduced to 10 factors. We have 311 observations in the data set.

    I ran the code below, but the model does not achieve convergence. I let the model run for more than 600 iterations, and convergence was still not achieved.

    I looked at the assumption of multivariate normality, which is not met, but as far as I understand, the Satorra-Bentler estimator should be robust to this violation.


    Does anyone have any ideas on how to get the model to converge?


    I have also tried stopping the estimation at 50 iterations, by which point the log pseudolikelihood no longer changes, and running the fit statistics on this model. Can these statistics be interpreted, or are they not valid when the model has not converged?


    All help is appreciated!

    Best,
    Astrid

    ---------------------------------------------------------------

    Code:

    sem (ONE -> item1 item11 item20 item29 item38) ///
    (TWO -> item2 item12r item21r item30 item39) ///
    (THREE -> item3 item13 item22 item31) ///
    (FOUR -> item4 item40) ///
    (FIVE -> item5 item14 item23 item32) ///
    (SIX -> item6 item15 item24 item33 item41) ///
    (SEVEN -> item7 item16 item25 item34 item42) ///
    (EIGHT -> item8 item17r item26 item35) ///
    (NINE -> item9 item18 item27 item36 item43) ///
    (TEN -> item10 item19 item28 item37 item44), standardize latent(ONE TWO THREE FOUR FIVE SIX SEVEN EIGHT NINE TEN) vce(sbentler)

    estat gof, stats(all)

  • #2
    If possible, it would help if we had the data. Note that you can use the ssd commands (e.g. ssd build) to create and share summary statistics data, so you wouldn't have to share the raw data itself. The summary statistics are things you would likely include in any published piece.
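    For example, a minimal sketch of how that could look, assuming all 44 item variables share the item prefix used in your sem command (the file name cfa_ssd is just a placeholder):

    Code:
    * turn the raw data in memory into a summary statistics dataset (SSD);
    * an SSD keeps only means, standard deviations, correlations, and N
    ssd build item*

    * inspect the result, then save it so it can be posted instead of raw responses
    ssd describe
    ssd list
    save cfa_ssd, replace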

    My general advice with convergence problems is:
    • Make sure Stata is up to date. You don't want to waste your time on a problem that was fixed months ago.
    • Add the difficult option. Occasionally it works miracles (see the sketch after this list).
    • If variables are on very different scales, rescale some of them first, e.g. measure income in thousands of dollars rather than in dollars.
    • Drop options that might be complicating things -- in your case I would especially drop the vce(sbentler) option.
    • Simplify the model and gradually build it back up. In your case you could easily start with one latent variable and gradually add more. You might be able to identify a problem variable that is causing you grief.
    These and other tips are on pages 2-3 of https://www3.nd.edu/~rwilliam/xsoc73994/L02.pdf
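    Here is a minimal sketch of what the second and fourth bullets could look like on your full model (iterate(200) is just an illustrative cap, not a recommendation):

    Code:
    * full model as in #1, but with difficult added and vce(sbentler) dropped
    sem (ONE -> item1 item11 item20 item29 item38) ///
        (TWO -> item2 item12r item21r item30 item39) ///
        (THREE -> item3 item13 item22 item31) ///
        (FOUR -> item4 item40) ///
        (FIVE -> item5 item14 item23 item32) ///
        (SIX -> item6 item15 item24 item33 item41) ///
        (SEVEN -> item7 item16 item25 item34 item42) ///
        (EIGHT -> item8 item17r item26 item35) ///
        (NINE -> item9 item18 item27 item36 item43) ///
        (TEN -> item10 item19 item28 item37 item44), ///
        standardize latent(ONE TWO THREE FOUR FIVE SIX SEVEN EIGHT NINE TEN) ///
        difficult iterate(200)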

    Also, I wouldn't trust the results from a model that had not converged, but the output might help you identify a problem spot.
    -------------------------------------------
    Richard Williams, Notre Dame Dept of Sociology
    StataNow Version: 19.5 MP (2 processor)

    EMAIL: [email protected]
    WWW: https://www3.nd.edu/~rwilliam



    • #3
      Thank you very much for the tips!

      Unfortunately, the model still will not converge. I wonder if it's due to the ratio of observations to the number of parameters we are asking the model to estimate, which is a little low from what I can gather from the literature.
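      A rough back-of-the-envelope count of the free parameters (ignoring intercepts, and assuming one scale constraint per factor, i.e. a fixed loading or a unit variance):

      Code:
      * 44 loadings + 44 error variances + 55 factor variances/covariances,
      * minus the 10 constraints that set the latent scales
      display 44 + 44 + 10*11/2 - 10
      * = 133 free parameters against N = 311, i.e. fewer than 3 observations
      * per parameter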

      I will leave the CFA for now - but thanks again!



      • #4
        You need to tell us what you tried. There are many wonderfully talented people on this forum, but as far as I know there aren't any mind readers among them. If you don't tell us what you did, we cannot tell you what went wrong.

        For example, Richard told you to simplify your model and add complications step by step. So you should have tried something like:

        Code:
        sem (ONE -> item1 item11 item20 item29 item38), latent(ONE)
        And then tried whether you could get a two-factor model to work, etc. That way you can see where the model breaks down, i.e. what the problematic part of your model is, and investigate further. Did you do that?
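        For instance, the next step might look something like this (a sketch using the same item assignments as in #1; cov(ONE*TWO) just makes the factor covariance explicit):

        Code:
        * step 2: add the second factor and its covariance with the first
        sem (ONE -> item1 item11 item20 item29 item38) ///
            (TWO -> item2 item12r item21r item30 item39), ///
            latent(ONE TWO) cov(ONE*TWO)
        * if this converges, add THREE, then FOUR, and so on, refitting at each
        * step to see where convergence first breaks down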

        Richard also told you how to give us your data. Without your data we can only guess; with your data we can help you better. You don't have to give us your data, but then our ability to help you is obviously limited.
        ---------------------------------
        Maarten L. Buis
        University of Konstanz
        Department of history and sociology
        box 40
        78457 Konstanz
        Germany
        http://www.maartenbuis.nl
        ---------------------------------



        • #5
          Originally posted by Astrid Jaeger Econ
          I wonder if it's due to the ratio of observations to the number of parameters we are asking the model to estimate
          You're right. Your model is too ambitious.
          Code:
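          // Simulate 311 observations on the 44 items in 10 independent blocks,
          // with items within a block correlated at 0.75 and items from different
          // blocks uncorrelated, then fit the 10-factor CFA to these ideally
          // behaved data, switching optimization algorithms every 10 iterations
          // and capping estimation at 40 iterations.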
          version 15.1
          
          clear *
          
          set seed `=strreverse("1502602")'
          
          local One 1 11 20 29 38
          local Two 2 12r 21r 30 39
          local Three 3 13 22 31
          local Four 4 40
          local Five 5 14 23 32
          local Six 6 15 24 33 41
          local Seven 7 16 25 34 42
          local Eight 8 17r 26 35
          local Nine 9 18 27 36 43
          local Ten 10 19 28 37 44
          
          quietly set obs 311
          
          tempname Corr
          local equations
          foreach factor in One Two Three Four Five Six Seven Eight Nine Ten {
              local tally : word count ``factor''
              matrix define `Corr' = J(`tally', `tally', 0.75) + I(`tally') * 0.25
              local item_list
              forvalues item = 1/`tally' {
                  local item_list `item_list' item`: word `item' of ``factor'''
              }
              quietly drawnorm `item_list', double corr(`Corr')
              local equations `equations' (`item_list' <- `factor')
          }
          
          sem `equations', ///
              difficult technique(nr 10 bhhh 10 bfgs 10) iterate(40) ///
              nocnsreport nodescribe
          
          exit
          Nevertheless, I do agree with Richard's and Maarten's points.

