
  • Which panel data model to choose?

    Dear community,
    For my study I'm analysing defense expenditures for all NATO countries over a period of 23 years. The Hausman test clearly indicates I should use a fixed effects model (a sketch of this comparison follows the results table below). When I test for time effects, heterogeneity, and serial correlation and adjust my model accordingly, the results do not seem very robust. How do I know whether I have a good model, and which one should I choose?
    Dependent variable: % to personnel. Standard errors in parentheses.
    Models: (1) random effects; (2) fixed effects; (3) fixed effects with time effects; (4) fixed effects with time effects and robust standard errors; (5) linear dynamic panel.

    Ideology variables
    MoD left dummy: (1) 1.512441* (0.9328862) | (2) 1.680197* (0.9126713) | (3) 0.4122638 (0.9346421) | (4) 0.4122638 (1.993553) | (5) 1.950547** (0.8087553)
    MoD right dummy: (1) 1.135277 (0.921039) | (2) 1.649345* (0.9059467) | (3) 0.7071812 (0.9093065) | (4) 0.7071812 (1.473121) | (5) 1.4195 (0.8745139)

    Control variables
    GDP: (1) -0.0008396 (0.0005681) | (2) 0.0005658 (0.0008544) | (3) 0.0012762 (0.0008648) | (4) 0.0012762 (0.0008444) | (5) -0.0007733* (0.0004283)
    Unemployment rate: (1) 0.6657644*** (0.1027658) | (2) 0.6452617*** (0.1009946) | (3) 0.3464043** (0.1121592) | (4) 0.3464043* (0.1788313) | (5) 0.6465263*** (0.0979289)
    Armed forces in % of labour force: (1) -1.252134 (0.9430639) | (2) -2.397579** (0.9749935) | (3) -3.044518** (1.135696) | (4) -3.044518 (2.392644) | (5) 2.022514** (0.9780823)
    WGIscore: (1) 0.0081857** (0.0035284) | (2) 0.0115299*** (0.0036109) | (3) 0.0063384* (0.0039227) | (4) 0.0063384 (0.0061555) | (5) -0.0013217 (0.0034632)
    Distance to Moscow: (1) 0.000933 (0.0012788) | (2)-(5) /
    Lag % personnel spending: (1)-(4) / | (5) 0.4943256*** (0.0406795)
    Constant: (1) 46.36936*** (3.527201) | (2) 47.28703*** (2.323946) | (3) 51.98476*** (6.100107) | (5) 20.50039*** (2.395178)

    Number of observations: (1) 546 | (2) 546 | (3) 546 | (4) 546 | (5) 517
    Number of groups: 28 in each model
    R² within: (1) 0.0909 | (2) 0.1003 | (3) 0.1866 | (4) 0.1866
    R² between: (1) 0.1197 | (2) 0.0973 | (3) 0.1644 | (4) 0.1644
    R² overall: (1) 0.1177 | (2) 0.0076 | (3) 0.0336 | (4) 0.0336
    F: 9.51***, 4.52***, 285.2***
    Wald chi2: 54.94***, 422.66***
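    A minimal sketch of the Hausman comparison described above, written in Python with the linearmodels package rather than Stata. The file name and column names (nato_panel.csv, country, year, pct_personnel, and the regressor names) are hypothetical placeholders for the poster's data, not the actual setup.

    ```python
    # Hypothetical sketch: fixed-effects vs. random-effects comparison (Hausman test).
    # File and column names are placeholders, not the poster's actual data.
    import numpy as np
    import pandas as pd
    from scipy import stats
    from linearmodels.panel import PanelOLS, RandomEffects

    df = pd.read_csv("nato_panel.csv").set_index(["country", "year"])

    y = df["pct_personnel"]  # dependent variable: % to personnel
    X = df[["mod_left", "mod_right", "gdp", "unemployment", "armed_forces_share", "wgi_score"]]

    fe = PanelOLS(y, X, entity_effects=True).fit()    # country fixed effects
    re = RandomEffects(y, X.assign(const=1.0)).fit()  # random effects with a constant

    # Hausman statistic: (b_FE - b_RE)' [V_FE - V_RE]^{-1} (b_FE - b_RE) ~ chi2(k),
    # computed over the coefficients that appear in both models.
    common = fe.params.index
    b = (fe.params - re.params[common]).values
    v = (fe.cov.loc[common, common] - re.cov.loc[common, common]).values
    h_stat = float(b @ np.linalg.inv(v) @ b)
    p_val = stats.chi2.sf(h_stat, df=len(common))
    print(f"Hausman chi2({len(common)}) = {h_stat:.2f}, p = {p_val:.4f}")
    # A small p-value favours the fixed-effects specification, as in the post.
    ```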

  • #2
    Originally posted by cind du bois:
    When I test for time effects, heterogeneity, and serial correlation and adjust my model accordingly, the results do not seem very robust.
    By "not robust," I understand this to mean that the results change from one specification to the next. In \(N>T\) panels where you are dealing with non-experimental data (no random assignment of units to treatment), the most robust model is the two-way fixed effects model with cluster-robust standard errors (no pun intended!). You have 28 clusters which is on the low side, but you can get away with clustering. Therefore, you should not compare this specification to the rest, except for pedagogic reasons.


    GDP: (1) -0.0008396 (0.0005681) | (2) 0.0005658 (0.0008544) | (3) 0.0012762 (0.0008648) | (4) 0.0012762 (0.0008444)
    WGIscore: (1) 0.0081857** (0.0035284) | (2) 0.0115299*** (0.0036109) | (3) 0.0063384* (0.0039227) | (4) 0.0063384 (0.0061555)
    Also, you will want to rescale your GDP and WGIscore variables so that their coefficients are readable.
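    For example, one quick way to do the rescaling (hypothetical column names; adjust the divisors to whatever units GDP and WGIscore are actually measured in):

    ```python
    # Hypothetical sketch: rescale GDP and WGIscore so their coefficients are readable.
    import pandas as pd

    df = pd.read_csv("nato_panel.csv")
    df["gdp_bn"] = df["gdp"] / 1_000        # e.g. millions -> billions; adjust to your units
    df["wgi_100"] = df["wgi_score"] / 100   # e.g. express WGIscore per 100 points
    # Refit the models with gdp_bn and wgi_100; only the scale of the coefficients changes.
    ```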
    Last edited by Andrew Musau; 02 Feb 2024, 02:58.



    • #3
      Dear Andrew,

      Thank you very much for your answer. So, you mean that the 'best' model for my data is the fixed effects model with time effects and robust standard errors (column 4 in my table)?


      Thanks also for the tip about rescaling!

      C



      • #4
        Correct. Do not report the constant for the FE models. It is meaningless.
