  • What is the right method of testing hypotheses in panel data regression?

    What is the right method of testing hypotheses in a random-effects panel data regression model?

    I want to test whether the coefficient on the independent variable is greater than 0, i.e. whether the independent variable has a positive influence on the dependent variable.
    How should these hypotheses be formulated? Is a t-test or an F-test applicable for testing them? If so, which one should be adopted?

  • #2
    Amalie:
    welcome to this forum.
    If the output table of, say, -xtreg- does not give you what you want, you can use -test-, as in the following toy example:
    Code:
    use "http://www.stata-press.com/data/r15/nlswork.dta"
    . xtreg ln_wage age, fe rob
    
    Fixed-effects (within) regression               Number of obs     =     28,510
    Group variable: idcode                          Number of groups  =      4,710
    
    R-sq:                                           Obs per group:
         within  = 0.1026                                         min =          1
         between = 0.0877                                         avg =        6.1
         overall = 0.0774                                         max =         15
    
                                                    F(1,4709)         =     884.05
    corr(u_i, Xb)  = 0.0314                         Prob > F          =     0.0000
    
                                 (Std. Err. adjusted for 4,710 clusters in idcode)
    ------------------------------------------------------------------------------
                 |               Robust
         ln_wage |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
    -------------+----------------------------------------------------------------
             age |   .0181349   .0006099    29.73   0.000     .0169392    .0193306
           _cons |   1.148214   .0177153    64.81   0.000     1.113483    1.182944
    -------------+----------------------------------------------------------------
         sigma_u |  .40635023
         sigma_e |  .30349389
             rho |  .64192015   (fraction of variance due to u_i)
    ------------------------------------------------------------------------------
    
    . test age
    
     ( 1)  age = 0
    
           F(  1,  4709) =  884.05
                Prob > F =    0.0000
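    Since -test- here returns an F statistic, the one-sided p-values can be recovered from it along the lines of the Stata FAQ on one-sided tests (linked later in this thread). A minimal sketch for the toy example above, to be run right after -test age-; the local macro name -sign_age- is only an illustrative choice:
    Code:
    . local sign_age = sign(_b[age])
    . display "Ho: coef <= 0 p-value = " ttail(r(df_r), `sign_age'*sqrt(r(F)))
    . display "Ho: coef >= 0 p-value = " 1 - ttail(r(df_r), `sign_age'*sqrt(r(F)))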
    Last edited by Carlo Lazzaro; 24 Apr 2019, 04:46.
    Kind regards,
    Carlo
    (StataNow 18.5)

    • #3
      Hi Carlo,

      Thanks for the reply. I have tried this, but my results look different. See below:

      Code:
       ( 1)  LognumberofCVC = 0
      
                 chi2(  1) =     6.29
               Prob > chi2 =     0.0121

      Are you perhaps able to explain why it is different?

      • #4
        Amalie:
        I would say first that they differ because -test- was run after two different regression models (yours versus the -fe- toy example above).
        That said, from the output you provide, you can say that -LognumberofCVC- is significantly different from 0.
        Kind regards,
        Carlo
        (StataNow 18.5)
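        A side note on why the statistics differ in form: after -xtreg, fe- with cluster-robust standard errors the coefficient table carries t statistics, so -test- reports an F; after -xtreg, re- it carries z statistics, so -test- reports a Wald chi2. The chi2(1) = 6.29 in #3 corresponds to |z| = sqrt(6.29), and the two-sided p-value can be checked with a one-line sketch:
        Code:
        . display 2*(1 - normal(sqrt(6.29)))
        which should return roughly .0121, in line with the output in #3.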

        • #5
          Dear Carlo,

          Okay, thank you, nice to get that confirmed.

          The above being a two-sided test, I would also like to conduct a one-sided test. I have been following this recommended approach: https://www.stata.com/support/faqs/s...-coefficients/

          Yet, because my results differ (I don't see either an F statistic or the residual degrees of freedom in my -test- output, cf. the above), I am not able to run the subsequent commands for the one-sided test.

          Can you help me on this?

          • #6
            Amalie:
            just type
            Code:
            return list
            after -test- and you'll get all the details you need to run the subsequent commands.
            Kind regards,
            Carlo
            (StataNow 18.5)

            • #7
              Okay, when I do this I get the following results:

              Code:
              . return list
              
              scalars:
                            r(drop) = 0
                            r(chi2) = 3.398229583286582
                              r(df) = 1
                               r(p) = .0652664345916576
              
              . display "Ho: coef <= 0 p-value = " ttail(r(df),`sign_CVC'*sqrt(r(p)))
              Ho: coef <= 0 p-value = .42038337
              
              . display "Ho: coef >= 0 p-value = " 1-ttail(r(df),`sign_CVC'*sqrt(r(p)))
              Ho: coef >= 0 p-value = .57961663

              Is this correct? To me, it doesn't seem to correspond to the previous result of -LognumberofCVC- being significantly different from 0. Based on this, it doesn't appear that anything can be concluded.

              Hope you can help clear this up. Your help is much appreciated.
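              A hedged reading of the output above, assuming the local -sign_CVC- was set to 1: the commands feed sqrt(r(p)) into ttail() where a test statistic belongs, so the .42038337 is simply the upper tail of a t distribution with 1 degree of freedom evaluated at sqrt(.0652664), not a p-value for the coefficient. A one-line check:
              Code:
              . display ttail(1, sqrt(.0652664345916576))
              which should return roughly .42038, reproducing the figure above.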

              • #8
                Or is there a mistake in the formula, caused by the different returned scalars?

                • #9
                  Amalie:
                  it should be:
                  Code:
                  ttail(r(df_r),`sign_wgt'*sqrt(r(F)))
                  not
                  Code:
                  ttail(r(df_r),`sign_wgt'*sqrt(r(p)))
                  Kind regards,
                  Carlo
                  (StataNow 18.5)

                  • #10
                    Dear Carlo,

                    I believe my issue stems from my initial regression command. My regression is a robust-error panel data regression: xtreg Y LognumberofCVC, re. This appears to be the reason it is not working for me. I tried the same steps after an ordinary -regress-, and the code worked perfectly.

                    Is there an alternative way to do this type of test after an xtreg Y X1 X2, re command?

                    Best Amalie
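                    One possible route, sketched as a hedged suggestion: since -test- after -xtreg, re- returns a Wald chi2 in r(chi2) (as in #3 and #7) rather than an F, the one-sided p-values can be built from the standard normal instead of the t distribution. To be typed right after -test LognumberofCVC-; the local macro name -sign_CVC- is only an illustrative choice:
                    Code:
                    . local sign_CVC = sign(_b[LognumberofCVC])
                    . display "Ho: coef <= 0 p-value = " 1 - normal(`sign_CVC'*sqrt(r(chi2)))
                    . display "Ho: coef >= 0 p-value = " normal(`sign_CVC'*sqrt(r(chi2)))
                    Equivalently, when the sign of the coefficient agrees with the alternative, the one-sided p-value is just half of the two-sided r(p).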
