Johnny:
welcome to this forum.
Given the very limited details that you provided, the theoretical answer is: yes, you can.
Assuming that you have a continuous regressand, see -xtreg- for static panel data regressions and -xtabond- for dynamic ones (as you mentioned a lagged dependent variable).
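For instance, a minimal sketch with a hypothetical panel (the variables id, year, lwage, exper, and hours are placeholders, not from your data):

* declare the panel structure
xtset id year
* static fixed-effects regression
xtreg lwage exper hours, fe
* dynamic regression: -xtabond- includes the first lag of lwage by default
xtabond lwage exper hours, lags(1)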
Claims of urgency are strongly discouraged around here. It might be urgent to you, but it is not urgent to us, and your priorities in no way have precedence over our priorities.
As to your question: the fixed effects model is appropriate only if you have large T. With large N and small T, your regression (a so-called dynamic regression) will suffer from something called the Nickell bias. See this source: http://fmwww.bc.edu/ec-c/s2013/823/E...n05.slides.pdf.
If you have large N and small T, look at -[XT] xtabond-, which is an appropriate estimator for dynamic models with fixed effects.
Another appropriate estimator for your equation with large N and small T is the Anderson-Hsiao estimator, which you can implement with -xtivreg- as explained in this thread: https://www.stata.com/statalist/arch.../msg00280.html
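The Anderson-Hsiao idea is to first-difference the equation to sweep out the fixed effect and then instrument the lagged dependent variable with a deeper lag. A minimal sketch with hypothetical variable names:

* Anderson-Hsiao: first-difference IV, instrumenting L.lwage with L2.lwage
xtivreg lwage exper hours (L.lwage = L2.lwage), fd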
Johnny:
Joro's reply is enlightening in this respect: you should go -xtabond-.
Going -fe- or -re- does not depend on your regression equation but on the data generating process.
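One common empirical check of the -fe- versus -re- choice (not mentioned above, but standard practice) is a Hausman test; a minimal sketch with hypothetical variables:

* estimate both specifications, store them, and compare
xtreg lwage exper hours, fe
estimates store fe
xtreg lwage exper hours, re
estimates store re
hausman fe re    // rejection points toward the fixed-effects model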
I understand now that -xtabond- is the better option if the data is large N, small T. But I would like to know: is there any reason, other than the Nickell bias, why -xtreg- is the less preferable option?
Johnny:
1) if you have a N>T panel dataset and you're interested in static panel data regression, go -xtreg-;
2) if you have a N>T panel dataset and you're interested in dynamic panel data regression, go -xtabond- (please note that the dynamic panel data regressions are much more theoretically demanding than their static counterparts).
Yes, the only reason that makes -xtreg- inappropriate in dynamic models is the Nickell bias, and the reason is serious enough. Kit Baum writes in the source I cited above:
For reasonably large values of T, the limit of (ρ̂ − ρ) as N → ∞ will be approximately −(1 + ρ)/(T − 1): a sizable value, even if T = 10. With ρ = 0.5, the bias will be −0.167, or about 1/3 of the true value. The inclusion of additional regressors does not remove this bias. Indeed, if the regressors are correlated with the lagged dependent variable to some degree, their coefficients may be seriously biased as well.
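To see the magnitude concretely, here is a simulation sketch (all names and parameter values are illustrative): an AR(1) panel with ρ = 0.5, N = 1000, T = 10, estimated by the within (fixed-effects) estimator.

clear all
set seed 12345
set obs 1000                       // N = 1000 units
generate id = _n
generate alpha = rnormal()         // unit-specific fixed effect
expand 10                          // T = 10 periods per unit
bysort id: generate t = _n
xtset id t
sort id t
generate y = alpha + rnormal() if t == 1
replace y = alpha + 0.5*L.y + rnormal() if t > 1   // AR(1) with rho = 0.5
xtreg y L.y, fe                    // within estimate of rho falls well below 0.5

The formula above predicts a bias of about −1.5/9 ≈ −0.167, so the estimated coefficient on L.y should come out near 0.33 rather than 0.5.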
There is no exact method to say what is "large" and what is "small" in econometrics, and it always depends on the context. Most econometricians (or maybe all econometricians?) would say that T=13 is small.
In short, you either choose the static model and omit the lagged value of log-wage from the right-hand side, in which case you can proceed with -xtreg-; or, if you really want lagged log-wage on the right-hand side, use either -xtabond- or the Anderson-Hsiao estimator through -xtivreg-.
A follow-up question: if I am using a panel data model with an instrumental variable (i.e., -xtivreg-) with fixed effects, do I need to worry about robust standard errors?