Perhaps a silly question, but I'm interested to hear whether researchers these days care about serial correlation, especially in panel setups. I remember from time series that this was a big issue: you'd almost always test for white noise and adjust your lag structure until the residuals were free of serial correlation. Yet in just about any modern economics paper I've read (economics is my background, but I'm also interested in other fields' input), people do not seem to care at all. They just use robust or clustered standard errors, state that these are robust to autocorrelation (and heteroskedasticity), and that's it.
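To make the standard-error point concrete, here is a minimal numpy sketch (everything simulated; the AR coefficients, N, and T are made-up illustration values, not from any real study). It builds a panel where both the regressor and the error are persistent within units, then compares naive iid standard errors with the cluster-by-unit sandwich estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 10   # units, time periods (illustrative choices)
rho = 0.7        # within-unit AR(1) persistence of both x and the error

# Simulate persistent regressor and persistent error within each unit
x = np.zeros((N, T))
e = np.zeros((N, T))
x[:, 0] = rng.normal(size=N)
e[:, 0] = rng.normal(size=N)
for t in range(1, T):
    x[:, t] = rho * x[:, t - 1] + rng.normal(size=N)
    e[:, t] = rho * e[:, t - 1] + rng.normal(size=N)
y = 1.0 * x + e  # true slope = 1

# Pooled OLS (unit-major ordering, so clusters are contiguous)
X = np.column_stack([np.ones(N * T), x.ravel()])
Y = y.ravel()
beta = np.linalg.solve(X.T @ X, X.T @ Y)
u = Y - X @ beta
XtX_inv = np.linalg.inv(X.T @ X)
k = X.shape[1]

# Naive iid standard errors (ignore the serial correlation)
s2 = u @ u / (N * T - k)
se_iid = np.sqrt(np.diag(s2 * XtX_inv))

# Cluster-robust sandwich estimator, clustering by unit:
# the "meat" sums outer products of within-cluster score sums,
# which absorbs arbitrary within-unit correlation of x*u
groups = np.repeat(np.arange(N), T)
meat = np.zeros((k, k))
for g in range(N):
    sg = X[groups == g].T @ u[groups == g]
    meat += np.outer(sg, sg)
se_cl = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))

print(f"slope SE, naive iid:      {se_iid[1]:.4f}")
print(f"slope SE, cluster-robust: {se_cl[1]:.4f}")
```

With both x and the error persistent within units, the clustered standard error comes out noticeably larger than the iid one; ignoring the serial correlation would overstate precision. (If x were iid over time, the two would be close even with AR(1) errors, since the scores x*u would be nearly uncorrelated.)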
Is that really all there is to it? Does it depend on the dimensions (number of units N, number of time periods T)? My intuition is that serial correlation plays a role in two ways. The first is through the standard errors, which will not be reliable if you do not account for serial correlation; this issue is generally solved by using robust/cluster(). The second is more complicated: is the presence of serial correlation an indication of model misspecification? In other words, can serial correlation tests help you figure out what kind of model you need to properly explain the data? E.g., do you use a static model, one in differences, or one with lags of the dependent and independent variables? Or will these tests give you a false sense of credibility (i.e., you think your model is correctly specified because your test finds no serial correlation, but the test actually isn't that informative in practice)?
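The misspecification angle can also be illustrated with a small simulation (again, all numbers are invented for the sketch; this is a simple AR(1)-in-residuals check in the spirit of such tests, not any particular canned test). The true model is dynamic, y depends on its own lag, but we fit a static pooled OLS, and the omitted dynamics surface as serially correlated residuals:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 300, 8  # illustrative panel dimensions

# True data-generating process is dynamic: y_it = 0.5*y_i,t-1 + x_it + noise
x = rng.normal(size=(N, T))
y = np.zeros((N, T))
y[:, 0] = x[:, 0] + rng.normal(size=N)
for t in range(1, T):
    y[:, t] = 0.5 * y[:, t - 1] + 1.0 * x[:, t] + rng.normal(size=N)

# Fit the misspecified static model: pooled OLS of y on x
X = np.column_stack([np.ones(N * T), x.ravel()])
Y = y.ravel()
beta = np.linalg.solve(X.T @ X, X.T @ Y)
u = (Y - X @ beta).reshape(N, T)  # residuals back in (unit, time) shape

# Simple residual diagnostic: regress u_it on u_i,t-1 (no intercept;
# OLS residuals already have mean zero) and t-test the coefficient
u_lag = u[:, :-1].ravel()
u_cur = u[:, 1:].ravel()
rho_hat = (u_lag @ u_cur) / (u_lag @ u_lag)
resid = u_cur - rho_hat * u_lag
se = np.sqrt(resid @ resid / (len(u_lag) - 1) / (u_lag @ u_lag))
t_stat = rho_hat / se
print(f"residual AR(1) coefficient: {rho_hat:.3f}, t = {t_stat:.1f}")
```

Here the residual autocorrelation is strongly significant, and it is a genuine symptom of the omitted lag of y, not just a nuisance for the standard errors. So the test does carry information about dynamics; the open question in the post, whether a clean test result certifies the specification, is the converse, and a non-rejection of course does not rule out other forms of misspecification.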
I'd be very interested to hear your opinion.