Hello everyone,
I am working with a micro panel dataset with a large cross-section: N is around 20,000 and T is 10 (though I can adjust the time dimension and the sorting range). After running the command below, I obtained poor results. How can I improve my analysis? Do you think the large N makes my dataset unsuitable for System GMM? (I also tried the two-step estimator, but it did not improve the results.) For context, the variables treated as endogenous were chosen based on the literature. The results are attached.
Best regards.


Code:
xtabond2 FINV2 l.FINV2 FCF TQ CF SIZE LEV NWC SG D_DIV GDP PR ///
    year_dummy13-year_dummy21, ///
    gmm(l.FINV2, lag(1 .)) gmm(l.FCF, lag(1 .)) gmm(l.TQ, lag(1 .)) ///
    gmm(l.CF, lag(1 .)) gmm(l.SIZE, lag(1 .)) gmm(l.LEV, lag(1 .)) ///
    gmm(l.NWC, lag(1 .)) gmm(l.SG, lag(1 .)) gmm(l.D_DIV, lag(1 .)) ///
    iv(GDP PR year_dummy13-year_dummy21) robust
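
For reference, below is a sketch of the two-step variant I tried. The collapse suboption and the lag(1 3) cap were not part of my original run; I include them only as one commonly suggested way to curb instrument proliferation, and I would welcome views on whether they are appropriate here:

Code:
* Sketch only: two-step System GMM; collapse and the lag(1 3) cap are
* illustrative additions to limit the instrument count, not my original spec.
xtabond2 FINV2 l.FINV2 FCF TQ CF SIZE LEV NWC SG D_DIV GDP PR ///
    year_dummy13-year_dummy21, ///
    gmm(l.FINV2, lag(1 3) collapse) /// lagged dependent variable
    gmm(l.FCF l.TQ l.CF l.SIZE l.LEV l.NWC l.SG l.D_DIV, lag(1 3) collapse) /// endogenous regressors
    iv(GDP PR year_dummy13-year_dummy21) /// strictly exogenous instruments
    twostep robust

As I understand it, with twostep and robust together xtabond2 reports Windmeijer-corrected standard errors, and the Hansen J and AR(1)/AR(2) statistics in the output should show whether this instrument set behaves better.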