
  • testing the assumptions for panel data

    I have a similar question:

    I am evaluating the impact of options on the payout decision for firms in the period 2012-2016. The panel data set is unbalanced and I'm aware that the data could be incomplete.
    Variable              N     Mean      Std. Dev.   Min       Max
    Dependent variables:
      Repurchase Payout   708    0.0021    0.0098      0.0000    0.1899
      Dividend Payout     678    0.0323    0.0660      0.0000    0.9686
    Independent variable:
      Options             782    0.0044    0.0297      0.0000    0.5684
    Control variables:
      Free Cash Flow      778   -0.0215    0.2049     -2.3313    0.4926
      Leverage            794    0.2830    0.2352      0.0000    1.9068
      Financing Costs     800   21.5723    2.2830     15.2656   28.6068
    I started by doing the following:
    xtset company_id year


    I want to know which model I should be using:
    • I conducted a Breusch-Pagan LM test for RE vs. pooled OLS (xttest0); a sketch of this workflow appears after this list:
      -> rejected H0 for the dividend variable (xtreg: Dividend = Options + Free Cash Flow + Leverage + Financing Costs)
      -> failed to reject H0 for the repurchase variable (xtreg: Repurchase = Options + Free Cash Flow + Leverage + Financing Costs)
      ---> Is it possible to use two different models for these regressions when they are based on the same dataset?
    • Additionally, I visually inspected the residual distributions to check for heteroskedasticity (as Carlo suggested in another thread), with the following results:
      [Attachments: residual distribution plots (dividend on the left, repurchase on the right)]
      --> How do I interpret these plots? If there is evidence of heteroskedasticity, how do I fix it?
    • What other tests should I run in order to see if the assumptions hold?
    And how do I export the test results to Word (preferably in RTF format)?
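
    For concreteness, here is a minimal sketch of the steps above. The variable names (div_payout, rep_payout, options, fcf, leverage, fincost) are placeholders, not the actual names in my dataset:

        xtset company_id year

        * random-effects model for the dividend payout, then the Breusch-Pagan LM test
        xtreg div_payout options fcf leverage fincost, re
        xttest0    // H0: Var(u_i) = 0, i.e. pooled OLS is adequate

        * same steps for the repurchase payout
        xtreg rep_payout options fcf leverage fincost, re
        xttest0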

    All answers are appreciated.
    Kind regards,
    Ola

  • #2
    You didn't get a quick answer. You'll increase your chances of a useful answer by following the FAQ on asking questions: provide Stata code in code delimiters, readable Stata output, and sample data using dataex. Many of us won't open attached files, so put the material in the posting itself.

    Normally, we would start by checking random versus fixed effects; those estimations come with tests of whether the panel effects are needed at all (a sketch follows below). It is quite possible that the tests support different models for different dependent variables. What to do about that is not clear-cut. Some would report different estimators for the different dependent variables, and others would report the one estimator that is consistent, if not efficient, for both.
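
    A sketch of that comparison, reusing the placeholder variable names from the post above:

        xtreg div_payout options fcf leverage fincost, fe
        estimates store fe
        xtreg div_payout options fcf leverage fincost, re
        estimates store re
        hausman fe re    // H0: RE is consistent and efficient; rejection favors FE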

    For exporting, there are many user-written commands; a sketch follows below. I like outreg2, but if you search this listserver (or use -findit- within Stata), you'll find several others that also work fine. I don't usually bother exporting the test results: you're talking about a few numbers that I would normally put in the text of my paper.
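
    For example, a minimal outreg2 sketch (assuming it is installed via -ssc install outreg2-, and reusing the placeholder names):

        xtreg div_payout options fcf leverage fincost, re
        outreg2 using results, word replace ctitle(Dividend)
        xtreg rep_payout options fcf leverage fincost, re
        outreg2 using results, word append ctitle(Repurchase)

    The -word- option writes results.rtf, which opens directly in Word.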



    • #3
      Ola:
      as an aside to Phil's helpful advice, if you have detected heteroskedasticity in your residual distribution, you can simply invoke -robust- or -cluster- standard errors (unlike under -regress-, the two options do the same job under -xtreg-). However, in that case you can no longer use -hausman- to compare the -fe- vs -re- specifications (as Phil wisely recommended), because -hausman- allows default standard errors only. The easy fix is to install the user-written programme -xtoverid- (just type -search xtoverid- from within Stata to spot and install it), which allows non-default standard errors and tests whether the -re- specification is still the way to go; a sketch follows below.
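
      A minimal sketch of that fix, with the placeholder variable names used earlier in the thread:

          * after installing -xtoverid- as described above
          xtreg div_payout options fcf leverage fincost, re vce(cluster company_id)
          xtoverid    // Sargan-Hansen overidentification test; H0: the -re- specification is valid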
      As usual with linear regression models, the most relevant test should be aimed at checking whether the relationships between the regressand and the predictors are actually linear (something we check via -estat ovtest- after -regress-). Since -estat ovtest- won't work after -xtreg-, you can perform a Pregibon test yourself, following the indications reported in the valuable https://www.stata.com/bookstore/heal...s-using-stata/ (pages 58-60); a sketch follows below.
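
      A hand-rolled Pregibon-type test might look like the sketch below (placeholder names again; the idea is to refit the model on the linear prediction and its square, then test the squared term):

          quietly xtreg div_payout options fcf leverage fincost, re
          predict double xb, xb       // linear prediction
          generate double xb2 = xb^2
          xtreg div_payout xb xb2, re
          test xb2                    // H0: no misspecification (the squared fit adds nothing)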
      Kind regards,
      Carlo
      (Stata 19.0)
