
  • Assumptions test after negative binomial regression

    Dear all,

    I have a question about testing assumptions after negative binomial regression.

    I have fitted a negative binomial regression (nbreg y x1 x2 .....).

    I would now like to assess how the assumptions of the model affect the variance of the coefficient estimates.

    Could you give me any suggestions?

    Thank you.

    Best regards.

  • #2
    I'd rewrite your inquiry, adding more detail. I have no idea what you are asking.



    • #3
      Thang:
      welcome to this forum.
      As George recommended, more details about your query are needed to give you a useful reply.
      That said, researchers usually go for -nbreg- after the results of a -poisson- regression show overdispersion.
      If this is the case, you may find https://journals.sagepub.com/doi/pdf...867X1401400406 useful.
      Kind regards,
      Carlo
      (StataNow 18.5)
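      A minimal sketch of that usual sequence (variable names y x1 x2 are placeholders, not from the original post):
      Code:
      * Fit Poisson first and check the equidispersion assumption
      poisson y x1 x2
      estat gof        // deviance and Pearson goodness-of-fit tests

      * If overdispersion is indicated, refit with nbreg;
      * nbreg reports a likelihood-ratio test of alpha = 0
      nbreg y x1 x2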



      • #4
        Dear George Ford,

        After I found overdispersion in the Poisson regression, I ran a negative binomial regression.

        Now I want to explore how the assumptions of the model affect the variance of the coefficient estimates.

        I don't know how to do that. I intend to check the assumptions of the -nbreg- model (multicollinearity, robust standard errors, .....) and compare the results with the Poisson regression to see how the standard errors and model fit change, as sketched below. I can obtain the variance-covariance matrix.

        Could you help me find the best way to approach this problem?

        Thank you.

        Best regards.
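        One way to make that comparison in Stata is sketched below (a minimal sketch; y x1 x2 are placeholder names):
        Code:
        * Fit both models and store the results
        quietly poisson y x1 x2, vce(robust)
        estimates store pois_r
        quietly nbreg y x1 x2
        estimates store nb

        * Coefficients and standard errors side by side
        estimates table pois_r nb, b(%9.4f) se(%9.4f)

        * Information criteria for comparing model fit
        estimates stats pois_r nb

        * Variance-covariance matrix of the nbreg coefficient estimates
        estimates restore nb
        estat vce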



        • #5
          Poisson regression is consistent for the beta parameters regardless of the variance-mean relationship; NegBin is not. You’re better off using Poisson regression with robust standard errors. Then the only thing to test is the functional form of the conditional mean.
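          A minimal sketch of that approach (y x1 x2 are placeholder names; the functional-form check shown here is a RESET-style test on the squared linear index, one common way to do it, not necessarily the poster's preferred test):
          Code:
          * Poisson with robust (sandwich) standard errors
          poisson y x1 x2, vce(robust)

          * RESET-style check of the conditional-mean functional form:
          * add the squared linear predictor and test its significance
          predict double xbhat, xb
          generate double xbhat2 = xbhat^2
          poisson y x1 x2 xbhat2, vce(robust)
          test xbhat2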



          • #6
            See #3. Also:
            https://www.statalist.org/forums/forum/general-stata-discussion/general/1587040-why-do-poisson-and-negative-binomial-regressions-yield-the-same-result
            and
            https://www.statalist.org/forums/forum/general-stata-discussion/general/1567752-hausman-test-for-negative-binomial-fixed-effects-and-random-effects
            Some interesting stuff here:
            https://www3.nd.edu/~rwilliam/stats3/CountModels.pdf

            In most instances, Poisson with robust standard errors is the way to go.

            The data is inherently heteroskedastic. Why test it?

            Multicollinearity is rarely worth bothering with. MC is about the X's, not the Y. You can compute the VIFs by regressing each X on the others and computing 1/(1 - R2), or just run the same specification with -regress- and then -estat vif- (a sketch is below). The Y variable doesn't influence the results (unless there are missing values).
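            A minimal sketch of that VIF check (y x1 x2 x3 are placeholder names):
            Code:
            * VIFs from an OLS fit of the same specification;
            * multicollinearity depends only on the X's
            regress y x1 x2 x3
            estat vif

            * Equivalent by hand for x1: regress it on the other covariates
            * and compute VIF = 1/(1 - R^2)
            regress x1 x2 x3
            display 1/(1 - e(r2))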

