  • #16
    Hello everyone! Your posts were really helpful, but I got stuck after Nikolay's suggestion (April 4th) to amend the linear predictor with random effects. Is it possible to extend Sandra's zero-inflated Poisson model to include two random intercepts, for did and hid?



    • #17
      Hello Asha! Sorry for my late response! I sent you a detailed example via PM, which allows you to include random effects in your estimation procedure. In addition, I have a question of my own, as I am currently having some problems implementing a random intercept and multiple random coefficients in a zero-inflated negative binomial model. Can anyone provide some insight into a possible solution using the likelihood option?



      • #18
        Thank you! I read your instructions and managed to get most of my analyses to run smoothly, but in a few cases it seems the burn-in period isn't sufficient. How do I solve this problem? Simply increasing the burn-in length seems a bit unsophisticated - sorry, I'm quite new to Bayesian statistics.



        • #19
          As you've mentioned, a first step is to increase the length of your burn-in phase, but if you are running complex models a mere increase probably won't suffice. I would recommend checking your model specification before increasing anything, to avoid unnecessarily complex models in the first place. In addition, I focus on choosing suitable prior distributions for my parameters - as general reviews of various distributions, I recommend "Statistical Distributions" by Forbes et al. and "Handbook on statistical distributions for experimentalists" by Christian Walck.
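          Should convergence remain doubtful, the chains can also be inspected directly with Stata's Bayesian postestimation commands. A minimal sketch, run after any bayesmh fit ({var_nat} stands in for whichever parameter you want to inspect):
          Code:
          bayesgraph diagnostics {var_nat}   // trace, autocorrelation, histogram, and density plots
          bayesstats ess                     // effective sample sizes and MCMC efficiencies
          Low efficiencies or strongly autocorrelated traces point to reparameterization or blocking changes rather than a longer burn-in.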



          • #20
            Dear Nikolay Balov,

            I'm quite new to Bayesian estimation (and bayesmh).
            Would you please supply sample code for a three-level random-intercept model?



            • #21
              Hello again,

              It has been some time, but after becoming quite frustrated with Bayesian analysis I've decided to give it a second try - using MCMC simulation and avoiding lots of optimization problems is too tempting. As Daniel suggested earlier in this thread, I've read a few books on prior distributions and likelihood formulations, but I still have difficulties specifying multilevel structures via bayesmh with a third or fourth random-intercept level.

              Let's assume one would like to reproduce the first example provided in this thread - except with a three-level structure:
              Code:
              use http://www.stata-press.com/data/r14/melanoma, clear
              generate random = uv*uniform()
              menbreg deaths uv random ||nation: ||region:, nolog noheader
              Following this thread, I came up with two solutions, but the results differ considerably from the menbreg estimates - aside from possible difficulties with poor mixing. A first approach would be:
              Code:
              fvset base none region nation
              bayesmh deaths, likelihood(llf(ln(nbinomialp(exp(-{lnalpha}), deaths,                    ///
                              1/(1+exp({lnalpha}+{nat:}+{reg:}+{deaths:uv random})))))) noconstant     ///
                  redefine(nat:i.nation) redefine(reg:i.region)           ///
                  prior({deaths:uv random _cons}, normal(0,100))          ///
                  prior({nat:i.nation}, normal({deaths: _cons},{var_nat}))            ///
                  prior({reg:i.region}, normal({deaths: _cons},{var_reg}))            ///
                  prior({var_nat}, igamma(0.001,0.001)) prior({lnalpha},flat)         ///
                  prior({var_reg}, igamma(0.001,0.001))                                 ///
                  block({deaths:uv random _cons}{lnalpha}) block({var_nat}{var_reg})    ///
                  exclude({reg:i.region}{nat:i.nation})                                ///
                  burnin(250000) mcmcsize(25000) thinning(25) dots(10000)            ///
                  nomodelsummary noheader rseed(123456789)
              However, another Statalist thread presented a different approach: creating a new group variable from the combination of nation and region. I tried to apply the examples given there to my problem, which resulted in this code:
              Code:
              egen natreg = group(nation region)
              fvset base none natreg
              bayesmh deaths, likelihood(llf(ln(nbinomialp(exp(-{lnalpha}), deaths,                    ///
                              1/(1+exp({lnalpha}+{nat:}+{reg:}+{deaths:uv random})))))) noconstant     ///
                  redefine(nat:i.nation) redefine(reg:i.natreg)           ///
                  prior({deaths:uv random _cons}, normal(0,100))          ///
                  prior({nat:i.nation}, normal({deaths: _cons},{var_nat}))            ///
                  prior({reg:i.natreg}, normal({deaths: _cons},{var_reg}))            ///
                  prior({var_nat}, igamma(0.001,0.001)) prior({lnalpha},flat)         ///
                  prior({var_reg}, igamma(0.001,0.001))                                 ///
                  block({deaths:uv random _cons}{lnalpha}) block({var_nat}{var_reg})    ///
                  exclude({reg:i.natreg}{nat:i.nation})                                ///
                  burnin(250000) mcmcsize(25000) thinning(25) dots(10000)            ///
                  nomodelsummary noheader rseed(123456789)
              Are these approaches any good? I have some serious doubts but can't put my finger on the problem. I would highly appreciate any comments!

              Thank you! Asha



              • #22
                Asha,

                In general you should follow the second approach, although in this particular example it does not matter. The model
                Code:
                menbreg deaths uv random ||nation: ||region:, nolog noheader
                assumes that regions are nested within nations. For the melanoma dataset, however, the values of the region variable are unique across nations, so the nested and crossed-factor specifications are in fact equivalent.
                Code:
                egen natreg = group(nation region)
                fvset base none region nation natreg
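                One way to check this uniqueness in the data (a quick sketch, not part of the model itself):
                Code:
                * each region value should occur under exactly one nation
                bysort region (nation): assert nation[1] == nation[_N]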
                Since the linear combination for the fixed effects {deaths:} does not include a constant term, including {deaths:_cons} as the mean of the hyperprior for the {nat:} random effects is fine. However, the prior mean for the {reg:} random effects should be 0; otherwise you introduce {deaths:_cons} twice as a fixed effect in the likelihood equation. Here is the model I am able to fit successfully:
                Code:
                set seed 12345
                bayesmh deaths, likelihood(llf(ln(nbinomialp(exp(-{lnalpha}), deaths,        ///
                        1/(1+exp({lnalpha}+{nat:}+{reg:}+{deaths:uv random}))))))            ///
                    noconstant redefine(nat:i.nation) redefine(reg:i.natreg)                 ///
                    prior({deaths:uv random _cons}, normal(0, 100))                          ///
                    prior({nat:i.nation}, normal({deaths: _cons}, {var_nat}))                ///
                    prior({reg:i.natreg}, normal(0, {var_reg}))                              ///
                    prior({var_nat}, igamma(0.01, 0.01)) prior({lnalpha}, normal(0, 100))    ///
                    prior({var_reg}, igamma(0.01, 0.01))                                     ///
                    block({deaths:uv random _cons}{lnalpha}) block({var_nat}{var_reg})       ///
                    exclude({reg:i.natreg}{nat:i.nation}) burnin(5000) mcmcs(20000) dots
                Once again, for this particular dataset, the above specification is equivalent to
                Code:
                set seed 12345
                bayesmh deaths, likelihood(llf(ln(nbinomialp(exp(-{lnalpha}), deaths,        ///
                        1/(1+exp({lnalpha}+{nat:}+{reg:}+{deaths:uv random}))))))            ///
                    noconstant redefine(nat:i.nation) redefine(reg:i.region)                 ///
                    prior({deaths:uv random _cons}, normal(0, 100))                          ///
                    prior({nat:i.nation}, normal({deaths: _cons}, {var_nat}))                ///
                    prior({reg:i.region}, normal(0,{var_reg}))                               ///
                    prior({var_nat}, igamma(0.01, 0.01)) prior({lnalpha},normal(0, 100))     ///
                    prior({var_reg}, igamma(0.01, 0.01))                                     ///
                    block({deaths:uv random _cons}{lnalpha}) block({var_nat}{var_reg})       ///
                    exclude({reg:i.region}{nat:i.nation}) burnin(5000) mcmcs(20000) dots
                Nikolay



                • #23
                  Thank you, Nikolay! I really appreciate your comments!



                  • #24
                    I hope for a response even though this is an old thread, but the FAQ does say to avoid starting new threads for the same or very similar questions, so here goes nothing.
                    Can anyone post a solution to the zero-inflated multilevel Poisson model that Daniel Weller sent to another Statalister by private message in #17 but never posted publicly? Even more interesting would be a multilevel version of the example in #14.

                    Thank you all.



                    • #25
                      Here is an example of a zero-inflated multilevel Poisson model with random intercepts included in both the main predictor (xb) and the inflation (xg) equations. See #8 above for the likelihood specification of the regular zero-inflated Poisson model. The random intercepts, xb_re and xg_re, defined here over the persons variable, are included for illustration purposes only.

                      Code:
                          bayesmh count, noconstant  ///
                          likelihood(llf(cond(count<=0, ///
                          ln(1/(1+exp(-{xg:}-{xg_re:})) + 1/(1+exp({xg:}+{xg_re:}))*exp(-exp({xb:}+{xb_re:}))), ///
                          count*({xb:}+{xb_re:}) - ln(1+exp({xg:}+{xg_re:})) - exp({xb:}+{xb_re:}) - lngamma(count+1)))) ///
                          xbdefine(xb: livebait) xbdefine(xg: child camper)                              ///
                          redefine(xb_re:i.persons) redefine(xg_re:i.persons)                            ///
                          prior({xb_re:i.persons}, normal({xb: _cons}, {var_xb}))                        ///
                          prior({var_xb}, igamma(0.01, 0.01))                                            ///
                          prior({xg_re:i.persons}, normal({xg: _cons}, {var_xg}))                        ///
                          prior({var_xg}, igamma(0.01, 0.01))                                            ///
                          prior({xb:} {xg:}, normal(0, 10))                                              ///
                          block({xb:}) block({xg:}) block({var_xb}{var_xg}, split)
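
                      For reference, the variables used above (count, livebait, child, camper, persons) match the fish dataset from the Stata manuals; assuming that dataset, the data can be loaded and a standard single-level zero-inflated Poisson fit for comparison (a sketch only):
                      Code:
                          use http://www.stata-press.com/data/r14/fish, clear
                          fvset base none persons
                          * single-level comparison without random intercepts
                          zip count livebait, inflate(child camper)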



                      • #26
                        Nikolay, thank you.
