  • How to compare β coefficients from two different logistic regression models using a permutation test

    Hi everyone,

    I am running two logistic regression models in which only the dependent variables differ; all six independent variables are the same.
    I want to test whether one of the β coefficients from one model is larger than the corresponding coefficient from the other model using a permutation test, rather than a suest/Hausman specification test.
    However, I have no idea what commands to type to do this in Stata.

    I would really appreciate any help I might get here.

    Yuki Ishikawa

  • #2
    The regressions I ran were as follows:
    logit y a b c d e f g, vce(robust)
    logit z a b c d e f g, vce(robust)

    I found a similar question on Statalist, but it did not help me:
    https://www.statalist.org/forums/for...ing-difference



    • #3
      You'd have to permute the outcome variable in the context of something like suest in order to get the test statistic returned to permute.

      You'd first rename the two outcome variables, say, to out0 and out1, then reshape long out, . . . j(which), and write a program that fits logit out a b c d e f g if which and . . . if !which (with which permuted), tests the equality of the regression coefficients (with, for example, suest), and returns the test statistic.

      That's a lot of rigamarole, and with a permutation test, you run the risk of nonconvergence, especially with so many explanatory variables in the logistic regression model.

      What is your issue with the conventional approaches?
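      For reference, a minimal sketch of what I mean by the conventional approach, assuming the variable names from your posts (outcomes y and z, predictors a through g). Note that suest computes robust standard errors itself, so the models are fit with the default VCE here:
      Code:
      quietly logit y a b c d e f g
      estimates store Y
      
      quietly logit z a b c d e f g
      estimates store Z
      
      * suest combines the stored results; the equations are named Y_y and Z_z
      suest Y Z
      test [Y_y]a = [Z_z]a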



      • #4
        Here:
        Code:
        version 15.1
        
        clear *
        
        set seed `=strreverse("1488678")'
        quietly set obs 250
        
        generate byte y = runiform() > 0.5
        generate byte z = runiform() > 0.5
        
        generate double a = runiform()
        
        gsem (y <- c.a) (z <- c.a), logit nocnsreport nodvheader nolog
        test _b[z:a] = _b[y:a]
        
        *
        * Begin here
        *
        rename y out0
        rename z out1
        generate long row = _n
        quietly reshape long out, i(row) j(which)
        
        program define tester, rclass
            version 15.1
            syntax
        
            quietly logit out c.a if which
            estimates store A
        
            quietly logit out c.a if !which
            estimates store B
        
            suest A B
            test [A_out]a = [B_out]a
            return scalar chi2 = r(chi2)
        
            estimates drop A B
        end
        
        tester
        
        permute which chi2 = r(chi2), reps(3000) nodots: tester
        
        exit
        It doesn't buy you much for all your trouble—p = 0.63 from the permutation test versus 0.64 from the two conventional methods. Yes, you can call for one-sided tests with permute, but you can test directional hypotheses with the conventional methods, as well.



        • #5
          Sorry, I forgot to stratify on the row of data (that is, randomly swap y and z within an observation).
          Code:
          permute which chi2 = r(chi2), strata(row) reps(3000) nodots: tester
          It doesn't matter in the illustration above (the p-value goes from 0.63 to 0.65), because there is no correlation between the outcomes, but in your case, it might.
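          If you want to see a setting where it does matter, here is a rough sketch of my own (not part of the original example) that makes the two outcomes correlated through a shared latent component; it assumes the tester program from #4 is still in memory:
          Code:
          * correlated outcomes via a shared latent component u
          clear
          set seed 20200101
          quietly set obs 250
          
          generate double u = rnormal()
          generate byte out0 = (u + rnormal()) > 0
          generate byte out1 = (u + rnormal()) > 0
          generate double a = runiform()
          generate long row = _n
          
          quietly reshape long out, i(row) j(which)
          
          * permuting within rows keeps the within-observation pairing intact
          permute which chi2 = r(chi2), strata(row) reps(3000) nodots: tester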



          • #6
            Come to think of it, you could make the test go faster by using the difference between the coefficients itself as the test statistic. Resampling gives its reference distribution under the null hypothesis.
            Code:
            capture program drop tester  // drop the version defined in #4 before redefining
            program define tester, rclass
                version 15.1
                syntax
            
                logit out c.a if !which
                tempname ya
                scalar define `ya' = _b[a]
            
                logit out c.a if which
                
                return scalar d = _b[a] - `ya'
            end
            
            permute which d = r(d), strata(row) reps(3000) nodots: tester
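            As an aside, the signed difference also lends itself to a directional test; if I recall correctly, -permute- has left and right options for one-sided p-values, along the lines of:
            Code:
            * one-sided (right-tail) permutation p-value for the coefficient difference
            permute which d = r(d), strata(row) reps(3000) right nodots: tester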



            • #7
              Yes, I like using the difference of slopes as the test statistic. Saving computation time probably doesn't matter in many situations, but one of the virtues of permutation methods is that you can use whatever test statistic you like, including one that measures a quantity of direct substantive interest, i.e., the slope.
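              For instance, here is a rough sketch of my own (with a hypothetical program name, tester_ame, and assuming the reshaped data from #4 are in memory) that permutes the difference in average marginal effects of a rather than the difference in logit coefficients:
              Code:
              capture program drop tester_ame
              program define tester_ame, rclass
                  version 15.1
                  syntax
              
                  * average marginal effect of a in the first outcome's model
                  quietly logit out c.a if !which
                  quietly margins, dydx(a) post
                  tempname ame0
                  scalar define `ame0' = _b[a]
              
                  * average marginal effect of a in the second outcome's model
                  quietly logit out c.a if which
                  quietly margins, dydx(a) post
              
                  return scalar dame = _b[a] - `ame0'
              end
              
              permute which dame = r(dame), strata(row) reps(3000) nodots: tester_ame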



              • #8
                Joseph, Mike,

                I really appreciate your useful advice. Although it took some time to understand and run the commands on my data set, they completely solved the problem.
                There were no issues with the conventional approaches; I just wanted to try another approach to confirm the results.
                The p-values obtained were quite comparable, although those from the conventional approach were slightly smaller.

                Thanks so much again.



                • #9
                  @Mike Lacy @Joseph Coveney Hi Mike and Joseph, I'm sorry to ask a question in this thread, as it is five years old, but I have a similar query and couldn't find the answer anywhere else. I want to test whether the coefficient difference is significant using your code in #6. However, after running -permute-, it only generates the two regression results without testing the significance of the coefficient difference. Do you possibly know why? Many thanks to you!



                  • #10
                    Originally posted by Jae Li:
                    . . . after using -permute-, it only generates two regression results without testing the significance of the coefficient difference. Do you possibly know why?
                    You don't show anything, but I assume your code is the code shown earlier in this thread.

                    I show a couple of suggestions in your other thread. Take a look at them there.

