  • Determining which groups in a chi-square table differ from each other.

    Hello,

    I have a general question about determining which groups differ from each other after a chi-square test. The following data come from the 2008 Canadian Community Health Survey. I am not interested in the specifics of this particular analysis (receiving tangible social support x education), but I am interested in the general form of the analysis. For example:

    Code:
    . tab rectan educ, chi2 cchi2 expected

    +--------------------+
    |        Key         |
    |--------------------|
    |     frequency      |
    | expected frequency |
    | chi2 contribution  |
    +--------------------+

          Received |
          tangible |        Highest level/edu. - HH 4 levels - (D)
    social support | < THAN SE  SECONDARY  OTHER POS  POST-SEC. |     Total
    ---------------+--------------------------------------------+----------
                 0 |     1,507      1,529        845      4,880 |     8,761
                   |   1,759.3    1,513.0      826.1    4,662.6 |   8,761.0
                   |      36.2        0.2        0.4       10.1 |      46.9
    ---------------+--------------------------------------------+----------
               YES |     1,172        775        413      2,220 |     4,580
                   |     919.7      791.0      431.9    2,437.4 |   4,580.0
                   |      69.2        0.3        0.8       19.4 |      89.8
    ---------------+--------------------------------------------+----------
             Total |     2,679      2,304      1,258      7,100 |    13,341
                   |   2,679.0    2,304.0    1,258.0    7,100.0 |  13,341.0
                   |     105.4        0.5        1.3       29.5 |     136.7

              Pearson chi2(3) = 136.6756   Pr = 0.000


    As you can see, the chi-square test is significant. But I want to know which column proportions are significantly different from the others. I think a z-test would be appropriate for testing the difference in proportions, but I am unsure of how to go about this in Stata 13. Is there a relatively straightforward way of testing these differences?
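
    For one pair of columns, the kind of comparison I have in mind would look something like this (a sketch using prtesti, Stata's two-sample test of proportions in immediate form, with the counts from the table above; whether this is a sensible overall strategy is exactly what I'm asking):

    Code:
    * compare the proportion receiving tangible support in the
    * "< than secondary" column (1,172/2,679 = .4375) with the
    * "post-sec." column (2,220/7,100 = .3127)
    prtesti 2679 .4375 7100 .3127

    * with 4 columns there are 6 pairwise comparisons, so one simple
    * adjustment would be a Bonferroni threshold of .05/6
    display .05/6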

    Cheers,

    David.
    Last edited by David Speed; 19 Jan 2017, 09:51. Reason: The table did not format properly in the post, despite looking normal while editing.

  • #2
    The way to get a readable display of output is by using CODE delimiters as explained in the FAQ Advice linked on this page.

    I think it's usually mistaken to think in terms of further detailed tests once you have already done a chi-square test. Further analysis is usually more fruitful through informal analysis of residuals. See e.g. tabchi from tab_chi (SSC).



    • #3
      Thank you, Nick - I will re-read the FAQ to try to avoid that issue in the future.

      Could you clarify why it's mistaken to think of further detailed tests after the chi-square? Usually significant statistical tests indicate a further need to investigate differences (e.g., ANOVAs and contrasts), don't they?



      • #4
        It's immensely more contentious than that. I'd say rather that there is a culture of post hoc testing after ANOVA which seems more problematic than its practitioners admit. Once tests are carried out post hoc after other tests, the meaning of a test becomes even more questionable, unless you penalise yourself heavily.



        • #5
          I fully agree about the risks of post hoc testing, particularly without controlling the familywise error rate.

          Among users of other statistical packages (such as SPSS), I gather chi-square residuals are in much more common parlance, so to speak.

          As far as I can tell, though, Stata doesn't automatically provide the residuals from a chi-square test.

          However, the user-written command tabchi, from the tab_chi package on SSC (by Nick Cox), can do precisely that, as Nick pointed out in #2.

          Below is a toy example:

          Code:
          . sysuse auto
          (1978 Automobile Data)
          
          . gen myrep = rep78
          (5 missing values generated)
          
          . replace myrep = 1 if myrep <3
          (8 real changes made)
          
          . tab myrep
          
                myrep |      Freq.     Percent        Cum.
          ------------+-----------------------------------
                    1 |         10       14.49       14.49
                    3 |         30       43.48       57.97
                    4 |         18       26.09       84.06
                    5 |         11       15.94      100.00
          ------------+-----------------------------------
                Total |         69      100.00
          
          . tab myrep foreign, exp chi2
          
          +--------------------+
          | Key                |
          |--------------------|
          |     frequency      |
          | expected frequency |
          +--------------------+
          
                     |       Car type
               myrep |  Domestic    Foreign |     Total
          -----------+----------------------+----------
                   1 |        10          0 |        10
                     |       7.0        3.0 |      10.0
          -----------+----------------------+----------
                   3 |        27          3 |        30
                     |      20.9        9.1 |      30.0
          -----------+----------------------+----------
                   4 |         9          9 |        18
                     |      12.5        5.5 |      18.0
          -----------+----------------------+----------
                   5 |         2          9 |        11
                     |       7.7        3.3 |      11.0
          -----------+----------------------+----------
               Total |        48         21 |        69
                     |      48.0       21.0 |      69.0
          
                    Pearson chi2(3) =  27.2640   Pr = 0.000
          
          . tabchi myrep foreign, a
          
                    observed frequency
                    expected frequency
                    adjusted residual
          
          ------------------------------
                    |      Car type    
              myrep | Domestic   Foreign
          ----------+-------------------
                  1 |       10         0
                    |    6.957     3.043
                    |    2.262    -2.262
                    |
                  3 |       27         3
                    |   20.870     9.130
                    |    3.236    -3.236
                    |
                  4 |        9         9
                    |   12.522     5.478
                    |   -2.098     2.098
                    |
                  5 |        2         9
                    |    7.652     3.348
                    |   -4.040     4.040
          ------------------------------
          
          
          2 cells with expected frequency < 5
          
                    Pearson chi2(3) =  27.2640   Pr = 0.000
           likelihood-ratio chi2(3) =  29.9121   Pr = 0.000

          In general, absolute residuals beyond 3 are considered somewhat "implicated" in the statistical difference; depending on the data, even values beyond 2 may be.
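
          As a rough sense of those cut-offs (assuming the adjusted residuals are approximately standard normal, which is the usual approximation), the implied two-sided tail areas are:

          Code:
          * two-sided tail area implied by an adjusted residual of 2 or 3,
          * under the standard normal approximation
          display 2*normal(-2)     // about .046
          display 2*normal(-3)     // about .003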

          Hopefully that helps
          Last edited by Marcos Almeida; 19 Jan 2017, 18:46.
          Best regards,

          Marcos



          • #6
            Hi Nick/Marcos - I absolutely agree with you with regard to FWER and the "meaning" of dozens of post hoc tests. However, calculating the correct alpha level is trivial once one knows the number of comparisons to be made. Incidentally, SPSS has an option to apply Bonferroni corrections when using z-tests for proportion comparisons, but I'm not sure how many people avail themselves of it.

            Besides the inflated Type I error rate, is there an additional issue with using post-hoc tests?

            RE: tabchi and "adjusted residuals". I understand what residuals are, but how are the residuals adjusted? I read the help file for it, but it doesn't go into detail (I'm guessing there's a good reason for this, but I'm just not experienced enough with Stata to determine it). Additionally, why is it that residuals >3 (or >2) are of note? I'm sorry if this question is basic; I was just curious. If I'm able to run post-hoc comparisons within Stata instead of switching between Stata and SPSS, the analyses I have to do become simpler.



            • #7
              Adjusted residuals are explained in the help for tabchi as

              Pearson residuals divided by an estimate of their standard error.

              and the code should help:

              Code:
                              gen double `adj' = `Pearson' / sqrt((1 - `rowsum' / `tabsum') * (1 - `colsum'/`tabsum'))
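
              For a numeric check of that formula, plug in the (myrep == 3, Domestic) cell from the table in #5: observed 27, expected about 20.870, row total 30, column total 48, grand total 69. Up to rounding of the expected frequency, this reproduces the 3.236 that tabchi reports for that cell:

              Code:
              * Pearson residual / sqrt((1 - rowsum/tabsum) * (1 - colsum/tabsum))
              * for the (myrep == 3, Domestic) cell of the table in #5
              display ((27 - 20.870)/sqrt(20.870)) / sqrt((1 - 30/69)*(1 - 48/69))
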
              I don't know a canned way to follow up such analyses with post hoc tests in Stata, certainly not in any program I've written.



              • #8
                However, calculating the correct alpha level is trivial once one knows the number of comparisons to be made.
                I fear I do not agree with this, unless you mean the Bonferroni correction, which is generally considered too conservative.

                Besides the inflated Type I error rate, is there an additional issue with using post-hoc tests? [...] Additionally, why is it that residuals >3 are of note (or >2)? I'm sorry if this question is basic, I was just curious.
                These are very important questions. But I'm afraid the answers are widely available in practically any good book on the matter. There, I'm absolutely sure you will find thorough - and didactic - explanations of these issues.

                All in all, I just wish to finish with two take-home messages. First, IMHO, post hoc tests should never be taken as an afterthought, so to speak, no matter the "correction" for the familywise error. Second, residuals are usually interpreted like this: the larger, the more "different" from the average values. That said, I believe a standardized residual of 2 (rounded from 1.96) "calls for" 95% and 3 "calls for" 99%, in terms of the "empirical rule"...

                Hopefully that helps!
                Last edited by Marcos Almeida; 21 Jan 2017, 13:26.
                Best regards,

                Marcos



                • #9
                  Hi Marcos, thank you for your detailed reply. I'm not sure whether we're in disagreement or just using different wording regarding multiple comparisons. The formula for FWER is:

                  FWER = 1 - (1 - alpha)^n.

                  As a researcher, if I knew I wanted a .05 familywise alpha level and was going to make 10 comparisons, then I could plug those numbers into the formula and get:

                  .05 = 1 - (1 - alpha)^10, which with a bit of algebra gives:

                  alpha = .0051
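
                  (A quick check of that algebra in Stata:)

                  Code:
                  * per-comparison alpha that keeps the FWER at .05 across
                  * 10 independent comparisons
                  display 1 - (1 - .05)^(1/10)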

                  This is what I had meant when I said "trivial"; or am I mistaken about something? Thanks for explaining the >2 or >3 threshold, that makes sense. Could you recommend a good textbook or article regarding the perils of multiple comparisons? Unfortunately, the statistics textbooks I have on the topic (Field; Kerlinger et al.) only seem to caution against inflated error and "fishing" expeditions, but don't discuss much else.



                  • #10
                    David, bear in mind that the formula you show in #9 assumes that each contrast is independent of all of the other contrasts, and if you're making all pair-wise comparisons, they are not all independent.

                    I also wanted to suggest another approach that could be used if you have an a priori set of contrasts that partition the overall table into orthogonal components. In that case, it is convenient to use the likelihood-ratio chi-square, because the overall LR chi-square with df = k can be partitioned into k orthogonal components that sum to the overall LR chi-square. Here is an example, using the same data Marcos used in #5.

                    Code:
                    . sysuse auto, clear
                    (1978 Automobile Data)
                    
                    . gen myrep = rep78
                    (5 missing values generated)
                    
                    . replace myrep = 1 if myrep <3
                    (8 real changes made)
                    
                    . tab myrep
                    
                          myrep |      Freq.     Percent        Cum.
                    ------------+-----------------------------------
                              1 |         10       14.49       14.49
                              3 |         30       43.48       57.97
                              4 |         18       26.09       84.06
                              5 |         11       15.94      100.00
                    ------------+-----------------------------------
                          Total |         69      100.00
                    
                    . tab myrep foreign, exp chi2 lr
                    
                    +--------------------+
                    | Key                |
                    |--------------------|
                    |     frequency      |
                    | expected frequency |
                    +--------------------+
                    
                               |       Car type
                         myrep |  Domestic    Foreign |     Total
                    -----------+----------------------+----------
                             1 |        10          0 |        10
                               |       7.0        3.0 |      10.0
                    -----------+----------------------+----------
                             3 |        27          3 |        30
                               |      20.9        9.1 |      30.0
                    -----------+----------------------+----------
                             4 |         9          9 |        18
                               |      12.5        5.5 |      18.0
                    -----------+----------------------+----------
                             5 |         2          9 |        11
                               |       7.7        3.3 |      11.0
                    -----------+----------------------+----------
                         Total |        48         21 |        69
                               |      48.0       21.0 |      69.0
                    
                              Pearson chi2(3) =  27.2640   Pr = 0.000
                     likelihood-ratio chi2(3) =  29.9121   Pr = 0.000
                    
                    . local lr0 = r(chi2_lr)
                    
                    . local p0 = r(p_lr)
                    
                    .
                    . * LR Chi-square for 2x2 table with categories 1 & 3 pooled
                    . * and categories 4 & 5 pooled.
                    . recode myrep (1 3 = 13) (4 5 = 45), gen(myrep2)
                    (69 differences between myrep and myrep2)
                    
                    . tab myrep myrep2
                    
                               |    RECODE of myrep
                         myrep |        13         45 |     Total
                    -----------+----------------------+----------
                             1 |        10          0 |        10
                             3 |        30          0 |        30
                             4 |         0         18 |        18
                             5 |         0         11 |        11
                    -----------+----------------------+----------
                         Total |        40         29 |        69
                    
                    
                    . tab myrep2 foreign, exp chi2 lr
                    
                    +--------------------+
                    | Key                |
                    |--------------------|
                    |     frequency      |
                    | expected frequency |
                    +--------------------+
                    
                     RECODE of |       Car type
                         myrep |  Domestic    Foreign |     Total
                    -----------+----------------------+----------
                            13 |        37          3 |        40
                               |      27.8       12.2 |      40.0
                    -----------+----------------------+----------
                            45 |        11         18 |        29
                               |      20.2        8.8 |      29.0
                    -----------+----------------------+----------
                         Total |        48         21 |        69
                               |      48.0       21.0 |      69.0
                    
                              Pearson chi2(1) =  23.6449   Pr = 0.000
                     likelihood-ratio chi2(1) =  24.9946   Pr = 0.000
                    
                    . display r(chi2_lr)
                    24.994622
                    
                    . local lr1 = r(chi2_lr)
                    
                    . local p1 = r(p_lr)
                    
                    .
                    . * LR Chi-square for 2x2 table with category 1 vs category 3
                    . tab myrep foreign if myrep < 4, exp chi2 lr
                    
                    +--------------------+
                    | Key                |
                    |--------------------|
                    |     frequency      |
                    | expected frequency |
                    +--------------------+
                    
                               |       Car type
                         myrep |  Domestic    Foreign |     Total
                    -----------+----------------------+----------
                             1 |        10          0 |        10
                               |       9.3        0.8 |      10.0
                    -----------+----------------------+----------
                             3 |        27          3 |        30
                               |      27.8        2.3 |      30.0
                    -----------+----------------------+----------
                         Total |        37          3 |        40
                               |      37.0        3.0 |      40.0
                    
                              Pearson chi2(1) =   1.0811   Pr = 0.298
                     likelihood-ratio chi2(1) =   1.8058   Pr = 0.179
                    
                    . display r(chi2_lr)
                    1.8057787
                    
                    . local lr2 = r(chi2_lr)
                    
                    . local p2 = r(p_lr)
                    
                    .
                    . * LR Chi-square for 2x2 table with category 4 vs category 5
                    . tab myrep foreign if myrep > 3, exp chi2 lr
                    
                    +--------------------+
                    | Key                |
                    |--------------------|
                    |     frequency      |
                    | expected frequency |
                    +--------------------+
                    
                               |       Car type
                         myrep |  Domestic    Foreign |     Total
                    -----------+----------------------+----------
                             4 |         9          9 |        18
                               |       6.8       11.2 |      18.0
                    -----------+----------------------+----------
                             5 |         2          9 |        11
                               |       4.2        6.8 |      11.0
                    -----------+----------------------+----------
                         Total |        11         18 |        29
                               |      11.0       18.0 |      29.0
                    
                              Pearson chi2(1) =   2.9360   Pr = 0.087
                     likelihood-ratio chi2(1) =   3.1117   Pr = 0.078
                    
                    . display r(chi2_lr)
                    3.1117155
                    
                    . local lr3 = r(chi2_lr)
                    
                    . local p3 = r(p_lr)
                    
                    .
                    . display "LR Chis-quare for 1st partition = "`lr1' "   p = "`p1'
                    LR Chis-quare for 1st partition = 24.994622   p = 5.749e-07
                    
                    . display "LR Chis-quare for 2nd partition = "`lr2' "   p = "`p2'
                    LR Chis-quare for 2nd partition = 1.8057787   p = .17901545
                    
                    . display "LR Chis-quare for 3rd partition = "`lr3' "   p = "`p3'
                    LR Chis-quare for 3rd partition = 3.1117155   p = .07773105
                    
                    . display "Sum of the 3 LR C-square values = "`lr1' + `lr2' + `lr3'
                    Sum of the 3 LR C-square values = 29.912116
                    
                    . display "LR Chi-square for full table = "`lr0' "   p = "`p0'
                    LR Chi-square for full table = 29.912116   p = 1.440e-06

                    I'm relatively new to Stata, so I don't know if this type of testing is available in a more convenient form (e.g., a user-written package).

                    HTH.
                    --
                    Bruce Weaver
                    Email: [email protected]
                    Version: Stata/MP 18.5 (Windows)



                    • #11
                      David,

                      Bruce shed further light on the matter. Should you wish to delve into this tricky issue, I recommend you take a look at this article (http://www.stata-journal.com/sjpdf.h...iclenum=st0035).

                      With regard to CIs and post hoc corrections such as Bonferroni's and Sidak's, we read there that:

                      Most scientists, most of the time, do not use corrected confidence intervals of this kind. It is more common to use multiple-test procedures, which reject a subset of the null hypotheses and enable us to be 100(1 − α)% confident that all, or some, of the rejected null hypotheses are false. This is often more concise, and less conservative, than giving a full list of corrected confidence limits.
                      Anyway, there you will also find detailed explanations of the "correcting" formulas, the caveat about the lack of independence (as pointed out by Bruce) and, what is more, a user-written command (smileplot, published in the SJ, by Roger Newson, a member of the Forum).
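
                      As a small illustration of the "reject a subset of the null hypotheses" idea, here is a hand-rolled sketch of Holm's step-down procedure applied to the three partition p-values from Bruce's example in #10 (multproc, from the smileplot package, automates this kind of procedure, with many methods to choose from):

                      Code:
                      * Holm's step-down procedure by hand, at FWER = .05, on the
                      * three partition p-values from #10
                      clear
                      input double p
                       5.749e-07
                       .17901545
                       .07773105
                      end
                      sort p
                      gen double crit = .05/(_N - _n + 1)
                      gen byte smaller = p <= crit
                      list p crit smaller, noobs
                      * reject hypotheses in order of increasing p-value until the
                      * first one with p > crit; here only the first is rejected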

                      Hopefully that helps.
                      Last edited by Marcos Almeida; 22 Jan 2017, 07:17.
                      Best regards,

                      Marcos



                      • #12
                        After reading the article, Marcos, I completely understand your comment about the difficulties. Thanks for the article!



                        • #13
                          Bruce, RE: #10,

                          Bah. Yes, you're correct that it assumes independence - and this would be especially problematic for post-hoc tests with mutually exclusive groups or even repeated measures. Darn, I thought I had figured out a convenient and reasonable approach to FWER.

                          What if there were no a priori hypotheses? Often when I'm asked to do analyses, the request is roughly "Is there a difference between these 10 groups?", which, while a valid research question, is the antithesis of a priori.



                          • #14
                            Hi David. Re #13, if all k contrasts are independent of each other, 1 - (1 - alpha)^k gives the FWER exactly. But when there are dependencies among the contrasts, you have to do something else, such as using the Bonferroni inequality--i.e., set your per-contrast alpha = the desired FWER / k.
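
                            To put numbers on that (for k = 10 contrasts and a desired FWER of .05):

                            Code:
                            display 1 - (1 - .05)^(1/10)   // exact per-contrast alpha if the contrasts are independent
                            display .05/10                 // Bonferroni bound, valid even with dependent contrasts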

                            HTH.
                            --
                            Bruce Weaver
                            Email: [email protected]
                            Version: Stata/MP 18.5 (Windows)



                            • #15
                              Hi Bruce,

                              RE: #14

                              Does Bonferroni correct for correlated p-values though? Or is it better simply because it is more conservative?

                              Cheers,

                              David.

