
  • ANOVA (rejection of Bartlett Test)

    Dear Forum members,

    I am trying to test the equality of means (of labour productivity) across three size groups, and I end up with the following result. It looks like Bartlett's test rejects the equal-variance assumption. What other methods can I use to report a reliable statistic? Thank you in advance for your time.

    oneway llabprod size, bonferroni scheffe sidak tabulate

                 | Summary of Log (LP)
            size |       Mean   Std. Dev.     Freq.
    -------------+------------------------------------
               1 |  11.595564    .9829692     5,755
               2 |  11.634009   .91818715    12,126
               3 |  11.735422    .9413763     6,164
    -------------+------------------------------------
           Total |  11.650805   .94139925    24,045

                            Analysis of Variance
        Source              SS         df      MS            F     Prob > F
    ------------------------------------------------------------------------
    Between groups      65.1167545      2   32.5583773     36.85     0.0000
     Within groups      21243.4586  24042   .883597812
    ------------------------------------------------------------------------
        Total           21308.5753  24044   .886232546

    Bartlett's test for equal variances: chi2(2) = 36.8776 Prob>chi2 = 0.000
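    As a cross-check, Bartlett's statistic can be reconstructed from the summary table alone. Below is a minimal Python sketch (not Stata); the group SDs and counts are the rounded figures from the -oneway- output above, so the result matches the reported 36.8776 only up to that rounding:

```python
import math

# Group SDs and sizes copied from the -oneway- summary table above
sds = [0.9829692, 0.91818715, 0.9413763]
ns = [5755, 12126, 6164]
k, N = len(ns), sum(ns)

# Pooled within-group variance (this is the ANOVA "Within groups" MS)
sp2 = sum((n - 1) * s ** 2 for n, s in zip(ns, sds)) / (N - k)

# Bartlett's statistic, with its small-sample correction factor C
num = (N - k) * math.log(sp2) - sum(
    (n - 1) * math.log(s ** 2) for n, s in zip(ns, sds)
)
C = 1 + (sum(1 / (n - 1) for n in ns) - 1 / (N - k)) / (3 * (k - 1))
chi2 = num / C
print(round(chi2, 2))  # close to the 36.8776 reported by -oneway-
```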


  • #2
    Nazli:
    you can try -kwallis-, although that test focuses on ranks rather than differences in means.
    I would also add that, with such a large sample, even minimal differences become statistically significant.
    Hence, you may want to compare the results of the parametric and non-parametric tests and decide which one to report in your paper.
    Kind regards,
    Carlo
    (StataNow 18.5)
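    For reference, -kwallis- works on ranks: in the absence of ties, the Kruskal-Wallis statistic is H = 12/(N(N+1)) * sum(R_i^2/n_i) - 3(N+1), where R_i is the rank sum of group i. A minimal pure-Python sketch on made-up toy data (not part of the thread's analysis):

```python
def kruskal_wallis_h(groups):
    """Kruskal-Wallis H statistic (no tie correction) for a list of samples."""
    pooled = sorted((x, gi) for gi, g in enumerate(groups) for x in g)
    n_total = len(pooled)
    # rank sums per group (ranks are 1-based; distinct values assumed)
    rank_sums = [0.0] * len(groups)
    for rank, (_, gi) in enumerate(pooled, start=1):
        rank_sums[gi] += rank
    h = 12.0 / (n_total * (n_total + 1)) * sum(
        rs ** 2 / len(g) for rs, g in zip(rank_sums, groups)
    ) - 3 * (n_total + 1)
    return h  # referred to a chi-squared with k - 1 df, as -kwallis- does

print(round(kruskal_wallis_h([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), 6))  # 7.2
```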



    • #3
      In your case, I doubt that it would make much difference no matter what you do. But, if you have a concern, there are a few options that are illustrated below.

      . version 15.1

      . 
      . clear *

      . 
      . set seed `=strreverse("1427959")'

      . 
      . quietly set obs 300

      . 
      . generate byte grp = mod(_n, 3)

      . 
      . generate double outcome = rnormal(0, cond(grp == 0, 1, cond(grp == 1, 2, 3)))

      . 
      . tabstat outcome, by(grp) statistics(mean variance count) nototal format(%04.2f)

      Summary for variables: outcome
           by categories of: grp

           grp |      mean  variance         N
      ---------+------------------------------
             0 |      0.13      1.01    100.00
             1 |     -0.16      3.86    100.00
             2 |     -0.35      8.12    100.00
      ----------------------------------------

      . 
      . // Option 1:  ignore it; ANOVA is robust
      . oneway outcome grp, bonferroni

                              Analysis of Variance
          Source              SS         df      MS            F     Prob > F
      ------------------------------------------------------------------------
      Between groups      11.7185442      2   5.85927208      1.35     0.2602
       Within groups      1286.59075    297   4.33195538
      ------------------------------------------------------------------------
          Total           1298.30929    299   4.34217155

      Bartlett's test for equal variances:  chi2(2) =  92.4939  Prob>chi2 = 0.000

                               Comparison of outcome by grp
                                      (Bonferroni)
      Row Mean-|
      Col Mean |          0          1
      ---------+----------------------
             1 |   -.282425
               |      1.000
               |
             2 |   -.481735    -.19931
               |      0.308      1.000

      . 
      . // Option 2:  use a conventional approximation to the Behrens-Fisher statistic
      . ttest outcome if inlist(grp, 0, 1), by(grp) welch

      Two-sample t test with unequal variances
      ------------------------------------------------------------------------------
         Group |     Obs        Mean    Std. Err.   Std. Dev.   [95% Conf. Interval]
      ---------+--------------------------------------------------------------------
             0 |     100    .1267673    .1007463    1.007463   -.0731352    .3266697
             1 |     100   -.1556578    .1963949    1.963949   -.5453478    .2340322
      ---------+--------------------------------------------------------------------
      combined |     200   -.0144452    .1105404    1.563278   -.2324262    .2035357
      ---------+--------------------------------------------------------------------
          diff |             .282425    .2207278               -.1537429    .7185929
      ------------------------------------------------------------------------------
          diff = mean(0) - mean(1)                                      t =   1.2795
      Ho: diff = 0                             Welch's degrees of freedom =  148.713

          Ha: diff < 0                 Ha: diff != 0                 Ha: diff > 0
       Pr(T < t) = 0.8986         Pr(|T| > |t|) = 0.2027          Pr(T > t) = 0.1014

      . display in smcl as text "P = " as result %04.2f min(r(p) * 3, 1)
      P = 0.61
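      A note on the arithmetic: the -welch- option uses Welch's (1947) approximate degrees of freedom rather than the default Satterthwaite formula. Both the reported df and the Bonferroni-adjusted P can be reproduced from the listing above; here is a Python sketch, where the inputs are the rounded figures from the output:

```python
def welch_df(v1, n1, v2, n2):
    """Welch's (1947) approximate degrees of freedom, as used by -ttest, welch-.
    (Stata's default Satterthwaite formula divides by n - 1 instead of n + 1
    and omits the trailing -2.)"""
    w1, w2 = v1 / n1, v2 / n2
    return (w1 + w2) ** 2 / (w1 ** 2 / (n1 + 1) + w2 ** 2 / (n2 + 1)) - 2

# Group 0 vs group 1, using the (rounded) SDs and sizes from the listing above
df = welch_df(1.007463 ** 2, 100, 1.963949 ** 2, 100)
print(round(df, 3))  # close to the 148.713 reported above

# Bonferroni adjustment across the 3 pairwise comparisons, as in -display-
p_adj = min(0.2027 * 3, 1)
print(round(p_adj, 2))  # 0.61
```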

      . 
      . ttest outcome if inlist(grp, 0, 2), by(grp) welch

      Two-sample t test with unequal variances
      ------------------------------------------------------------------------------
         Group |     Obs        Mean    Std. Err.   Std. Dev.   [95% Conf. Interval]
      ---------+--------------------------------------------------------------------
             0 |     100    .1267673    .1007463    1.007463   -.0731352    .3266697
             2 |     100   -.3549673    .2850226    2.850226    -.920514    .2105795
      ---------+--------------------------------------------------------------------
      combined |     200      -.1141    .1517355    2.145864   -.4133158    .1851158
      ---------+--------------------------------------------------------------------
          diff |            .4817345     .302304                -.116617    1.080086
      ------------------------------------------------------------------------------
          diff = mean(0) - mean(2)                                      t =   1.5935
      Ho: diff = 0                             Welch's degrees of freedom =   123.85

          Ha: diff < 0                 Ha: diff != 0                 Ha: diff > 0
       Pr(T < t) = 0.9432         Pr(|T| > |t|) = 0.1136          Pr(T > t) = 0.0568

      . display in smcl as text "P = " as result %04.2f min(r(p) * 3, 1)
      P = 0.34

      . 
      . ttest outcome if inlist(grp, 1, 2), by(grp) welch

      Two-sample t test with unequal variances
      ------------------------------------------------------------------------------
         Group |     Obs        Mean    Std. Err.   Std. Dev.   [95% Conf. Interval]
      ---------+--------------------------------------------------------------------
             1 |     100   -.1556578    .1963949    1.963949   -.5453478    .2340322
             2 |     100   -.3549673    .2850226    2.850226    -.920514    .2105795
      ---------+--------------------------------------------------------------------
      combined |     200   -.2553125    .1727762    2.443424   -.5960196    .0853946
      ---------+--------------------------------------------------------------------
          diff |            .1993095    .3461341               -.4837644    .8823834
      ------------------------------------------------------------------------------
          diff = mean(1) - mean(2)                                      t =   0.5758
      Ho: diff = 0                             Welch's degrees of freedom =  177.265

          Ha: diff < 0                 Ha: diff != 0                 Ha: diff > 0
       Pr(T < t) = 0.7173         Pr(|T| > |t|) = 0.5655          Pr(T > t) = 0.2827

      . display in smcl as text "P = " as result %04.2f min(r(p) * 3, 1)
      P = 1.00

      . 
      . // Option 3:  model it
      . mixed outcome i.grp, ml residuals(independent, by(grp)) nolrtest nolog

      Mixed-effects ML regression                     Number of obs     =        300
      Group variable: _all                            Number of groups  =          1

                                                      Obs per group:
                                                                    min =        300
                                                                    avg =      300.0
                                                                    max =        300

                                                      Wald chi2(2)      =       3.68
      Log likelihood = -597.15307                     Prob > chi2       =     0.1590

      ------------------------------------------------------------------------------
           outcome |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
      -------------+----------------------------------------------------------------
               grp |
                1  |   -.282425   .2196214    -1.29   0.198     -.712875     .148025
                2  |  -.4817345   .3007887    -1.60   0.109     -1.07127    .1078005
                   |
             _cons |   .1267673   .1002413     1.26   0.206     -.069702    .3232366
      ------------------------------------------------------------------------------

      ------------------------------------------------------------------------------
        Random-effects Parameters  |   Estimate   Std. Err.     [95% Conf. Interval]
      -----------------------------+------------------------------------------------
      _all:                (empty) |
      -----------------------------+------------------------------------------------
      Residual: Independent,       |
          by grp                   |
                         0: var(e) |   1.004832   .1421047      .7615793     1.32578
                         1: var(e) |   3.818523   .5400207      2.894125    5.038178
                         2: var(e) |   8.042553   1.137389      6.095592    10.61138
      ------------------------------------------------------------------------------

      . margins grp, pwcompare(pveffects) mcompare(bonferroni)

      Pairwise comparisons of adjusted predictions

      Expression   : Linear prediction, fixed portion, predict()

      ---------------------------
                   |    Number of
                   |  Comparisons
      -------------+-------------
               grp |            3
      ---------------------------

      -----------------------------------------------------
                   |            Delta-method    Bonferroni
                   |   Contrast   Std. Err.      z    P>|z|
      -------------+---------------------------------------
               grp |
           1 vs 0  |   -.282425   .2196214    -1.29   0.595
           2 vs 0  |  -.4817345   .3007887    -1.60   0.328
           2 vs 1  |  -.1993095   .3443991    -0.58   1.000
      -----------------------------------------------------
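      The contrast standard errors in the -margins- table follow directly from the per-group residual variances in the -mixed- output: SE = sqrt(var_1/n_1 + var_2/n_2). A Python sketch, using the ML variance estimates above (so it matches only up to the rounding of those figures):

```python
import math

# ML residual variance estimates and group sizes from the -mixed- output above
var_e = {0: 1.004832, 1: 3.818523, 2: 8.042553}
n = {0: 100, 1: 100, 2: 100}

def contrast_se(g1, g2):
    """SE of the difference in group means under the heteroskedastic model."""
    return math.sqrt(var_e[g1] / n[g1] + var_e[g2] / n[g2])

print(round(contrast_se(1, 0), 7))  # ~ .2196214, the "1 vs 0" SE above
print(round(contrast_se(2, 0), 7))  # ~ .3007887
print(round(contrast_se(2, 1), 7))  # ~ .3443991
```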

      . 
      . exit

      end of do-file



      • #4
        I have two comments.
        1. In his Option 1 (in #3), Joseph noted that ANOVA is robust to heterogeneity of variance. That is true when the sample sizes are equal (or nearly so). But bear in mind that Nazli's sample sizes are quite variable (5755, 12126 and 6164).
        2. I like Joseph's 3rd option a lot. But I would argue that when there are 3 groups, a Bonferroni correction is not needed. One could use the logic of Fisher's LSD instead--i.e., only proceed to the pair-wise contrasts if the omnibus test for group is statistically significant. When there are 3 conditions, Fisher's LSD controls the family-wise alpha at the per-contrast alpha--see the references listed below. Having said all that, in this particular case (with such large sample sizes), it won't really matter much!
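        In code form, the gatekeeping logic described above is simply the following (a hypothetical sketch; fisher_lsd_gatekeeper is not a Stata command, and the pairwise p-values are the approximate unadjusted ones from the -mixed- output in #3):

```python
def fisher_lsd_gatekeeper(omnibus_p, pairwise_ps, alpha=0.05):
    """Sketch of the gatekeeping logic: run the unadjusted pairwise tests
    only if the omnibus test is significant. With exactly 3 groups this
    holds the family-wise alpha at the per-contrast alpha."""
    if omnibus_p >= alpha:
        return []  # stop here: report no pairwise contrasts
    return [i for i, p in enumerate(pairwise_ps) if p < alpha]

# With the simulated data in #3 (omnibus p = 0.2602; unadjusted pairwise
# p-values roughly 0.198, 0.109, 0.56), no contrasts get tested at all:
print(fisher_lsd_gatekeeper(0.2602, [0.198, 0.109, 0.56]))  # []
```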
        HTH.

        References

        Baguley, T. (2012). Serious stats: A guide to advanced statistics for the behavioral sciences. Palgrave Macmillan.

        Howell, D. C. (2012). Statistical methods for psychology. Cengage Learning.

        Meier, U. (2006). A note on the power of Fisher's least significant difference procedure. Pharmaceutical statistics, 5(4), 253-263.
        --
        Bruce Weaver
        Email: [email protected]
        Version: Stata/MP 18.5 (Windows)



        • #5
          I agree about Option 1, and in my example, where the variances differ considerably, the sample sizes are exactly equal. In the OP's case, where as you say the sample sizes are quite different, the variance in the variances

          . display in smcl as text 0.9829692^2 / 0.91818715^2
          1.1460865


          isn't something I'd worry about too much. In any event, a sanity check is readily to hand in Stata for those who would.
          Code:
          version 15.1
          
          clear *
          
          set seed `=strreverse("1428044")'
          
          quietly set obs 3
          generate byte grp = _n
          
          generate double sd = 0.9829692
          generate int count = 5755
          
          quietly replace sd = 0.91818715 in 2
          quietly replace count = 12126 in 2
          
          quietly replace sd = 0.9413763 in 3
          quietly replace count = 6164 in 3
          
          quietly expand count
          drop count
          
          quietly generate double outcome = .
          
          program define simem, rclass
              version 15.1
              syntax
          
              replace outcome = rnormal(0, sd)
              anova outcome grp
              return scalar p = Ftail(e(df_m), e(df_r), e(F_1))
          end
          
          simulate p = r(p), reps(10000) nodots: simem
          
          generate byte pos = p < 0.05
          summarize pos
          
          exit
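
          The same sanity check can be mirrored outside Stata. The sketch below is a hypothetical down-scaled version in Python (group sizes divided by roughly 100 and only 500 replicates, to keep pure Python fast, and the F critical value is an approximation), so it only gauges whether the rejection rate stays near the nominal 5%:

```python
import random
import statistics

random.seed(20220303)  # fixed seed so the check is reproducible

# Hypothetical down-scaled version of the Stata check above: same SDs,
# group sizes divided by roughly 100, and 500 replicates instead of 10,000.
sds = [0.9829692, 0.91818715, 0.9413763]
ns = [58, 121, 62]
F_CRIT = 3.03  # approximate 0.95 quantile of F(2, 238)

def anova_f(groups):
    """One-way ANOVA F statistic for a list of samples."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    means = [statistics.fmean(g) for g in groups]
    grand = sum(sum(g) for g in groups) / n_total
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

reps = 500
rejections = sum(
    anova_f([[random.gauss(0, s) for _ in range(n)] for s, n in zip(sds, ns)]) > F_CRIT
    for _ in range(reps)
)
print(rejections / reps)  # should hover near the nominal 0.05
```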
          About Fisher's Least Significant Difference, thanks for that: I was unaware of the special case with three groups (I haven't run across Fisher's LSD since the 1970s). Because it uses the pooled within-groups (residual) mean square from the ANOVA, I'm not sure how it would be put into practice under Option 3.



          • #6
            Hi Joseph. IIRC, some authors talk about using the logic of Fisher's LSD (but not necessarily Fisher's LSD procedure per se) when one has 3 independent groups and wishes to make all pair-wise comparisons.* For example, in a logit model with one 3-level categorical explanatory variable, one would carry out all pair-wise contrasts only if the initial 2-df omnibus test was statistically significant. I hope this clarifies what I was suggesting.

            * I can't at this moment point to a specific book or article that makes this argument. But if any occur to me in the next little while, I'll post them.

            Cheers,
            Bruce
            --
            Bruce Weaver
            Email: [email protected]
            Version: Stata/MP 18.5 (Windows)



            • #7
              Here is one reference to support what I suggested in #6 about applying the logic of Fisher's LSD with 3 groups to other situations that do not involve F- and t-tests.

              Levin, J. R., Serlin, R. C., & Seaman, M. A. (1994). A controlled, powerful multiple-comparison strategy for several situations. Psychological Bulletin, 115(1), 153.

              See pp. 154-155.

              Cheers,
              Bruce
              --
              Bruce Weaver
              Email: [email protected]
              Version: Stata/MP 18.5 (Windows)



              • #8
                Thank you for the help!



                • #9
                  Hi all,

                   I had a problem similar to Nazli's: comparing means across several groups that are unbalanced and have unequal variances. #3 has been a great help! However, I'm comparing more than 3 groups, hence I am worried about Type-I errors when using Welch's t-test. Option 3 seems like the better option, but I have some trouble interpreting the coefficients. Could you maybe elaborate a bit on the specifications? Thank you



                  • #10
                    Originally posted by Guest
                     Option 3 seems like the better option, but I have some trouble interpreting the coefficients. Could you maybe elaborate a bit on the specifications?
                     The coefficients that you interpret are those of the pairwise contrasts, and they are specified in detail right there in the printout from -margins-. Where are you having trouble interpreting them?
                    Last edited by sladmin; 03 Mar 2022, 07:13. Reason: anonymize original poster



                    • #11
                      Originally posted by Joseph Coveney View Post
                       The coefficients that you interpret are those of the pairwise contrasts, and they are specified in detail right there in the printout from -margins-. Where are you having trouble interpreting them?
                       Right, that makes sense, thank you. I was simply confused by the mixed-effects regression, as I had never encountered it before (I should have done a bit more research before posing the question). One follow-up question: if you cannot assume the data are normally distributed, would you fit a GLMM instead? I have a fairly large sample and most of my data does not meet the normality assumption. I am unsure if I should just ignore it, transform the data or account for it in the model.



                      • #12
                        Originally posted by Guest
                        . . . most of my data does not meet the normality assumption. I am unsure if I should just ignore it, transform the data or account for it in the model.
                        I don't know what you've got, and so cannot suggest much.

                        Try fitting a conventional linear mixed model, which you might be contemplating anyway. Graphically examine the first-level residuals, using for example -pnorm- and -qnorm-. If the residuals in these plots don't seem too out of hand, then stick with that.

                        On the other hand, if you believe that you know enough about the distribution that you can accommodate it with a generalized linear mixed model, then try that.

                        In this day and age, I usually reserve transformation for the purpose of interpretation, and not to try to fix up failure to satisfy the normality assumption.
                        Last edited by sladmin; 03 Mar 2022, 07:14. Reason: anonymize original poster



                        • #13
                           The normality assumption (better: ideal condition), when there is one, is about conditional distributions, and it is always the least important assumption (ideal condition).

                           If I am reading #11 correctly, you are talking about marginal distributions and thinking that it's a problem that they aren't normal.

                          Most crucially, I agree with Joseph Coveney that

                          I don't know what you've got, and so cannot suggest much.
                          What is your outcome or response variable? Binary, ordered, counted, measured? Data example? Graphs of distributions?

