  • Test post

    There's no benefit to normalizing--it won't change the results of your svy: tab command, for example.

    The definition of a sampling, or probability, weight, is: \(w_i = \frac{1}{f_i}\), where \(f_i\) is the probability that sample member \(i\) was selected.

    "Normalized" weights are probability weights that are scaled (divided by a constant) to sum to some constant C. I've seen C = 1 and C = sample size \(n\). This last was popular before the days of survey software. As I've said, this makes no difference to most analyses.

    However, there are good reasons not to normalize weights. If you do normalize:

    1. You lose the ability to estimate totals, including population counts. This is a serious loss in many studies.
    2. If you use a finite population correction (the fpc option to svyset), then estimates of the design effect DEFF will be incorrect.
    3. A subject's weight can be interpreted as the number of people in the population "represented" by the subject. This is easy to understand and often important to look at. Normalizing destroys this interpretability.

    Note that the "probability" weights supplied with many data sets are not pure sampling weights, but sampling weights adjusted for non-response and post-stratified so that estimated sample totals match population totals known from other sources (such as a Census). The reasons against normalizing apply to these weights as well.
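    To make point 1 concrete, here is a tiny numeric sketch with hypothetical selection probabilities (Python is used here just for the arithmetic, not Stata):

```python
# Hypothetical example: three sampled members with selection
# probabilities f_i. The sampling weight w_i = 1/f_i is the number of
# population members each subject "represents".
f = [0.01, 0.02, 0.05]              # selection probabilities
w = [1 / fi for fi in f]            # sampling weights: 100, 50, 20
print(sum(w))                       # estimated population count: 170.0

# Normalizing to sum to 1 preserves the ratios between weights (so
# weighted means and proportions are unchanged), but the estimated
# total is now 1 by construction -- the population count is lost.
w_norm = [wi / sum(w) for wi in w]
print(round(sum(w_norm), 12))       # 1.0, whatever the population size
```

    With the raw weights, a command like svy: total would recover the population count; after normalizing, that "total" carries no information.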
    Last edited by Steve Samuels; 11 Sep 2015, 20:43.
    Steve Samuels
    Statistical Consulting
    [email protected]

    Stata 14.2

  • #2
    The standard two-sample test of proportions is equivalent to the Chi-square test in a 2x2 table. Unfortunately, Bland's formula applies to a slightly different test statistic; sampsi uses the correct formula, the same formula now used by power twoprop (p. 158 of the PSS manual: http://www.stata.com/manuals13/pss.pdf).

    The basic definition of a p-value is that it is the probability that a test statistic exceeds a critical value, if the null hypothesis is true.

    For the two-sample case, the null hypothesis is \(H_0: p_1 = p_2 = p_0\), say.

    Under the null hypothesis, the difference
    \(\hat{p}_1-\hat{p}_2\) has variance (assuming equal group sizes \(n\))
    \[
    p_0(1-p_0) \times 2/n
    \]
    Of course, \(p_0\) isn't known, but given observed proportions \(\hat{p}_1\) and \(\hat{p}_2\), it would be estimated by their average (what else?)
    \[\hat{p}_0 = \frac{\hat{p}_1+\hat{p}_2}{2}
    \]
    Then the test statistic for a two-sided test, ignoring the continuity correction, is:
    \[
    Z = \frac{\hat{p}_1-\hat{p}_2}{\sqrt{\hat{p}_0(1-\hat{p}_0) \times 2/n}}
    \]

    and we reject if |Z| is too large.

    The ordinary 1 degree of freedom Chi Square statistic, as computed by tabulate twoway, is:
    \[
    Q= Z^2
    \]
    Bland computes sample size for a different test statistic:
    \[
    Z' = \frac{\hat{p}_1-\hat{p}_2}{\sqrt{\hat{p}_1(1-\hat{p}_1)/n_1 + \hat{p}_2(1-\hat{p}_2)/n_2}}
    \]
    It can be proven that the denominator in \(Z'\) is less than the denominator in \(Z\), so that \(|Z'|>|Z|\). Bland's \(Z'\) calculation therefore uses an effective \(\alpha'\) which is larger than the specified \(\alpha\).

    So, where can you use \(Z'\)? The estimated standard error in the denominator of \(Z'\) can be recognized as an estimate of the standard error for \(\hat{p}_1-\hat{p}_2\), for any values of \(p_1\) and \(p_2\). It's the one you would use for a confidence interval, for example.

    The take-home message: to assure the desired power for the specified \(\alpha\), use sampsi.
    Last edited by Steve Samuels; 19 Sep 2015, 13:59.
    Steve Samuels
    Statistical Consulting
    [email protected]

    Stata 14.2



    • #3


      The standard two-sample test of proportions is equivalent to the Chi-square test in a 2x2 table. sampsi does a correct power calculation for these tests. The same formula is used by power twoprop in recent versions of Stata and can be found on p. 158 of the PSS manual (http://www.stata.com/manuals13/pss.pdf).
      Bland's formula applies to a slightly different test statistic. The \(\alpha\) level associated with that statistic is larger than the specified \(\alpha\) level. This accounts for the smaller sample size given by his formula.

      Some detail:

      The basic definition of a p-value is that it is the probability that a test statistic exceeds a critical value, if the null hypothesis is true.

      For the two-sample case, the null hypothesis is \(H_0: p_1 = p_2 = p_0\), say.

      Under the null hypothesis, the difference
      \(\hat{p}_1-\hat{p}_2\) has variance
      \[
      p_0(1-p_0) \times (1/n_1 + 1/n_2)
      \]
      Of course, \(p_0\) isn't known, but given observed proportions \(\hat{p}_1\) and \(\hat{p}_2\), it would be estimated by their average (what else?)
      \[\hat{p}_0 = \frac{\hat{p}_1+\hat{p}_2}{2}
      \]
      (For the power calculation, sampsi uses the average of the hypothesized values of \(p_1\) and \(p_2\).)

      Then the test statistic for a two-sided test, ignoring the continuity correction, is:
      \[
      Z = \frac{\hat{p}_1-\hat{p}_2}{\sqrt{\hat{p}_0(1-\hat{p}_0) \times (1/n_1+1/n_2)}}
      \]
      and we reject if
      \[
      |Z| \geq z_{\alpha/2},
      \]
      where \(z_{\alpha/2}\) is the upper \(\alpha/2\) Normal quantile.

      Under the null hypothesis,
      \[
      P\left(|Z| \geq z_{\alpha/2}|H_0\right)=\alpha
      \]
      The ordinary Chi Square test statistic with one degree of freedom, as computed by tabulate twoway, for example, is \(Q= Z^2\). So, the \(Z\) power calculation for sampsi will also apply to the Chi Square test.
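      A quick numeric check of the identity \(Q = Z^2\), using hypothetical 2x2 counts (Python stands in for the arithmetic only):

```python
import math

# Hypothetical 2x2 table: successes out of n in each group
x1, n1 = 30, 100
x2, n2 = 45, 100

p1, p2 = x1 / n1, x2 / n2
p0 = (x1 + x2) / (n1 + n2)          # pooled proportion under H0

# Two-sample Z statistic with the pooled variance estimate
Z = (p1 - p2) / math.sqrt(p0 * (1 - p0) * (1 / n1 + 1 / n2))

# Pearson chi-square from the same table: sum of (O - E)^2 / E
observed = [x1, n1 - x1, x2, n2 - x2]
expected = [n1 * p0, n1 * (1 - p0), n2 * p0, n2 * (1 - p0)]
Q = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

print(abs(Q - Z ** 2) < 1e-9)       # True: Q equals Z squared
```

      The same agreement holds for any 2x2 table, which is why the \(Z\) power calculation carries over to the Chi Square test.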

      Bland computes sample size for a different test statistic:
      \[
      Z' = \frac{\hat{p}_1-\hat{p}_2}{\sqrt{\hat{p}_1(1-\hat{p}_1)/n_1 + \hat{p}_2(1-\hat{p}_2)/n_2}}
      \]
      The estimated standard error in the denominator of \(Z'\) can be recognized as an estimate of the standard error for \(\hat{p}_1-\hat{p}_2\), for any values of \(p_1\) and \(p_2\). It's the one you would use for a confidence interval, for example.

      It can be proven that the denominator in \(Z'\) is less than the denominator in \(Z\), so that \(|Z'|>|Z|\) and
      \[
      P\left(|Z'| \geq z_{\alpha/2} |H_0\right) = \alpha' > \alpha
      \]
      In other words, the \(Z'\) test rejects too often under the null hypothesis. The \(n\)'s given by Bland's formula will always be too small; or, if \(n\) is fixed, and a power calculation is done (with the calculator you used, for example), the estimated power will be a little too large.
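      The inequality between the two denominators is easy to verify numerically (hypothetical proportions again; Python for the arithmetic only):

```python
import math

# Hypothetical observed proportions, equal group sizes
p1, p2, n1, n2 = 0.30, 0.45, 100, 100
p0 = (p1 + p2) / 2                   # pooled estimate under H0 (equal n's)

# Denominator of Z: pooled standard error under the null
se_pooled = math.sqrt(p0 * (1 - p0) * (1 / n1 + 1 / n2))
# Denominator of Z': unpooled standard error (the one for a CI)
se_unpooled = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

Z = (p1 - p2) / se_pooled
Zprime = (p1 - p2) / se_unpooled

print(se_unpooled < se_pooled)       # True: Z' has the smaller denominator
print(abs(Zprime) > abs(Z))          # True: so |Z'| exceeds |Z|
```

      Since \(|Z'|\) always exceeds \(|Z|\), the \(Z'\) test crosses the same critical value more often, which is the source of the inflated \(\alpha'\).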

      The take-home message: to assure the desired power for the specified \(\alpha\), use sampsi.
      Last edited by Steve Samuels; 19 Sep 2015, 14:58.
      Steve Samuels
      Statistical Consulting
      [email protected]

      Stata 14.2



      • #4
        Your test posts are better than my regular ones. ;-)
        -------------------------------------------
        Richard Williams, Notre Dame Dept of Sociology
        Stata Version: 17.0 MP (2 processor)

        EMAIL: [email protected]
        WWW: https://www3.nd.edu/~rwilliam

