Hi everyone,
I'm hoping someone can provide advice about tradeoffs between estimator performance criteria.
I recently decided to do a simulation study to compare some estimators of the effect size statistic (i.e. the standardized mean difference) under some relatively uncommon conditions. I work for a state organization that does a lot of meta-analysis, so we do see these conditions now and then. One of the conditions I'm looking at is the performance of the estimators when a continuous outcome is dichotomized in the tails of the normal distribution. In this situation, the unbiased estimator of the effect size cannot be used. This is my first simulation study, and I'm currently working through the data analysis piece.
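To make the condition concrete, here is a minimal sketch of the kind of data-generating setup I mean (the sample size, the 90th-percentile cutpoint, and the probit-difference estimator are purely illustrative, not the actual estimators and conditions in my study):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

true_delta = 0.4            # true standardized mean difference (illustrative)
n_per_group = 50
cutpoint = norm.ppf(0.90)   # dichotomize in the upper tail of the control distribution

# Continuous outcomes, but only "above/below the cutpoint" is observed
control = rng.normal(0.0, 1.0, n_per_group)
treatment = rng.normal(true_delta, 1.0, n_per_group)
p_control = np.mean(control > cutpoint)
p_treatment = np.mean(treatment > cutpoint)

# One illustrative (biased) estimator: difference of probit-transformed exceedance
# proportions, which equals the SMD at the population level under equal group variances
eps = 0.5 / n_per_group     # continuity adjustment so a proportion of 0 or 1 stays finite
d_probit = (norm.ppf(np.clip(p_treatment, eps, 1 - eps))
            - norm.ppf(np.clip(p_control, eps, 1 - eps)))
print(d_probit)
```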
At my organization, it is more important for individual estimates to be close to the truth than for the estimator to be correct on average, so the mean square error is the most important criterion for evaluating estimator performance in my study.
My issue is that I found some unexpected results and I'm not sure how to interpret them. The unbiased estimator has a larger mean absolute error and mean square error than the most biased estimator; in fact, the most biased estimator has the smallest mean absolute error and mean square error of all the estimators I compared. Yet the mean of this most biased estimator is so far from the true value that relying on it would unacceptably distort our predictions. Although the unbiased estimator cannot be used in the situation I described, there are other biased estimators whose means are much closer to the true value but whose mean square error is larger than that of the most biased estimator. How should I interpret these findings? How can I evaluate the tradeoff between bias and the frequency of error? Are there any guidelines or resources you can point me to that might help guide my thinking?
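For reference, here is how I am computing the performance criteria from the simulation replicates (a sketch: the replicate values below are made up purely to illustrate the identity MSE = bias^2 + variance, which is the tradeoff I am asking about, and are not my actual results):

```python
import numpy as np

def performance_criteria(estimates, true_value):
    """Monte Carlo bias, variance, MSE, and MAE for one estimator's replicates."""
    estimates = np.asarray(estimates)
    bias = estimates.mean() - true_value
    variance = estimates.var()                          # ddof=0, so the identity below is exact
    mse = np.mean((estimates - true_value) ** 2)
    mae = np.mean(np.abs(estimates - true_value))
    # MSE = bias^2 + variance: an estimator can be badly biased and still beat an
    # unbiased estimator on MSE if its variance is small enough.
    assert np.isclose(mse, bias ** 2 + variance)
    return {"bias": bias, "variance": variance, "mse": mse, "mae": mae}

# Made-up replicates, only to show that the pattern I observed is mathematically possible:
rng = np.random.default_rng(1)
true_delta = 0.4
unbiased_like = rng.normal(true_delta, 0.30, 10_000)    # centered on the truth, high variance
shrunken_like = rng.normal(0.25, 0.10, 10_000)          # pulled toward zero, low variance
print(performance_criteria(unbiased_like, true_delta))  # larger MSE despite zero bias
print(performance_criteria(shrunken_like, true_delta))  # smaller MSE despite clear bias
```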
edit: I just found a source pointing out that MSE is a function of the parameter value, so the estimator with minimum MSE at one value of the parameter isn't necessarily the minimum-MSE estimator at another value. I am re-running my simulation over a wider range of parameter values to assess this.
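Concretely, I am planning something roughly like the following sweep over true effect sizes (a sketch: cohens_d is only a stand-in for the dichotomized-outcome estimators actually being compared, and the grid endpoints are arbitrary):

```python
import numpy as np

def mse_over_grid(estimator, true_deltas, n_per_group=50, n_reps=2000, seed=0):
    """Monte Carlo MSE of one estimator at each true effect size in the grid."""
    rng = np.random.default_rng(seed)
    mses = []
    for delta in true_deltas:
        errors = []
        for _ in range(n_reps):
            control = rng.normal(0.0, 1.0, n_per_group)
            treatment = rng.normal(delta, 1.0, n_per_group)
            errors.append(estimator(control, treatment) - delta)
        mses.append(np.mean(np.square(errors)))
    return np.array(mses)

# Stand-in estimator: plain Cohen's d on the continuous data. In my study this slot
# would hold each of the dichotomized-outcome estimators being compared.
def cohens_d(control, treatment):
    pooled_sd = np.sqrt((control.var(ddof=1) + treatment.var(ddof=1)) / 2)
    return (treatment.mean() - control.mean()) / pooled_sd

grid = np.linspace(0.1, 1.2, 12)   # range of true effect sizes to cover
print(dict(zip(np.round(grid, 2), np.round(mse_over_grid(cohens_d, grid), 4))))
```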
Thank you,
Kris Bitney