Here is an example of repeated data on 50 people for variable y at two timepoints (time 0 and time 1), with 30 people being lost to follow-up and not contributing data at time 1.
The aim is to compare the means of y at time 0 and time 1, so the model is E[Y] = beta0 + beta1*time
One method to do this is reg y i.time, cluster(id), which indicates that, on average, y decreases by 2.69 units (95% CI 2.0 to 3.4) at time 1 compared with baseline.
What is the method of estimation? It doesn't seem to be either maximum likelihood or restricted maximum likelihood as the estimate differs from these commands:
mixed y i.time || id:, var
mixed y i.time || id:, var reml
Nor is it the same as a paired t-test (OLS?), since that method yields an estimate based on only the 20 people with complete data:
reshape wide y, i(id) j(time)
ttest y1 == y0
Code:
* Example generated by -dataex-. For more info, type help dataex
clear
input float id byte time int y
 1 0 10
 1 1  .
 2 0  5
 2 1  .
 3 0  4
 3 1  3
 4 0  6
 4 1  .
 5 0  4
 5 1  .
 6 0  6
 6 1  .
 7 0  5
 7 1  .
 8 0  8
 8 1  .
 9 0  9
 9 1  2
10 0  6
10 1  4
11 0  4
11 1  .
12 0  3
12 1  .
13 0  1
13 1  .
14 0  5
14 1  .
15 0  5
15 1  1
16 0  3
16 1  .
17 0  4
17 1  .
18 0  6
18 1  .
19 0  6
19 1  .
20 0  4
20 1  .
21 0  4
21 1  .
22 0  6
22 1  .
23 0  7
23 1  3
24 0  2
24 1  1
25 0  2
25 1  1
26 0  7
26 1  1
27 0  3
27 1  3
28 0  3
28 1  .
29 0  6
29 1  .
30 0  4
30 1  .
31 0  5
31 1  .
32 0  2
32 1  .
33 0  4
33 1  .
34 0  6
34 1  3
35 0  3
35 1  .
36 0  5
36 1  3
37 0  5
37 1  1
38 0  4
38 1  .
39 0  4
39 1  1
40 0  6
40 1  .
41 0  1
41 1  1
42 0  2
42 1  .
43 0  3
43 1  .
44 0  4
44 1  1
45 0  7
45 1  5
46 0  3
46 1  1
47 0  5
47 1  1
48 0  7
48 1  1
49 0  3
49 1  2
50 0  5
50 1  .
end
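For reference, here is a plain-Python cross-check on the example data (an illustrative re-computation, not Stata output). It rests on the assumption that, with a single binary regressor, the OLS point estimate equals the difference of the observed group means, which would make the pooled estimate differ from the complete-case paired estimate simply because the two use different sets of observations:

```python
# Re-computation of the two candidate estimates from the dataex listing above.
# y0: baseline values for ids 1..50; y1: time-1 values, None = lost to follow-up.
y0 = [10, 5, 4, 6, 4, 6, 5, 8, 9, 6, 4, 3, 1, 5, 5, 3, 4, 6, 6, 4,
      4, 6, 7, 2, 2, 7, 3, 3, 6, 4, 5, 2, 4, 6, 3, 5, 5, 4, 4, 6,
      1, 2, 3, 4, 7, 3, 5, 7, 3, 5]
y1 = [None, None, 3, None, None, None, None, None, 2, 4,
      None, None, None, None, 1, None, None, None, None, None,
      None, None, 3, 1, 1, 1, 3, None, None, None,
      None, None, None, 3, None, 3, 1, None, 1, None,
      1, None, None, 1, 5, 1, 1, 1, 2, None]

# Pooled estimate: mean of all observed time-1 values minus mean of all
# time-0 values (every available observation contributes).
obs1 = [v for v in y1 if v is not None]
pooled = sum(obs1) / len(obs1) - sum(y0) / len(y0)

# Complete-case paired estimate: mean within-person change among the
# 20 people observed at both timepoints.
pairs = [(a, b) for a, b in zip(y0, y1) if b is not None]
paired = sum(b - a for a, b in pairs) / len(pairs)

print(round(pooled, 2))  # -2.69, the reg y i.time, cluster(id) coefficient
print(round(paired, 2))  # -2.8, the ttest y1 == y0 estimate
```

The pooled difference reproduces the reported -2.69 exactly, which suggests the regression's point estimate uses all available observations, with cluster(id) affecting only the variance estimate.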