Hi!
I have a dataset of approximately 35,000 patients. The data have been multiply imputed, resulting in 10 imputed datasets. I'm now fitting a Cox model (see below) for risk of death after surgery, with a time-dependent covariate "re-surgery at 1 year yes/no". The problem is that it's taking forever to run, so I'm wondering if anyone has tips for making it quicker. Perhaps it's just the nature of the analysis, and I know it's a lot of data to process, but any advice would help with logistics.

Code:
mi stset survivaltime, failure(deathatfollowup==1)
mi estimate, saving(model, replace) post hr: ///
    stcox b1.gender##b2.agegroups i.bloodgroup i.ecmo i.ethnicity ///
        i.cancerstage i.cyt c.surgtime i.surgyear i.icupostsurg ///
        c.waitingtime i.surgmalefemale i.surgexperience ///
        i.previousabdominalsurg i.hb i.hypertension i.smoking, ///
        robust tvc(resurg_1y)
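One way to gauge the total runtime before committing to all 10 imputations is to time a trial run on a subset of them using Stata's `nimputations()` option of `mi estimate` and the built-in `timer` command. This is a sketch only: the reduced covariate list below is for illustration, and it assumes the data have already been `mi stset` as above.

Code:
* Time a trial run on the first 2 of the 10 imputed datasets;
* nimputations(#) restricts mi estimate to the first # imputations.
timer clear 1
timer on 1
mi estimate, nimputations(2) hr: ///
    stcox b1.gender##b2.agegroups i.bloodgroup c.surgtime, ///
    robust tvc(resurg_1y)
timer off 1
timer list 1    // elapsed seconds; scale by 5 to estimate the full run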