  • Fixed Effects with Fixed Explanatory Variable of Interest

    We are investigating the difference in learning outcome achievement for students registered in the same course with two modes of instruction (distance and in-person). We set up an "experiment" in the fall term, with the treatment being two 75-minute classes per week that include chapter summaries and peer-to-peer interaction through group discussions, case studies, etc. The distance sections did not include any peer-to-peer interaction.

    The course page, textbook, grading scheme, tests, etc., were identical. We could not randomly assign students to the distance or in-person sections. Students were sent a letter informing them of the "experiment" and encouraging them to enrol in the mode that best matched their learning style.

    The course spans four months, and the panel has three time periods approximately 40 days apart. We only want to use course-specific data for the enrolled students, so that educators can replicate our analysis without requiring ethics approval to collect data via surveys or administrative records. Nearly all relevant explanatory and control variables are fixed over a four-month period (e.g., incoming math ability, annual family income). This makes a fixed effects model, or a pooled OLS with student-level dummy variables, ideal.

    However, we cannot run a fixed effects model because our variable of interest, mode of instruction, is fixed. I welcome any and all suggestions on how to proceed.

  • #2
    Well, if you have an outcome variable that is measured at each of the three time periods, so that changes in that outcome from one time period to the next are a reasonable specification of course-related learning, then you can do a fixed-effects model like this:
    Code:
    xtset student_identifier
    xtreg outcome_variable i.mode_of_instruction##i.time_period other_variables, fe
    The mode_of_instruction "main" effect will be omitted because it is time-invariant, but that is not a problem: the variables that capture the effect of mode of instruction on learning are the 1.mode_of_instruction#2.time_period and 1.mode_of_instruction#3.time_period interaction terms. (I'm assuming that time_period is coded 1, 2, 3, and mode_of_instruction is coded 0, 1.) These interaction terms are not time-invariant, so they will not be omitted, and it is their coefficients that estimate the difference in rate of learning between the two modes of instruction.
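    To see concretely why the time-invariant main effect drops out of the within (fixed-effects) estimator while its interactions with time period do not, here is a minimal sketch in plain Python. The two hypothetical students and all numbers are made up for illustration; this is not Stata output:

```python
# Minimal sketch of the within (demeaning) transformation behind -xtreg, fe-.
# A regressor that never varies within a student demeans to all zeros, so its
# coefficient is not identified; its interaction with time period still varies
# within a student and therefore survives.

def within_demean(values_by_student):
    """Subtract each student's own mean from that student's observations."""
    out = []
    for vals in values_by_student:
        m = sum(vals) / len(vals)
        out.append([v - m for v in vals])
    return out

# Two hypothetical students, three time periods. Student 0 is in-person
# (mode = 0), student 1 is distance (mode = 1); mode never changes.
mode = [[0, 0, 0], [1, 1, 1]]

# Interaction of mode with an indicator for the second time period (index 1).
mode_x_t2 = [[m * (t == 1) for t, m in enumerate(vals)] for vals in mode]

demeaned_mode = within_demean(mode)        # all zeros: main effect omitted
demeaned_inter = within_demean(mode_x_t2)  # varies within student 1: identified

print(demeaned_mode)   # [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
print(demeaned_inter)  # nonzero entries remain for the distance student
```

    The same logic is what Stata applies internally: after demeaning, only within-student variation is left, which the interaction terms supply.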

    As I imagine you are aware, it will be very difficult to make any causal claims, and your design seems vulnerable to many sources of bias. Not only is your study non-randomized; treatment is self-selected. If students' intuitions about which mode of instruction would work best for them are even somewhat correct, the self-selection will attenuate the effect size. And can you really assert that the grading is the same in both modes of instruction if, as I suspect, the people doing the grading know which mode each student used?

    I don't know what you mean by "course-specific" data if it includes incoming math ability and family income. It also seems that you chose to forgo some variables that might be deemed important, such as the students' assessments in other courses taken concurrently. I gather you did that to avoid needing ethics approval to carry out the study. If your institution's ethics approval process for this kind of study/data is so cumbersome as to justify that, you really should press your Dean to look into it and have your IRB develop more streamlined processes. This kind of study involves no material risk to participants other than the consequences of a breach of data confidentiality: if reasonable measures to maintain confidentiality are in place, the Federal research regulations allow rapid and simple IRB review. If your IRB doesn't do that, it should.



    • #3
      Thank you, Clyde. I had not thought of comparing the rate of learning.

      This is the first step in our research project. We are not inferring any causality. The overall hypothesis is that as long as students can choose the mode of instruction that maximizes their learning outcome, there will be no statistical difference between in-person and distance education. This assumes quality course material and delivery in both modes. We cannot test this with our current data. We need to provide our funding agency with preliminary results before moving to the next step, which includes student surveys and access to administrative data from student transcripts for past, current, and future terms, along with basic information about program, major, number of completed credit hours, etc.

      Good news on the grading comment: we jointly created rubrics and graded as a team, with a single grader blindly marking the same question for every student, regardless of section or mode of instruction.

      I attached the output. distance == 0 is the in-person mode of delivery. The outcome variable is in percent. No transformations were done to the data.

      If my brief interpretation for time period 2 is correct, a distance student scores 1.299 percentage points higher (p = 0.74) than an in-person student, relative to time period 1. Given that p-value, there is no statistically significant difference in the rate of learning after controlling for time-invariant differences between students.



      • #4
        I agree with your interpretation of the 1.distance#2.exam coefficient. And I think that as preliminary data to support an application for funding of a bigger and more comprehensive analysis this will be fine.

        The only thing I would add, and you are probably aware of this, is that, particularly since your goal is to show that there is no material difference between the two modes of instruction when students choose, it will be important to present a power and sample size analysis in your funding proposal.
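        Since the substantive claim is the absence of a meaningful difference, the eventual study is really an equivalence/non-inferiority question, but even the classical two-sample approximation gives a useful first cut for a proposal. A rough sketch in plain Python; the 5-point difference and sd of 12 are made-up numbers, not figures from this study:

```python
# Back-of-the-envelope per-group sample size to detect a difference `diff`
# between two group means, using the normal approximation
#   n ≈ 2 * ((z_{1-a/2} + z_{1-b}) * sd / diff)^2
# with a two-sided test at alpha = 0.05 and power = 0.80.

Z_ALPHA = 1.96  # z quantile for two-sided alpha = 0.05
Z_BETA = 0.84   # z quantile for power = 0.80

def n_per_group(diff, sd):
    """Approximate per-group n for a two-sample comparison of means."""
    return 2 * ((Z_ALPHA + Z_BETA) * sd / diff) ** 2

# Illustrative numbers only: detecting a 5-percentage-point difference in
# exam scores with an sd of 12 calls for roughly 90 students per mode.
print(round(n_per_group(5, 12)))
```

        A formal version of this (or Stata's own power routines) in the proposal would also let the reviewers see what effect size the preliminary sample could plausibly rule out.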

        Best of luck.
