Hi,
I have seen a lot of discussion about adding a squared term in a regression, but I could not find any answer to my query.
I am estimating a regression like this:
Code:
reg mortality c.expenditure##c.expenditure control
margins, dydx(expenditure) at(expenditure=(0(10)100))
My interest is not just in the marginal effects within the observed range of 'expenditure' (20-80) but also in the marginal effects at 0 and 100, which are out of range.
The only reason I get numbers for those out-of-range values from margins is the extrapolation permitted by linear regression.
I am wondering whether statistical significance in this case can tell me anything about the quality of this extrapolation.
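For context on what those tests measure: in this model the marginal effect at expenditure = x is b1 + 2*b2*x, so its standard error at any x (in or out of range) comes from the coefficient covariance matrix via the delta method, and at x = 0 the test reduces to a test of b1 alone. A minimal Python sketch with simulated data (hypothetical coefficients, not my actual dataset) illustrates how the standard error typically grows as x moves away from the observed range:

```python
# Sketch: how the SE of a marginal effect behaves away from the data.
# Simulated data with made-up coefficients -- purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 500
exp_ = rng.uniform(20, 80, n)          # 'expenditure' observed only on [20, 80]
ctrl = rng.normal(size=n)
y = 5.0 - 0.06 * exp_ + 0.0004 * exp_**2 + 0.5 * ctrl + rng.normal(size=n)

# OLS with a quadratic in expenditure: y = b0 + b1*x + b2*x^2 + b3*ctrl + e
X = np.column_stack([np.ones(n), exp_, exp_**2, ctrl])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
V = sigma2 * np.linalg.inv(X.T @ X)    # coefficient covariance matrix

def marginal_effect(x):
    """dE[y]/dx = b1 + 2*b2*x; delta-method SE via gradient g = (0, 1, 2x, 0)."""
    g = np.array([0.0, 1.0, 2.0 * x, 0.0])
    return g @ beta, np.sqrt(g @ V @ g)

for x in (0, 50, 100):
    me, se = marginal_effect(x)
    print(f"x={x:3d}: ME={me:+.4f}  SE={se:.4f}  z={me/se:+.2f}")
```

In this setup the SE is a quadratic function of x that is smallest near the center of the observed expenditure values, so both extrapolation points carry larger standard errors than interior ones.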
In my case, for example, I see that the estimated marginal effect at zero is statistically significant, while the marginal effect at 100 is statistically insignificant.
Does this suggest that my model is powerful enough to estimate the marginal effect at 0 but not at 100? If not, what is your take on this?
Many thanks,
Lukas