  #16
    #14: I don't think Rich Goldstein and I are that far apart.

    As said, I find that the defaults of lpoly leave too much noise, so I customise with settings that usually imply more smoothing.

    If you compare the defaults of (1) lowess, (2) lpoly, and (3) lpoly as customised by localp from SSC, I think you'll find that (2) is typically the odd one out.
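    As a rough illustration, here is a Stata sketch comparing the three. The dataset, variables, and the customised bandwidth are illustrative choices, not localp's actual defaults:

        * compare smoother defaults on the auto data (illustrative example)
        sysuse auto, clear

        * (1) lowess: default bandwidth is 0.8, a fraction of the data range
        lowess mpg weight, name(g1, replace)

        * (2) lpoly: default is a degree-0 fit with a rule-of-thumb bandwidth,
        *     which often tracks the data closely and so looks noisy
        lpoly mpg weight, name(g2, replace)

        * (3) lpoly pushed toward more smoothing: a local linear fit with a
        *     deliberately wider bandwidth (600 is an arbitrary illustration,
        *     not what localp would choose)
        lpoly mpg weight, degree(1) bwidth(600) name(g3, replace)

        graph combine g1 g2 g3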

    My biases include:

    1. Any of these methods is, for me, usually exploratory. I would prefer to follow up with some kind of predictive model when one seems suggested by the data.

    2. For exploration, undersmoothing is always slightly preferable to oversmoothing when that is the choice. You can always smooth mentally a little more, whereas it's harder to estimate mentally what has been smoothed away. Of course, you can always re-smooth with different degrees of smoothing.

    3. My experience is about equally divided between (1) smoothing time series with moving averages, (2) kernel density estimation, and (3) local polynomial smoothing. In all of these there is a kernel width, or its equivalent, to be chosen, measured in the units of the predictor or time variable. Lowess is the odd one out in specifying bandwidth as a fraction of the predictor or time range. With time series of climatic data, for example, a moving average of width 11 with binomial weights often works quite well, and I would use that with, say, 20 years of data or with 200 years of data (a sketch follows below). I have a hard time imagining how to compare a lowess smooth for 20 years and for 200 years consistently. Should the bandwidth really be the same?
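    As a minimal sketch of that last point, assuming a yearly series temp indexed by year (both variable names hypothetical), an 11-term binomial-weight moving average could be computed with tssmooth ma, whose weights() option takes the centre weight in angle brackets:

        * 11-term moving average with binomial weights, i.e. the
        * coefficients of (1 + 1)^10; temp and year are hypothetical names
        tsset year
        tssmooth ma temp_sm = temp, weights(1 10 45 120 210 <252> 210 120 45 10 1)
        line temp temp_sm year

    By contrast, lowess specifies bandwidth as a fraction of the range: lowess temp year, bwidth(0.8) would span roughly 16 years of a 20-year series but roughly 160 years of a 200-year one, which is the comparability problem raised above.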

    #17
      Other than my ignorance of time series, as I virtually never have such data in my work, I agree with Nick Cox that we are not "that far apart".
