Dear all,
I have a question about how Mata's -optimize- approximates the gradient of the objective function when I don't provide its analytical form. I ran the same code several times and found that the approximated gradients were not the same across runs. Below are the outputs (just the 0th iteration) from two runs of the same code. The differences in the approximated gradients are very subtle, but they are not identical, as I thought they should be.
Code:
Iteration 0:
numerical derivatives are approximate
flat or discontinuous region encountered
f(p) = 2521722.5
Gradient vector (length = 1267500):
            c1
  r1   1267500
Hessian matrix:
       c1
  r1   -1
Step length = 1267500
Parameters + step -> new parameters
f(p) = 989652.87  (initial step good)
(1) Stepping forward, step length = 158437.5
f(p) = 989652.87  (ignoring last step)
Code:
Iteration 0:
numerical derivatives are approximate
flat or discontinuous region encountered
f(p) = 2521722.6
Gradient vector (length = 1267499):
            c1
  r1   1267499
Hessian matrix:
       c1
  r1   -1
Step length = 1267499
Parameters + step -> new parameters
f(p) = 989653.43  (initial step good)
(1) Stepping forward, step length = 158437.4
f(p) = 989653.43  (ignoring last step)
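For intuition only (this is not a description of Mata's actual internals): numerical gradients are typically built from finite differences of f(p), so any run-to-run variation in the last bits of f(p) — for example, a different order of summation inside the evaluator — is amplified by the 1/h factor in the difference quotient. A minimal Python sketch illustrating both points (the objective f, the step-size rule, and the test values are illustrative assumptions, not Mata's scheme):

```python
import math

def numgrad(f, p):
    """Forward-difference gradient sketch; step size scaled to each
    parameter's magnitude (illustrative, not Mata's actual rule)."""
    g = []
    f0 = f(p)
    for i in range(len(p)):
        h = math.sqrt(2.22e-16) * max(abs(p[i]), 1.0)  # ~sqrt(machine eps)
        q = list(p)
        q[i] += h
        g.append((f(q) - f0) / h)
    return g

# Illustrative concave objective f(p) = -sum(p_i^2); analytic gradient is -2*p.
f = lambda p: -sum(x * x for x in p)
p = [1.0, -2.0, 3.0]
g = numgrad(f, p)
print(g)  # approximately [-2, 4, -6], with O(h) truncation error

# Why two "identical" runs can differ in the trailing digits:
# floating-point addition is not associative, so a different summation
# order inside f(p) shifts its low-order bits.
print(sum([1e16, 1.0, -1e16]))   # 0.0  (the 1.0 is absorbed and lost)
print(sum([1e16, -1e16, 1.0]))   # 1.0  (same terms, different order)
```

If the evaluator's summation order varies between runs (e.g., due to re-sorting or parallel reduction), f(p) can differ in its last digits, and the finite-difference gradient inherits a slightly different value each time.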