      implicit none
      include 'bfgs.h'
      external rosenbrock, grosenbrock
      integer*4 n
      parameter(n=2)
      real*8 x(n), dx(n), f, gradient(n), p(n)
      real*8 fold, step

      fold = 1.d300
C We now define the starting point (read from figure 1.2.2 of
C Fletcher's book and confirmed with a Google search for the
C traditional Rosenbrock starting point).
      x(1) = -1.2d0
      x(2) = 1.d0
C Zero dx for clean-looking initial output.
      dx(1) = 0.d0
      dx(2) = 0.d0
      step = 1.d0
      call bfgs_set(rosenbrock, grosenbrock,
     &     n, x, f, gradient, p)
      do while(iter.lt.200.and.fold-f.gt.0.d0)
         write(*,*) 'iter, function_count, gradient_count, '//
     &        'step, x-solution, dx, f, gradient ='
         write(*,*) iter, function_count, gradient_count
         write(*,*) step
         write(*,*) x(1)-1.d0, x(2)-1.d0
         write(*,*) dx
         write(*,*) f
         write(*,*) gradient
         fold = f
C Setting fbar to a large negative number essentially disables the
C bracket-limiting test, which is fine for well-behaved functions.
C epsilon is the delta f expected from roundoff error, included to
C add some robustness against round-off.
         call bfgs_iterate(rosenbrock, grosenbrock,
     &        -1.d200, 1.d-13, n, p, step, x, f, gradient, dx)
      enddo
      end


      real*8 function rosenbrock(n, x)
      implicit none
      integer*4 n
      real*8 x(n), t1
      t1 = x(2) - x(1)**2
      rosenbrock = 100.d0*t1*t1 + (x(1) - 1.d0)**2
      end


      subroutine grosenbrock(n, x, g)
      implicit none
      integer*4 n
      real*8 x(n), g(n), t1
      t1 = x(2) - x(1)**2
C Compute the gradient g of the sample problem:
C   df/dx(1) = 2*(x(1) - 1) - 400*t1*x(1),  df/dx(2) = 200*t1.
      g(1) = 2.d0*(x(1) - 1.d0) - 4.d0*100.d0*t1*x(1)
      g(2) = 2.d0*100.d0*t1
      end
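
C The driver above references iter, function_count and gradient_count
C without declaring them, so with implicit none in effect they
C presumably come from the included bfgs.h, shared with bfgs_set and
C bfgs_iterate through a common block.  A minimal sketch of what such
C an include might contain is given below; the common block name and
C layout are assumptions for illustration, not the library's actual
C header:
C
C      integer*4 iter, function_count, gradient_count
C      common /bfgs_counts/ iter, function_count, gradient_count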