
[Help-gsl] Re: questions about using GSL's ODE solver?


From: Robert G. Brown
Subject: [Help-gsl] Re: questions about using GSL's ODE solver?
Date: Wed, 6 Jun 2007 09:25:13 -0400 (EDT)

On Wed, 6 Jun 2007, Michael wrote:

> Hi all,
>
> I am new to GSL's ODE solver; I began using it because Matlab is
> slow and I have to call the ODE solver billions of times.
>
> My questions are:
>
> 1. In Matlab, I can provide a vector of times, and Matlab will return
> a vector of solutions at those specified times.  Is there a way to do
> this with the GSL ODE solver?  I couldn't find a good way to handle
> this.  I want to solve the system of ODEs at 5, 7, 9, and 11 seconds,
> and I only want solutions at those points.  How do I do that?  I hope
> I don't need to rerun the solver from scratch for each time point;
> that would just be too slow.  Time is critical for me.

I'm not one of the GSL ODE coders, but I've coded up a number of ODE
solvers in the past (a practice I've gratefully given up in favor of the
GSL's much larger library of methods and better quality code).  I have,
however, used ODE solvers a lot over many years.

If you're going to use ODE solvers, it is a really good idea to learn a
bit about how they work. This is usually straightforward -- in fact the
generic answer is something like "fit a curve to the solution so far and
use the result to extrapolate the solution a step, taking measures to
assure that numerical error grows at no more than thus-and-such a rate
(varying by method) over the step."  The "measures" often involve
subdividing the interval into smaller steps and comparing the results
until they agree to within a tolerance.
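Schematically, and purely as my own toy illustration (this has nothing
to do with GSL's actual internals), the generic recipe looks something
like this in C, using a deliberately crude Euler step:

  #include <math.h>
  #include <stdio.h>

  /* Example ODE y' = -y, whose exact solution is y(t) = y(0)*exp(-t). */
  static double f (double t, double y) { (void) t; return -y; }

  /* One explicit Euler step of size h. */
  static double euler (double t, double y, double h)
  { return y + h * f (t, y); }

  int main (void)
  {
    double t = 0.0, y = 1.0, h = 0.1;
    const double t_end = 1.0, tol = 1e-6;

    while (t < t_end) {
      if (t + h > t_end) h = t_end - t;          /* don't overshoot */
      double big  = euler (t, y, h);             /* one full step   */
      double half = euler (t + h / 2,
                           euler (t, y, h / 2), h / 2); /* two half steps */
      if (fabs (big - half) > tol) {
        h /= 2;                /* error too large: subdivide and retry */
      } else {
        y = half;              /* accept the more accurate estimate */
        t += h;
        if (fabs (big - half) < tol / 4) h *= 2;   /* grow cautiously */
      }
    }
    printf ("y(1) = %.8f (exact %.8f)\n", y, exp (-1.0));
    return 0;
  }

Real methods use much higher-order steps and cleverer error estimates,
but the subdivide-compare-accept skeleton is the same.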

One of the best books I can recall that describes the basic process and
the way error can accumulate and be controlled is Forsythe, Malcolm, and
Moler's book on numerical methods.  It (IIRC -- I haven't read it for
years and don't code in Fortran any more) provides example code for how
to write a simple (but still quite good) ODE solver by writing a routine
that does "the basic mini-step" (stepping function) and then wrapping it
up in a shell that does the iterative refinement until the big-step
result is within requested tolerances.  One provides the stepping
function with a standard format subroutine to generate the vector of
derivatives from the vector of current values (and perhaps additional
information, e.g., Jacobians).  It also describes in detail the sources of
error and how they are expected to accumulate.

The GSL follows this basic model AFAICT.  It provides routines for
initializing the solvers (some require more than others, and it hides
the detail behind a common interface), defining the system to be solved
(again, using a format common to all the provided ODE solvers).  Then it
provides a whole library of "stepping functions" -- the actual ODE
"solver".  Stepping functions by their nature give accurate results for
"small" steps only, not large ones, where "small" has a specific meaning
relative to the magnitude of the derivatives and requested tolerance,
just as one might expect from a Taylor series or other expansion of the
fit function that is the basis of the method.
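In GSL terms that looks something like the following sketch (this uses
the gsl_odeiv interface from the current 1.x manual; the damped
oscillator and the parameter mu are just made-up examples):

  #include <gsl/gsl_errno.h>
  #include <gsl/gsl_odeiv.h>

  /* The system, in GSL's standard format: given t and y[], fill f[]
     with the derivatives. */
  int func (double t, const double y[], double f[], void *params)
  {
    double mu = *(double *) params;   /* damping strength */
    (void) t;
    f[0] = y[1];
    f[1] = -y[0] - mu * y[1];
    return GSL_SUCCESS;
  }

  /* ...then, inside main(): define the system (a NULL Jacobian is
     fine for the explicit steppers) and pick a stepping function by
     name -- changing the method later means changing one line. */
  double mu = 0.1;
  gsl_odeiv_system sys = { func, NULL, 2, &mu };
  gsl_odeiv_step *s = gsl_odeiv_step_alloc (gsl_odeiv_step_rkf45, 2);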

It then provides a common shell that can take ANY of the stepping
functions and provide adaptive stepsize control -- make the step big
where the derivatives are all small and the solution is nearly flat,
make it small as the solution works its way across regions where it is
rapidly varying.  This routine does not ITSELF increase the amount of
work automagically in such a way that it varies over a macroscopic
interval, but it is a tool that is used by the tool that does (and that
can be wrapped and called by the user in the event that they want to
write their own toplevel routine.
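Continuing the sketch above, the control object is a one-liner; the
tolerances here are placeholders you would tune for your own problem:

  /* Adaptive control: keep each step's estimated local error below an
     absolute tolerance of 1e-6 (relative tolerance 0). */
  gsl_odeiv_control *c = gsl_odeiv_control_y_new (1e-6, 0.0);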

Finally, the GSL provides the toplevel "evolution" routine.  This is the
thing you are looking for.  Using this, you can invoke the solver(s) of
your choice (varying them by basically changing a single parameter once
the rest of the solution code is written) to integrate a vector ODE y' = f(y,t)
from (known) y(t_0) to (evaluated) y(t_1) >>regardless of how large the
separation is between t_0 and t_1<< compared to the underlying structure
of the ODEs and give an answer that is method-sound within some
tolerance.
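And, continuing the same sketch, the evolution call is the piece that
eats an arbitrary [t, t1] interval in internally chosen sub-steps:

  gsl_odeiv_evolve *e = gsl_odeiv_evolve_alloc (2);
  double t = 0.0, t1 = 10.0, h = 1e-6;
  double y[2] = { 1.0, 0.0 };   /* made-up initial conditions */

  while (t < t1) {
      /* Advances t toward t1, updating the suggested step size h. */
      int status = gsl_odeiv_evolve_apply (e, c, s, &sys, &t, t1, &h, y);
      if (status != GSL_SUCCESS)
          break;                /* the stepper reported a problem */
  }
  /* y[] now holds y(t1), to within the requested tolerances. */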

With that said, managing tolerance vs the actual size of t_0 -> t_1 is
always a tricky thing, whether or not the trickiness is being hidden by
a toplevel evolution call in the GSL or in Matlab.  Being old and
cynical and having read many books on numerical methodology, I know that
any ODE solver can be "fooled" into returning a result accurate within
some tolerance when in fact it is not.  It is just a matter of how
structures in the real solution line up with the sequence of points
algorithmically chosen by the ODE solver.  Even if the solver uses
random numbers (to try to beat certain Fourier component errors that are
the classic exception) it merely changes the class of functions that
beat a single call of the evolution routine for a given method.

The wise programmer therefore generally learns a bit about their
PARTICULAR ODEs through judicious experimentation.  In particular, I
personally like to write even the toplevel evolution call into a loop
and work my way through the solution in a variable number of substeps --
calling it as a sort of "adaptive stepping function" that can manage
arbitrary steps, and then vary this intermediary stepsize and the ODE
solver itself at least a couple of different ways that aren't obvious
Fourier multiples of one another (as in, not just "halving the step
size" -- maybe doing 1/17 the stepsize or 1/11 the stepsize).  If the
solutions I get in this way are stable and reproducible across method
and stepsize within my requested tolerance, then I gain confidence in
the methods involved for the particular ODEs being solved out to some
large stepsize -- or don't.  At THAT point I can look at timings and
pick the best, fastest, ODE solver that can manage the biggest timestep
that leaves my overall solution apparently accurate.
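A sketch of that game, reusing the made-up system from above (the
variable names and substep counts are mine, of course; this fragment
assumes <stdio.h> and the sys defined earlier):

  const gsl_odeiv_step_type *types[] = {
      gsl_odeiv_step_rk4, gsl_odeiv_step_rkf45, gsl_odeiv_step_rk8pd
  };
  int substeps[] = { 11, 17 };   /* deliberately non-commensurate */
  size_t i, j;
  int k;

  for (i = 0; i < 3; i++) {
      for (j = 0; j < 2; j++) {
          gsl_odeiv_step *s2 = gsl_odeiv_step_alloc (types[i], 2);
          gsl_odeiv_control *c2 = gsl_odeiv_control_y_new (1e-6, 0.0);
          gsl_odeiv_evolve *e2 = gsl_odeiv_evolve_alloc (2);
          double t = 0.0, h = 1e-6, y[2] = { 1.0, 0.0 };

          /* Walk to t = 10 in substeps[j] intermediary chunks. */
          for (k = 1; k <= substeps[j]; k++) {
              double ti = 10.0 * k / substeps[j];
              while (t < ti)
                  gsl_odeiv_evolve_apply (e2, c2, s2, &sys, &t, ti, &h, y);
          }
          printf ("%-10s substeps=%2d  y[0] = %.10g\n",
                  gsl_odeiv_step_name (s2), substeps[j], y[0]);

          gsl_odeiv_evolve_free (e2);
          gsl_odeiv_control_free (c2);
          gsl_odeiv_step_free (s2);
      }
  }

If those printed values all agree to within your tolerance, you have
earned some confidence; if not, you have learned something important
about your ODEs before trusting any production run.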

Sometimes I even apply this to a problem "like" my problem (e.g. solving
coupled Schrödinger equations) with a known analytic solution, which
lets me see what the REAL error is, not just the consistency/tolerance
error, to verify that the method is valid within the requested tolerance
for (one hopes) the entire class of related ODEs.

All this is for non-stiff ODEs.  If you know what that means, all is
well and good and you know that for >>stiff<< ODEs one has to be
>>much<< more careful.  If you don't, non-stiff ODEs are ones where the
extrapolated fit function has error terms that grow slowly, because
nearby solutions produced by perturbing function or derivative (a
perturbation that ALWAYS occurs due to numerical roundoff error if
nothing else) advance parallel to one another in a functional analytic
sense so errors are (hopefully high order) polynomial in the timestep
and can be made arbitrarily small.  Stiff ODEs, OTOH, are ones where
the nearby solutions exponentially diverge from the desired solution.

Quantum mechanical wavefunctions for bound states are all in L^2(R^D),
and hence almost always have to decay exponentially as a length
parameter gets big relative to some interaction center.  However,
eigensolutions that satisfy this boundary condition are always discrete
and the nearby solutions all DIVERGE exponentially -- this is a classic
example of stiff.  For that reason it is not so easy to numerically
integrate over the ODEs that lead to such a solution -- even if one
starts "perfectly" on a convergent square-integrable solution numerical
error always kicks one off onto one of the divergent solutions that are
infinitesimally separated from the true bound state solution.  Unless
one works very hard or uses special ODE solvers that are good for at
least some classes of stiff problems.
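For what it's worth, GSL's stiff-capable steppers (bsimp, and the gear
ones mentioned below) want the Jacobian supplied in a standard format
as well; a hypothetical one-dimensional example for y' = -lambda*y:

  /* dfdy is a dim x dim row-major matrix, dfdt the explicit
     t-derivative of f. */
  int jac (double t, const double y[], double *dfdy, double dfdt[],
           void *params)
  {
    double lambda = *(double *) params;
    (void) t; (void) y;
    dfdy[0] = -lambda;   /* d f_0 / d y_0 */
    dfdt[0] = 0.0;       /* f has no explicit time dependence */
    return GSL_SUCCESS;
  }

  /* Then: gsl_odeiv_system sys = { func, jac, 1, &lambda };
     with s = gsl_odeiv_step_alloc (gsl_odeiv_step_bsimp, 1);  */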

In summary, the GSL provides you with access to a single toplevel
evolution function that you can wrap up in a simple loop to integrate to
your list of times in a single call each.  HOWEVER, you should almost
certainly not trust it until you play the kinds of games described
above, unless you really know the ODEs in question (and their solutions)
well.  At the very least you should vary solvers looking for consistent
results and seeking out the one(s) that solve your problem fastest, if
speed is important to you.  Some solvers are much faster than others,
and speed is good IF it doesn't decrease accuracy for YOUR particular
ODEs.  The GSL also provides you with lower level routines if you need
or want to "do it yourself".  Finally, the GSL manual has a nice little
sample program you can use as a template for your own solution process.
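Concretely, for your 5, 7, 9, 11 second example the toplevel loop might
look like this (a sketch reusing the sys, s, c, e objects allocated in
the fragments above, started fresh from t = 0; note that it is ONE
continuous integration with output at each requested time, never a
restart):

  double times[] = { 5.0, 7.0, 9.0, 11.0 };
  double t = 0.0, h = 1e-6;
  double y[2] = { 1.0, 0.0 };   /* made-up initial conditions at t = 0 */
  size_t i;

  for (i = 0; i < sizeof (times) / sizeof (times[0]); i++) {
      /* Advance from wherever we are to the next requested time. */
      while (t < times[i]) {
          int status = gsl_odeiv_evolve_apply (e, c, s, &sys,
                                               &t, times[i], &h, y);
          if (status != GSL_SUCCESS)
              break;
      }
      printf ("t = %g  y[0] = %.8g  y[1] = %.8g\n", t, y[0], y[1]);
  }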


> 2. Are there any ways to get further speedups from the internals of
> the solver?  I am thinking of providing a vector of parameters (hence
> a vector of different ODEs) and solving them in a batch!

Sometimes this will help -- many systems will do better if you can keep
a task on the CPU and in cache and keep memory accesses sequential
instead of disjoint.  However, it depends strongly on the method used
and just what it requires in terms of derivatives and Jacobians and so
on.  Try it out.  But FIRST do some experimentation to find out which
solvers do well on the problem a priori.  You have a variety of
Runge-Kutta methods (which fit polynomials in a simple
precision-extending way) to choose from.  rkf45 is a classic that gives
good performance for a very wide range of problems, although I've had as
good or better performance from the Bulirsch-Stoer method (which
requires a Jacobian matrix and not just a derivative vector, so you do
more work at one point to gain better overall performance elsewhere).
Many of the methods are there so that if your problem naturally "fools"
an rkf45 because of a naturally occurring series of Fourier components
that match the adaptive interval, you can try, e.g., one with a Gaussian
mesh instead of a rectangular one, which is less likely to experience
this sort of interference on THIS problem.  Your Jacobian will be
diagonal, however, if you are solving a vector of independent 1-D ODEs.
IIRC the Gear-method solvers are good for weakly stiff ODEs, but it has
been over
two decades since I used them (in a different library altogether) and I
don't really remember.
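A sketch of the batching idea (the parameter layout here is my own
invention): pack K independent copies of a one-dimensional ODE, each
with its own rate constant, into a single GSL system so that one
solver pass advances them all together:

  #define K 1000   /* number of independent copies, arbitrary */

  /* y_i' = -lambda_i * y_i for i = 0..K-1, all advanced at once. */
  int batch_func (double t, const double y[], double f[], void *params)
  {
    const double *lambda = (const double *) params;  /* K rate constants */
    size_t i;
    (void) t;
    for (i = 0; i < K; i++)
        f[i] = -lambda[i] * y[i];
    return GSL_SUCCESS;
  }

  /* Inside main():
     double lambda[K] = { ... };
     gsl_odeiv_system sys = { batch_func, NULL, K, lambda };  */

One caveat: the adaptive control now watches the whole vector at once,
so the step size will be limited by the worst-behaved member of the
batch.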

Once you get your basic loop written, it is really pretty easy to play
with parameters.  Just write your code from the beginning to be
generalizable, not specific.  You are likely to want to reuse any ODE
integrator shell code you write, so keep that in mind.

   rgb


> Thanks a lot for your help!
>
> Best,
>
> Michael.


--
Robert G. Brown                        http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:address@hidden





