Re: [Help-gsl] gsl performance


From: onefire
Subject: Re: [Help-gsl] gsl performance
Date: Mon, 7 Oct 2013 17:02:51 -0400

"I tried this technique with the ODE solvers in the GSL and it gave me
about 5% overall performance improvement so I dropped it from my code,
it was a fiddle to maintain and the user would barely notice the
difference.  I was doing quite a lot of other things, so maybe if your
overall time is dominated by malloc/free it may help."

I am not surprised by your results because, contrary to what my previous
messages might suggest, I think the main problem is not the allocations
themselves but where the memory lives. At least for certain problems, the
machine is simply much more efficient at accessing stack memory.
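To be concrete (this is only a rough sketch, and gsl_multimin is just an
example), the data for each small problem can already live on the stack by
wrapping a plain array in a view instead of calling gsl_vector_alloc:

  #include <gsl/gsl_vector.h>

  void solve_one_small_problem(void)
  {
      double buf[3] = { 1.0, 2.0, 3.0 };               /* stack storage */
      gsl_vector_view x = gsl_vector_view_array(buf, 3);
      /* &x.vector behaves like a gsl_vector* and can be passed to the
         minimizer's _set() function; no heap allocation for the data. */
  }

The minimizer's internal workspace, however, can only come from the
_alloc() functions, and that is the part I would like to move to the stack.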

Unfortunately, it seems that it is not trivial to implement the init
functions I suggested previously. The problem is that the minimizer has to
accept different types of objects depending on the problem. The library
currently uses void pointers, but that does not work if you need the
objects to be known at compile time. Here the lack of overloading in C
really hurts: one would need many more names for types and functions,
which could have a deep impact on the API.
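To illustrate the naming problem (everything below is hypothetical, nothing
like it exists in the library today): with caller-provided storage the
state type has to be visible, and without overloading each algorithm, and
possibly each fixed size, ends up needing its own type and init function:

  #include <gsl/gsl_vector.h>
  #include <gsl/gsl_multimin.h>

  /* Hypothetical caller-owned state for a 3-dimensional simplex search. */
  typedef struct {
      double simplex[4][3];    /* sizes fixed at compile time */
      double values[4];
      /* ... whatever else the algorithm needs ... */
  } my_nmsimplex3_state;

  /* Hypothetical init: fills in caller-provided storage, allocates nothing. */
  int my_nmsimplex3_init(my_nmsimplex3_state *s,
                         gsl_multimin_function *f,
                         const gsl_vector *x,
                         const gsl_vector *step_size);

  /* Today's API keeps the state opaque, so one pair of names covers every
     algorithm and size, but the storage must come from the heap:
     gsl_multimin_fminimizer *s =
         gsl_multimin_fminimizer_alloc(gsl_multimin_fminimizer_nmsimplex2, 3); */

Multiply that by every algorithm (and every problem size, if the sizes are
to be compile-time constants) and the API grows very quickly.
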
G


On Mon, Oct 7, 2013 at 3:39 PM, Sam Mason <address@hidden> wrote:

> Hi,
>
> On 7 October 2013 18:22, onefire <address@hidden> wrote:
> > One, which I like, is that the routines give the user low-level control
> > over their progress in the sense that you can create an object, manually
> > iterate and observe their progress. I prefer this over having a single
> > function that does the work until convergence (but see below).
>
> Yes, everybody wants to terminate at different places—flat gradient,
> max number of steps, some combination or other condition...  Having
> this outside of the GSL makes sense.  You could let this be evaluated
> by another callback, but then this would be three callbacks now?
>
> Reading your earlier messages, it seems that you want to perform many
> minimizations (or other algorithms?).  Could you not just allocate one
> minimizer (or one per thread) and "reset" it as needed? That way you
> don't need to be free()ing/malloc()ing the "same" memory all the
> time.  I guess it depends on whether the number of variables changes.
>
> I tried this technique with the ODE solvers in the GSL and it gave me
> about 5% overall performance improvement so I dropped it from my code,
> it was a fiddle to maintain and the user would barely notice the
> difference.  I was doing quite a lot of other things, so maybe if your
> overall time is dominated by malloc/free it may help.
>
> Not sure whether it would be worth trying a different memory
> allocator.  There used to be faster ones available, but they could
> well be folded into the standard libraries now.
>
> Hope some of that is useful for you!
>
>   Sam
>
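
For reference, the reset-and-reuse pattern described above is already
possible with the current gsl_multimin API when the dimension does not
change, since _set() re-initializes the workspace. A rough sketch, where
func, x_start, step, n and nproblems stand in for problem-specific setup:

  gsl_multimin_fminimizer *s =
      gsl_multimin_fminimizer_alloc(gsl_multimin_fminimizer_nmsimplex2, n);
  size_t i;
  int status;

  for (i = 0; i < nproblems; i++) {
      /* re-initialize the same workspace for the next starting point;
         no free()/alloc() between problems */
      gsl_multimin_fminimizer_set(s, &func, x_start[i], step);

      do {
          status = gsl_multimin_fminimizer_iterate(s);
          if (status) break;
          status = gsl_multimin_test_size(gsl_multimin_fminimizer_size(s),
                                          1e-6);
      } while (status == GSL_CONTINUE);
  }

  gsl_multimin_fminimizer_free(s);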

