Re: [Help-gsl] Stochastic descent in multimin functions


From: Brian Gough
Subject: Re: [Help-gsl] Stochastic descent in multimin functions
Date: Fri, 14 Jan 2005 18:48:31 +0000

James Bergstra writes:
 > I was wondering if it is safe / sane to provide a function for minimization
 > that uses a stochastic gradient estimate.  My gradient estimation would
 > thus produce different estimates for the same point, depending on the
 > internal state of the 'function'.  Aside from yielding a noisy gradient,
 > would this interfere with the optimization?  Would one optimization type
 > be more appropriate than another in this case? (e.g. gradient descent vs.
 > conjugate gradient vs. ?)
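
To make the question concrete, here is a minimal sketch (not from the original
posting) of a noisy-gradient callback plugged into gsl_multimin_fdfminimizer.
The quadratic objective f(x) = sum x_i^2, the Gaussian noise model and the
helper names (my_f, my_df, noisy_params, sigma) are assumptions chosen purely
for illustration; the point is only that df() returns a different estimate on
every call at the same point.

/* Sketch only: a noisy-gradient callback wired into gsl_multimin_fdfminimizer.
   The objective f(x) = sum x_i^2 and the Gaussian noise model are invented
   for illustration; df() deliberately disagrees with itself between calls. */
#include <stdio.h>
#include <gsl/gsl_multimin.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>

typedef struct { gsl_rng *rng; double sigma; } noisy_params;

static double my_f (const gsl_vector *x, void *params)
{
  (void) params;
  double sum = 0.0;
  for (size_t i = 0; i < x->size; i++) {
    double xi = gsl_vector_get (x, i);
    sum += xi * xi;
  }
  return sum;
}

/* Exact gradient is 2*x; corrupt it with Gaussian noise so that repeated
   calls at the same point give different estimates, as in the question. */
static void my_df (const gsl_vector *x, void *params, gsl_vector *g)
{
  noisy_params *p = (noisy_params *) params;
  for (size_t i = 0; i < x->size; i++) {
    double gi = 2.0 * gsl_vector_get (x, i);
    gsl_vector_set (g, i, gi + gsl_ran_gaussian (p->rng, p->sigma));
  }
}

static void my_fdf (const gsl_vector *x, void *params, double *f, gsl_vector *g)
{
  *f = my_f (x, params);
  my_df (x, params, g);
}

int main (void)
{
  const size_t n = 2;
  gsl_rng_env_setup ();
  noisy_params p = { gsl_rng_alloc (gsl_rng_default), 0.1 };

  gsl_multimin_function_fdf F;
  F.n = n;  F.f = my_f;  F.df = my_df;  F.fdf = my_fdf;  F.params = &p;

  gsl_vector *x = gsl_vector_alloc (n);
  gsl_vector_set (x, 0, 5.0);
  gsl_vector_set (x, 1, -3.0);

  gsl_multimin_fdfminimizer *s =
    gsl_multimin_fdfminimizer_alloc (gsl_multimin_fdfminimizer_conjugate_fr, n);
  gsl_multimin_fdfminimizer_set (s, &F, x, 0.01, 1e-4);

  int status;
  size_t iter = 0;
  do {
    iter++;
    status = gsl_multimin_fdfminimizer_iterate (s);
    if (status) break;   /* iteration can stall when gradient calls disagree */
    status = gsl_multimin_test_gradient (s->gradient, 1e-3);
  } while (status == GSL_CONTINUE && iter < 200);

  printf ("iter %zu  f = %g\n", iter, s->f);

  gsl_multimin_fdfminimizer_free (s);
  gsl_vector_free (x);
  gsl_rng_free (p.rng);
  return 0;
}

Note that the line search inside the conjugate-gradient step can fail to make
progress when successive gradient calls disagree, which is essentially the
concern raised in the reply below.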

While it might work, I don't think it can be recommended except as a
hack.  It may be acceptable if your gradient is reasonably accurate
and the function is mostly quadratic...

There must be algorithms in the literature for minimisation with
stochastic gradients.  Presumably you can obtain an error estimate
for each gradient, and a corresponding algorithm could use it to make
the minimisation more reliable... that would be my suggestion.
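
As an illustration of the kind of dedicated algorithm meant here, a minimal
sketch of plain stochastic gradient descent with a decreasing (Robbins-Monro)
step size, written directly against gsl_vector rather than through the
multimin interface.  The test objective, the noise model, the noisy_grad()
helper and the step-size schedule are assumptions for the example, not
anything prescribed by GSL or by this thread.

/* Sketch only: stochastic gradient descent with a decreasing step size,
   applied outside the gsl_multimin framework. */
#include <stdio.h>
#include <gsl/gsl_vector.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>

/* Noisy gradient of f(x) = sum x_i^2: exact gradient 2*x plus Gaussian noise. */
static void noisy_grad (const gsl_vector *x, gsl_vector *g,
                        gsl_rng *rng, double sigma)
{
  for (size_t i = 0; i < x->size; i++)
    gsl_vector_set (g, i, 2.0 * gsl_vector_get (x, i)
                          + gsl_ran_gaussian (rng, sigma));
}

int main (void)
{
  const size_t n = 2;
  gsl_rng_env_setup ();
  gsl_rng *rng = gsl_rng_alloc (gsl_rng_default);

  gsl_vector *x = gsl_vector_alloc (n);
  gsl_vector *g = gsl_vector_alloc (n);
  gsl_vector_set (x, 0, 5.0);
  gsl_vector_set (x, 1, -3.0);

  for (int t = 1; t <= 1000; t++) {
    double step = 0.5 / (10.0 + t);   /* decreasing step: sum diverges,
                                         sum of squares converges */
    noisy_grad (x, g, rng, 0.1);
    gsl_vector_scale (g, -step);
    gsl_vector_add (x, g);            /* x <- x - step * noisy gradient */
  }

  printf ("x = (%g, %g)\n", gsl_vector_get (x, 0), gsl_vector_get (x, 1));

  gsl_vector_free (x);
  gsl_vector_free (g);
  gsl_rng_free (rng);
  return 0;
}

The decreasing step size is what absorbs the gradient noise: early iterations
move quickly, later ones average the errors out.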

-- 
best regards

Brian Gough

Network Theory Ltd,
Publishing Free Software Manuals --- http://www.network-theory.co.uk/



