[Help-gsl] Stochastic descent in multimin functions


From: James Bergstra
Subject: [Help-gsl] Stochastic descent in multimin functions
Date: Wed, 12 Jan 2005 18:00:45 -0500
User-agent: Mutt/1.4.1i

Hi,

I was wondering if it is safe / sane to provide a function for minimization
that uses a stochastic gradient estimate.  My gradient routine would
thus return different estimates at the same point, depending on the
internal state of the 'function'.  Aside from yielding a noisy gradient,
would this interfere with the optimization?  Would one minimizer type
be more appropriate than another in this case? (e.g. steepest descent vs.
conjugate gradient vs. ?)
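
To make the question concrete, here is a rough, untested sketch of the kind
of setup I have in mind.  The toy quadratic, the noise level (0.1), and the
choice of steepest descent are just stand-ins for my real problem; the rest
is the usual fdfminimizer loop from the manual.

/* Sketch: a toy quadratic whose gradient callback adds Gaussian noise
   on every call, driven by the steepest-descent fdfminimizer.
   Link with something like: gcc sketch.c -lgsl -lgslcblas -lm */

#include <stdio.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_vector.h>
#include <gsl/gsl_multimin.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>

static gsl_rng *rng;   /* the internal state that makes df stochastic */

/* exact objective: f(x,y) = (x-1)^2 + (y-2)^2 */
static double my_f (const gsl_vector *v, void *params)
{
  double x = gsl_vector_get (v, 0), y = gsl_vector_get (v, 1);
  return (x - 1.0) * (x - 1.0) + (y - 2.0) * (y - 2.0);
}

/* gradient with zero-mean Gaussian noise added to each component, so
   repeated calls at the same point return different estimates */
static void my_df (const gsl_vector *v, void *params, gsl_vector *df)
{
  double x = gsl_vector_get (v, 0), y = gsl_vector_get (v, 1);
  gsl_vector_set (df, 0, 2.0 * (x - 1.0) + gsl_ran_gaussian (rng, 0.1));
  gsl_vector_set (df, 1, 2.0 * (y - 2.0) + gsl_ran_gaussian (rng, 0.1));
}

static void my_fdf (const gsl_vector *v, void *params,
                    double *f, gsl_vector *df)
{
  *f = my_f (v, params);
  my_df (v, params, df);
}

int main (void)
{
  gsl_rng_env_setup ();
  rng = gsl_rng_alloc (gsl_rng_default);

  gsl_multimin_function_fdf func;
  func.f = &my_f;
  func.df = &my_df;
  func.fdf = &my_fdf;
  func.n = 2;
  func.params = NULL;

  gsl_vector *x = gsl_vector_alloc (2);
  gsl_vector_set (x, 0, 5.0);
  gsl_vector_set (x, 1, 7.0);

  /* steepest descent; conjugate_fr / conjugate_pr / vector_bfgs are
     the alternatives I am asking about */
  gsl_multimin_fdfminimizer *s =
    gsl_multimin_fdfminimizer_alloc
      (gsl_multimin_fdfminimizer_steepest_descent, 2);
  gsl_multimin_fdfminimizer_set (s, &func, x, 0.01, 1e-4);

  size_t iter = 0;
  int status;
  do
    {
      iter++;
      status = gsl_multimin_fdfminimizer_iterate (s);
      if (status)               /* e.g. GSL_ENOPROG if the noisy     */
        break;                  /* gradient stalls the line search   */
      status = gsl_multimin_test_gradient (s->gradient, 1e-3);
    }
  while (status == GSL_CONTINUE && iter < 1000);

  printf ("iter %u: f = %g\n", (unsigned) iter, s->f);

  gsl_multimin_fdfminimizer_free (s);
  gsl_vector_free (x);
  gsl_rng_free (rng);
  return 0;
}

I picked steepest descent here only because it seemed least likely to be
confused by noise; swapping in gsl_multimin_fdfminimizer_conjugate_fr or
gsl_multimin_fdfminimizer_vector_bfgs is exactly the comparison I am
unsure about.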

James Bergstra



