From: John Lamb
Subject: Re: [Help-gsl] Stochastic descent in multimin functions
Date: Tue, 18 Jan 2005 20:51:24 +0000
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.2) Gecko/20040906
James Bergstra wrote:

> Hi, I was wondering if it is safe / sane to provide a function for
> minimization that uses a stochastic gradient estimate. My gradient
> estimation would thus produce different estimates for the same point,
> depending on the internal state of the 'function'. Aside from yielding
> a noisy gradient, would this interfere with the optimization? Would one
> optimization type be more appropriate than another in this case?
> (e.g. gradient descent vs. conjugate gradient vs. ?)
>
> James Bergstra
You might have a look at some of the papers of WB Liu et al on SSC optimization methods.
-- JDL