
Re: [gnugo-devel] nando_3.9.4c ?

From: Arend Bayer
Subject: Re: [gnugo-devel] nando_3.9.4c ?
Date: Fri, 27 Sep 2002 23:37:30 +0200 (CEST)

> > The question : if I can confirm that the engine's behavior is favorably
> > modified (regarding groups safety) by this patch, do you think it is worth
> > the 6% additional owl nodes or not ?
That depends a bit. I don't put much trust in PASS counts, but 3
PASSes doesn't sound like it's worth it; 14 PASSes would be.

In any case, I think we don't need to make the decision now. It is easy
to disable your code, and the infrastructure you built up could be
useful for other cases, too (tails being cut off without actually being
captured). The only thing we should not do at this point is merge a
version that brings the engine out of tune without good reason, as this
will make other tuning more difficult. (I.e. rather put in a version
that causes 3 PASSes and no FAILs than one with 14 PASSes and 9 FAILs.)

One way to reduce the owl node damage might actually be to run the owl
code twice; in the first run, we don't care about losing tails, and only
set some global variable in case this happens. Only if
* this variable got set in the first run,
* the dragon is declared alive, and
* we still have spare owl nodes
would we make a second run in which we try to live without
losing a tail. It would need some tricks with the caching, though.
Definitely only worth worrying about once the patch is in, and if we
decide that the increase in owl nodes is too big.

> I think we can safely increase the owl nodes a little
> if there is a good reason. Possibly some of your FAILS
> would go away in that case.
Increasing owl nodes to 2000 might be a good idea anyway. For nngs.tst,
I measured an increase in owl nodes by 10% and in reading nodes by 4%.
This looks acceptable to me -- some more performance measurements seem
to be called for, of course.

