Re: [gnugo-devel] reading.c / special_rescue_moves


From: Evan Berggren Daniel
Subject: Re: [gnugo-devel] reading.c / special_rescue_moves
Date: Sun, 28 Dec 2003 10:47:40 -0500 (EST)

On Sun, 28 Dec 2003, Martin Holters wrote:

> Hi all!
>
> Starting from my aforementioned example looking like
>
> XXXXXO
> XO...O
> XO.O.O
> XXXXXO
>
> I have investigated reading.c to see whether it could be easily modified
> to see that the double-bamboo-joint would save the left-most string. I
> have stumbled upon special_rescue_moves(), which supposedly adds
> second-order liberties as move candidates - exactly what I want.
> Unfortunately, it also verifies that an X stone at the respective
> first-order liberty could be trivially captured, which is not the case
> for my example, but works well for the examples given in the source. As
> this test is only meant to optimise away unnecessary branches, removing
> it should be a conservative modification.

This looks like the correct approach.  You might also consider changing
the restrictions instead of removing them altogether; we don't want to be
trying huge numbers of moves here.

>
> Indeed, the above example now works out fine, but also
>
> ./regress.sh . reading.tst
> 174 unexpected PASS!
> ./regress.sh . owl.tst
> ./regress.sh . owl_rot.tst
> ./regress.sh . ld_owl.tst
> 182 unexpected FAIL: Correct '2 S1', got '1 S1'
> ./regress.sh . optics.tst
> ./regress.sh . filllib.tst
> ./regress.sh . atari_atari.tst
> 13 unexpected FAIL: Correct 'D8', got 'PASS'
> ./regress.sh . connection.tst
> 102 unexpected PASS!
> ./regress.sh . break_in.tst
> ./regress.sh . blunder.tst
> ./regress.sh . trevora.tst
> 150 unexpected FAIL: Correct 'F6', got 'E6'
> 370 unexpected PASS!
> ./regress.sh . nngs1.tst
> ./regress.sh . strategy.tst
> 26 unexpected PASS!
>
> That's four unexpected PASSES against three unexpected FAILS - at least
> a net gain. What puzzles me, however, is how considering more
> possibilities in a brute-force search can worsen the result in any case
> at all. Am I missing some assumption here about what defences should
> _not_ be found for some reason?

There are a number of factors at work here.  When you add new defending
moves, you need to make sure that the corresponding attacking moves get
added too; all the variations need to be considered, and if you unbalance
things, the search can actually get worse even though it considers more
moves.  This isn't normally something to worry about too much, but it
should be given some consideration.

More likely is that in the FAILs, the reading has improved in spots, but
is still largely or partially incorrect, through no fault of your patch.
We call these accidental fails.  If the test case was passing through a
combination of mistakes, and you remove one of them, it is not surprising
that the testcase now fails.  However, you do need to do at least a
cursory examination before deciding that this is the case.

> Furthermore, is there a nice possibility to do some automated
> benchmarking to get some measure of how big the performance impact of
> this modification is?

The simplest way is to run some or all of the regressions using the
regress.pike script; it will provide counts of the reading nodes used,
which can be compared to the counts beforehand.

Also, it looks like you only ran the first batch of the regressions (make
first_batch).  Are there changes in the other batches?  Running make
all_batches, or using regress.pike, will report all changes.

Thanks

Evan Daniel
