
Re: [gnugo-devel] Semeai

From: Gunnar Farneback
Subject: Re: [gnugo-devel] Semeai
Date: Thu, 24 Jan 2002 18:07:25 +0100
User-agent: EMH/1.14.1 SEMI/1.14.3 (Ushinoya) FLIM/1.14.2 (Yagi-Nishiguchi) APEL/10.3 Emacs/20.7 (sparc-sun-solaris2.7) (with unibyte mode)

Dan wrote:
> There is a problem with the compactness of the
> hash data. BTW, I haven't addressed the problem
> Gunnar described in:
> but I'll say that one issue is that we'd like to
> save a bit more data than we have room for in the
> compressed data. There is a similar issue with the
> semeai, which I'll explain.

For caching of readconnect results we have a similar issue in that two
strings are involved.

> The function returns through pointers statuses for
> two dragons and so I split the result field in half.
> These statuses can be ALIVE, DEAD, ALIVE_IN_SEKI or
> UNKNOWN. Four bits is enough to cache these.

Four bits should be more than enough for those, but we probably want
to be able to handle ko contingent results as well.

> However there is still a problem that the hash key
> only depends on one dragon, which could conceivably
> be involved in multiple semeais. The solution I found
> to this is to run reading_cache_clear() from the
> non-recursive owl_analyze_semeai() after
> do_owl_analyze_semeai() returns.

This is clearly not acceptable as a long-term solution.

If we look at struct read_result_t, we can see that it contains three
fields:

  unsigned int compressed_data;
  int result_ri_rj;
  struct read_result_t *next;

We can't do anything constructive with the next pointer, which leaves
us with (on most architectures) 64 useful bits. The current
distribution of these is

    komaster:  2 bits
    kom_pos : 10 bits
    routine :  4 bits
    str     : 10 bits
    stackp  :  5 bits
    status  :  8 bits
    result  :  8 bits
    move    : 16 bits
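As a rough illustration of this layout, the packing could look like the
sketch below. The widths follow the list above, but the shift positions
and function names are illustrative only; the actual macros in GNU Go's
cache code may order the fields differently.

```c
#include <stdint.h>

/* compressed_data layout (input): komaster(2) kom_pos(10) routine(4)
 * str(10) stackp(5) -- 31 bits used of 32. */
static uint32_t
pack_input(unsigned komaster, unsigned kom_pos, unsigned routine,
           unsigned str, unsigned stackp)
{
  return ((uint32_t) komaster << 29) | ((uint32_t) kom_pos << 19)
       | ((uint32_t) routine << 15) | ((uint32_t) str << 5) | stackp;
}

/* result_ri_rj layout (output): status(8) result(8) move(16). */
static uint32_t
pack_output(unsigned status, unsigned result, unsigned move)
{
  return ((uint32_t) status << 24) | ((uint32_t) result << 16) | move;
}

/* Extract a couple of fields back out, to show the inverse. */
static unsigned input_str(uint32_t cd)    { return (cd >> 5) & 0x3ff; }
static unsigned output_move(uint32_t out) { return out & 0xffff; }
```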

We can notice that compressed_data contains all the information needed
to identify which reading problem the struct pertains to (input data),
while result_ri_rj holds the returned values (output data) plus a
status field (for internal use by the caching code). This is practical
because it allows a simple comparison "result->compressed_data ==
search_for" to identify a reading problem. It's not necessary,
however, to have all input data in compressed_data. We may freely use
these 64 bits however we want, as long as we revise the code
accordingly.
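The lookup idiom this enables is a single integer comparison per entry
in the hash chain, roughly as sketched below (names are illustrative,
not GNU Go's actual ones):

```c
#include <stddef.h>
#include <stdint.h>

struct rr {
  uint32_t compressed_data;  /* identifies the reading problem */
  uint32_t result_ri_rj;     /* cached output */
  struct rr *next;           /* hash chain */
};

/* Walk the chain; one comparison suffices because all input data is
 * packed into compressed_data. */
static struct rr *
find_result(struct rr *chain, uint32_t search_for)
{
  for (; chain != NULL; chain = chain->next)
    if (chain->compressed_data == search_for)
      return chain;
  return NULL;
}
```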

Let's look closer at what we have:

komaster: 2 bits is a minimum, may need one more if we make the
          komaster scheme more complex
kom_pos:  10 bits allows up to 31x31 boards. We could save one bit by
          limiting ourselves to maximum 21x21 boards.
routine:  3 bits may suffice, but probably need 4 sooner or later
str:      10 bits, same as kom_pos
stackp:   5 bits, might do with 4 but prefer not
status:   2 bits suffice
result:   2 bits suffice for the basic ko scheme; the semeai code
          needs more
move:     10 bits, same as kom_pos

Not counting the requirements of semeai or a revised komaster scheme,
we should get by with 45 bits. This leaves 19 bits for expansion. The
connection code needs to store one more string, for 10 bits, but this
is all that's required.

How many bits does the semeai code need? (Actually it's not necessary
for different routines to have the same storage layout, as long as the
routine and status fields appear in fixed positions.)
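The parenthetical idea could be sketched like this: keep routine and
status at fixed positions in the 64 useful bits and let each routine
define its own payload layout. The layout below is purely hypothetical,
with a semeai payload of two dragons plus two 3-bit statuses (enough
for ALIVE, DEAD, ALIVE_IN_SEKI, UNKNOWN and some ko-contingent values)
and a move:

```c
#include <stdint.h>

#define ROUTINE_SHIFT 60  /* routine: fixed 4 bits at the top */
#define STATUS_SHIFT  58  /* status: fixed 2 bits below it */

static unsigned routine_of(uint64_t e) { return (e >> ROUTINE_SHIFT) & 0xf; }
static unsigned status_of(uint64_t e)  { return (e >> STATUS_SHIFT) & 0x3; }

/* Semeai payload: dragon_a(10) dragon_b(10) status_a(3) status_b(3)
 * move(10) -- 36 bits, leaving room for komaster data. */
static uint64_t
pack_semeai(unsigned routine, unsigned status,
            unsigned dragon_a, unsigned dragon_b,
            unsigned status_a, unsigned status_b, unsigned move)
{
  return ((uint64_t) routine << ROUTINE_SHIFT)
       | ((uint64_t) status  << STATUS_SHIFT)
       | ((uint64_t) dragon_a << 26)
       | ((uint64_t) dragon_b << 16)
       | ((uint64_t) status_a << 13)
       | ((uint64_t) status_b << 10)
       | move;
}
```

Because routine_of() works on any entry regardless of payload, the
cache code can dispatch on it before interpreting the rest.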

> If this call to reading_cache_clear() is moved before 
> the do_owl_analyze_semeai() call we get a different
> set of crashes. I looked at one of those.  It was not
> in semeai() at all but a later call to the owl
> code. The crash was in hashtable_partially_clear().

That's bad. Might be a problem with data corruption.

> I would like to understand these crashes.
> At least one of the crashes of the current patch
> disappears if reading_cache_clear() is called both
> before and after the call to do_owl_analyze_semeai().
> This is a crash in lazarus:8. It does not occur if
> that test is isolated from the rest of lazarus.tst.

Also sounds strange.

