
Re: [glob2-devel] gradient improvement


From: Kai Antweiler
Subject: Re: [glob2-devel] gradient improvement
Date: Wed, 05 Apr 2006 01:12:10 +0200
User-agent: Gnus/5.1002 (Gnus v5.10.2) XEmacs/21.4 (Jumbo Shrimp, linux)

> Alright, there is a possible problem. I did hope to keep it secret :P for the
> sake of performance.

Bad choice.  That's only a waste of (my) time.  There should be a comment in
the code.  I knew glob2 had a documentation problem, but I didn't think it was
intentional.


> Now, what do we do?
>
> 1) All parts of the code but the AI provide "clean" listedAddr. That means that
> all "gameplay" gradients are computed correctly.

Ok, so in human-only games we have no problems.
And in games with AIs they might make a bad move, but probably won't.
At least that's what I hope.  The only people who can judge this are
the programmers of the AIs - so they should know about it.


> Increasing its size would only decrease performance by increasing
> cache misses.

btw: I recently thought of a way to halve the size :-)
And on some Isle maps we might even go further for noswim.


> 2) It's not likely to get a wrong gradient computation. For that to happen,
> all of listedAddr[size] has to be filled. It's unlikely because the gradient
> propagates as a "growing circle", so the wavefront grows linearly with the
> map size, O(sizeX+sizeY), while listedAddr[size] has size=sizeX*sizeY, which
> is much bigger.

Ok, but they also depend on the number of wavefronts - and those can get
really messy if the initialization is.


> 3) At worst, if the AI provides a list which is too bad and leads to a wrong
> gradient, it will make a sub-optimal choice.

Ok, but right now we have to patch the simon and kai algorithms by adding
&(size-1), or worse things could happen.
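
To be concrete about the &(size-1) patch - an illustrative fragment only, the
names are mine and not the ones from Map.cpp:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Illustration: with a power-of-two buffer size, masking the write index
    // with (size - 1) wraps it around instead of writing past the end of
    // listedAddr when more cells get queued than expected.
    void pushAddr(std::vector<uint32_t> &listedAddr, size_t &writeIndex,
                  uint32_t addr)
    {
        const size_t size = listedAddr.size();       // assumed power of two
        listedAddr[writeIndex & (size - 1)] = addr;  // wraps, never overflows
        writeIndex++;
    }

It only prevents writing past the buffer; it does not make an overfull list
compute a correct gradient.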


> 4) For an error to happen, the AI needs to provide a source with high
> differences - which I don't.

Yes, exactly: the people who know the AI are the ones who should worry about it.
But they have to know about it first.

> 5) I added some code to check how this behaves. I compute the biggest size
> of the queue at any moment of the gradient computation, and report it once
> per gradient. Here is the result for all gradients of a game:
>
> listCountSizeStatsOver=0
> listCountSizeStats:
> [    0->  255]: 9906
> [  256->  511]: 4244
> [  512->  767]: 6177
> [  768-> 1023]: 1648
> [ 1024-> 1279]: 1802
> [ 1280-> 1535]:    0
> [ 1536-> 1791]:    2
> [ 1792-> 2047]:    3
> [ 2048-> 2303]:    0
> [ 2304-> 2559]:    0
> [ 2560-> 2815]:    1
> [ 2816-> 3071]:    1
> [ 3072-> 3327]:    2
> [ 3328-> 3583]:    0
> [ 3584-> 3839]:    0
> [ 3840-> 4095]:    0
> [ 4096-> 4351]:    0
> [ 4352-> 4607]:    0
> [ 4608-> 4863]:   13
> [ 4864-> 5119]:    0
> [ 5120-> 5375]:    3
> [ 5376-> 5631]:    2
> [ 5632-> 5887]:    1
> [ 5888-> 6143]:    3
> [ 6144-> 6399]:   15
> [ 6400-> 6655]:    2
> [ 6656-> 6911]:    1
> [ 6912-> 7167]:    4
> [ 7168-> 7423]:    0
> [ 7424-> 7679]:    1
> [ 7680-> 7935]:    0
> [ 7936-> 8191]:   13
> [ 8192-> 8447]:    0
> [ 8448-> 8703]:    3
> [ 8704-> 8959]:    2
> [ 8960-> 9215]:    1
> [ 9216-> 9471]:    1
> [ 9472-> 9727]:    0
> [ 9728-> 9983]:   14
> [ 9984->10239]:    5
> [10240->10495]:    0
> [10496->10751]:    1
> [10752->11007]:    1
> [11008->11263]:    3
> [11264->11519]:    1
> [11520->11775]:    0
> [11776->12031]:    0
> [12032->12287]:    0
> [12288->12543]:    0
> [12544->12799]:    0
> [12800->13055]:    0
> [13056->13311]:    0
> [13312->13567]:    0
> [13568->13823]:    0
> [13824->14079]:    0
> [14080->14335]:    0
> [14336->14591]:    0
> [14592->14847]:    0
> [14848->15103]:    0
> [15104->15359]:    0
> [15360->15615]:    0
> [15616->15871]:    0
> [15872->16127]:    0
> [16128->16383]:    0
>
> The worst is 11456 over 16384. Note that it never "smashes" the queue. You can
> play with it by adding:
> #define check_gradient_error_probability
> in Map.h

I might use this for optimization too.
If we are cutting the size for performance we might reduce it even more,
but we have to make sure this will not compromise crucial parts.
What do the three numbers per row stand for?
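
For tuning I would collect something like the following myself - only a sketch
with invented names (the real listCountSizeStats code may look different),
assuming 256-wide buckets and the 16384-cell map from your numbers:

    #include <cstddef>
    #include <cstdio>

    // Sketch: bucket the peak queue length of each gradient computation into
    // 256-wide bins, like the table above; peaks past the last bin go into a
    // separate overflow counter.
    const size_t kBuckets = 16384 / 256;
    static unsigned listCountSizeStats[kBuckets];
    static unsigned listCountSizeStatsOver;

    void recordPeakQueueSize(size_t peak)
    {
        size_t bucket = peak / 256;
        if (bucket < kBuckets)
            listCountSizeStats[bucket]++;
        else
            listCountSizeStatsOver++;
    }

    void printStats()
    {
        printf("listCountSizeStatsOver=%u\nlistCountSizeStats:\n",
               listCountSizeStatsOver);
        for (size_t i = 0; i < kBuckets; i++)
            printf("[%5zu->%5zu]: %u\n", i * 256, i * 256 + 255,
                   listCountSizeStats[i]);
    }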


> If you notice any "smash", we can add a simple argument "const size_t
> listedAddrSize" which will be big enough for everything. The method caller
> will use a bigger size in case it may smash, like the AI. Well, maybe you
> want this to be added now anyway? Or do you see another nice solution?

I have to think about it.
But again: I think that belongs to the AI programmers since they know exactly
what this algorithm must tolerate and (more importantly) how bad the
consequences will be.
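
If we do go the explicit-size route, I imagine something along these lines - a
purely hypothetical signature, not a patch against Map.h:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Hypothetical sketch: the caller supplies the scratch buffer and its
    // size, so an AI that feeds a "dirty" source list can pass a bigger one.
    void updateGradientSketch(uint8_t *gradient, uint32_t *listedAddr,
                              size_t listedAddrSize)
    {
        // ... fill that never queues more than listedAddrSize addresses ...
        (void)gradient; (void)listedAddr; (void)listedAddrSize;
    }

    void aiCallerSketch(uint8_t *gradient, size_t sizeX, size_t sizeY)
    {
        // An AI caller that risks overflowing the usual sizeX*sizeY scratch
        // buffer simply allocates, say, twice that.
        std::vector<uint32_t> bigBuffer(2 * sizeX * sizeY);
        updateGradientSketch(gradient, bigBuffer.data(), bigBuffer.size());
    }

The gameplay callers would keep passing the usual sizeX*sizeY buffer, so only
the AI pays for the bigger allocation.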


> See, I don't want to change the AI call, because now it works, it's simple,
> and it's powerful. I mean, "updateGlobalGradientSlow()" does what I want, and
> does it nicely. It's very convenient to use it for the AI.

If that is the same for every present and future AI, we don't have to
change anything - except what I mentioned earlier.

> Well, I don't mind creating a special case for this one, if it's worth it.


> ps: I added some comments here too:
>
>> Imagine that the fields (x=0,y=0) (x=1,y=1) have gradient 9.
> I guess you mean (x=0; y=0) and (x=1; y=0) have gradient 9.

I actually messed up my example even more.
I had two examples and merged them accidentally - I won't correct it.


-- 
Kai Antweiler


