
Re: [gnugo-devel] Parallelizing GNU Go


From: Evan Berggren Daniel
Subject: Re: [gnugo-devel] Parallelizing GNU Go
Date: Thu, 3 Apr 2003 18:26:58 -0500 (EST)

On Fri, 4 Apr 2003, Paul Pogonyshev wrote:

> at least for the main project (gnu go that is, i'm not aware of any forks
> yet) my opinion is that we should stick to a portable, reasonably fast (it
> must run on medium-class machines) and of course as strong as possible
> user-oriented program. this is of course only my point of view, but it
> seems to me that the whole gnu project is about common solutions, i.e. i
> doubt it will ever include a nuclear-explosion-evaluation-program-that-
> runs-on-room-sized-clusters-only.

In my opinion, we have a fundamental problem at present.  If you produced
a computer tomorrow that was a few orders of magnitude faster than present
machines, we would not know how to make use of it to make GNU Go much
stronger.  Modern chess programs are built around alpha-beta search; with
faster hardware they simply search deeper, and deeper search yields a
noticeable gain in strength.
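
To make the contrast concrete: the core of such a chess engine is a short
recursion, and all the extra hardware speed is spent on one parameter, the
search depth.  A minimal negamax-style sketch in C (purely illustrative --
position_t, generate_moves(), make_move(), undo_move() and evaluate() are
hypothetical placeholders, not code from GNU Go or any real engine):

/* Illustrative negamax alpha-beta.  evaluate() is assumed to score
 * the position from the point of view of the side to move. */

typedef struct position position_t;

int  generate_moves(position_t *pos, int moves[]);
void make_move(position_t *pos, int move);
void undo_move(position_t *pos, int move);
int  evaluate(const position_t *pos);

static int
alpha_beta(position_t *pos, int depth, int alpha, int beta)
{
  int moves[512];
  int num_moves, k;

  if (depth == 0)
    return evaluate(pos);             /* leaf: static evaluation */

  num_moves = generate_moves(pos, moves);
  if (num_moves == 0)
    return evaluate(pos);             /* no legal moves to try */

  for (k = 0; k < num_moves; k++) {
    int score;

    make_move(pos, moves[k]);
    score = -alpha_beta(pos, depth - 1, -beta, -alpha);
    undo_move(pos, moves[k]);

    if (score >= beta)
      return beta;                    /* cutoff: opponent avoids this line */
    if (score > alpha)
      alpha = score;                  /* new best score at this node */
  }

  return alpha;
}

GNU Go's move selection has no single depth knob like that to turn, which
is exactly the problem above.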

One way to deal with this is to simply keep working on GNU Go and assume
that by the time faster computers arrive, GNU Go will be slower and
stronger as a result of various improvements.

Another is to observe that we have access to machines that are 50x faster
than the ones on our desktops, and try seeing what we can do with them.

>
> > > - gnu go is very time consuming, meaning that speed is our second
> > >   priority after engine strength. making it threaded would certainly
> > >   slow it down on all machines but that little fraction enjoying
> > >   multiple cpus.
> >
> > Depending on what happens with hyper-threading...
> > Also depends on whether gnugo is compute bound or memory bound.
>
> i believe it's mostly memory-bound, but who knows :) hyper-threading might
> become a good reason for going threaded, but it certainly is not one at the moment.

Hyper-threading is, IMHO, the way future fast CPUs will work.  In ten
years, my guess is that you won't be able to buy a desktop computer with
only one logical processor if speed is even a minor design goal.  That may
come from HyperThreading or other SMT (simultaneous multithreading)
variants, from chip multi-processing (multiple cores on one die), or from
other approaches like the Power4 (two dies in one package).  My guess is
that most new systems will be like this in 3-5 years, maybe sooner.

I would further venture that performance CPUs will be highly parallel.
Sun's Niagara processor, for example, is aimed at a high ratio of
performance to power consumption; it will have 8 cores on one die, each
capable of running 4 threads at a time.  Sun says it will be out in a
couple of years (I think... don't have a source handy) and will be 60
times faster than their current offerings.

If we want GNU Go to perform well on these sorts of platforms, it will
have to be multi-threaded.  We don't have to deal with that now, clearly,
but why not?  A 64-way system makes a good preview of such computers.
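
Just to make "multi-threaded" concrete, the crudest scheme I can think of
is a root-level split: give each candidate move its own thread with a
private copy of the board and keep the best result.  The C/pthreads sketch
below is only that -- copy_position(), free_position() and evaluate_move()
are hypothetical stand-ins, nothing from the current engine -- and it
sidesteps the hard part, namely the shared state (caches and so on) that a
real threaded engine would have to duplicate or lock.

#include <pthread.h>

#define MAX_MOVES 361                  /* upper bound on candidate moves */

struct position;                       /* hypothetical board state */
struct position *copy_position(const struct position *pos);
void free_position(struct position *pos);
int evaluate_move(struct position *pos, int move);

struct work {
  const struct position *pos;
  int move;
  int score;
};

static void *
score_one_move(void *arg)
{
  struct work *w = arg;
  /* Private copy of the position, so threads never touch shared state. */
  struct position *scratch = copy_position(w->pos);
  w->score = evaluate_move(scratch, w->move);
  free_position(scratch);
  return NULL;
}

/* Score num_moves candidate moves in parallel (one thread per move --
 * crude; a real version would use a pool sized to the machine) and
 * return the best one.  Assumes 1 <= num_moves <= MAX_MOVES. */
static int
parallel_genmove(const struct position *pos, const int moves[], int num_moves)
{
  pthread_t threads[MAX_MOVES];
  struct work work[MAX_MOVES];
  int k, best_move, best_score;

  for (k = 0; k < num_moves; k++) {
    work[k].pos = pos;
    work[k].move = moves[k];
    pthread_create(&threads[k], NULL, score_one_move, &work[k]);
  }
  for (k = 0; k < num_moves; k++)
    pthread_join(threads[k], NULL);

  best_move = work[0].move;
  best_score = work[0].score;
  for (k = 1; k < num_moves; k++) {
    if (work[k].score > best_score) {
      best_score = work[k].score;
      best_move = work[k].move;
    }
  }
  return best_move;
}

On a 64-way box even that naive split would at least keep the processors
busy, although it wastes work compared to a search that shares results
between threads.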


Evan Daniel



