
Re: Guile fibers return values


From: Zelphir Kaltstahl
Subject: Re: Guile fibers return values
Date: Mon, 6 Jan 2020 23:45:10 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Icedove/52.9.1

Hello John!

Thanks for your reply!

On 06.01.2020 22:47, John Cowan wrote:
> Conceptually, parallelism and concurrency are two different and partly
> independent things.  Parallelism refers to physically simultaneous
> execution, as when you throw a ball into the air in each hand and
> catch it in the same hand.  Each throw-catch cycle is a parallel
> process (using "process" in the broad sense of the term). 
> Concurrency, on the other hand, is *logically* simultaneous execution,
> as when you juggle three balls in one or two hands.  Now the
> throw-catch cycle of each ball from one hand to the other is a
> concurrent process, and it is also a parallel process if you use two
> hands.  If you are using one hand, however, there is no parallelism in
> juggling.

Yes, that is pretty clear. Parallelism and concurrency are not the same,
I know that. One could also say that one can have concurrent execution
on a single core, even with multiple processes, but cannot have parallel
execution on that single core. With concurrency, even on one core, one
needs to watch out for many things like concurrent updates of mutable
state, since in the general case one does not know when one process will
be running and when the other.
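
(Just to make that point concrete for myself: a minimal sketch of
guarding a shared counter with a mutex from (ice-9 threads); without
the mutex, the read-modify-write steps of the threads could interleave
in unknown ways.)

--------8<--------8<--------
(use-modules (ice-9 threads))

(define counter 0)
(define counter-mutex (make-mutex))

;; Each thread increments the shared counter 100000 times; the mutex
;; makes each read-modify-write step atomic with respect to the others.
(define (increment-many!)
  (do ([i 0 (+ i 1)]) ((= i 100000))
    (with-mutex counter-mutex
      (set! counter (+ counter 1)))))

(let ([threads (map (lambda (i) (call-with-new-thread increment-many!))
                    (iota 4))])
  (for-each join-thread threads)
  counter)  ; => 400000
--------8<--------8<--------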

> To make matters more confusing, "futures" in Racket are for
> parallelism, whereas in Guile they are for concurrency.  Guile
> "parallel" and friends are executed on futures (which are executed on
> OS threads), but use at most as many futures as there are CPUs, so
> physically simultaneous execution is at least encouraged if not
> actually guaranteed.

I think that might be a typo? The Guile manual's section on futures says:

"The (ice-9 futures) module provides futures, a construct for fine-grain
parallelism."

Otherwise that would indeed be very confusing. Or do you say this
because on a single-core machine there would be no parallelism, and thus
one cannot say that Guile's futures enable parallelism in general, but
can say that they enable concurrency in general?
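
In any case, as far as I can tell from the manual, the basic futures API
is just `future' and `touch', with the worker threads roughly capped by
the number of cores (which `current-processor-count' reports). A tiny
sketch of what I mean:

--------8<--------8<--------
(use-modules (ice-9 futures))

(current-processor-count)  ; how many cores the thread pool is sized from

;; `future' takes an expression; `touch' blocks until it has been
;; computed and returns its value.
(define f (future (expt 2 20)))
(touch f)  ; => 1048576
--------8<--------8<--------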

> Racket parallelism only operates until one of the parallel processes
> blocks or needs to synchronize (which includes things like allocating
> memory): they are not implemented on top of Racket threads, which are
> for concurrency (and have nothing to do with OS threads). 

Yes, as far as I understand, Racket threads are so-called "green
threads" (like in Python). To use multiple cores in the general case,
one needs to make use of Racket's "places" instead, which are additional
running Racket VM instances.

> A Scheme promise can be viewed as a type of parallel process that
> doesn't actually provide parallelism (and in fact my parallel pre-SRFI
> is called "parallel promises" and treats ordinary promises as a
> degenerate case) or as a future that doesn't start to execute until
> you wait for it to finish (and my futures pre-SRFI also treats
> promises as a degenerate case).

I think of "promises" as something that enables asynchronous execution.
Don't beat me for this: basically "just like in JavaScript" :D I don't
know if that notion is wrong in the Scheme context, though. So far I
have found the following approaches for doing things in parallel in GNU
Guile:

1. futures
2. parallel forms (built on futures, probably "just" convenience)
3. the fibers library (a rough sketch of passing a result over a
   channel follows right after this list)
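
For the fibers library, as far as I understand its API, a fiber does not
return a value directly; one communicates the result over a channel
instead. A rough sketch of what I mean, assuming the (fibers) and
(fibers channels) modules:

--------8<--------8<--------
(use-modules (fibers) (fibers channels))

(run-fibers
 (lambda ()
   (let ([ch (make-channel)])
     (spawn-fiber
      (lambda ()
        ;; the fiber's "return value" is sent over the channel
        (put-message ch (* 6 7))))
     (get-message ch))))  ; => 42
--------8<--------8<--------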

I have not considered looking into "promises" yet, as I did not think of
them as a parallelism construct or concept. However, you are mentioning
them. Does that mean that I should look into them as well, or is it
rather a general explanation to get the concepts cleanly separated? It
does not seem like they could parallelize any algorithm, at least not if
they share the character of JavaScript promises.
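
For reference, my current understanding is that a standard Scheme
promise is just a delayed, memoized computation: nothing runs until it
is forced, and then it runs in the forcing thread, so no parallelism is
involved. A minimal sketch:

--------8<--------8<--------
(define p
  (delay
    (begin
      (display "computing...\n")
      (* 6 7))))

(force p)  ; prints "computing..." and returns 42
(force p)  ; returns the memoized 42 without recomputing
--------8<--------8<--------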

And hello Chris!

I have just checked whether futures run in parallel for the example
from the Racket docs, and they seem to run in parallel, although the CPU
is not 100% busy, probably because of other factors, like allocations:

--------8<--------8<--------
(use-modules
 (ice-9 format)   ; full `format', needed for the ~f directive below
 (ice-9 futures)
 ;; SRFI 19 for time related procedures
 (srfi srfi-19))


;; Just defining a timing macro here to conveniently measure elapsed time of
;; evaluating expressions.
(define-syntax time
  (syntax-rules ()
    [(time expr expr* ...)
     (let ([start-time (current-time time-monotonic)])
       expr
       expr* ...
       (let* ([end-time (current-time time-monotonic)]
              [diff (time-difference end-time start-time)]
              ;; elapsed time in seconds, with sub-second precision
              [elapsed-seconds (+ (/ (time-nanosecond diff) 1e9)
                                  (time-second diff))])
         ;; `format' with #t already writes to the current output port
         (format #t "~fs~%" elapsed-seconds)))]))


(define (mandelbrot iterations x y n)
  (let ([ci (- (/ (* 2.0 y) n) 1.0)]
        [cr (- (/ (* 2.0 x) n) 1.5)])
    (let loop ([i 0] [zr 0.0] [zi 0.0])
      (if (> i iterations)
          i
          (let ([zrq (* zr zr)]
                [ziq (* zi zi)])
            (cond
              [(> (+ zrq ziq) 4.0) i]
              [else (loop (+ i 1)
                          (+ (- zrq ziq) cr)
                          (+ (* 2.0 zr zi) ci))]))))))


(time
 ;; Note: in Guile `future' is syntax and takes an expression directly,
 ;; unlike Racket's `future', which is a procedure taking a thunk.
 (let ([f (future (mandelbrot 10000000 62 501 1000))])
   (list (mandelbrot 10000000 62 500 1000)
         (touch f))))

(time
 (mandelbrot 10000000 62 501 1000))
--------8<--------8<--------

On my machine both timed expressions take approximately the same time,
from which I conclude that two cores were used and the work was done in
parallel.
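
For comparison, the parallel forms mentioned above could be timed on the
same workload. A rough, untested sketch, reusing the `time' macro and
`mandelbrot' from the snippet above:

--------8<--------8<--------
(use-modules (ice-9 threads))

;; `parallel' evaluates its expressions concurrently (on futures) and
;; returns their results as multiple values.
(time
 (call-with-values
     (lambda ()
       (parallel (mandelbrot 10000000 62 501 1000)
                 (mandelbrot 10000000 62 500 1000)))
   list))
--------8<--------8<--------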

Regards,
Zelphir


