Re: [Help-gsl] Usage of GSL random in parallel code [OpenMP]


From: M A
Subject: Re: [Help-gsl] Usage of GSL random in parallel code [OpenMP]
Date: Sun, 4 May 2014 19:53:14 +0100 (BST)

OK, I think I got the idea.
I don't use C++11; however, I think the attached ANSI C code works in the
same way.
I allocated memory for omp_get_max_threads() generators, i.e. one per thread.
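In outline, the per-thread layout works like the sketch below (a sketch only, with mt19937 generators and simple consecutive seeds assumed; the attached gsl-rng-parallel.c may differ in detail):

#include <stdio.h>
#include <stdlib.h>
#include <gsl/gsl_rng.h>
#include <omp.h>

int main (void)
{
    /* The ensemble can be large; the generators scale only with the
       number of threads, not with the ensemble size. */
    long DIM_ENSEMBLE = 1000000;
    int  nthreads = omp_get_max_threads();

    double   *d;
    gsl_rng **r;
    long i;
    int  t;

    d = malloc(DIM_ENSEMBLE * sizeof(double));
    r = malloc(nthreads * sizeof(gsl_rng *));

    /* One generator per thread, each seeded differently. */
    for (t = 0; t < nthreads; t++) {
        r[t] = gsl_rng_alloc(gsl_rng_mt19937);
        gsl_rng_set(r[t], 1234 + t);
    }

    /* Each thread draws only from the generator at its own thread id. */
#pragma omp parallel for default(none) shared(r, d, DIM_ENSEMBLE)
    for (i = 0; i < DIM_ENSEMBLE; i++) {
        d[i] = gsl_rng_uniform(r[omp_get_thread_num()]);
    }

    printf("d[0] = %f  d[%ld] = %f\n", d[0], DIM_ENSEMBLE - 1, d[DIM_ENSEMBLE - 1]);

    for (t = 0; t < nthreads; t++)
        gsl_rng_free(r[t]);
    free(r);
    free(d);
    return 0;
}

Seeding with consecutive integers is only a quick way to get distinct streams; for production runs a more careful seeding scheme is advisable.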

Thank you very much,

Al.
On Sunday, 4 May 2014, 14:14, Klaus Huthmacher <address@hidden> wrote:
 
Dear Altro,

I would simply use a wrapper around a C++ std::vector that holds as many
C++11 random number generators (engines) as you use threads in your code.
Whenever a random number is requested, the engine at the i-th index of the
vector is used, where i is the thread id.

Please see the appended minimal example mb.cpp with the wrapper
rng-omp.{h,cpp}.
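For readers who stay in C, the same idea, one generator per thread selected by thread id, can be hidden behind a tiny wrapper around GSL; the names below (rng_omp_init, rng_omp_get, rng_omp_free) are only illustrative and are not the API of the attached rng-omp.{h,cpp}:

#include <stdlib.h>
#include <gsl/gsl_rng.h>
#include <omp.h>

static gsl_rng **rngs;      /* one GSL generator per OpenMP thread */
static int       rng_count;

/* Allocate and seed one generator per thread. */
void rng_omp_init (unsigned long base_seed)
{
    int t;
    rng_count = omp_get_max_threads();
    rngs = malloc(rng_count * sizeof(gsl_rng *));
    for (t = 0; t < rng_count; t++) {
        rngs[t] = gsl_rng_alloc(gsl_rng_taus);
        gsl_rng_set(rngs[t], base_seed + t);  /* distinct seed per stream */
    }
}

/* Return the generator belonging to the calling thread. */
gsl_rng *rng_omp_get (void)
{
    return rngs[omp_get_thread_num()];
}

/* Release all generators. */
void rng_omp_free (void)
{
    int t;
    for (t = 0; t < rng_count; t++)
        gsl_rng_free(rngs[t]);
    free(rngs);
}

Inside a parallel region each thread then calls, for example, gsl_rng_uniform(rng_omp_get()) and never touches another thread's state.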

Best wishes and keep us informed,
-- Klaus.




> Dear All
>
> I wish to use the GSL random number generator effectively together with
> OpenMP on an HPC cluster.
>
> The simulations I have to do are quite simple: just let an "ensemble" of
> systems with the same dynamics evolve. Parallelizing such a code with
> OpenMP seems straightforward.
> As GSL does not support parallel processing, I suppose that we must use a
> different random number generator for each element of the ensemble;
> however, I don't know whether this is a good procedure.
>
> The ensemble could be very big, and it would simply be impossible to
> allocate memory for all the generators associated with the systems of
> the ensemble.
> So my question is:
> Is this still a correct way to use the GSL random number generators with
> OpenMP?
>
> An idea of how the allocation could be done is given in the code below.
> Does it make any sense? If not, what is the correct way?
>
> Best wishes,
>
> Al.
>
>
>
> #include <stdlib.h>
> #include <stdio.h>
> #include <gsl/gsl_rng.h>
> #include <gsl/gsl_randist.h>
> #include <omp.h>
>
> int main (){
>      long i;
>
>      long DIM_ENSEMBLE = 10000000000L;  /* too large for an int */
>
>      double *d;
>      gsl_rng **r ;
>
>      d = malloc(DIM_ENSEMBLE*sizeof(double));
>      r = malloc(DIM_ENSEMBLE*sizeof(gsl_rng*));
>      for (i=0;i<DIM_ENSEMBLE;i++){
>          r[i] = gsl_rng_alloc (gsl_rng_mt19937);
> //        r[i] = gsl_rng_alloc (gsl_rng_taus);
>          gsl_rng_set(r[i],i);
>      }
>
>
>      /* state size in bytes: the tausworthe state is 24, the
>         mersenne twister state is 5000 */
>      size_t n = gsl_rng_size (r[0]);
>
> #pragma omp  parallel for default(none) shared(r,d) shared(DIM_ENSEMBLE)
>      for  (i =0; i< DIM_ENSEMBLE; i++) {
>          d[i] = gsl_rng_uniform(r[i]);
>      }
>
> #pragma omp  parallel for default(none) shared(d) shared(DIM_ENSEMBLE)
>      for (i=0;i<DIM_ENSEMBLE;i++){
>          printf("%d %f\n",i,d[i]);
>      }
>
>      printf("The size in Mb of the vector r is %lu
> Mb\n",n*DIM_ENSEMBLE/1000000);
>
>      free(d);
>
>     for (i=0;i<DIM_ENSEMBLE;i++){
>          gsl_rng_free (r[i]);
>     }
>     free(r);
>
> return 0;
> }
>
>

Attachment: gsl-rng-parallel.c
Description: Text Data

