Re: guile-user Digest, Vol 193, Issue 29
Fri, 28 Dec 2018 15:17:42 +0100
On 26.12.18 20:51, Tk wrote:
> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
> On Wednesday, 26 December 2018 12:43, Zelphir Kaltstahl <address@hidden> wrote:
>> On 25.12.18 18:00, address@hidden wrote:
>>>> Hello Guile Users,
>>>> Is there a library that enables high-performance matrix operations,
>>>> or even n-dimensional array operations? I am thinking of something like
>>>> NumPy in the Python ecosystem. I believe NumPy in turn uses some
>>>> lower-level library to do what it does -- OpenBLAS or MKL, depending
>>>> on the architecture. I wonder if there is any wrapper around OpenBLAS
>>>> for Guile, or something similar.
>>>> I am writing a little code for matrix operations, and currently I am
>>>> using Guile arrays, as they are backed by vectors and have constant access
>>>> time, which is already great. My guess is that this would be the right
>>>> choice when using pure Guile. I am writing data-abstraction procedures, so
>>>> that later on I can swap out what is used to represent the data.
>>>> Maybe, if there is something like NumPy or a lower-level library, I should
>>>> use that instead? (Would I have to learn how to use the FFI first?)
>>>> Or maybe Guile's implementation is already so fast that it would not
>>>> make much difference to use a lower-level library?
>>>> Currently I only have a little experimental program, started today, so
>>>> there is no big plan. One can fantasize about things like Pandas data frames
>>>> in Guile, but I am under no illusion that it is the work of a few days or
>>>> even weeks. It would be nice to learn how to use a low-level library or
>>>> maybe even Pandas, if there are any such bindings for Guile. I could
>>>> make the implementation use different representations, depending on a
>>>> parameter or something like that.
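(For readers following along: the Guile arrays mentioned above can be used roughly like this. A minimal sketch in pure Guile; `matmul` is just an illustrative name, not an existing library procedure.)

```scheme
;; Guile's built-in multi-dimensional arrays: creation,
;; constant-time access, and update.
(define a (make-array 0.0 3 3))          ; 3x3 array filled with 0.0
(array-set! a 42.0 1 2)                  ; set element at row 1, column 2
(display (array-ref a 1 2)) (newline)    ; constant-time read

;; A naive O(n^3) matrix multiply over such arrays, for illustration:
(define (matmul a b)
  (let* ((n (car  (array-dimensions a)))   ; rows of a
         (k (cadr (array-dimensions a)))   ; cols of a = rows of b
         (m (cadr (array-dimensions b)))   ; cols of b
         (c (make-array 0.0 n m)))
    (do ((i 0 (1+ i))) ((= i n) c)
      (do ((j 0 (1+ j))) ((= j m))
        (do ((l 0 (1+ l))) ((= l k))
          (array-set! c
                      (+ (array-ref c i j)
                         (* (array-ref a i l) (array-ref b l j)))
                      i j))))))
```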
>>>> I took a different route. Instead of merely binding functionality in a
>>>> lower-level language, I use Guile to generate lean-and-mean modern Fortran
>>>> code. The building blocks can be found here:
>>>> https://gitlab.com/codetk/schemetran (I wanted to save this for the potluck,
>>>> but here it goes ... ). Fortran compilers take care of efficient execution on
>>>> HPC platforms. Actually, it is beyond me why anyone would bother with any
>>>> other programming language when it comes to expressing maths efficiently.
>>> I built a pseudo-spectral Navier-Stokes solver that can work with MPI,
>>> shared memory (OpenMP), and hopefully soon GPU/Xeon accelerators (OpenMP 4)
>>> atop schemetran. I still need to see about publishing it under a Libre
>> Hi Tk,
>> That looks interesting. I've only used OpenMP once, in a university
>> lecture, in a pretty simple way (just adding some annotations to a few
>> loops to parallelize them) on a normal desktop machine. I have a few
>> questions about your library:
>> What are the requirements in terms of installed software? (For example:
>> do I need a Fortran compiler installed, or does GCC handle it all?)
>> What do you mean by "on HPC platforms"? Does that imply that it would
>> not produce efficient execution on desktop machines?
>> Thanks for sharing your project, I will have to take a look at how to
>> use it.
> Hi Zelphir,
> The requirements are Guile >= 2.0, a Fortran compiler that understands the
> bits of the 2003/2008 standards used in the "library" (which should rather be
> thought of as a set of Lego pieces), and GNU Make. The oldest gfortran compiler
> that I know for certain works with schemetran is gfortran 4.8.2. As for Intel,
> I've only tested it with v17 and greater, but I'm fairly confident anything
> > v13 should work, too. In addition, I ran tests with flang 7.0.0 (from the
> clang/AOCC suite) and that surprisingly worked, too (surprisingly because they
> claim to fully support only Fortran 2003).
> For the moment, schemetran itself only helps you write pure Fortran code.
> So, no OpenMP directives are currently included. This was a design decision,
> because compiler directives, even when standardised, are dirty. I have some
> plans in that direction, but this will be something separate from schemetran
> (once I pull it from the guts of my incompressible MHD code).
> That being said, schemetran can be used to generate coarray Fortran (part of
> the 2008 and later standards), which current Fortran compilers usually
> implement on top of MPI parallelism (at least, that used to be the case when
> I last looked into this).
> Some Fortran compilers (say, Intel Fortran) can auto-parallelise your code
> even if coarray extensions are not used. This is almost always done using
> Even if a compiler doesn't generate parallel code, it should be able to
> efficiently optimise array operations so that all the vector-based capabilities
> of present-day processors are properly exploited (SSE, AVX). Some care needs
> to be taken to write code that compilers can recognise as SIMD-vectorisable,
> and schemetran helps you with that (chiefly by not overcomplicating things
> much).
> I realise I deviated from your questions a bit, so let me get back on track.
> Yes, all of the things mentioned can run on a normal CPU. Vector (SIMD)
> instructions can be used anywhere, OpenMP on any multicore CPU (so,
> realistically anywhere these days), and even MPI, which was designed for
> distributed computing, can be used on a single CPU (since mainstream MPI
> implementations try hard to turn your message passing into shared-memory
> accesses behind your back).
> Hope this helps,
> PS Nothing prevents you from adding OpenMP statements into the mix by hand,
> by modifying some of the schemetran directives.
> PPS I hope nobody on the Guile mailing list is going to shoot me for
> advertising Fortran as a scientific language of choice ;-P .
> PPPS For those who want ndarray functionality in Guile ... do you
> actually intend to write complicated algebraic expressions using LISP-like
> notation? That's brave. I hope no nuclear facility melts down as a
> consequence of someone messing up the transcription of normal math into
> brackets :) .
Thanks for all that info. It certainly seems like an interesting project.
I am not sure it would be less work to use it, compared to interfacing
with CBLAS for example, and I don't think I could reproduce most
of the performance optimizations that are already in BLAS or ATLAS. I
guess I would use your project when I have something that is not plain
matrix multiplication and still want Fortran-level parallelization.
As I can see from all the responses, there are quite a few options, and
now I am not sure which I should use. I will have to try a few things.
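As a rough idea of what interfacing with CBLAS might look like, here is a hypothetical sketch using Guile's dynamic FFI (the (system foreign) module). The library name "libcblas" and the wrapper name `ddot` are my assumptions; the actual shared-library name varies between systems, and a CBLAS installation is required for this to run at all.

```scheme
;; Hypothetical sketch: binding the CBLAS dot product from Guile.
;; Assumes a shared CBLAS library is installed as "libcblas"
;; (it may be "libopenblas" or similar on your system).
(use-modules (system foreign)
             (rnrs bytevectors))

(define libcblas (dynamic-link "libcblas"))

;; C prototype:
;; double cblas_ddot(const int n, const double *x, const int incx,
;;                   const double *y, const int incy);
(define cblas-ddot
  (pointer->procedure double
                      (dynamic-func "cblas_ddot" libcblas)
                      (list int '* int '* int)))

;; Convenience wrapper over f64 bytevectors of equal length
;; (8 bytes per double, stride 1).
(define (ddot x y)
  (cblas-ddot (/ (bytevector-length x) 8)
              (bytevector->pointer x) 1
              (bytevector->pointer y) 1))
```

Whether this ends up being less work than generating Fortran is exactly the trade-off discussed above; the FFI route gives you BLAS's tuned kernels, but you take on the type-marshalling and library-discovery details yourself.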