freepooma-devel

Re: [pooma-dev] OpenMP status


From: Jeffrey D. Oldham
Subject: Re: [pooma-dev] OpenMP status
Date: Wed, 25 Aug 2004 06:39:44 -0700
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.6) Gecko/20040413 Debian/1.6-5

Richard Guenther wrote:

On Tue, 24 Aug 2004, Jeffrey D. Oldham wrote:

Richard Guenther wrote:

Together with the last fixes, OpenMP with the Intel Compiler 8.0
on a 4-processor Itanium passes all regression tests in optimized
mode, apart from:

- array_test5: compiler problem; it's fine when compiling with -mp
- ScalarCode: compiler problem; it sometimes works, sometimes generates
unaligned accesses and abort()s (look for kernel messages)


If it sometimes works, are we sure it is a compiler problem?  Is it
instead a race condition?

I'm sure it's not a race condition but a problem in the generated
code: it does unaligned memory accesses, which the Itanium does not
seem to like:

dmesg
ScalarCode(23238): unaligned access to 0x2000000001200ca5,
ip=0x20000000003fe670
ScalarCode(23238): unaligned access to 0x2000000001200cad,
ip=0x20000000003fe671
ScalarCode(23238): unaligned access to 0x2000000001200c9d,
ip=0x20000000003fe690
ScalarCode(23267): unaligned access to 0x2000000000566b06,
ip=0x20000000003fe7a1

This is already the lowest optimization level at which the compiler
does any OpenMP parallelization, so I can't really check with
optimization turned down.
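
A sketch of one way to make these easier to catch, assuming Linux on
ia64 where prctl(2) supports PR_SET_UNALIGN (untested here): have the
kernel deliver SIGBUS on unaligned accesses instead of fixing them up,
so gdb stops at the faulting instruction rather than only logging to
dmesg.

// Minimal sketch (assumes <sys/prctl.h> provides PR_SET_UNALIGN and
// PR_UNALIGN_SIGBUS, as Linux on ia64 does): request SIGBUS on
// unaligned user accesses so the debugger stops right at the fault.
#include <cstdio>
#include <sys/prctl.h>

int main()
{
    // The ia64 default is to emulate the access and log "unaligned
    // access" to the kernel ring buffer; this switches the calling
    // process to a hard SIGBUS instead.
    if (prctl(PR_SET_UNALIGN, PR_UNALIGN_SIGBUS, 0, 0, 0) != 0)
        std::perror("prctl(PR_SET_UNALIGN)");

    // ... run the suspect ScalarCode kernel from here ...
    return 0;
}

With that in place, the backtrace should point at the first unaligned
access itself rather than at a later failure such as the SIGSEGV in
malloc below.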

From gdb I see it's

(gdb) run
Starting program:
/net/alwazn/home/rguenth/src/pooma-bk/r2/src/Field/tests/LINUXICC/ScalarCode
[Thread debugging using libthread_db enabled]
[New Thread 2305843009213887952 (LWP 23722)]
[New Thread 2305843009219836112 (LWP 23723)]
[New Thread 2305843009224030416 (LWP 23724)]
[New Thread 2305843009228224720 (LWP 23725)]
[New Thread 2305843009232419024 (LWP 23726)]
ScalarCode(23722): unaligned access to 0x2000000000566b07,
ip=0x20000000003fe7a1

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 2305843009213887952 (LWP 23722)]
0x20000000003fd630 in _int_malloc () from /lib/tls/libc.so.6.1
(gdb) bt
#0  0x20000000003fd630 in _int_malloc () from /lib/tls/libc.so.6.1
#1  0x20000000003fb760 in malloc () from /lib/tls/libc.so.6.1
#2  0x2000000000268d10 in operator new () from /usr/local/lib/libcxa.so.6
#3  0x4000000000119c90 in
_ZN22UniformRectilinearMeshILi2EdEC9ERKS0_RK8IntervalILi2EE ()
#4  0x40000000001a3640 in
_ZN11FieldEngineI22UniformRectilinearMeshILi2EdEd9BrickViewEC9Id10MultiPatchI7GridTag5BrickEEERKS_IS1_T_T0_ERK5INodeILi2EE
()
#5  0x400000000018aab0 in
View1Implementation<Field<UniformRectilinearMesh<2, double>, double,
MultiPatch<GridTag, Brick> >, INode<2>, false>::make<INode<2>,
CombineDomainOpt<TemporaryNewDomain1<Interval<2>, INode<2> >, false> > ()
#6  0x400000000018adb0 in View1<Field<UniformRectilinearMesh<2, double>,
double, MultiPatch<GridTag, Brick> >, INode<2> >::make ()
#7  0x40000000001cd3a0 in
MultiArgEvaluator<MultiPatchEvaluatorTag>::evaluate<MultiArg2<Field<UniformRectilinearMesh<2,
double>, double, MultiPatch<GridTag, Brick> >,
Field<UniformRectilinearMesh<2, double>, double, Brick> >,
AllFaceToCellAverage<2>, 2, EvaluateLocLoop<AllFaceToCellAverage<2>, 2> >
()
#8  0x40000000001374e0 in
MultiArgEvaluator<MainEvaluatorTag>::evaluate<MultiArg2<Field<UniformRectilinearMesh<2,
double>, double, MultiPatch<GridTag, Brick> >,
Field<UniformRectilinearMesh<2, double>, double, Brick> >,
AllFaceToCellAverage<2>, 2, EvaluateLocLoop<AllFaceToCellAverage<2>, 2> >
()
#9  0x40000000000d9bd0 in ScalarCode<AllFaceToCellAverage<2> >::operator()
<Field<UniformRectilinearMesh<2, double>, double,
MultiPatch<GridTag, Brick> >, Field<UniformRectilinearMesh<2, double>,
double, Brick> > ()
#10 0x40000000000082d0 in main ()

not inside any OpenMP-parallelized region (though maybe directly
preceding one).  It may also be a bad interaction between the installed
libc and the Intel Compiler.  Who knows.
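
For reference, the raw mangled names in frames #3 and #4 above can be
decoded with the C++ ABI demangler.  A minimal sketch, assuming a
GCC-compatible <cxxabi.h> (vendor-specific manglings such as the "C9"
constructor variant here may still be rejected):

// Minimal sketch: demangle a raw frame name from the backtrace above.
// Assumes a GCC-compatible <cxxabi.h>; nonstandard manglings (e.g. the
// "C9" constructor variant) may fail with a nonzero status, in which
// case the raw name is all we have.
#include <cxxabi.h>
#include <cstdio>
#include <cstdlib>

int main()
{
    const char* mangled =                       // frame #3 above
        "_ZN22UniformRectilinearMeshILi2EdEC9ERKS0_RK8IntervalILi2EE";
    int status = 0;
    char* name = abi::__cxa_demangle(mangled, 0, 0, &status);
    if (status == 0 && name)
        std::printf("%s\n", name);
    else
        std::printf("could not demangle (status %d)\n", status);
    std::free(name);
    return 0;
}

The binutils c++filt tool applies the same decoding from the command line.
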
OK.  Thanks for the analysis.

--
Jeffrey D. Oldham
address@hidden
