

From: Bernhard J. M. Grün
Subject: Re: [Gluster-devel] Segmentation fault with afr on dapper using fuse 2.6.3 kernel module
Date: Mon, 30 Apr 2007 09:43:23 +0200

Here I am again.
I just recompiled glusterfs with CC="gcc -g".
Here is the backtrace from the same error:
#0  0x00002aaaab182954 in memcpy () from /lib/libc.so.6
#1  0x00002aaaab45a943 in afr_opendir_cbk (frame=0x54eae0,
   prev_frame=0x54ee70, xl=0x54ce40, op_ret=0, op_errno=9, file_ctx=0x54f4e0,
   stbuf=0x0) at afr.c:1202
#2  0x00002aaaab351897 in client_opendir_cbk (frame=0x54ee70, args=0x510520)
   at client-protocol.c:2031
#3  0x00002aaaab3534d0 in client_protocol_interpret (trans=0x54e600,
   blk=0x54f660) at client-protocol.c:2705
#4  0x00002aaaab353143 in client_protocol_notify (this=0x54be60,
   trans=0x54e600, event=1) at client-protocol.c:2558
#5  0x00002aaaaabd0c66 in transport_notify (this=0x54e600, event=1)
   at transport.c:148
#6  0x00002aaaaabd13b5 in epoll_notify (eevent=1, data=0x54e600) at epoll.c:53
#7  0x00002aaaaabd16a0 in sys_epoll_iteration (ctx=0x7fffff879320)
   at epoll.c:145
#8  0x00002aaaaabd0e24 in poll_iteration (ctx=0x7fffff879320)
   at transport.c:251
#9  0x0000000000403792 in main (argc=4, argv=0x7fffff879458) at glusterfs.c:415
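
Looking at frame #1, stbuf is 0x0 when afr_opendir_cbk is called (op_ret=0, op_errno=9), so my guess is that the memcpy in frame #0 is reading through that NULL pointer. Below is a minimal, self-contained sketch of the pattern I suspect; it is not the real afr.c code, and copy_stat is just a made-up helper for the demonstration, only meant to illustrate the kind of NULL check that would avoid the fault:

/* Sketch only: not the actual afr_opendir_cbk, just the suspected
 * failure mode.  stbuf arrives as NULL (stbuf=0x0 in frame #1) and an
 * unconditional memcpy from it faults inside libc, as in frame #0. */
#include <string.h>
#include <stdio.h>
#include <sys/stat.h>

static void
copy_stat (struct stat *dst, const struct stat *stbuf)
{
  /* unguarded version (crashes when stbuf == NULL):
   *   memcpy (dst, stbuf, sizeof (*dst));
   */
  if (stbuf != NULL)
    memcpy (dst, stbuf, sizeof (*dst));  /* copy only if stat data was sent */
}

int
main (void)
{
  struct stat buf;

  copy_stat (&buf, NULL);  /* mimics the callback being invoked with stbuf=0x0 */
  printf ("no crash when the NULL check is in place\n");
  return 0;
}

Of course I do not know whether simply skipping the copy is the right thing for afr to do here; I only want to point out where the NULL seems to come in.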

I hope this helps a lot more.

Bernhard

2007/4/30, Anand Avati <address@hidden>:
>
> > You can download the logs from http://h0t.de/dapper-logs.tar.gz
>
> I saw the logs; they only indicate that there was a coredump.
>
> > I already tried it with the latest tla checkout. I think it was
> > patch-131 that I tried. And that version worked on feisty (that uses
> > the same fuse version).
>
> Do you mean that patch-131 did NOT work on dapper? You should have a
> coredump from the segfault; is it possible to get a backtrace from that?
>
> avati




