From: Mark Mielke
Subject: [Gluster-devel] crash of one client while another client runs bonnie++ and the first client monitors bonnie++
Date: Mon, 07 Sep 2009 02:39:28 -0400
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.1) Gecko/20090814 Fedora/3.0-2.6.b3.fc11 Thunderbird/3.0b3

Had a client crash while bonnie++ was running on one client and the other client was listing a directory that bonnie++ had removed before or during the list request:


frame : type(1) op(GETXATTR)

patchset: git://git.sv.gnu.org/gluster.git
signal received: 11
time of crash: 2009-09-07 02:31:30
configuration details:
argp 1
backtrace 1
bdb->cursor->get 1
db.h 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 2.1.0git
/lib64/libc.so.6[0x3133a33370]
/opt/glusterfs/lib/libglusterfs.so.0(dict_foreach+0x18)[0x7f66c043c858]
/opt/glusterfs/lib/glusterfs/2.1.0git/xlator/cluster/replicate.so(__filter_xattrs+0x2a)[0x7f66bf771d3a]
/opt/glusterfs/lib/glusterfs/2.1.0git/xlator/cluster/replicate.so(afr_getxattr_cbk+0x54)[0x7f66bf771e04]
/opt/glusterfs/lib/glusterfs/2.1.0git/xlator/protocol/client.so(client_getxattr_cbk+0x145)[0x7f66bf99f9a5]
/opt/glusterfs/lib/glusterfs/2.1.0git/xlator/protocol/client.so(protocol_client_pollin+0xca)[0x7f66bf996dba]
/opt/glusterfs/lib/glusterfs/2.1.0git/xlator/protocol/client.so(notify+0xe8)[0x7f66bf9a0ae8]
/opt/glusterfs/lib/libglusterfs.so.0(xlator_notify+0x43)[0x7f66c04441c3]
/opt/glusterfs/lib/glusterfs/2.1.0git/transport/socket.so(socket_event_handler+0xc8)[0x7f66be318be8]
/opt/glusterfs/lib/libglusterfs.so.0[0x7f66c045dd5d]
/opt/glusterfs/sbin/glusterfs(main+0x75d)[0x403bbd]
/lib64/libc.so.6(__libc_start_main+0xfd)[0x3133a1ea2d]
/opt/glusterfs/sbin/glusterfs[0x402579]

This is a 3-node setup using cluster/replicate. Hope the info helps...
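
My guess (unverified) is that the getxattr reply for the removed directory came back with op_ret < 0 and a NULL dict, and __filter_xattrs then handed that NULL straight to dict_foreach, which faulted. Below is a minimal, self-contained sketch of the kind of guard I mean; the types and names are simplified stand-ins I made up for illustration, not the actual GlusterFS internals:

    /* Hypothetical, simplified stand-ins for dict_t / dict_foreach to show
     * the suspected failure mode: a getxattr callback walking a dictionary
     * that is NULL because the file was removed and the op failed. */
    #include <stdio.h>
    #include <stddef.h>

    typedef struct { const char *key; const char *value; } entry_t;
    typedef struct { entry_t *entries; size_t count; } dict_t;

    typedef int (*dict_fn)(dict_t *d, const char *key, const char *value,
                           void *data);

    /* Simplified analogue of dict_foreach: dereferences 'd' unconditionally,
     * so a NULL 'd' is an immediate SIGSEGV. */
    static int dict_foreach(dict_t *d, dict_fn fn, void *data)
    {
        for (size_t i = 0; i < d->count; i++)
            if (fn(d, d->entries[i].key, d->entries[i].value, data))
                return -1;
        return 0;
    }

    static int print_xattr(dict_t *d, const char *key, const char *value,
                           void *data)
    {
        (void)d; (void)data;
        printf("%s = %s\n", key, value);
        return 0;
    }

    /* Sketch of the callback side: if the op failed (e.g. the directory
     * vanished between readdir and getxattr), the dict may be NULL, so
     * bail out before walking it. */
    static void getxattr_cbk(int op_ret, dict_t *xattrs)
    {
        if (op_ret < 0 || xattrs == NULL) {
            fprintf(stderr, "getxattr failed, skipping xattr filtering\n");
            return;
        }
        dict_foreach(xattrs, print_xattr, NULL);
    }

    int main(void)
    {
        entry_t e = { "trusted.afr.example", "0" };
        dict_t ok = { &e, 1 };

        getxattr_cbk(0, &ok);   /* normal reply */
        getxattr_cbk(-1, NULL); /* reply for a removed directory: guarded */
        return 0;
    }

If the real __filter_xattrs / afr_getxattr_cbk path already checks op_ret, then the NULL must be coming in some other way and this sketch is off the mark, but the backtrace at least points at dict_foreach being given something it can't walk.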

Cheers,
mark

--
Mark Mielke <address@hidden>
