From: Deepak Shetty
Subject: Re: [Gluster-devel] Behaviour of glfs_fini() affecting QEMU
Date: Thu, 17 Apr 2014 19:56:19 +0530
Hi,

In QEMU, we initialize gfapi in the following manner:

    glfs = glfs_new();
    if (!glfs)
        goto out;
    if (glfs_set_volfile_server() < 0)
        goto out;
    if (glfs_set_logging() < 0)
        goto out;
    if (glfs_init(glfs))
        goto out;
    ...
out:
    if (glfs)
        glfs_fini(glfs);

Now if either glfs_set_volfile_server() or glfs_set_logging() fails, we end up calling glfs_fini(), which eventually hangs in glfs_lock():
#0 0x00007ffff554a595 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1 0x00007ffff79d312e in glfs_lock (fs=0x555556331310) at glfs-internal.h:176
#2 0x00007ffff79d5291 in glfs_active_subvol (fs=0x555556331310) at glfs-resolve.c:811
#3 0x00007ffff79c9f23 in glfs_fini (fs=0x555556331310) at glfs.c:753
Note that we haven't done glfs_init() in this failure case.

- Is this failure expected? If so, what is the recommended way of releasing the glfs object?
- Does glfs_fini() depend on glfs_init() having succeeded?
- Since the QEMU-GlusterFS driver was developed when libgfapi was very new, could the Gluster developers review the order of the glfs_* calls we make in QEMU and suggest any changes, improvements or additions, given that libgfapi has seen a lot of development since then?
Regards,
Bharata.
_______________________________________________
Gluster-devel mailing list
address@hidden
https://lists.nongnu.org/mailman/listinfo/gluster-devel