From: Joe Julian
Subject: Re: [Gluster-devel] [libvirt] [RFC PATCH v1 0/2] Qemu/Gluster support in Libvirt
Date: Wed, 29 Aug 2012 22:32:40 -0700
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:12.0) Gecko/20120430 Thunderbird/12.0.1

Is it the nfs setting, or server.rpc-allow-insecure? I'm leaning more
toward the latter.
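(For reference, the two knobs under discussion are usually set as follows. This is only a sketch: the exact option names are my assumption based on this thread and the Qemu/Gluster RFC, so verify them against your Gluster release.)

```shell
# Per-volume: allow clients connecting from non-privileged (>= 1024) ports.
gluster volume set dht server.allow-insecure on

# Daemon-wide: add to /etc/glusterfs/glusterd.vol, then restart glusterd:
#   option rpc-auth-allow-insecure on
```

Both matter here because qemu-kvm runs as an unprivileged process and therefore cannot bind a reserved port for its management connection to glusterd.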

On 08/29/2012 10:28 PM, Harsh Bora wrote:
> On 08/30/2012 08:27 AM, Yin Yin wrote:
>> Hi, Harsh:
>>    I set some breakpoints in glusterd and can gdb the qemu-kvm process
>> forked from libvirtd.
>>
>> break in glusterd:
>>
>> (gdb) i b
>> Num     Type           Disp Enb Address            What
>> 1       breakpoint     keep y   0x00007f903ef1a0a0 in server_getspec at 
>> glusterd-handshake.c:122
>> 2       breakpoint     keep y   0x00000034f4607070 in 
>> rpcsvc_program_actor at rpcsvc.c:137
>> breakpoint already hit 2 times
>> 3       breakpoint     keep y   0x00007f903ef199f0 in 
>> glusterd_set_clnt_mgmt_program at glusterd-handshake.c:359
>> 4       breakpoint     keep y   0x00007f903ef1a0a0 in server_getspec at 
>> glusterd-handshake.c:122
>>
>> In the rpcsvc_handle_rpc_call function, it calls rpcsvc_program_actor,
>> which returns correctly.
>> (gdb) p *actor
>> $13 = {procname = "GETSPEC", '\000' <repeats 24 times>, procnum = 2, 
>> actor = 0x7f903ef1a0a0 <server_getspec>, vector_sizer = 0, unprivileged 
>> = _gf_false}
>>
>> but in this check:
>>
>> if (0 == svc->allow_insecure && unprivileged && !actor->unprivileged) {
>>         /* Non-privileged user, fail request */
>>         gf_log ("glusterd", GF_LOG_ERROR,
>>                 "Request received from non-"
>>                 "privileged port. Failing request");
>>         rpcsvc_request_destroy (req);
>>         return -1;
>> }
>>
>> so server_getspec is never called on the server, which causes the
>> qemu-kvm process to fail.
>>
>> My question:
>> 1. In (0 == svc->allow_insecure && unprivileged && !actor->unprivileged),
>> which condition is wrong here?
>>
> You should be able to check this: step through in gdb and print the value
> of each variable (to see which one is false). However, I think it's more
> about configuring glusterd correctly and less about the libvirt/qemu part
> of it. I am willing to be corrected on this.
>
> Let us know if Deepak's suggestion to set nfs.ports-insecure option
> (might affect svc->allow_insecure in above statement) on gluster volume
> works for you.
>
> Thanks for testing my patch though !
>
> regards,
> Harsh
>
>> Best Regards,
>> Yin Yin
>>
>> On Thu, Aug 30, 2012 at 9:14 AM, Yin Yin <address@hidden 
>> <mailto:address@hidden>> wrote:
>>
>>     Hi, Harsh:
>>            I've tried your patch, but can't boot the VM.
>>     address@hidden qemu-glusterfs]# virsh create gluster-libvirt.xml
>>     error: Failed to create domain from gluster-libvirt.xml
>>     error: Unable to read from monitor: Connection reset by peer
>>
>>     libvirt builds the qemu/gluster command correctly, and qemu-kvm
>>     tries to run, but fails after a while, which causes the libvirt
>>     monitor connection to fail.
>>
>>     The /var/log/libvirt/qemu/gluster-vm.log output follows:
>>     2012-08-30 01:03:08.418+0000: starting up
>>     LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=spice
>>     /usr/libexec/qemu-kvm -S -M rhel6.2.0 -enable-kvm -m 512 -smp
>>     1,sockets=1,cores=1,threads=1 -name gluster-vm -uuid
>>     f65bd812-45fb-cc2d-75fd-84206248e026  -nodefconfig -nodefaults
>>     -chardev
>>     
>> socket,id=charmonitor,path=/var/lib/libvirt/qemu/gluster-vm.monitor,server,nowait
>>     -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc
>>     -no-shutdown -device
>>     virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x3 -device
>>     piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
>>     
>> file=gluster://10.1.81.111:24007/dht/windows7-32-DoubCards-iotest-qcow2.img,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,aio=native
>>     -device
>>     
>> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
>>     -device usb-tablet,id=input0 -spice
>>     port=30038,addr=0.0.0.0,disable-ticketing -vga cirrus -device
>>     virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
>>     2012-08-30 01:03:08.423+0000: 4452: debug : virCommandHook:2041 :
>>     Run hook 0x48f160 0x7f433ba0e570
>>     2012-08-30 01:03:08.423+0000: 4452: debug : qemuProcessHook:2475 :
>>     Obtaining domain lock
>>     2012-08-30 01:03:08.423+0000: 4452: debug :
>>     virDomainLockManagerNew:123 : plugin=0x7f43300b7980
>>     dom=0x7f43240022b0 withResources=1
>>     2012-08-30 01:03:08.423+0000: 4452: debug : virLockManagerNew:291 :
>>     plugin=0x7f43300b7980 type=0 nparams=4 params=0x7f433ba0d9d0 flags=0
>>     2012-08-30 01:03:08.423+0000: 4452: debug :
>>     virLockManagerLogParams:98 :   key=uuid type=uuid
>>     value=f65bd812-45fb-cc2d-75fd-84206248e026
>>     2012-08-30 01:03:08.423+0000: 4452: debug :
>>     virLockManagerLogParams:94 :   key=name type=string value=gluster-vm
>>     2012-08-30 01:03:08.423+0000: 4452: debug :
>>     virLockManagerLogParams:82 :   key=id type=uint value=1
>>     2012-08-30 01:03:08.423+0000: 4452: debug :
>>     virLockManagerLogParams:82 :   key=pid type=uint value=4452
>>     2012-08-30 01:03:08.423+0000: 4452: debug :
>>     virDomainLockManagerNew:135 : Adding leases
>>     2012-08-30 01:03:08.423+0000: 4452: debug :
>>     virDomainLockManagerNew:140 : Adding disks
>>     2012-08-30 01:03:08.423+0000: 4452: debug :
>>     virLockManagerAcquire:337 : lock=0x7f4324001ba0 state='(null)'
>>     flags=3 fd=0x7f433ba0db3c
>>     2012-08-30 01:03:08.423+0000: 4452: debug : virLockManagerFree:374 :
>>     lock=0x7f4324001ba0
>>     2012-08-30 01:03:08.423+0000: 4452: debug : qemuProcessHook:2500 :
>>     Moving process to cgroup
>>     2012-08-30 01:03:08.423+0000: 4452: debug : virCgroupNew:603 : New
>>     group /libvirt/qemu/gluster-vm
>>     2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupDetect:262 :
>>     Detected mount/mapping 0:cpu at /cgroup/cpu in
>>     2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupDetect:262 :
>>     Detected mount/mapping 1:cpuacct at /cgroup/cpuacct in
>>     2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupDetect:262 :
>>     Detected mount/mapping 2:cpuset at /cgroup/cpuset in
>>     2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupDetect:262 :
>>     Detected mount/mapping 3:memory at /cgroup/memory in
>>     2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupDetect:262 :
>>     Detected mount/mapping 4:devices at /cgroup/devices in
>>     2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupDetect:262 :
>>     Detected mount/mapping 5:freezer at /cgroup/freezer in
>>     2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupDetect:262 :
>>     Detected mount/mapping 6:blkio at /cgroup/blkio in
>>     2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupMakeGroup:524 :
>>     Make group /libvirt/qemu/gluster-vm
>>     2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupMakeGroup:546 :
>>     Make controller /cgroup/cpu/libvirt/qemu/gluster-vm/
>>     2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupMakeGroup:546 :
>>     Make controller /cgroup/cpuacct/libvirt/qemu/gluster-vm/
>>     2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupMakeGroup:546 :
>>     Make controller /cgroup/cpuset/libvirt/qemu/gluster-vm/
>>     2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupMakeGroup:546 :
>>     Make controller /cgroup/memory/libvirt/qemu/gluster-vm/
>>     2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupMakeGroup:546 :
>>     Make controller /cgroup/devices/libvirt/qemu/gluster-vm/
>>     2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupMakeGroup:546 :
>>     Make controller /cgroup/freezer/libvirt/qemu/gluster-vm/
>>     2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupMakeGroup:546 :
>>     Make controller /cgroup/blkio/libvirt/qemu/gluster-vm/
>>     2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupSetValueStr:320
>>     : Set value '/cgroup/cpu/libvirt/qemu/gluster-vm/tasks' to '4452'
>>     2012-08-30 01:03:08.426+0000: 4452: debug : virCgroupSetValueStr:320
>>     : Set value '/cgroup/cpuacct/libvirt/qemu/gluster-vm/tasks' to '4452'
>>     2012-08-30 01:03:08.429+0000: 4452: debug : virCgroupSetValueStr:320
>>     : Set value '/cgroup/cpuset/libvirt/qemu/gluster-vm/tasks' to '4452'
>>     2012-08-30 01:03:08.432+0000: 4452: debug : virCgroupSetValueStr:320
>>     : Set value '/cgroup/memory/libvirt/qemu/gluster-vm/tasks' to '4452'
>>     2012-08-30 01:03:08.435+0000: 4452: debug : virCgroupSetValueStr:320
>>     : Set value '/cgroup/devices/libvirt/qemu/gluster-vm/tasks' to '4452'
>>     2012-08-30 01:03:08.437+0000: 4452: debug : virCgroupSetValueStr:320
>>     : Set value '/cgroup/freezer/libvirt/qemu/gluster-vm/tasks' to '4452'
>>     2012-08-30 01:03:08.439+0000: 4452: debug : virCgroupSetValueStr:320
>>     : Set value '/cgroup/blkio/libvirt/qemu/gluster-vm/tasks' to '4452'
>>     2012-08-30 01:03:08.442+0000: 4452: debug :
>>     qemuProcessInitCpuAffinity:1731 : Setting CPU affinity
>>     2012-08-30 01:03:08.443+0000: 4452: debug :
>>     qemuProcessInitCpuAffinity:1760 : Set CPU affinity with specified cpuset
>>     2012-08-30 01:03:08.443+0000: 4452: debug : qemuProcessHook:2512 :
>>     Setting up security labelling
>>     2012-08-30 01:03:08.443+0000: 4452: debug :
>>     virSecurityDACSetProcessLabel:637 : Dropping privileges of DEF to
>>     107:107
>>     2012-08-30 01:03:08.443+0000: 4452: debug : qemuProcessHook:2519 :
>>     Hook complete ret=0
>>     2012-08-30 01:03:08.443+0000: 4452: debug : virCommandHook:2043 :
>>     Done hook 0
>>     2012-08-30 01:03:08.443+0000: 4452: debug : virCommandHook:2056 :
>>     Notifying parent for handshake start on 24
>>     2012-08-30 01:03:08.443+0000: 4452: debug : virCommandHook:2077 :
>>     Waiting on parent for handshake complete on 25
>>     2012-08-30 01:03:08.495+0000: 4452: debug : virCommandHook:2093 :
>>     Hook is done 0
>>     Gluster connection failed for server=10.1.81.111 port=24007
>>     volume=dht image=windows7-32-DoubCards-iotest-qcow2.img transport=socket
>>     qemu-kvm: -drive
>> file=gluster://10.1.81.111:24007/dht/windows7-32-DoubCards-iotest-qcow2.img,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,aio=native:
>>     could not open disk image
>>     gluster://10.1.81.111:24007/dht/windows7-32-DoubCards-iotest-qcow2.img:
>>     No data available
>>     2012-08-30 01:03:11.565+0000: shutting down
>>
>>     I can boot the vm with the command:
>>     LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=spice
>>     /usr/libexec/qemu-kvm -S -M rhel6.2.0 -enable-kvm -m 512 -smp
>>     1,sockets=1,cores=1,threads=1 -name gluster-vm -uuid
>>     f65bd812-45fb-cc2d-75fd-84206248e026  -nodefconfig -nodefaults
>>     -chardev
>>     
>> socket,id=charmonitor,path=/var/lib/libvirt/qemu/gluster-vm.monitor,server,nowait
>>     -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc
>>     -no-shutdown -device
>>     virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x3 -device
>>     piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
>>     
>> file=gluster://10.1.81.111:24007/dht/windows7-32-DoubCards-iotest-qcow2.img,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,aio=native
>>     -device
>>     
>> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
>>     -device usb-tablet,id=input0 -spice
>>     port=30038,addr=0.0.0.0,disable-ticketing -vga cirrus -device
>>     virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
>>
>>     My questions:
>>     1. What is the libvirt hook function? Could it affect the qemu-kvm
>>     command?
>>     2. It's hard to debug the qemu-kvm process launched from libvirt. I
>>     hang glusterd for a moment so that I can gdb the qemu-kvm process; do
>>     you have better methods?
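For the second question, one common approach (my suggestion, not something confirmed in this thread) is to rely on the fact that libvirt already starts qemu-kvm with -S, so the guest CPUs stay paused until the monitor issues "cont"; that leaves a window to attach:

```shell
# Sketch: attach gdb to the qemu-kvm process that libvirtd just forked.
# virsh create gluster-libvirt.xml &   # domain starts with CPUs paused (-S)
pid=$(pgrep -n qemu-kvm)               # newest qemu-kvm process
gdb -p "$pid"                          # set breakpoints, then "continue"
```

Note that the gluster:// open happens during qemu startup, possibly before you can attach, so delaying glusterd as you did may still be necessary; `set follow-fork-mode child` in a gdb attached to libvirtd is another option.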
>>
>>     Best Regards,
>>     Yin Yin
>>
>>     On Fri, Aug 24, 2012 at 5:44 PM, Deepak C Shetty
>>     <address@hidden <mailto:address@hidden>>
>>     wrote:
>>
>>         On 08/24/2012 12:22 PM, Harsh Bora wrote:
>>
>>             On 08/24/2012 12:05 PM, Daniel Veillard wrote:
>>
>>                 On Thu, Aug 23, 2012 at 04:31:50PM +0530, Harsh Prateek
>>                 Bora wrote:
>>
>>                     This patchset provides support for Gluster protocol
>>                     based network disks.
>>                     It is based on the proposed gluster support in Qemu
>>                     on qemu-devel:
>>                     http://lists.gnu.org/archive/html/qemu-devel/2012-08/msg01539.html
>>
>>
>>                 Just to be clear, that qemu feature didn't make the
>>                 deadline for 1.2, right? I don't think we can add support
>>                 at the libvirt level until the patches are committed in
>>                 QEmu, but that doesn't prevent reviewing them in advance.
>>                 Right now we are in freeze for 0.10.0,
>>
>>
>>
>>         I am working on enabling oVirt/VDSM to be able to exploit this,
>>         using Harsh's RFC patches.
>>         VDSM patch @ http://gerrit.ovirt.org/#/c/6856/
>>
>>         Early feedback would help me, especially on the XML spec posted
>>         here. My VDSM patch depends on it.
>>
>>         thanx,
>>         deepak
>>
>>
>>
>>         _________________________________________________
>>         Gluster-devel mailing list
>>         address@hidden <mailto:address@hidden>
>>         https://lists.nongnu.org/mailman/listinfo/gluster-devel
>>
>>
>>
>


