qemu-devel

Re: [Virtio-fs] [PATCH] virtiofsd: Disable remote posix locks by default


From: Vivek Goyal
Subject: Re: [Virtio-fs] [PATCH] virtiofsd: Disable remote posix locks by default
Date: Thu, 6 Aug 2020 13:46:45 -0400

On Thu, Aug 06, 2020 at 06:41:29PM +0100, Dr. David Alan Gilbert wrote:
> * misono.tomohiro@fujitsu.com (misono.tomohiro@fujitsu.com) wrote:
> > > Right now we enable remote posix locks by default. That means when
> > > the guest does a posix lock, it sends a request to the server
> > > (virtiofsd). But currently we only support the non-blocking posix
> > > lock and return -EOPNOTSUPP for the blocking version.
> > > 
> > > This means that existing applications which do blocking posix locks
> > > get -EOPNOTSUPP and fail. To avoid this, people have been running
> > > virtiofsd with the option "-o no_posix_lock". For new users it is
> > > still a surprise, and trial and error takes them to this option.
> > > 
> > > Given that the posix lock implementation is not complete in
> > > virtiofsd, disable it by default. This means that posix locks will
> > > work within applications in a guest but not across guests. Anyway,
> > > we don't yet support sharing a filesystem among different guests in
> > > virtiofs, so this should not lead to any kind of surprise or
> > > regression and will make life a little easier for virtiofs users.
> > > 
> > > Reported-by: Aa Aa <jimbothom@yandex.com>
> > > Suggested-by: Miklos Szeredi <mszeredi@redhat.com>
> > > Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
> > > ---
> > >  tools/virtiofsd/passthrough_ll.c | 2 +-
> > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > We should update docs/tools/virtiofsd.rst as well. Given that:
> >  Reviewed-by: Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
> 
> Fixed up the doc.

Aha.. Looks like we were looking at this at the same time.

Thanks for taking care of this Dave.

Vivek
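
For reference, a minimal sketch (not part of the patch) of the failure mode
described above: a guest application taking a blocking POSIX lock with
fcntl(F_SETLKW). With the old default (remote posix locks enabled), the
blocking request is forwarded to virtiofsd, which only implements the
non-blocking case and returns -EOPNOTSUPP. The path on the virtiofs mount is
hypothetical, chosen just for illustration.

/* build: cc -o lock_demo lock_demo.c */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical file on a virtiofs mount inside the guest. */
    int fd = open("/mnt/virtiofs/data.lock", O_RDWR | O_CREAT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct flock fl = {
        .l_type   = F_WRLCK,   /* exclusive write lock        */
        .l_whence = SEEK_SET,
        .l_start  = 0,
        .l_len    = 0,         /* 0 = lock the whole file     */
    };

    /* Blocking lock: waits until the lock can be acquired. */
    if (fcntl(fd, F_SETLKW, &fl) == -1) {
        if (errno == EOPNOTSUPP) {
            /*
             * With the old default, the blocking request reached
             * virtiofsd, which does not support it.  The workaround was
             * to start virtiofsd with "-o no_posix_lock" so locks stay
             * guest-local; the patch makes that the default.
             */
            fprintf(stderr, "blocking lock unsupported: %s\n",
                    strerror(errno));
        } else {
            perror("fcntl(F_SETLKW)");
        }
        close(fd);
        return 1;
    }

    /* ... critical section ... */

    fl.l_type = F_UNLCK;
    fcntl(fd, F_SETLK, &fl);   /* release the lock */
    close(fd);
    return 0;
}

With the new default the F_SETLKW call above is handled within the guest
kernel rather than forwarded to the server, so it blocks and succeeds as the
application expects; the trade-off, as the patch notes, is that locks are not
visible across guests.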



