From: Kevin Wolf
Subject: Re: [PATCH] block/file-posix: Fix problem with fallocate(PUNCH_HOLE) on GPFS
Date: Mon, 19 Apr 2021 15:13:15 +0200

On 19.04.2021 at 07:06, Thomas Huth wrote:
> On 16/04/2021 22.34, Nir Soffer wrote:
> > On Fri, Apr 16, 2021 at 8:23 AM Thomas Huth <thuth@redhat.com> wrote:
> > > 
> > > A customer reported that running
> > > 
> > >   qemu-img convert -t none -O qcow2 -f qcow2 input.qcow2 output.qcow2
> > > 
> > > fails for them with the following error message when the images are
> > > stored on a GPFS file system:
> > > 
> > >   qemu-img: error while writing sector 0: Invalid argument
> > > 
> > > After analyzing the strace output, it seems like the problem is in
> > > handle_aiocb_write_zeroes(): The call to fallocate(FALLOC_FL_PUNCH_HOLE)
> > > returns EINVAL, which can apparently happen if the file system has
> > > a different idea of the granularity of the operation. It's arguably
> > > a bug in GPFS, since the PUNCH_HOLE mode should not result in EINVAL
> > > according to the man-page of fallocate(), but the file system is out
> > > there in production and so we have to deal with it. In commit 294682cc3a
> > > ("block: workaround for unaligned byte range in fallocate()") we also
> > > already applied the a work-around for the same problem to the earlier
> > > fallocate(FALLOC_FL_ZERO_RANGE) call, so do it now similar with the
> > > PUNCH_HOLE call.
> > > 
> > > Signed-off-by: Thomas Huth <thuth@redhat.com>
> > > ---
> > >   block/file-posix.c | 7 +++++++
> > >   1 file changed, 7 insertions(+)
> > > 
> > > diff --git a/block/file-posix.c b/block/file-posix.c
> > > index 20e14f8e96..7a40428d52 100644
> > > --- a/block/file-posix.c
> > > +++ b/block/file-posix.c
> > > @@ -1675,6 +1675,13 @@ static int handle_aiocb_write_zeroes(void *opaque)
> > >               }
> > >               s->has_fallocate = false;
> > >           } else if (ret != -ENOTSUP) {
> > > +            if (ret == -EINVAL) {
> > > +                /*
> > > +                 * File systems like GPFS do not like unaligned byte ranges,
> > > +                 * treat it like unsupported (so caller falls back to pwrite)
> > > +                 */
> > > +                return -ENOTSUP;
> > 
> > This skips the next fallback, which uses plain fallocate(0) when we
> > write after the end of the file. Is this intended?
> > 
> > We can treat the buggy EINVAL return value as "filesystem is buggy,
> > let's not try other options", or "let's try the next option". Since falling
> > back to actually writing zeroes is so much slower, I think it is better to
> > try the next option.
> 
> I just applied the same work-around as in commit 294682cc3a7 ... so if we
> agree to try the other options, too, we should change that spot as well...

Yes, changing both places to fall back to the next option feels right to
me.
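
For instance, something along these lines in the PUNCH_HOLE branch (just a
rough, untested sketch based on the code quoted above; the exact placement
in handle_aiocb_write_zeroes() would need double-checking):

        } else if (ret == -EINVAL) {
            /*
             * GPFS returns EINVAL for unaligned byte ranges even though
             * PUNCH_HOLE is otherwise supported.  Don't fail the request
             * and don't clear s->has_discard; just fall through so the
             * plain fallocate(0) and pwrite fallbacks still get a chance.
             */
        } else if (ret != -ENOTSUP) {
            return ret;
        } else {
            s->has_discard = false;
        }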

> However, what is not clear to me is how we should handle s->has_write_zeroes
> and s->has_discard in such a case. Set them to "false"? ... but the calls
> could still work for other blocks with a different alignment ... and if we
> keep them set to "true", the code tries these fallocate() calls again and
> again, possibly wasting precious cycles?

That it could still work for other requests is a good point. So I think
EINVAL shouldn't disable s->has_*, but otherwise behave the same as
ENOTSUP.

You're right that we're potentially wasting cycles by trying unaligned
requests again and again, but that they fail isn't our fault, and the
benefit of having efficient zero writes for aligned requests seems more
important than losing a few cycles on unaligned requests.
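
For the FALLOC_FL_ZERO_RANGE spot from commit 294682cc3a, the same idea
could look roughly like this (again only an untested sketch, with the
variable and flag names taken from the existing code): treat EINVAL like
ENOTSUP for the purpose of falling through, but leave s->has_write_zeroes
untouched:

    if (s->has_write_zeroes) {
        int ret = do_fallocate(s->fd, FALLOC_FL_ZERO_RANGE,
                               aiocb->aio_offset, aiocb->aio_nbytes);
        if (ret == 0 || (ret != -ENOTSUP && ret != -EINVAL)) {
            /* success, or a real error that the caller should see */
            return ret;
        }
        if (ret == -ENOTSUP) {
            /* only a real ENOTSUP disables this path permanently */
            s->has_write_zeroes = false;
        }
        /* on EINVAL or ENOTSUP, fall through to PUNCH_HOLE / fallocate(0) */
    }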

> Maybe we should take a different approach instead: in case we hit an EINVAL
> here, print an error along the lines of:
> 
>  error_report_once("You are running on a buggy file system, please complain
> to the file system vendor");
> 
> and return -ENOTSUP ... then it's hopefully clear to the users why they are
> getting bad performance, and that they should complain to the file system
> vendor to get the problem fixed.

Sounds like a reasonable thing to do (probably in addition to the above)
when we know that a file system bug prevents us from getting optimal
performance.
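
A sketch of what that could look like when combined with falling through to
the next option, i.e. inside the EINVAL branch from the snippet above (using
the existing error_report_once() helper; the message text is only a
placeholder):

        if (ret == -EINVAL) {
            /*
             * Warn once so users know why zero writes take the slow path,
             * before falling through to the next fallback.
             */
            error_report_once("Your file system rejected an unaligned "
                              "fallocate() request; write performance may "
                              "suffer. Please report this to your file "
                              "system vendor.");
        }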

Kevin



