bug-bash

Re: [PATCH 5.1] zread: read files in 4k chunks


From: L A Walsh
Subject: Re: [PATCH 5.1] zread: read files in 4k chunks
Date: Tue, 23 Jun 2020 12:29:42 -0700

The 'stat(2)' system call returns an optimal I/O size for a file (the st_blksize field). Some files sit on disks with a 4k sector size, which already makes 128 bytes a slow choice for reads and a really slow choice for writes, and files can also be on a RAID with its own, usually larger, optimal I/O size.
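
For illustration only (a rough sketch of the idea, not bash's zread code, and optimal_bufsize() is a made-up name), a reader could size its chunks from st_blksize rather than a fixed constant:

/* Rough sketch: pick the read chunk from st_blksize, falling back
   to 4096 if the field is missing or looks bogus. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

#define FALLBACK_BUFSIZE 4096

static size_t
optimal_bufsize (int fd)
{
  struct stat sb;

  if (fstat (fd, &sb) == 0 && sb.st_blksize > 0)
    return (size_t) sb.st_blksize;   /* 4096 on plain disks, often larger on RAID */
  return FALLBACK_BUFSIZE;
}

int
main (void)
{
  size_t bufsize = optimal_bufsize (STDIN_FILENO);
  char *buf = malloc (bufsize);
  ssize_t n;
  long long total = 0;

  if (buf == NULL)
    return 1;

  /* Read stdin in st_blksize-sized chunks and count the bytes. */
  while ((n = read (STDIN_FILENO, buf, bufsize)) > 0)
    total += n;

  printf ("read %lld bytes in %zu-byte chunks\n", total, bufsize);
  free (buf);
  return n < 0 ? 1 : 0;
}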

I think the 'stat(1)' program shows the same value with "%o": running "stat -c %o" here prints:

65536


On Mon, Jun 22, 2020 at 1:19 PM Jason A. Donenfeld <Jason@zx2c4.com> wrote:

> On Mon, Jun 22, 2020 at 2:16 PM Ilkka Virta <itvirta@iki.fi> wrote:
> >
> > On 22.6. 19.35, Chet Ramey wrote:
> > > On 6/22/20 1:53 AM, Jason A. Donenfeld wrote:
> > >> Currently a static sized buffer is used for reading files. At the
> moment
> > >> it is extremely small, making parsing of large files extremely slow.
> > >> Increase this to 4k for improved performance.
> > >
> > > I bumped it up to 1024 initially for testing.
> >
> > It always struck me as odd that Bash used such a small read of 128
> > bytes. Most of the GNU utils I've looked at on Debian use 8192, and a
> > simple test program seems to indicate glibc's stdio reads 4096 bytes at
> > one read() call.
>
> Plus most other shells people use...
>
>
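
(For reference, the kind of "simple test program" mentioned above can be as small as the sketch below; it is my own, not necessarily the one Ilkka used. Running it under "strace -e trace=read" shows how many bytes glibc's stdio requests in the underlying read(2) call.)

#include <stdio.h>

int
main (void)
{
  /* The first getchar() makes stdio fill its buffer with a single
     read(2); on glibc that request is typically 4096 bytes. */
  int c = getchar ();

  return c == EOF ? 1 : 0;
}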

