Re: [PATCH 5.1] zread: read files in 4k chunks
From: Chet Ramey
Subject: Re: [PATCH 5.1] zread: read files in 4k chunks
Date: Mon, 22 Jun 2020 17:07:27 -0400
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0) Gecko/20100101 Thunderbird/68.9.0
On 6/22/20 4:16 PM, Ilkka Virta wrote:
> On 22.6. 19.35, Chet Ramey wrote:
>> On 6/22/20 1:53 AM, Jason A. Donenfeld wrote:
>>> Currently a statically sized buffer is used for reading files. At the
>>> moment it is extremely small, which makes parsing large files extremely
>>> slow. Increase it to 4k for improved performance.
>>
>> I bumped it up to 1024 initially for testing.
>
> It always struck me as odd that Bash used such a small read of 128 bytes.
> Most of the GNU utilities I've looked at on Debian use 8192, and a simple
> test program suggests glibc's stdio reads 4096 bytes per read() call.
Yes, 128 is too small for modern systems. It made more sense when the code
was written.
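
For readers following along, here is a minimal sketch of the pattern the
patch tunes: a static buffer refilled one read() call at a time, so raising
the buffer size directly cuts the number of system calls. The names
(ZBUFSIZ, zreadc_sketch, lbuf) are illustrative stand-ins under that
assumption, not bash's exact source:

    /* Sketch of a chunked-read wrapper in the spirit of bash's zread/zreadc.
       Identifiers here are illustrative, not copied from lib/sh/zread.c. */
    #include <errno.h>
    #include <unistd.h>

    #define ZBUFSIZ 4096            /* was effectively 128 before the patch */

    static char lbuf[ZBUFSIZ];      /* static buffer shared by successive calls */
    static ssize_t lind, lused;     /* next byte to return, bytes currently held */

    /* Return the next character from FD, refilling the buffer with one
       read() per ZBUFSIZ bytes instead of one read() per 128 bytes. */
    int
    zreadc_sketch (int fd)
    {
      if (lind == lused)
        {
          ssize_t r;
          while ((r = read (fd, lbuf, ZBUFSIZ)) < 0 && errno == EINTR)
            ;                       /* retry reads interrupted by signals */
          if (r <= 0)
            return -1;              /* EOF or error */
          lused = r;
          lind = 0;
        }
      return (unsigned char) lbuf[lind++];
    }

With a 128-byte buffer, reading a 1 MB file costs roughly 8192 read()
calls; at 4096 bytes it drops to about 256, which is where the speedup
comes from.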
--
``The lyf so short, the craft so long to lerne.'' - Chaucer
``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU chet@case.edu http://tiswww.cwru.edu/~chet/