

Re: Performance bug of {1..1000000}?

From: Eric Blake
Subject: Re: Performance bug of {1..1000000}?
Date: Mon, 9 Mar 2020 11:41:05 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.5.0

On 3/7/20 10:39 AM, Peng Yu wrote:
> See the following run time comparison. {1..1000000} is slower than
> $(seq 1000000).
>
> Since seq involves an external program, I'd expect the latter to be
> slower. But the comparison shows the opposite.
>
> I guess seq did some optimization?

seq does not have to store the entire sequence in memory: as it writes each number to stdout (and the other end of the pipeline consumes that output), it can discard what it has already printed.

Bash, on the other hand, computes the entire expansion in memory before it can use any of it. The cost of allocating and holding a million words at once is what slows bash down.
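A small demonstration of the difference (my own sketch, not a benchmark from this thread): brace expansion forces bash to build every word in memory before the command even runs, while seq emits each number and moves on.

```shell
# Brace expansion: bash materializes all five words first,
# then hands them to printf as arguments in one go.
printf '%s ' {1..5}; echo

# seq: each number is written to stdout as it is generated;
# nothing is kept around once it has been printed.
seq 5 | tr '\n' ' '; echo
```

Both print the same sequence, but only the first requires bash to hold the complete word list at once; scale the 5 up to 1000000 and that allocation is the cost being measured.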

> Can the performance of {1..1000000} be improved so that it is faster
> than $(seq 1000000)?

Not without someone writing a patch. Are you volunteering? In general, though, we don't recommend making bash perform expansions like that when other means of iteration are already more efficient and do not require bash to keep the entire sequence in memory.
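One such alternative (a sketch of my own, not taken from the thread) is bash's C-style arithmetic for loop, which produces each value on demand rather than expanding the whole sequence up front:

```shell
# Sum 1..1000000 without ever holding the full list in memory:
# the arithmetic for loop generates one value per iteration.
sum=0
for ((i = 1; i <= 1000000; i++)); do
  (( sum += i ))
done
echo "$sum"   # 500000500000
```

Unlike `for i in {1..1000000}`, this form never allocates a million-word list; memory use stays constant no matter how large the upper bound is.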

Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org
