From: Matthew Woehlke
Subject: Re: fts: avoid O(n^2) ext3/ext4 performance hit on million+-entry dirs
Date: Fri, 26 Sep 2008 17:47:04 -0500
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.8.1.16) Gecko/20080723 Fedora/2.0.0.16-1.fc9 Thunderbird/2.0.0.16 Mnenhy/0.7.5.0
Bruno Haible wrote:
> Jim Meyering wrote:
>> +      if (nitems > _FTS_INODE_SORT_DIR_ENTRIES_THRESHOLD
>> +          && !sp->fts_compar
>> +          && dirent_inode_sort_may_be_useful (sp)) {
>> +        sp->fts_compar = fts_compare_ino;
>> +        head = fts_sort (sp, head, nitems);
>> +        sp->fts_compar = NULL;
>> +      }
>
> This uses fts_sort, which uses qsort, which is O(n^2) in the worst
> case. (Even the implementation in glibc can be O(n^2), if it guesses
> that mergesort takes too much memory.) Couldn't this be avoided by
> using Paul's mpsort module, and changing fts_sort to allocate 50% more
> room, as scratch space for mpsort?
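[Editorial sketch, not part of the original thread: the reason "50% more room" is enough is that a merge sort only ever needs to copy one half of the current range aside before merging. The sketch below is not gnulib's mpsort (which sorts an array of pointers); it is a minimal illustration of the half-size-scratch merge technique on 64-bit keys, and the function names are invented for this example.]

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Merge sort over 64-bit keys (e.g. inode numbers) using a scratch
   buffer of only n/2 elements: recursively sort both halves in place,
   copy the left half into the scratch buffer, then merge scratch and
   the right half back into the original array.  Worst case is
   O(n log n), unlike qsort's O(n^2).  */
static void
msort_with_tmp (uint64_t *a, size_t n, uint64_t *tmp)
{
  if (n <= 1)
    return;
  size_t n1 = n / 2;                 /* left half:  a[0..n1)  */
  msort_with_tmp (a, n1, tmp);
  msort_with_tmp (a + n1, n - n1, tmp);
  memcpy (tmp, a, n1 * sizeof *a);   /* left half -> scratch  */
  size_t i = 0, j = n1, k = 0;
  while (i < n1 && j < n)
    a[k++] = (tmp[i] <= a[j]) ? tmp[i++] : a[j++];
  while (i < n1)
    a[k++] = tmp[i++];
  /* Any remaining right-half elements are already in place.  */
}

/* Returns 0 on success, -1 if the n/2-element scratch allocation
   fails.  (Hypothetical helper name, for illustration only.)  */
int
merge_sort_half_scratch (uint64_t *a, size_t n)
{
  uint64_t *tmp = malloc ((n / 2 + 1) * sizeof *tmp);
  if (!tmp)
    return -1;
  msort_with_tmp (a, n, tmp);
  free (tmp);
  return 0;
}
```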
You're talking about inode numbers (i.e., fixed-size keys), yes? Could this be a case for a radix sort?
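[Editorial sketch, not part of the original thread: since inode numbers are fixed-width integers, an LSD radix sort avoids comparisons entirely. Below is a minimal byte-at-a-time illustration for 64-bit keys; the function name is invented, and it trades a full-size scratch array for guaranteed O(n) work per pass, eight passes total.]

```c
#include <stdint.h>
#include <stdlib.h>

/* LSD radix sort over 64-bit keys (e.g. ino_t values): eight stable
   counting-sort passes, one byte of the key per pass, ping-ponging
   between the input array and a scratch array.  Returns 0 on success,
   -1 if the scratch allocation fails.  */
int
radix_sort_u64 (uint64_t *a, size_t n)
{
  uint64_t *buf = malloc (n * sizeof *buf);
  if (!buf && n)
    return -1;
  uint64_t *src = a, *dst = buf;
  for (int shift = 0; shift < 64; shift += 8)
    {
      size_t count[256] = { 0 };
      for (size_t i = 0; i < n; i++)
        count[(src[i] >> shift) & 0xff]++;
      size_t pos = 0;                /* prefix sums -> start offsets */
      for (int b = 0; b < 256; b++)
        {
          size_t c = count[b];
          count[b] = pos;
          pos += c;
        }
      for (size_t i = 0; i < n; i++) /* stable scatter by this byte  */
        dst[count[(src[i] >> shift) & 0xff]++] = src[i];
      uint64_t *t = src;
      src = dst;
      dst = t;
    }
  /* Eight passes is an even number of swaps, so the sorted data ends
     up back in 'a'.  */
  free (buf);
  return 0;
}
```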
--
Matthew
Please do not quote my e-mail address unobfuscated in message bodies.
--
"Who wants to sing?" -- Orcs (Warcraft II)