slow bgp_delete


From: Vladimir Marek
Subject: slow bgp_delete
Date: Mon, 18 May 2015 11:45:54 +0200
User-agent: Mutt/1.5.22.1-rc1 (2013-10-16)

Hi,

Our customer updated from bash 3.0.16 to 3.2.57 and claims that the
update decreased performance. The report is slightly vague: it says that
bash now consumes more memory and becomes slower after running roughly
30000 commands. My current belief (and I want to have it confirmed from
the customer) is that the real problem is bash now burning more CPU
cycles.

I did some experiments and tracked it down to the bgp_delete function.
Once the number of records in the 'bgpids' list reaches 30k (== ulimit -u
on Solaris), every newly allocated pid results in a bgp_delete call to
make sure that the list does not contain duplicate entries. That means
walking all 30k entries of a singly linked list, because the new pid is
almost certainly not in it. I believe the customer's use case involves
rapid execution of external commands, so the 'bgpids' list is traversed
many times and the performance hit becomes significant.
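
To illustrate the pattern (this is a simplified sketch with placeholder
names, not the actual jobs.c code), the deletion boils down to a full
linear scan of the saved-pid list:

#include <stdlib.h>
#include <sys/types.h>

/* Simplified sketch only -- my own names, not the real bash structures. */
struct saved_pid {
    pid_t pid;
    int status;
    struct saved_pid *next;
};

static struct saved_pid *saved_head;    /* singly linked, up to ~30k nodes */

/* O(n) per call: with n near 30k, every external command pays a full
   traversal even though the freshly forked pid is almost never found. */
static void
sketch_bgp_delete (pid_t pid)
{
    struct saved_pid *p, *prev;

    for (prev = NULL, p = saved_head; p; prev = p, p = p->next)
        if (p->pid == pid)
        {
            if (prev)
                prev->next = p->next;
            else
                saved_head = p->next;
            free (p);
            return;
        }
}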

I am thinking about improving the situation. A number of options come to
mind. In order to get the fix into the official sources I would like to
discuss the approach in advance.


a) decreasing the size of the 'bgpids' list. Why do we need 30k entries
if we don't trust that the IDs are unique? Maybe a configuration or
runtime option?

b) creating a hash map containing the IDs to decrease the search time. We
still need to maintain a linked list to know which PID is the oldest (to
be removed). That list might benefit from being doubly linked, so that
we can remove elements without traversing it again (rough sketch below).

c) some means to clear the 'bgpids' list at runtime.

To me, a) seems to be the simplest fix, b) trades memory for speed, and
c) is too hackish for my taste. Thoughts, ideas? :)
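
Just to make b) concrete, here is a rough sketch of what I have in mind,
assuming we keep the current "drop the oldest entry" behaviour. All names
are mine, not existing bash identifiers:

#include <stdlib.h>
#include <sys/types.h>

#define PID_HASH_SIZE 1024              /* power of two, tunable */

struct pid_rec {
    pid_t pid;
    int status;
    struct pid_rec *hnext;              /* hash-bucket chain */
    struct pid_rec *prev, *next;        /* age-ordered doubly linked list */
};

static struct pid_rec *buckets[PID_HASH_SIZE];
static struct pid_rec *oldest, *newest;
static int nrecs, maxrecs = 30000;      /* same cap as today */

static size_t
pid_hash (pid_t pid)
{
    return (size_t) pid & (PID_HASH_SIZE - 1);
}

/* O(1) average: unlink from the bucket chain and from the age list. */
static void
pid_rec_delete (pid_t pid)
{
    struct pid_rec **bp, *p;

    for (bp = &buckets[pid_hash (pid)]; (p = *bp); bp = &p->hnext)
        if (p->pid == pid)
        {
            *bp = p->hnext;
            if (p->prev) p->prev->next = p->next; else oldest = p->next;
            if (p->next) p->next->prev = p->prev; else newest = p->prev;
            free (p);
            nrecs--;
            return;
        }
}

/* Remove any stale record for this pid, evict the oldest entry when the
   cap is reached, then append the new record at the tail. */
static void
pid_rec_add (pid_t pid, int status)
{
    struct pid_rec *p;

    pid_rec_delete (pid);
    if (nrecs >= maxrecs && oldest)
        pid_rec_delete (oldest->pid);

    p = malloc (sizeof *p);
    if (p == NULL)
        return;
    p->pid = pid;
    p->status = status;
    p->hnext = buckets[pid_hash (pid)];
    buckets[pid_hash (pid)] = p;
    p->prev = newest;
    p->next = NULL;
    if (newest) newest->next = p; else oldest = p;
    newest = p;
    nrecs++;
}

The extra cost is one pointer per bucket plus two extra pointers per
record, which is what I meant by trading memory for speed.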


For the record, as a workaround I suggested running fewer than 30k
commands in a subshell, then spawning a new subshell and running another
<30k commands there, but that was not accepted as a viable solution.

Thank you
-- 
        Vlad


