Thomas Bushnell, BSG
15 May 2001 15:15:32 -0700
Gnus/5.0803 (Gnus v5.8.3) Emacs/20.7
Roland McGrath <email@example.com> writes:
> In general, what's most useful to profile are real programs running real
> workloads to see what spots actually matter in practice. Profiling
targeted benchmarks (like the fork tester) mostly just confirms the
> spots of slowness you already know about.
Well, in general that's true. But in the case of the fork tester, it
would be nice to have hard data on which parts of the library are
taking all the time. I think my guess is right, but I want to know
for sure. (The reason is that fork is a big, complex thing, and
details would be nice.)
> OTOH, the real-world cases where libc gets beat on a lot are mostly
> short-lived processes. It may be difficult to get helpful data from these,
> though of course it never hurts to try profiling anything just to see what
> you can see. We all know that running a configure script is, as Thomas
> likes to put it, slow as paste. But that involves many different
processes, each of which would collect only their own individual profiling data.
Ah, and here we have a hack I'd like to try: an option for libc to
automatically profile *itself* whenever some environment variable is
set, keeping cumulative statistics somewhere. Then we could see what
in libc is taking up the time for these kinds of cases.
> For the Hurd itself, you should get a lot of info out of profiling any
> server you are interested in. The most obvious thing is to build a
> profiled ext2fs, use it for the filesystem on a spare partition (or in a
> file) and do something fs-intensive, like a large build, on that filesystem.
Yuppers. Also do proc and auth, which are very important servers for
global system performance.
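For ext2fs, the session might look something like this (a sketch: the partition, paths, and build details are assumptions, and the translator has to exit cleanly for its gmon.out to get written):

```shell
# Build ext2fs with profiling (e.g. add -pg to CFLAGS), then attach the
# profiled translator to a spare partition -- /dev/hd2s1 is illustrative.
settrans -ac /mnt /path/to/ext2fs.profiled /dev/hd2s1

# Beat on the filesystem: a large build is a good real workload.
(cd /mnt && tar xf /src/foo.tar && cd foo && ./configure && make)

# Make the translator go away cleanly so the profile gets flushed,
# then read it.
settrans -g /mnt
gprof /path/to/ext2fs.profiled gmon.out | less
```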