Mon, 14 May 2001 21:41:10 -0400 (EDT)
> Yep, builds gcrt0.o as it should. Haven't tried a make install, but that's
> a no-brainer.
I've checked in changes to the libc makefiles (I did some cleanup in the
process of making gcrt0.o get built and installed). I did a build, but you
should test it too, and I'll surely hear about it if I've broken anything.
> * (note to self) Have to fix the specs file for the cross compiler
I think I had to diddle things by hand for other reasons to make my
cross-build environment work right too. make-cross needs a reexamination
with the current gcc world.
> * I wondered why the _pc_samples interface is used in profil(), rather than
> the quite straight forward mach_sample_task() (I could only find
> documentation for the latter, in kernel_interfaces.ps).
Good question. Those interfaces use entirely different implementations in
the kernel. The pc_sampling interfaces were added in Mach4 and the new
code is more flexible than the old mach_sample_task interface. If both
work, I'm not sure it matters which one we use. mach_sample_task is
obviously intended specifically to implement profil.
> * What parts of the Hurd are good candidates for profiling? Thomas' fork
> test. File system operations. Servers.
In general, what's most useful to profile are real programs running real
workloads to see what spots actually matter in practice. Profiling
targeted benchmarks (like the fork tester) mostly just confirms the
spots of slowness you already know about.
OTOH, the real-world cases where libc gets beat on a lot are mostly
short-lived processes. It may be difficult to get helpful data from these,
though of course it never hurts to try profiling anything just to see what
you can see. We all know that running a configure script is, as Thomas
likes to put it, slow as paste. But that involves many different
processes, each of which would collect only its own individual profiling data.
For the Hurd itself, you should get a lot of info out of profiling any
server you are interested in. The most obvious thing is to build a
profiled ext2fs, use it for the filesystem on a spare partition (or in a
file) and do something fs-intensive, like a large build, on that filesystem.
You'll want to statically link all the hurd _p libraries into whatever
server you build (I guess that's what the *.prof targets do anyway),
because so much of what is interesting to profile is in the libraries.
Have you looked at sprof? I have never used it, but it exists and it is
supposed to be easy to use. I think you just put LD_PROFILE=libfoo.so.1 in
the environment and it writes /var/tmp/libfoo.so.1.profile with some data
that you can examine with sprof. It diddles with the PIC jump table of the
library so that calls through the PLT implicitly do the call-counting that
is normally done by the prologue/epilogue code of functions compiled with -pg.
You can only profile one shared library at a time this way, and you can't
profile the main program at the same time. But, if it works, it's a very
easy thing to do on any binaries you have without recompiling.
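As I understand the LD_PROFILE cycle, it goes something like the sketch
below; the soname and library path are examples (I've used libc.so.6 as
the stand-in), and I haven't verified any of this on the Hurd:

```shell
# Name the soname of one loaded shared library to be profiled.
export LD_PROFILE=libc.so.6
# LD_PROFILE_OUTPUT picks the output directory; /var/tmp is the default.
export LD_PROFILE_OUTPUT=/var/tmp

# Run any unmodified binary; the dynamic linker does the call counting
# through the PLT and accumulates /var/tmp/libc.so.6.profile on the side.
ls >/dev/null || true
unset LD_PROFILE LD_PROFILE_OUTPUT

# Read the accumulated data back with sprof (-p = gprof-style flat
# profile).  The library path is a guess; adjust it for your system.
sprof -p /lib/libc.so.6 /var/tmp/libc.so.6.profile 2>/dev/null || true
echo "sprof cycle finished"
```

The profile file is cumulative across runs, so you can beat on the
library with many short-lived processes and look at the totals at the end.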