bug-binutils
From: giovanni.lostumbo at gmail dot com
Subject: [Bug ld/22831] ld causes massive thrashing if object files are not fully memory-resident: new algorithm needed
Date: Sat, 17 Sep 2022 01:17:02 +0000

https://sourceware.org/bugzilla/show_bug.cgi?id=22831

Giovanni Lostumbo <giovanni.lostumbo at gmail dot com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |giovanni.lostumbo at gmail dot com

--- Comment #36 from Giovanni Lostumbo <giovanni.lostumbo at gmail dot com> ---
Binutils Version 0.001-2.13 (1988-2002)
http://www.mirrorservice.org/sites/sources.redhat.com/pub/binutils/old-releases/

Binutils 2.30 was released in 2018. As Luke mentions in comment #20, he spoke
with Stallman back then, who confirmed that the code that helped ld stay within
resident memory was removed in the late 1990s. If the source code for all the
pre-2003 legacy versions of binutils is indeed in that University of Kent
mirror above, one should quickly be able to locate the last version of binutils
containing the original code Stallman described. I cannot interpret code, but
I'm entering this discussion from a hardware design viewpoint: thrashing
increases power consumption and quickly depletes battery and disk life, if the
compile even succeeds.
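(As an illustrative aside, not part of the original report: whether a link step
is thrashing can be observed directly by counting major page faults, the faults
that require disk I/O. A minimal sketch in Python, using the standard
`resource` and `subprocess` modules; the ld command line shown is hypothetical.)

```python
import resource
import subprocess

def major_faults_of(cmd):
    """Run cmd as a child process and return the number of major
    (disk-backed) page faults it accumulated. A large count while
    linking suggests ld's working set exceeded resident memory."""
    before = resource.getrusage(resource.RUSAGE_CHILDREN).ru_majflt
    subprocess.run(cmd, check=True)
    after = resource.getrusage(resource.RUSAGE_CHILDREN).ru_majflt
    return after - before

# Hypothetical example invocation (object file names are placeholders):
# faults = major_faults_of(["ld", "-o", "out", "a.o", "b.o"])
# print("major page faults during link:", faults)
```

The same figure is reported by GNU `/usr/bin/time -v` as "Major (requiring
I/O) page faults", which may be more convenient inside a build script.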

To help organize a solution to this bug, I propose that the original
algorithmic code be identified and analyzed here, or somewhere it can be
compared and contrasted with the swap mechanism that replaced it. Then, if the
code can be reimplemented (not rewritten, since Luke claims it was already
working code; no need to reinvent the wheel here), it can be tested on both
32-bit and 64-bit systems. While I do not understand programming languages, I
do understand there is a possibility that the legacy code had algorithms
intrinsic to 32-bit systems and may require some adaptation for 64-bit. I'm
guessing it could also extrapolate to 64-bit independent of architecture, but
that is more of a mathematical question beyond my capabilities.

I can also imagine other use cases where restoring the original ld algorithms
could be immensely efficient and beneficial. Say one is testing an array of new
builds, modifying a select number of lines in the source to test new
functionality or performance. One might develop 20 copies of the source,
identical save for a few lines of experimental code. Each compile may take
24-48 hours if it goes into swap space; that's 3-7 weeks of running a laptop or
desktop. But with code that never runs into swap, each compile would finish
much faster and with much less power. Now multiply that by 1000 users, with a
server running 20,000 virtual machines (e.g., Amazon EC2). The kWh can add up
very quickly, especially for those who cannot test their device locally and
can't afford to rent a server with that many VMs for 24-48 hours. Swap space
can also be less secure: sensitive data written to swap could persist on disk,
especially on a remote server, which could experience a power outage and be
accessed at a later time. Data in RAM is relatively less vulnerable to theft.

If Stallman's code prevents the thrashing that arose from the swap mechanism,
then this bug report would NOT be an enhancement; it would be restoring the
original, correctly operating functionality.

-- 
You are receiving this mail because:
You are on the CC list for the bug.

