bug-gnulib

Re: read-write locks, throttling


From: Bruno Haible
Subject: Re: read-write locks, throttling
Date: Fri, 06 Jan 2017 14:17:56 +0100
User-agent: KMail/4.8.5 (Linux/3.8.0-44-generic; KDE/4.8.5; x86_64; ; )

I asked:
> > So, what is the fairness-ensuring mechanism that will make the use of locks
> > starvation-free? Can it be *easily* and *portably* put in place?

Textbooks from university teachers state that alternating between reader
preference and writer preference will solve both reader starvation and
writer starvation:
  - [1] page 12
  - [2] slides 62..63

These publications predate the filing of patent [3], so this mechanism
is OK to use.

I think this fits my requirements of being easy to implement and robust
in all circumstances.
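
To make the idea concrete, here is a sketch I wrote for this mail - not gnulib
code, not copied from [1] or [2], and with made-up names - of how such an
alternating scheme could look on top of POSIX mutexes and condition variables.
A waiting writer closes the current read phase, and a finishing writer admits
exactly the readers that queued up in the meantime:

#include <pthread.h>

struct alt_rwlock
{
  pthread_mutex_t lock;           /* protects all fields below */
  pthread_cond_t readers_proceed;
  pthread_cond_t writer_proceeds;
  unsigned int active_readers;    /* threads currently reading */
  int active_writer;              /* 1 if a thread is currently writing */
  unsigned int waiting_readers;
  unsigned int waiting_writers;
  unsigned int readers_to_admit;  /* reader batch admitted at the last
                                     write-unlock, not yet entered */
};

static void
alt_rwlock_init (struct alt_rwlock *rw)
{
  pthread_mutex_init (&rw->lock, NULL);
  pthread_cond_init (&rw->readers_proceed, NULL);
  pthread_cond_init (&rw->writer_proceeds, NULL);
  rw->active_readers = rw->waiting_readers = rw->waiting_writers = 0;
  rw->active_writer = 0;
  rw->readers_to_admit = 0;
}

static void
alt_rwlock_rdlock (struct alt_rwlock *rw)
{
  pthread_mutex_lock (&rw->lock);
  rw->waiting_readers++;
  /* Block while a writer is active, or while writers are waiting and no
     reader batch has been admitted.  A waiting writer thus closes the
     current read phase, so writers cannot starve.  */
  while (rw->active_writer
         || (rw->waiting_writers > 0 && rw->readers_to_admit == 0))
    pthread_cond_wait (&rw->readers_proceed, &rw->lock);
  if (rw->readers_to_admit > 0)
    rw->readers_to_admit--;
  rw->waiting_readers--;
  rw->active_readers++;
  pthread_mutex_unlock (&rw->lock);
}

static void
alt_rwlock_rdunlock (struct alt_rwlock *rw)
{
  pthread_mutex_lock (&rw->lock);
  rw->active_readers--;
  /* When the read phase has drained and a writer is waiting, the turn
     passes to the writers.  */
  if (rw->active_readers == 0 && rw->waiting_writers > 0)
    pthread_cond_signal (&rw->writer_proceeds);
  pthread_mutex_unlock (&rw->lock);
}

static void
alt_rwlock_wrlock (struct alt_rwlock *rw)
{
  pthread_mutex_lock (&rw->lock);
  rw->waiting_writers++;
  /* Block while the lock is held, or while an admitted reader batch has
     not yet entered: those readers go first, so readers cannot starve.  */
  while (rw->active_writer || rw->active_readers > 0
         || rw->readers_to_admit > 0)
    pthread_cond_wait (&rw->writer_proceeds, &rw->lock);
  rw->waiting_writers--;
  rw->active_writer = 1;
  pthread_mutex_unlock (&rw->lock);
}

static void
alt_rwlock_wrunlock (struct alt_rwlock *rw)
{
  pthread_mutex_lock (&rw->lock);
  rw->active_writer = 0;
  if (rw->waiting_readers > 0)
    {
      /* Switch to reader preference: admit the readers that queued up
         while this writer was waiting or writing.  */
      rw->readers_to_admit = rw->waiting_readers;
      pthread_cond_broadcast (&rw->readers_proceed);
    }
  else if (rw->waiting_writers > 0)
    pthread_cond_signal (&rw->writer_proceeds);
  pthread_mutex_unlock (&rw->lock);
}

The effect is that once a writer waits, the read phase can only shrink, and
once a writer finishes, the queued readers go before the next writer - so
neither side waits for more than one "turn" of the other side.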

Torvald Riegel proposed:
> You control the incoming work.
> ...
> you can easily throttle that

When throttling is used as the approach to avoid disaster, someone (a sysadmin
or a user) has to monitor dynamic parameters. This is time-consuming, and it
cannot be done for programs that run on remote build machines (Kamil Dudka's
point). Therefore, if it can be avoided, I prefer to avoid it.
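
To show what I mean: in code, throttling typically boils down to a knob like
the following (the numbers and names here are invented for illustration), and
the right value of the knob depends on the machine and on the current load -
which is exactly the monitoring burden described above.

#include <time.h>

/* Hypothetical tuning knob: how many write requests per second the system
   is allowed to see.  Somebody has to pick this value and re-tune it
   whenever the hardware or the load changes.  */
#define MAX_WRITE_REQUESTS_PER_SECOND 100

static void
throttle_before_write_request (void)
{
  /* Space out write requests instead of issuing them back to back.  */
  struct timespec delay =
    { .tv_sec = 0, .tv_nsec = 1000000000L / MAX_WRITE_REQUESTS_PER_SECOND };
  nanosleep (&delay, NULL);
  /* ... then submit the write request / take the write lock ...  */
}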

I'm using throttling already as a workaround in the following situations:

* I have a laptop that reboots when it has been under high load for 5 minutes.
  (The Linux OS notices that the temperature gets too high and prefers to
  shut down rather than damage the hardware.)
  Workaround: Constantly monitor the load and stop processes when it gets too
  high.

* When I 'rm -rf' more than ca. 500 GB at once on a disk of my NAS, the NAS
  just reboots. It runs Linux with ext2fs and apparently has some ftruncate()
  bug.
  Workaround: Delete directories in chunks of at most 100 GB at a time.

* When I create and use databases (Apache Derby or DB2) on a particular SSD
  disk, the machine reboots. The SSD is buggy hardware - proved by the fact
  that the same workload runs fine with a different SSD in the same machine.
  Workaround: Throttling is impossible here - I don't control the program that
  creates the databases.

* When I move a 3 GB file from an internal disk (on Mac OS X 10.5) to an SMB
  file system, the machine becomes unusable for ca. 15 minutes, because of
  the memory management algorithm in the OS.
  Workaround: Stop the 'mv' command after 1 GB and restart it soon afterwards.

* When I download files for more than 2 minutes, the download saturates my DSL
  connection's bandwidth, and some programs lose their internet connection
  because they cannot reach DNS any more.
  Workaround: Use wget with the --limit-rate option.

You see, throttling becomes necessary when engineers have either not
constructed robust solutions or not tested them for some supposedly "extreme"
parameters.

Bruno

[1] https://www.doc.ic.ac.uk/research/technicalreports/1999/DTR99-3.pdf
[2] http://www0.cs.ucl.ac.uk/staff/W.Emmerich/lectures/Z01-99-00/z01_day4.pdf
[3] http://www.google.ch/patents/US20040255086



