Re: read-write locks, throttling

From: Torvald Riegel
Subject: Re: read-write locks, throttling
Date: Fri, 06 Jan 2017 15:40:32 +0100

On Fri, 2017-01-06 at 14:17 +0100, Bruno Haible wrote:
> I asked:
> > > So, what is the fairness-ensuring mechanism that will make use of locks
> > > starvation-free? Can it be *easily* and *portably* put in place?
> Textbooks from university teachers state that alternating between reader
> preference and writer preference will solve both reader starvation and
> writer starvation:
>   - [1] page 12
>   - [2] slides 62..63

Turning on writer preference doesn't help if recursive rdlocks are
allowed, which POSIX requires: a thread that already holds the rdlock and
tries to re-acquire it blocks behind the waiting writer, while the writer
waits for that reader to release, so the two deadlock.

> Torvald Riegel proposed:
> > You control the incoming work.
> > ...
> > you can easily throttle that
> When throttling is used as an approach to avoid disaster, someone (sys admin
> or user) has to monitor dynamic parameters. This is time-consuming and cannot
> be done for programs that run on remote build machines (Kamil Dudka's point).

It seems it's still not clear what I mean by throttling.  Maybe the term
is confusing for you.  This isn't at all about preventing disasters or
anything like that.

There are parallel/concurrent compute problems that need fairness or
no-starvation guarantees, and there are problems that do not need that.
For example, all kinds of fork/join parallelism do not because
sequential execution is an allowed execution.  That's the most common
case of parallelism, I'd say.
Anything that builds up a dependency structure automatically typically
doesn't need fairness either, because in the end you'll be blocked on the
single task that hasn't been executed yet.  Such problems "throttle"
automatically.

If your problem should not happen to have that characteristic, which is
not the common case I'd say, then it's easy to turn it into something
that has the characteristic: introduce a dependency that represents the
kind of fairness you require.  If your fairness requirement is that a
writer must complete after at most 1000 readers, but no other writer,
have completed, make that a dependency in your program.
Doing that has the additional benefit that you'll see whether this is
actually possible for the workload you have at hand.  For example, if
you allow recursive readers, you'll see that this is not possible unless
you track whether a reader has acquired a specific rdlock as a reader
already, or the user provides this information to you.  You can see that
by the fact that your fairness dependency would introduce a deadlock
when combined with the other dependencies.

> Therefore, if it can be avoided, I prefer to avoid it.
> I'm using throttling already as a workaround in the following situations:
> * I have a laptop that, when it has high load for 5 minutes, reboots.
>   (The Linux OS notices that the temperature goes too high and prefers to
>   shutdown, than to damage the hardware.)
>   Workaround: Constantly monitor the load, stop processes.
> * When I 'rm -rf' more than ca. 500 GB at once on a disk of my NAS, it
>   just reboots. It's Linux with ext2fs and some ftruncate() bug.
>   Workaround: Delete directories in chunks of max. 100 GB at once.
> * When I create and use databases (Apache Derby or DB2) on a particular SSD
>   disk, the machine reboots. The SSD is buggy hardware - proved by the fact
>   that it works on a different SSD in the same machine.
>   Workaround: Throttling is impossible here - I don't control the program that
>   creates the databases.
> * When I move a 3 GB file from an internal disk (on Mac OS X 10.5) to an SMB
>   file system, the machine becomes unusable for ca. 15 minutes, because of
>   the memory management algorithm in the OS.
>   Workaround: Stop the 'mv' command after 1 GB and restart it soon afterwards.
> * When I download files for more than 2 minutes, it saturates my DSL 
> connection
>   bandwidth, and some programs lose their internet connection because they
>   cannot access DNS any more.
>   Workaround: Use wget with the --limit option.
> You see, throttling becomes necessary when engineers have either not
> constructed robust solutions or not tested them for some supposedly
> "extreme" parameters.

I hope this isn't meant to be a serious argument.  Maybe it comes from
the same confusion over my use of the term throttling.
