
bug#52516: closed (Avoiding race condition between partprobe and getting PARTUUID using lsblk?)


From: GNU bug Tracking System
Subject: bug#52516: closed (Avoiding race condition between partprobe and getting PARTUUID using lsblk?)
Date: Wed, 15 Dec 2021 16:56:03 +0000

Your message dated Wed, 15 Dec 2021 08:54:50 -0800
with message-id <Ybod2uWIcb7Q5fB3@ohop.brianlane.com>
and subject line Re: bug#52516: Avoiding race condition between partprobe and getting PARTUUID using lsblk?
has caused the debbugs.gnu.org bug report #52516,
regarding Avoiding race condition between partprobe and getting PARTUUID using lsblk?
to be marked as done.

(If you believe you have received this mail in error, please contact
help-debbugs@gnu.org.)


-- 
52516: http://debbugs.gnu.org/cgi/bugreport.cgi?bug=52516
GNU Bug Tracking System
Contact help-debbugs@gnu.org with problems
--- Begin Message ---
Subject: Avoiding race condition between partprobe and getting PARTUUID using lsblk?
Date: Wed, 15 Dec 2021 13:12:43 +0000
Hi.

So I've been struggling a bit with optimizing applications that call
lsblk right after partprobe to retrieve the PARTUUID of partitions
newly created with parted.

Assuming the following loop (sketched in shell below):
* partprobe
* lsblk
* if !result: sleep + loop
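
A minimal shell sketch of that loop (the device paths /dev/sda and
/dev/sda1 are hypothetical placeholders for the actual disk and
partition):

    while true; do
        partprobe /dev/sda
        # -n: suppress the header, -o PARTUUID: print only that column
        partuuid=$(lsblk -no PARTUUID /dev/sda1 2>/dev/null)
        [ -n "$partuuid" ] && break
        sleep 0.1
    done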

It feels as if every partprobe (directly or indirectly) flushes a
cache/struct of information somewhere, which makes lsblk "fail" to
retrieve certain information if executed too quickly afterwards. A
common approach to solving this is adding a delay between partprobe
and lsblk, something I wanted to avoid by putting the sleep at the
very end of the loop and instead doing many quick iterations until
the information is available. I could also put partprobe before the
loop and keep looping until the information becomes available, but
that poses an issue if the disk operation (parted) completes after
partprobe has performed its tasks, leaving the loop in a dead state
until a new partprobe is called (or that seems to be the issue, at
least).

My question boils down to:
 1. Is there a way to get an indication of when partprobe (or the
kernel) has completed its task of populating the partition
information?
 2. Is there a way to tell partprobe/the kernel not to clear the old
struct before the new information is available (leaving the "old
information" intact until the new exists, rather than wiping it and
then repopulating)?

If neither of the above is possible, I would like to propose them as
features, as it would greatly help to verify when disk operations are
100% complete. I don't mind optionally blocking applications/scripts
until parted is complete, since I would like to avoid relying on the
parted exit code if it is not a guarantee that the process has
completed in this case.

Best wishes,
Anton Hvornum



--- End Message ---
--- Begin Message ---
Subject: Re: bug#52516: Avoiding race condition between partprobe and getting PARTUUID using lsblk?
Date: Wed, 15 Dec 2021 08:54:50 -0800
On Wed, Dec 15, 2021 at 01:12:43PM +0000, Anton Hvornum wrote:

[snip]

> My question boils down to:
>  1. Is there a way to get an indication of when partprobe (or the
> kernel) has completed its task of populating the partition
> information?

udevadm settle should help. The problem is that partprobe tells the
kernel about all the partitions, the kernel then tells udev about the
changes, and then udev updates the device nodes. So you cannot depend
on anything being stable until udev has finished.
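
For example, a minimal sketch (the device paths are hypothetical):

    partprobe /dev/sda              # ask the kernel to re-read the partition table
    udevadm settle                  # block until the udev event queue is empty
    lsblk -no PARTUUID /dev/sda1    # now the PARTUUID should be available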

>  2. Is there a way to tell partprobe/the kernel not to clear the old
> struct before the new information is available (leaving the "old
> information" intact until the new exists, rather than wiping it and
> then repopulating)?

No. It has to refresh everything, otherwise some parts of it might get
out of sync.

> If neither of the above is possible, I would like to propose them as
> features, as it would greatly help to verify when disk operations
> are 100% complete. I don't mind optionally blocking
> applications/scripts until parted is complete, since I would like to
> avoid relying on the parted exit code if it is not a guarantee that
> the process has completed in this case.

There's not much you can do other than wait for udev, or check to make
sure device nodes you expect to be present are there. We used to hit
problems like this in the parted test suite all the time, until we added
loops to wait for the partitions to appear.
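
Something along these lines (again with hypothetical device paths):

    partprobe /dev/sda
    udevadm settle
    # Poll for the expected device node, for up to ~5 seconds
    for i in $(seq 1 50); do
        [ -b /dev/sda1 ] && break
        sleep 0.1
    done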

Good Luck!

Brian

-- 
Brian C. Lane (PST8PDT) - weldr.io - lorax - parted - pykickstart



--- End Message ---
