bug#39774: guix incorrectly says "No space left on device"


From: Jesse Gibbons
Subject: bug#39774: guix incorrectly says "No space left on device"
Date: Mon, 24 Feb 2020 20:01:45 -0700
User-agent: Evolution 3.32.4

I have a laptop with two drives. A few days ago, when I ran `df -h`, it
output the following:
Filesystem      Size  Used Avail Use% Mounted on
none             16G     0   16G   0% /dev
/dev/sdb1       229G  189G   29G  87% /
/dev/sda1       458G  136G  299G  32% /gnu/store
tmpfs            16G     0   16G   0% /dev/shm
none             16G   64K   16G   1% /run/systemd
none             16G     0   16G   0% /run/user
cgroup           16G     0   16G   0% /sys/fs/cgroup
tmpfs           3.2G   16K  3.2G   1% /run/user/983
tmpfs           3.2G   60K  3.2G   1% /run/user/1001

As you can see, /dev/sda1 is the drive mounted on /gnu/store.
Everything in the store is written to it, and it has plenty of space
available.

Guix sometimes says there is "No space left on device". This always
happens when I try `guix gc --optimize`, and it sometimes happens when I
call `guix pull` or `guix upgrade`. When `guix pull` or `guix upgrade`
fails with this message, I can free up space by deleting ~/.cache and
emptying my trash, and then the command succeeds.
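
For reference, this is roughly the cleanup I do when that happens (a
sketch; I am assuming the trash lives in the usual XDG location,
~/.local/share/Trash):

# delete the user cache
rm -rf ~/.cache
# empty the trash (XDG default path; adjust if yours differs)
rm -rf ~/.local/share/Trash/files ~/.local/share/Trash/info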


Today I also saw this happen while trying to upgrade a large profile.
Guix said it could not build anything because there was no disk space
left, even after I had cleaned /dev/sdb1 down to 40% usage. It finally
recognized the free disk space when I ran `guix gc`, which deleted a few
of the dependencies needed for the upgrades. But it did not take long to
trigger this bug again. Here is the new output of `df -h`:

Filesystem      Size  Used Avail Use% Mounted on
none             16G     0   16G   0% /dev
/dev/sdb1       229G   86G  131G  40% /
/dev/sda1       458G  182G  253G  42% /gnu/store
tmpfs            16G     0   16G   0% /dev/shm
none             16G   80K   16G   1% /run/systemd
none             16G     0   16G   0% /run/user
cgroup           16G     0   16G   0% /sys/fs/cgroup
tmpfs           3.2G   24K  3.2G   1% /run/user/983
tmpfs           3.2G   12K  3.2G   1% /run/user/1000
tmpfs           3.2G   60K  3.2G   1% /run/user/1001

Any clues why this happens and what can be done to fix it? Could it be
related to the fact that /dev/sdb1 is only 229G, while the total space
used on / and /gnu/store combined is more than that?
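
(For reference, the arithmetic behind that question, using the numbers
from the two `df -h` listings above: 189G + 136G = 325G in the first
listing and 86G + 182G = 268G in the second; both sums exceed the 229G
capacity of /dev/sdb1.)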

-Jesse