From: raid5atemyhomework
Subject: [bug#45734] [PATCH v2] gnu: update zfs.
Date: Mon, 11 Jan 2021 10:23:16 +0000

For the patch to 2.0.1, I did the following testing:

* Applied the patches from https://issues.guix.gnu.org/45692,
https://issues.guix.gnu.org/45722, and https://issues.guix.gnu.org/45723.
  * Created a new VM image that includes ZFS using `(service zfs-service-type 
...)`, though with Linux-libre 5.4.
    * Expanded this image by 10G, created a new partition, and created a ZFS 
pool there with a ZFS dataset; wrote some text files and downloaded the ZFS 
source release into the ZFS filesystem.  Then rebooted the VM and checked that 
the ZFS filesystem was still automounted and its contents were as expected.
    * Created three extra disk images and booted the same image with the extra 
disks attached.  Added two of them as a mirrored SLOG and the third as an 
L2ARC, then rebooted and checked that the pool still mounted fine.
    * Started the VM again with the extra disk images rearranged.  Checked the 
ZFS pool status; the L2ARC and SLOG devices were correctly identified despite 
the rearrangement.  Did a few more rearrangements and checked that ZFS assigned 
each device to its correct role.
    * Started the VM again with one of the mirrored SLOG devices missing.  
Checked the ZFS pool status and confirmed that the SLOG mirror was degraded but 
the pool was still up.

So all of it seems to be working fine so far, and I'm mostly satisfied with 
this.  I'll probably need to add more code to make it work closer to how ZFS 
works on other systems (the current patches scan all devices rather than use 
`/etc/zfs/zpool.cache`, because I don't really understand how 
`/etc/zfs/zpool.cache` works).

With all those patches, ZFS on Guix supports:

* Automatic importing and mounting of ZFS filesystems.  This does not use 
`/etc/zfs/zpool.cache`; the cache would theoretically speed up importing when 
the computer has dozens or hundreds of disks, and would protect a setting where 
someone with physical access could override sensitive mount points by plugging 
in a USB device whose pool gets auto-imported (and auto-mounted) at boot by 
ZFS.
* `/home` on ZFS.
* L2ARC and SLOG.
* ZVOLs, accessible under the `/dev/zvol/*` hierarchy.
* Pools on LUKS containers, by adding them as dependencies of the 
`zfs-service-type` (untested).
* `file-system` declarations mounted on ZVOLs (untested).
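From the shell, the supported pieces look roughly like this (a hedged sketch; 
`tank` and the dataset names are placeholders, while the `/dev/zvol/` layout is 
what ZFS itself provides):

```shell
# Import pools by scanning devices (no /etc/zfs/zpool.cache involved).
zpool import -a

# /home as a ZFS dataset.
zfs create -o mountpoint=/home tank/home

# A ZVOL, exposed as a block device under /dev/zvol/.
zfs create -V 10G tank/vol0
ls -l /dev/zvol/tank/vol0
```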

Some other stuff is not supported yet:

* `zpool.cache` file, which replaces `fstab` but is not user-editable, for 
faster importing of ZFS pools.
* The ZFS Event Daemon.  Traditionally this is configured by having the 
sysadmin manage an `/etc/zfs/zed.d/` directory; some bits of ZFS automation are 
provided by the ZFS release, and the sysadmin is supposed to either symlink to 
those, copy and modify them, remove them, or replace them with their own 
scripts.
* ZFS sharing over the network.  I probably need to look at how NFS and Samba 
are started on Guix and then figure this part out; NFS and Samba need to get 
started first, but I'm not sure how ZFS talks to them to get its filesystems 
shared.
* `/` on ZFS. Probably we need to have some kind of 
`initrd-kernel-module-service-type`, 
`initrd-kernel-module-loader-service-type`, and have kernel module parameter 
configuration passed in either by the kernel command line, or by the early 
`initrd` module loader (which isn't modprobe, by the way).
* Mounting in "legacy" mode where datasets are declared via `(file-system ...)` 
declarations.  Actually https://issues.guix.gnu.org/45643#3 has a patch for 
this as well.
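For the "legacy" mode mentioned last, the dataset side would look roughly like 
this (a sketch; `tank/data` and the mount point are placeholders — with 
`mountpoint=legacy` ZFS stops auto-mounting the dataset, which must instead be 
mounted explicitly, e.g. from a `(file-system ...)` declaration):

```shell
# Switch a dataset to legacy mounting; ZFS no longer manages its mount point.
zfs set mountpoint=legacy tank/data

# It can then be mounted the way a (file-system ...) declaration would do it.
mount -t zfs tank/data /mnt/data
```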
