guix-devel

Re: Ensuring daemon interop, maybe also store layer standardization?


From: Florian Klink
Subject: Re: Ensuring daemon interop, maybe also store layer standardization?
Date: Thu, 16 Nov 2023 22:52:48 +0200

On 23-11-16 14:23:29, John Ericson wrote:
> On Thu, Nov 16, 2023, at 10:14 AM, Ludovic Courtès wrote:
> I have been talking a lot to Florian about these things too. Long ago I
> emailed Ludo and some others about the IPFS & Nix work. Since then RFC
> 133 was accepted,
> https://github.com/NixOS/rfcs/blob/master/rfcs/0133-git-hashing.md,
> which prepares the way for a lot of that, and more recently
> https://github.com/NixOS/nix/pull/9294 (merged) has begun the
> implementation.
>
> It sounds like this ERIS plan and Tvix's content addressing are fairly
> similar --- use improved content addressing "under the hood". The
> thing I have been working on is trying to expose content addressing all
> the way into the store path for end-to-end trustlessness.

tvix-castore uses /only/ the BLAKE3 digest, and thanks to BLAKE3's
verified streaming properties, chunking parameters etc. don't "bleed"
into the to-be-signed data structure (and its content hash), but can be
delivered (and verified) out-of-band, without having to fetch the entire
data.
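The chunking-independence part can be illustrated with a short sketch. This uses BLAKE2b from Python's standard library purely as a stand-in for BLAKE3 (which isn't in the stdlib), and it only shows the basic property that the digest depends on the content alone, not on how the content was fed in; it does not capture BLAKE3's verified-streaming tree structure:

```python
import hashlib
import os

data = os.urandom(1 << 16)  # 64 KiB of arbitrary content

def digest_chunked(blob: bytes, chunk_size: int) -> str:
    """Feed the blob to the hasher in fixed-size chunks."""
    h = hashlib.blake2b(digest_size=32)
    for i in range(0, len(blob), chunk_size):
        h.update(blob[i:i + chunk_size])
    return h.hexdigest()

# The digest depends only on the content, not on the chunking used
# while feeding it in -- so chunking need not be part of the identifier.
whole = hashlib.blake2b(data, digest_size=32).hexdigest()
assert digest_chunked(data, 4096) == whole
assert digest_chunked(data, 1000) == whole
```

With BLAKE3 the stronger claim also holds: subranges can be verified against the same root digest without fetching the whole blob, which is what makes out-of-band chunk delivery safe.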

That's also one of the main reasons why Iroh (the IPFS Rust "rewrite")
entirely ditched IPLD and all other hash functions in favor of raw
blake3-hashed blobs [1], and why Iroh, S5 [2] and Tvix all use them to
address blobs.

Among these projects there are different areas of priority and focus,
but strictly speaking they are all just different transport protocols
for the same identifier (blake3 digests of raw contents) and could
interop with each other.

For file system structures, tvix-castore also defines an encoding
similar to git trees, but using the blake3 digest of (a slightly more
sane) serialization as an identifier.
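For comparison, git's own tree encoding (the baseline the castore encoding is measured against) can be reproduced in a few lines. This is an illustrative sketch, not spec-complete: real git sorts tree entries as if directory names had a trailing slash, and trees can nest, both of which are glossed over here:

```python
import hashlib

def git_object_hash(obj_type: bytes, body: bytes) -> bytes:
    # git hashes every object as "<type> <size>\0" followed by the body.
    return hashlib.sha1(obj_type + b" %d\0" % len(body) + body).digest()

def git_tree_body(entries) -> bytes:
    # entries: (mode, name, 20-byte binary object id); git concatenates
    # "<mode> <name>\0<oid>" with no separator after the raw oid.
    return b"".join(
        mode + b" " + name + b"\0" + oid
        for mode, name, oid in sorted(entries, key=lambda e: e[1])
    )

blob_id = git_object_hash(b"blob", b"hello\n")
tree_id = git_object_hash(
    b"tree", git_tree_body([(b"100644", b"hello.txt", blob_id)])
)
```

The mix of ASCII modes, NUL separators, and raw binary oids is the "slightly less sane" part that tvix-castore's serialization cleans up, while keeping the same Merkle-tree idea.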

ERIS defines its own addressing scheme, using "ERIS capability URNs":
the identifier is derived not only from the contents, but also from the
chunk size (two modes) and a convergence secret.
Internally, it constructs its own Merkle tree using ChaCha20 and
Blake2b-256.
https://issues.guix.gnu.org/52555 uses it with the larger chunk size and
a null convergence secret.
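A rough sketch of how the convergence secret enters the identifier: per my reading of the ERIS spec, a block's encryption key is a keyed Blake2b-256 of the block content, with the convergence secret as the key (the actual encoding then encrypts with ChaCha20 and builds a Merkle tree over the encrypted blocks, which this sketch omits entirely):

```python
import hashlib

NULL_SECRET = bytes(32)  # the "null convergence secret" mentioned above

def block_key(block: bytes, convergence_secret: bytes = NULL_SECRET) -> bytes:
    # Keyed Blake2b-256 of the block content. With the null secret this
    # is fully convergent: identical content always yields the identical
    # key (and hence identifier), enabling deduplication across users.
    return hashlib.blake2b(
        block, digest_size=32, key=convergence_secret
    ).digest()

k1 = block_key(b"some content")
k2 = block_key(b"some content")
k3 = block_key(b"some content", convergence_secret=b"\x01" * 32)
# k1 == k2, but k3 differs: the secret is baked into the identifier.
```

This is also why the identifier depends on more than the raw bytes: two ERIS URNs for the same content differ if the chunk size or convergence secret differs.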

> I think for build artifacts, either way is fine. But for source code
> the end-to-end trustlessness is nice for treating things like Software
> Heritage as a substituter of last resort. (I made
> https://docs.softwareheritage.org/devel/swh-web/uri-scheme-api.html#get--api-1-raw-(swhid)-
> spit out raw git objects for
> https://blog.obsidian.systems/software-heritage-bridge/, but the
> pipelining latency issues mean something like SWH's "Vault API" is
> probably better.)

I hope some day SWH will also support looking up blobs not just by their
SWHID (which is just a lot of lipstick on how git encodes blobs), but
also by these nice new hash functions of the raw contents. That would
allow SWH to be plugged in directly as a "last resort" source, without
having to keep carrying along and calculating additional git-based
hashes / SWHIDs to prepare for the eventuality of having to reach out
to it.
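To make the difference concrete: a content SWHID embeds the git blob hash, i.e. SHA-1 over the bytes *plus* git's framing header, so it cannot be derived from a raw content digest alone -- you need the original bytes to add the framing. BLAKE2b again stands in for BLAKE3, since the latter isn't in Python's stdlib:

```python
import hashlib

content = b"int main(void) { return 0; }\n"

# SWHID-style identifier: SHA-1 over git's "blob <size>\0" framing
# plus the bytes.
sha1_git = hashlib.sha1(b"blob %d\0" % len(content) + content).hexdigest()
swhid = f"swh:1:cnt:{sha1_git}"

# Raw content hash: covers only the bytes, no framing -- the kind of
# identifier Iroh/S5/tvix-castore use (with BLAKE3 rather than BLAKE2b).
raw_digest = hashlib.blake2b(content, digest_size=32).hexdigest()
```

A store that only knows `raw_digest` has no way to compute `swhid` without fetching the content first, which is exactly the extra bookkeeping being complained about above.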

> That's a big wall of text, but glad it's all out there. Stuff is really
> cooking in all our ecosystems these days, and I'm very excited for
> where things are going!

Definitely!

--
flokli

[1]: https://github.com/n0-computer/iroh/discussions/707
[2]: https://github.com/n0-computer/iroh/discussions/709


