From 1fff4187869578546a28f317943424e89f9050ca Mon Sep 17 00:00:00 2001
From: Henrique Dias
Date: Wed, 20 Sep 2023 13:25:24 +0200
Subject: [PATCH 01/13] chore: move UNIXFS.md (preserve history)
---
UNIXFS.md => src/architecture/unixfs.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename UNIXFS.md => src/architecture/unixfs.md (100%)
diff --git a/UNIXFS.md b/src/architecture/unixfs.md
similarity index 100%
rename from UNIXFS.md
rename to src/architecture/unixfs.md
From 86b93cf01d11628b89c8094e830cdfe91fe05b7a Mon Sep 17 00:00:00 2001
From: Henrique Dias
Date: Wed, 20 Sep 2023 13:26:31 +0200
Subject: [PATCH 02/13] chore: add UNIXFS.md to link to new website
---
UNIXFS.md | 3 +++
1 file changed, 3 insertions(+)
create mode 100644 UNIXFS.md
diff --git a/UNIXFS.md b/UNIXFS.md
new file mode 100644
index 000000000..00444fe49
--- /dev/null
+++ b/UNIXFS.md
@@ -0,0 +1,3 @@
+# UnixFS
+
+Moved to https://specs.ipfs.tech/architecture/unixfs/
From 97abffc67a5ad8a169babad93d436cfef55e9fae Mon Sep 17 00:00:00 2001
From: Jorropo
Date: Mon, 10 Oct 2022 17:05:17 +0200
Subject: [PATCH 03/13] docs: Write UNIXFSv1 spec
---
src/architecture/unixfs.md | 382 +++++++++++++++++++++++++++++++------
1 file changed, 324 insertions(+), 58 deletions(-)
diff --git a/src/architecture/unixfs.md b/src/architecture/unixfs.md
index a53c7af2c..4130ac6fc 100644
--- a/src/architecture/unixfs.md
+++ b/src/architecture/unixfs.md
@@ -7,9 +7,7 @@
**Abstract**
-UnixFS is a [protocol-buffers](https://developers.google.com/protocol-buffers/) based format for describing files, directories, and symlinks in IPFS. The current implementation of UnixFS has grown organically and does not have a clear specification document. See [“implementations”](#implementations) below for reference implementations you can examine to understand the format.
-
-Draft work and discussion on a specification for the upcoming version 2 of the UnixFS format is happening in the [`ipfs/unixfs-v2` repo](https://github.com/ipfs/unixfs-v2). Please see the issues there for discussion and PRs for drafts. When the specification is completed there, it will be copied back to this repo and replace this document.
+UnixFS is a [protocol-buffers](https://developers.google.com/protocol-buffers/) based format for describing files, directories, and symlinks as merkle-dags in IPFS.
## Table of Contents
@@ -29,19 +27,41 @@ Draft work and discussion on a specification for the upcoming version 2 of the U
- [Side trees](#side-trees)
- [Side database](#side-database)
-## Implementations
+## How to read a Node
-- JavaScript
- - Data Formats - [unixfs](https://github.com/ipfs/js-ipfs-unixfs)
- - Importer - [unixfs-importer](https://github.com/ipfs/js-ipfs-unixfs-importer)
- - Exporter - [unixfs-exporter](https://github.com/ipfs/js-ipfs-unixfs-exporter)
-- Go
- - [`ipfs/go-ipfs/unixfs`](https://github.com/ipfs/go-ipfs/tree/b3faaad1310bcc32dc3dd24e1919e9edf51edba8/unixfs)
- - Protocol Buffer Definitions - [`ipfs/go-ipfs/unixfs/pb`](https://github.com/ipfs/go-ipfs/blob/b3faaad1310bcc32dc3dd24e1919e9edf51edba8/unixfs/pb/unixfs.proto)
+To read a node, first get a CID. This is what we will decode.
+
+To recap, every [CID](https://github.com/multiformats/cid) includes:
+1. A [multicodec](https://github.com/multiformats/multicodec), also called a codec.
+1. A [Multihash](https://github.com/multiformats/multihash) used to specify a hashing algorithm, the hashing parameters and the hash digest.
+
+The first step is to get the block, that is, the actual bytes which, when hashed using the hash function specified in the multihash, give you the same multihash value back.
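As a non-normative sketch, this verification step can be written in Python as follows (assuming sha2-256 as the multihash function; a real implementation must dispatch on the multihash code):

```python
import hashlib

def verify_block(block: bytes, expected_digest: bytes) -> bool:
    # Hash the block with the function named in the multihash
    # (sha2-256 assumed here) and compare the digests.
    return hashlib.sha256(block).digest() == expected_digest

digest = hashlib.sha256(b"hello").digest()
print(verify_block(b"hello", digest))     # a matching block verifies
print(verify_block(b"tampered", digest))  # any other bytes do not
```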
+
+### Multicodecs
-## Data Format
+With UnixFS we deal with two codecs, which are decoded differently:
+- `Raw`, used for single-block files
+- `dag-pb`, which can represent any kind of node
-The UnixfsV1 data format is represented by this protobuf:
+#### `Raw` blocks
+
+The simplest nodes use `Raw` encoding.
+
+They are always implicitly of type `file`.
+
+They can be recognized because their CIDs use the `Raw` codec.
+
+The file content is purely the block body.
+
+They never have any child nodes, and thus are also known as single-block files.
+
+Their sizes (both `dagsize` and `blocksize`) are the length of the block body.
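For instance, for a hypothetical `Raw` block holding the bytes `hello world` (a sketch, not normative):

```python
raw_block = b"hello world"  # the block body is the entire file content
blocksize = len(raw_block)  # 11 bytes
dagsize = len(raw_block)    # identical to blocksize for Raw nodes
```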
+
+#### `dag-pb` nodes
+
+##### Data Format
+
+The UnixfsV1 `Data` message format is represented by this protobuf:
```protobuf
message Data {
@@ -74,15 +94,235 @@ message UnixTime {
}
```
-This `Data` object is used for all non-leaf nodes in Unixfs.
+##### IPLD `dag-pb`
+
+Another very important spec for UnixFS is the [`dag-pb`](https://ipld.io/specs/codecs/dag-pb/spec/) IPLD spec:
+
+```protobuf
+message PBLink {
+ // binary CID (with no multibase prefix) of the target object
+ optional bytes Hash = 1;
+
+ // UTF-8 string name
+ optional string Name = 2;
+
+ // cumulative size of target object
+ optional uint64 Tsize = 3; // also known as dagsize
+}
+
+message PBNode {
+ // refs to other objects
+ repeated PBLink Links = 2;
+
+ // opaque user data
+ optional bytes Data = 1;
+}
+```
+
+The two schemas play together, and it is important to understand their different effects:
+- The `dag-pb` / `PBNode` protobuf is the "outside" protobuf message; in other words, it is the first message decoded. This protobuf contains the list of links and some "opaque user data".
+- The `Data` message is the "inside" protobuf message. After the "outside" `dag-pb` (also known as `PBNode`) object is decoded, `Data` is decoded from the bytes inside the `PBNode.Data` field. This contains the rest of the information.
+
+In other words, we have a serialized protobuf message stored inside another protobuf message.
+For clarity, this spec document may represent these nested protobufs as one object. In this representation, it is implied that the `PBNode.Data` field is itself protobuf-encoded.
+
+##### Different Data types
+
+`dag-pb` nodes support many different types, which can be found in `decodeData(PBNode.Data).Type`. Every type is handled differently.
+
+###### `File` type
+
+####### The _sister-lists_ `PBNode.Links` and `decodeMessage(PBNode.Data).blocksizes`
+
+The _sister-lists_ are the key point of why `dag-pb` is important for files.
+
+This allows us to concatenate smaller files together.
+
+Linked files are loaded recursively with the same process, following a DFS (Depth-First Search) order.
+
+Child nodes must be of type file (so `dag-pb` where the type is `File`, or `Raw`).
+
+For example, consider this pseudo-JSON block:
+```json
+{
+ "Links": [{"Hash":"Qmfoo"}, {"Hash":"Qmbar"}],
+ "Data": {
+ "Type": "File",
+ "blocksizes": [20, 30]
+ }
+}
+```
+
+This indicates that this file is the concatenation of the `Qmfoo` and `Qmbar` files.
+
+When reading a file represented with `dag-pb`, the `blocksizes` array gives us the size in bytes of the partial file content present in child DAGs.
+Each index in `PBNode.Links` MUST have a corresponding chunk size stored at the same index in `decodeMessage(PBNode.Data).blocksizes`.
+
+Implementers need to be extra careful to ensure the values in `Data.blocksizes` are calculated by following the definition from [Blocksize](#blocksize) section.
+
+This allows fast indexing into the file. For example, if someone is trying to read bytes 25 to 35, we can compute an offset list by summing all previous entries in `blocksizes`, then do a search to find which indexes contain the range we are interested in.
+
+For example, here the offset list would be `[0, 20]`, and thus we know we only need to download `Qmbar` to get the range we are interested in.
+
+A UnixFS parser MUST error if `blocksizes` and `Links` do not have the same length.
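The offset computation described above can be sketched in Python (non-normative; `data_len` stands for the length of the `Data.Data` field):

```python
def offsets(data_len, blocksizes):
    # Offset at which each child's content begins within this file.
    out, cursor = [], data_len
    for size in blocksizes:
        out.append(cursor)
        cursor += size
    return out

def children_for_range(data_len, blocksizes, start, end):
    # Indexes of the children whose content intersects [start, end).
    offs = offsets(data_len, blocksizes)
    return [i for i, size in enumerate(blocksizes)
            if offs[i] < end and offs[i] + size > start]

# For the example above: blocksizes [20, 30], reading bytes 25 to 35.
print(offsets(0, [20, 30]))                     # [0, 20]
print(children_for_range(0, [20, 30], 25, 35))  # [1], i.e. only Qmbar
```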
+
+####### `decodeMessage(PBNode.Data).Data`
+
+This field is an array of bytes; it is file content and is placed before the content of the links.
+
+This must be taken into account when doing offset calculations (the length of the `Data.Data` field defines the value of the zeroth element of the offset list when computing offsets).
+
+####### `PBNode.Links[].Name` with Files
+
+This field makes sense only in directory contexts and MUST be absent when creating a new file `PBNode`.
+For historic reasons, implementations parsing third-party data SHOULD accept an empty value here.
+
+If this field is present and non-empty, the file is invalid and the parser MUST error.
+
+####### `Blocksize` of a dag-pb file
+
+This is not a field present in the block directly, but rather a computable property of a `dag-pb` file, which is used in the parent node's `decodeMessage(PBNode.Data).blocksizes`.
+It is the length of the `Data.Data` field plus the sum of all the links' blocksizes.
+
+####### `decodeMessage(PBNode.Data).filesize`
+
+If present, this field MUST be equal to the `Blocksize` computation described above; otherwise the file is invalid.
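A non-normative Python sketch of the two checks above (argument names are illustrative):

```python
def compute_blocksize(data_bytes, child_blocksizes):
    # Blocksize of a dag-pb file: length of Data.Data plus all child blocksizes.
    return len(data_bytes) + sum(child_blocksizes)

def check_filesize(filesize, data_bytes, child_blocksizes):
    # filesize, if present, must equal the computed blocksize.
    if filesize is not None and filesize != compute_blocksize(data_bytes, child_blocksizes):
        raise ValueError("invalid file: filesize does not match computed blocksize")
```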
+
+####### Path resolution
+
+A file terminates a UnixFS content path.
+
+Any attempt at path resolution on a `File` type MUST error.
+
+###### `Directory` Type
+
+A directory node is a named collection of nodes.
+
+The minimum valid `PBNode.Data` field for a directory is (pseudo-json): `{"Type":"Directory"}`; other values are covered in the Metadata section.
+
+Every link in the `Links` list is an entry (child) of the directory, and the `PBNode.Links[].Name` field gives you its name.
+
+####### Link ordering
+
+The canonical sorting order is lexicographical over the names.
+
+In theory there is no reason an encoder couldn't use another ordering; however, this loses some of its meaning when mapped into most file systems today (most file systems consider directories to be unordered key-value objects).
+
+A decoder SHOULD, if it can, preserve the order of the original files in however it consumes those names.
+
+However, when an implementation decodes, modifies, then re-encodes a node, the original link order fully loses its meaning, given that there is no way to indicate which sorting was used originally.
+
+####### Path Resolution
+
+Pop the leftmost component of the path, and try to match it to one of the `Name`s in `Links`.
+
+If you find a match, remember the CID. You MUST continue your search; however, if you find a second match, you MUST error.
+
+Assuming no errors were raised, you can continue the path resolution on the remaining components, starting from the CID you popped.
+
+####### Duplicate names
+
+Duplicate names are not allowed; if two identical names are present in a directory, the decoder MUST error.
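The matching and duplicate rules above can be sketched as follows (non-normative; `links` stands for a list of `(name, cid)` pairs):

```python
def resolve_entry(links, name):
    # Scan the whole Links list: a second match means the directory is invalid.
    matches = [cid for link_name, cid in links if link_name == name]
    if len(matches) > 1:
        raise ValueError("duplicate name in directory")
    if not matches:
        raise KeyError("no such entry: " + name)
    return matches[0]
```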
-For files that are comprised of more than a single block, the 'Type' field will be set to 'File', the 'filesize' field will be set to the total number of bytes in the file (not the graph structure) represented by this node, and 'blocksizes' will contain a list of the filesizes of each child node.
+###### `Symlink` type
-This data is serialized and placed inside the 'Data' field of the outer merkledag protobuf, which also contains the actual links to the child nodes of this object.
+Symlinks MUST NOT have children.
-For files comprised of a single block, the 'Type' field will be set to 'File', 'filesize' will be set to the total number of bytes in the file and the file data will be stored in the 'Data' field.
+Their `Data.Data` field is a POSIX path that may be prepended to the currently remaining path component stack.
-## Metadata
+####### Path resolution on symlinks
+
+There is no current consensus on how pathing over symlinks should behave.
+Some implementations return symlink objects and fail if a consumer tries to follow them.
+
+Following the POSIX spec over the current unixfs path context is probably fine.
+
+###### `HAMTDirectory`
+
+These nodes are also sometimes called sharded directories; they allow splitting directories into many blocks when they are so big that they no longer fit into a single block.
+
+- `node.Data.hashType` indicates a multihash function to use to digest path components used for sharding.
+It MUST be murmur3-x64-64 (multihash `0x22`).
+- `node.Data.Data` is a bitfield; set bits indicate whether the links are part of this HAMT or leaves of the HAMT.
+The usage of this field is unclear, given that the same information can be deduced from the link names.
+- `node.Data.fanout` MUST be a power of two. It encodes the number of hash permutations that will be used at each resolution step.
+The log base 2 of the fanout indicates how wide the bitmask applied to the hash will be at each step. `fanout` MUST be between 8 and probably 65536.
+
+####### `node.Links[].Name` on HAMTs
+
+They start with an uppercase hex-encoded prefix which is `log2(fanout)` bits wide.
+
+####### Path resolution on HAMTs
+
+Steps:
+1. Take the current path component, then hash it using the multihash function indicated by `Data.hashType`.
+2. Pop the `log2(fanout)` lowest bits from the path component's hash digest, then hex-encode those bits (using 0-F, uppercase) and find the link whose name starts with this hex-encoded prefix.
+3. If the link name is exactly as long as the hex-encoded representation, follow the link and repeat step 2 with the child node and the remaining bit stack. The child node MUST be a HAMT directory, otherwise the directory is invalid.
+4. Compare the remaining part of the last name you found; if it matches the original name you were trying to resolve, you have successfully resolved a path component. Everything past the hex-encoded prefix is the name of that element (useful when listing the children of this directory).
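The per-step bucket prefix from step 2 can be sketched as follows (non-normative; `digest_int` stands for the murmur3-x64-64 digest interpreted as an integer):

```python
def hamt_prefix(digest_int, fanout):
    # log2(fanout) bits of the digest, uppercase hex-encoded.
    bits = fanout.bit_length() - 1      # log2(fanout); fanout MUST be a power of two
    bucket = digest_int & (fanout - 1)  # pop the lowest `bits` bits
    width = (bits + 3) // 4             # hex characters needed for `bits` bits
    return format(bucket, "0%dX" % width)
```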
+
+
+###### `TSize` / `DagSize`
+
+This is an optional field for links of `dag-pb` nodes. **It does not represent any meaningful information of the underlying structure**, and it has no known usage to this day (although some implementations emit it).
+
+To compute the `dagsize` of a node (which would be stored in its parents), sum the binary length of the outer dag-pb message with the dagsizes of all its children.
+
+An example of where this could be useful is as a hint for smart download clients: if you are downloading a file concurrently from two sources that have radically different speeds, it would probably be more efficient to download the bigger links from the fastest source and the smaller ones from the slowest source.
+
+
+There is no failure mode known for this field, so your implementation should be able to decode nodes where this field is wrong (not the value you expect), or partially or completely missing. This also allows a smarter encoder to give a more accurate picture (for example, by not counting duplicate blocks).
+
+### Paths
+
+Paths start with `/<CID>/` or `/ipfs/<CID>/`, where `<CID>` is a [multibase](https://github.com/multiformats/multibase)-encoded [CID](https://github.com/multiformats/cid).
+The CID encoding MUST NOT contain any `/` (`0x2F`) unicode codepoint; a CID may use a multibase encoding that has `/` in its alphabet only if the encoded CID does not contain `/` once encoded.
+
+Everything following the CID is a collection of path components (some bytes) separated by `/` (`0x2F`), read from left to right.
+This is inspired by POSIX paths.
+
+- Components MUST NOT contain the `/` unicode codepoint, because it would break the path into two components.
+- Components SHOULD be valid UTF-8.
+- Components are case sensitive.
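A minimal, non-normative sketch of splitting such a path into its CID and components (real code must also multibase-decode and validate the CID):

```python
def split_content_path(path):
    # Accepts "/ipfs/<CID>/..." or "/<CID>/..." and returns (cid, components).
    if path.startswith("/ipfs/"):
        path = path[len("/ipfs/"):]
    else:
        path = path.lstrip("/")
    cid, _, rest = path.partition("/")
    return cid, [c for c in rest.split("/") if c != ""]
```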
+
+#### Escaping
+
+The `\` character may be expected to trigger an escape sequence.
+
+This might be a thing, but escaping is broken and inconsistent across current implementations.
+So until we agree on a new spec for this, you SHOULD NOT use any escape sequences or non-ASCII characters.
+
+#### Relative path components
+
+These path components must be resolved before trying to work on the path.
+
+- `.` points to the current node; these path components must be removed.
+- `..` points to the parent; they must be removed first to last, and when you remove a `..` you also remove the previous component on its left. If there is no component on the left to remove, this is an attempt at out-of-bounds path resolution, and you MUST error.
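These rules can be sketched as follows (non-normative):

```python
def resolve_relative(components):
    out = []
    for c in components:
        if c == ".":
            continue            # self reference: simply dropped
        elif c == "..":
            if not out:
                # no component on the left: out-of-bounds resolution
                raise ValueError("out-of-bounds path resolution")
            out.pop()           # parent reference: drop the component on the left
        else:
            out.append(c)
    return out
```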
+
+#### Restricted names
+
+These names SHOULD NOT be used:
+
+- The `.` string. This represents the self node in POSIX pathing.
+- The `..` string. This represents the parent node in POSIX pathing.
+- The empty string. We don't actually know the failure mode for this, but it really should not be a thing.
+- Any string containing a NUL (`0x00`) byte. This byte is often used to signify string termination in many systems (such as most C-compatible systems), and many unix file systems don't accept this character in path components.
+
+### Glossary
+
+- Node, Block
+ Node is a term from graph theory; it is the smallest unit present in the graph.
+ Due to how UnixFS works, there is a 1-to-1 mapping between nodes and blocks.
+- File
+ A file is a container for an arbitrarily sized amount of bytes.
+ Files can be said to be single-block or multi-block; in the latter case they are the concatenation of multiple child files.
+- Directory, Folder
+ A named collection of child nodes.
+- HAMT Directory
+ This is a [Hashed-Array-Mapped-Trie](https://en.wikipedia.org/wiki/Hash_array_mapped_trie) data structure representing a Directory, those may be used to split directories into multiple blocks when they get too big, and the list of children does not fit in a single block.
+- Symlink
+ This represents a POSIX Symlink.
+
+### Metadata
UnixFS currently supports two optional metadata fields:
@@ -112,45 +352,9 @@ UnixFS currently supports two optional metadata fields:
- When no `mtime` is specified or the resulting `UnixTime` is negative: implementations must assume `0`/`1970-01-01T00:00:00Z` ( note that such values are not merely academic: e.g. the OpenVMS epoch is `1858-11-17T00:00:00Z` )
- When the resulting `UnixTime` is larger than the targets range ( e.g. 32bit vs 64bit mismatch ) implementations must assume the highest possible value in the targets range ( in most cases that would be `2038-01-19T03:14:07Z` )
-### Deduplication and inlining
-
-Where the file data is small it would normally be stored in the `Data` field of the UnixFS `File` node.
-
-To aid in deduplication of data even for small files, file data can be stored in a separate node linked to from the `File` node in order for the data to have a constant [CID] regardless of the metadata associated with it.
-
-As a further optimization, if the `File` node's serialized size is small, it may be inlined into its v1 [CID] by using the [`identity`](https://github.com/multiformats/multicodec/blob/master/table.csv) [multihash].
-
-## Importing
-
-Importing a file into unixfs is split up into two parts. The first is chunking, the second is layout.
-
-### Chunking
-
-Chunking has two main parameters, chunking strategy and leaf format.
-
-Leaf format should always be set to 'raw', this is mainly configurable for backwards compatibility with earlier formats that used a Unixfs Data object with type 'Raw'. Raw leaves means that the nodes output from chunking will be just raw data from the file with a CID type of 'raw'.
-
-Chunking strategy currently has two different options, 'fixed size' and 'rabin'. Fixed size chunking will chunk the input data into pieces of a given size. Rabin chunking will chunk the input data using rabin fingerprinting to determine the boundaries between chunks.
-
-
-### Layout
-
-Layout defines the shape of the tree that gets built from the chunks of the input file.
-
-There are currently two options for layout, balanced, and trickle.
-Additionally, a 'max width' must be specified. The default max width is 174.
-
-The balanced layout creates a balanced tree of width 'max width'. The tree is formed by taking up to 'max width' chunks from the chunk stream, and creating a unixfs file node that links to all of them. This is repeated until 'max width' unixfs file nodes are created, at which point a unixfs file node is created to hold all of those nodes, recursively. The root node of the resultant tree is returned as the handle to the newly imported file.
-
-If there is only a single chunk, no intermediate unixfs file nodes are created, and the single chunk is returned as the handle to the file.
-
-## Exporting
-
-To read the file data out of the unixfs graph, perform an in order traversal, emitting the data contained in each of the leaves.
-
## Design decision rationale
-### Metadata
+### `mtime` and `mode` metadata support in UnixFSv1.5
Metadata support in UnixFSv1.5 has been expanded to increase the number of possible use cases. These include rsync and filesystem based package managers.
@@ -227,7 +431,69 @@ Fractional values are effectively a random number in the range 1 ~ 999,999,999.
2^28 nanoseconds ( 268,435,456 ) in most cases. Therefore, the fractional part is represented as a 4-byte
`fixed32`, [as per Google's recommendation](https://developers.google.com/protocol-buffers/docs/proto#scalar).
-[multihash]: https://tools.ietf.org/html/draft-multiformats-multihash-00
-[CID]: https://docs.ipfs.io/guides/concepts/cid/
+## References
+
+[multihash]: https://tools.ietf.org/html/draft-multiformats-multihash-05
+[CID]: https://github.com/multiformats/cid/
[Bitswap]: https://github.com/ipfs/specs/blob/master/BITSWAP.md
-[MFS]: https://docs.ipfs.io/guides/concepts/mfs/
+
+# Notes for Implementers
+
+This section and included subsections are not authoritative.
+
+## Implementations
+
+- JavaScript
+ - Data Formats - [unixfs](https://github.com/ipfs/js-ipfs-unixfs)
+ - Importer - [unixfs-importer](https://github.com/ipfs/js-ipfs-unixfs-importer)
+ - Exporter - [unixfs-exporter](https://github.com/ipfs/js-ipfs-unixfs-exporter)
+- Go
+ - Protocol Buffer Definitions - [`ipfs/go-unixfs/pb`](https://github.com/ipfs/go-unixfs/blob/707110f05dac4309bdcf581450881fb00f5bc578/pb/unixfs.proto)
+ - [`ipfs/go-unixfs`](https://github.com/ipfs/go-unixfs/)
+ - `go-ipld-prime` implementation [`ipfs/go-unixfsnode`](https://github.com/ipfs/go-unixfsnode)
+- Rust
+ - [`iroh-unixfs`](https://github.com/n0-computer/iroh/tree/b7a4dd2b01dbc665435659951e3e06d900966f5f/iroh-unixfs)
+ - [`unixfs-v1`](https://github.com/ipfs-rust/unixfsv1)
+
+## Simple `Raw` Example
+
+In this example, we will build a `Raw` file with the string `test` as its content.
+
+1. First hash the data:
+```console
+$ echo -n "test" | sha256sum
+9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08 -
+```
+
+2. Add the CID prefix:
+```
+f    the multibase prefix; needed because we are working with a hex CID (omitted for binary CIDs)
+ 01  the CID version, here 1
+ 55  the codec, here we MUST use Raw because this is a Raw file
+ 12  the hashing function used, here sha2-256
+ 20  the digest length, 32 bytes
+ 9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08 the digest we computed earlier
+```
+
+3. Profit
+Assuming we stored this block in some implementation of our choice which makes it accessible to our client, we can try to decode it:
+```console
+$ ipfs cat f015512209f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08
+test
+```
+
+
+### Offset list
+
+The offset list isn't the only way to use blocksizes and reach a correct implementation, but it is a simple canonical one. Python pseudocode to compute it looks like this:
+```python
+def offsetlist(node):
+ unixfs = decodeDataField(node.Data)
+ if len(node.Links) != len(unixfs.Blocksizes):
+ raise "unmatched sister-lists" # error messages are implementation details
+
+ cursor = len(unixfs.Data) if unixfs.Data else 0
+ return [cursor] + [cursor := cursor + size for size in unixfs.Blocksizes[:-1]]
+```
+
+This tells you at which offset inside this node the child at the corresponding index starts to cover content (using `[x, y)` ranges).
From c4e812a74f7ea44438d9359f1b4682f63e8b7393 Mon Sep 17 00:00:00 2001
From: Henrique Dias
Date: Wed, 20 Sep 2023 13:36:52 +0200
Subject: [PATCH 04/13] chore: editorial fixes
---
src/architecture/unixfs.md | 155 ++++++++++++++++++++-----------------
1 file changed, 83 insertions(+), 72 deletions(-)
diff --git a/src/architecture/unixfs.md b/src/architecture/unixfs.md
index 4130ac6fc..2d6dec8f2 100644
--- a/src/architecture/unixfs.md
+++ b/src/architecture/unixfs.md
@@ -1,32 +1,43 @@
-# ![](https://img.shields.io/badge/status-wip-orange.svg?style=flat-square) UnixFS
-
-**Author(s)**:
-- NA
-
-* * *
-
-**Abstract**
+---
+title: UnixFS
+description: >
+ UnixFS is a Protocol Buffers-based format for describing files, directories,
+ and symlinks as DAGs in IPFS.
+date: 2022-10-10
+maturity: reliable
+editors:
+ - name: David Dias
+ github: daviddias
+ affiliation:
+ name: Protocol Labs
+ url: https://protocol.ai/
+ - name: Jeromy Johnson
+ github: whyrusleeping
+ affiliation:
+ name: Protocol Labs
+ url: https://protocol.ai/
+ - name: Alex Potsides
+ github: achingbrain
+ affiliation:
+ name: Protocol Labs
+ url: https://protocol.ai/
+ - name: Peter Rabbitson
+ github: ribasushi
+ affiliation:
+ name: Protocol Labs
+ url: https://protocol.ai/
+ - name: Hugo Valtier
+ github: jorropo
+ affiliation:
+ name: Protocol Labs
+ url: https://protocol.ai/
+
+tags: ['architecture']
+order: 1
+---
UnixFS is a [protocol-buffers](https://developers.google.com/protocol-buffers/) based format for describing files, directories, and symlinks as merkle-dags in IPFS.
-## Table of Contents
-
-- [Implementations](#implementations)
-- [Data Format](#data-format)
-- [Metadata](#metadata)
- - [Deduplication and inlining](#deduplication-and-inlining)
-- [Importing](#importing)
- - [Chunking](#chunking)
- - [Layout](#layout)
-- [Exporting](#exporting)
-- [Design decision rationale](#design-decision-rationale)
- - [Metadata](#metadata-1)
- - [Separate Metadata node](#separate-metadata-node)
- - [Metadata in the directory](#metadata-in-the-directory)
- - [Metadata in the file](#metadata-in-the-file)
- - [Side trees](#side-trees)
- - [Side database](#side-database)
-
## How to read a Node
To read a node, first get a CID. This is what we will decode.
@@ -65,32 +76,32 @@ The UnixfsV1 `Data` message format is represented by this protobuf:
```protobuf
message Data {
- enum DataType {
- Raw = 0;
- Directory = 1;
- File = 2;
- Metadata = 3;
- Symlink = 4;
- HAMTShard = 5;
- }
-
- required DataType Type = 1;
- optional bytes Data = 2;
- optional uint64 filesize = 3;
- repeated uint64 blocksizes = 4;
- optional uint64 hashType = 5;
- optional uint64 fanout = 6;
- optional uint32 mode = 7;
- optional UnixTime mtime = 8;
+ enum DataType {
+ Raw = 0;
+ Directory = 1;
+ File = 2;
+ Metadata = 3;
+ Symlink = 4;
+ HAMTShard = 5;
+ }
+
+ required DataType Type = 1;
+ optional bytes Data = 2;
+ optional uint64 filesize = 3;
+ repeated uint64 blocksizes = 4;
+ optional uint64 hashType = 5;
+ optional uint64 fanout = 6;
+ optional uint32 mode = 7;
+ optional UnixTime mtime = 8;
}
message Metadata {
- optional string MimeType = 1;
+ optional string MimeType = 1;
}
message UnixTime {
- required int64 Seconds = 1;
- optional fixed32 FractionalNanoseconds = 2;
+ required int64 Seconds = 1;
+ optional fixed32 FractionalNanoseconds = 2;
}
```
@@ -100,22 +111,22 @@ A very important other spec for unixfs is the [`dag-pb`](https://ipld.io/specs/c
```protobuf
message PBLink {
- // binary CID (with no multibase prefix) of the target object
- optional bytes Hash = 1;
+ // binary CID (with no multibase prefix) of the target object
+ optional bytes Hash = 1;
- // UTF-8 string name
- optional string Name = 2;
+ // UTF-8 string name
+ optional string Name = 2;
- // cumulative size of target object
- optional uint64 Tsize = 3; // also known as dagsize
+ // cumulative size of target object
+ optional uint64 Tsize = 3; // also known as dagsize
}
message PBNode {
- // refs to other objects
- repeated PBLink Links = 2;
+ // refs to other objects
+ repeated PBLink Links = 2;
- // opaque user data
- optional bytes Data = 1;
+ // opaque user data
+ optional bytes Data = 1;
}
```
Child nodes must be of type file (so `dag-pb` where the type is `File`, or `Raw`).
For example, consider this pseudo-JSON block:
```json
{
- "Links": [{"Hash":"Qmfoo"}, {"Hash":"Qmbar"}],
- "Data": {
- "Type": "File",
- "blocksizes": [20, 30]
- }
+ "Links": [{"Hash":"Qmfoo"}, {"Hash":"Qmbar"}],
+ "Data": {
+ "Type": "File",
+ "blocksizes": [20, 30]
+ }
}
```
@@ -368,7 +379,7 @@ This was ultimately rejected for a number of reasons:
1. You would always need to retrieve an additional node to access file data which limits the kind of optimizations that are possible.
- For example many files are under the 256KiB block size limit, so we tend to inline them into the describing UnixFS `File` node. This would not be possible with an intermediate `Metadata` node.
+ For example many files are under the 256KiB block size limit, so we tend to inline them into the describing UnixFS `File` node. This would not be possible with an intermediate `Metadata` node.
2. The `File` node already contains some metadata (e.g. the file size) so metadata would be stored in multiple places which complicates forwards compatibility with UnixFSv2 as to map between metadata formats potentially requires multiple fetch operations
@@ -398,7 +409,7 @@ Downsides to this approach are:
1. Two users adding the same file to IPFS at different times will have different [CID]s due to the `mtime`s being different.
- If the content is stored in another node, its [CID] will be constant between the two users but you can't navigate to it unless you have the parent node which will be less available due to the proliferation of [CID]s.
+ If the content is stored in another node, its [CID] will be constant between the two users but you can't navigate to it unless you have the parent node which will be less available due to the proliferation of [CID]s.
2. Metadata is also impossible to remove without changing the [CID], so metadata becomes part of the content.
@@ -448,12 +459,12 @@ This section and included subsections are not authoritative.
- Importer - [unixfs-importer](https://github.com/ipfs/js-ipfs-unixfs-importer)
- Exporter - [unixfs-exporter](https://github.com/ipfs/js-ipfs-unixfs-exporter)
- Go
- - Protocol Buffer Definitions - [`ipfs/go-unixfs/pb`](https://github.com/ipfs/go-unixfs/blob/707110f05dac4309bdcf581450881fb00f5bc578/pb/unixfs.proto)
+ - Protocol Buffer Definitions - [`ipfs/go-unixfs/pb`](https://github.com/ipfs/go-unixfs/blob/707110f05dac4309bdcf581450881fb00f5bc578/pb/unixfs.proto)
- [`ipfs/go-unixfs`](https://github.com/ipfs/go-unixfs/)
- - `go-ipld-prime` implementation [`ipfs/go-unixfsnode`](https://github.com/ipfs/go-unixfsnode)
+ - `go-ipld-prime` implementation [`ipfs/go-unixfsnode`](https://github.com/ipfs/go-unixfsnode)
- Rust
- - [`iroh-unixfs`](https://github.com/n0-computer/iroh/tree/b7a4dd2b01dbc665435659951e3e06d900966f5f/iroh-unixfs)
- - [`unixfs-v1`](https://github.com/ipfs-rust/unixfsv1)
+ - [`iroh-unixfs`](https://github.com/n0-computer/iroh/tree/b7a4dd2b01dbc665435659951e3e06d900966f5f/iroh-unixfs)
+ - [`unixfs-v1`](https://github.com/ipfs-rust/unixfsv1)
## Simple `Raw` Example
@@ -488,12 +499,12 @@ test
The offset list isn't the only way to use blocksizes and reach a correct implementation, but it is a simple canonical one. Python pseudocode to compute it looks like this:
```python
def offsetlist(node):
- unixfs = decodeDataField(node.Data)
- if len(node.Links) != len(unixfs.Blocksizes):
- raise "unmatched sister-lists" # error messages are implementation details
+ unixfs = decodeDataField(node.Data)
+ if len(node.Links) != len(unixfs.Blocksizes):
+ raise "unmatched sister-lists" # error messages are implementation details
- cursor = len(unixfs.Data) if unixfs.Data else 0
- return [cursor] + [cursor := cursor + size for size in unixfs.Blocksizes[:-1]]
+ cursor = len(unixfs.Data) if unixfs.Data else 0
+ return [cursor] + [cursor := cursor + size for size in unixfs.Blocksizes[:-1]]
```
This tells you at which offset inside this node the child at the corresponding index starts to cover content (using `[x, y)` ranges).
From d2d9f670d813a028350dd67c7a031038305e4d4e Mon Sep 17 00:00:00 2001
From: Henrique Dias
Date: Wed, 20 Sep 2023 15:31:41 +0200
Subject: [PATCH 05/13] chore: further editorial changes
---
src/architecture/unixfs.md | 602 ++++++++++++++++++++++---------------
1 file changed, 355 insertions(+), 247 deletions(-)
diff --git a/src/architecture/unixfs.md b/src/architecture/unixfs.md
index 2d6dec8f2..0b7887f71 100644
--- a/src/architecture/unixfs.md
+++ b/src/architecture/unixfs.md
@@ -36,43 +36,71 @@ tags: ['architecture']
order: 1
---
-UnixFS is a [protocol-buffers](https://developers.google.com/protocol-buffers/) based format for describing files, directories, and symlinks as merkle-dags in IPFS.
+UnixFS is a [protocol-buffers][protobuf]-based format for describing files,
+directories and symlinks as DAGs in IPFS.
-## How to read a Node
+## Nodes
-To read a node, first get a CID. This is what we will decode.
+A :dfn[Node], a term borrowed from graph theory, is the smallest unit present in
+a graph. In UnixFS, there is a 1-to-1 mapping between nodes and blocks. Therefore,
+the two terms are used interchangeably in this document.
-To recap, every [CID](https://github.com/multiformats/cid) includes:
-1. A [multicodec](https://github.com/multiformats/multicodec), also called codec.
-1. A [Multihash](https://github.com/multiformats/multihash) used to specify a hashing algorithm, the hashing parameters and the hash digest.
+A node is addressed by a [CID]. In order to be able to read a node, its [CID] is
+required. A [CID] includes two important pieces of information:
-The first step is to get the block, that means the actual bytes which when hashed (using the hash function specified in the multihash) gives you the same multihash value back.
+1. A [multicodec], also known as simply codec.
+2. A [multihash] used to specify the hashing algorithm, the hash parameters and
+ the hash digest.
-### Multicodecs
+Thus, the block must be retrieved, that is, the bytes which, when hashed using
+the hash function specified in the multihash, give us the same multihash value back.
-With Unixfs we deal with two codecs which will be decoded differently:
-- `Raw`, single block files
-- `dag-pb`, can be any nodes
+In UnixFS, a node can be encoded using two different multicodecs, which we give
+more details about in the following sections:
-#### `Raw` blocks
+- `raw` (`0x55`), which are single block :ref[Files].
+- `dag-pb` (`0x70`), which can be a node of any other type.
-The simplest nodes use `Raw` encoding.
+## `Raw` Nodes
-They are always implicitly of type `file`.
+The simplest nodes use `raw` encoding and are implicitly a :ref[File]. They can
+be recognized because their CIDs are encoded using the `raw` codec:
-They can be recognized because their CIDs have `Raw` codec.
+- The file content is purely the block body.
+- They never have any child nodes, and thus are also known as single block files.
+- Their size (both `dagsize` and `blocksize`) is the length of the block body.
-The file content is purely the block body.
+## `dag-pb` Nodes
-They never have any children nodes, and thus are also known as single block files.
+More complex nodes use the `dag-pb` encoding. These nodes require two steps of
+decoding. The first step is to decode the outer container of the block, which
+is encoded using the IPLD [`dag-pb`][ipld-dag-pb] specification, which can be
+summarized as follows:
-Their sizes (both `dagsize` and `blocksize`) is the length of the block body.
+```protobuf
+message PBLink {
+ // binary CID (with no multibase prefix) of the target object
+ optional bytes Hash = 1;
+
+ // UTF-8 string name
+ optional string Name = 2;
+
+ // cumulative size of target object
+ optional uint64 Tsize = 3;
+}
-#### `dag-pb` nodes
+message PBNode {
+ // refs to other objects
+ repeated PBLink Links = 2;
-##### Data Format
+ // opaque user data
+ optional bytes Data = 1;
+}
+```
-The UnixfsV1 `Data` message format is represented by this protobuf:
+After decoding the node, we obtain a `PBNode`. Its `Data` field holds the bytes
+that require the second decoding step: another protobuf message, specified by
+the UnixFSV1 format:
```protobuf
message Data {
@@ -105,55 +133,35 @@ message UnixTime {
}
```
-##### IPLD `dag-pb`
-
-A very important other spec for unixfs is the [`dag-pb`](https://ipld.io/specs/codecs/dag-pb/spec/) IPLD spec:
-
-```protobuf
-message PBLink {
- // binary CID (with no multibase prefix) of the target object
- optional bytes Hash = 1;
-
- // UTF-8 string name
- optional string Name = 2;
-
- // cumulative size of target object
- optional uint64 Tsize = 3; // also known as dagsize
-}
-
-message PBNode {
- // refs to other objects
- repeated PBLink Links = 2;
-
- // opaque user data
- optional bytes Data = 1;
-}
-```
-
-The two different schemas plays together and it is important to understand their different effect,
-- The `dag-pb` / `PBNode` protobuf is the "outside" protobuf message; in other words, it is the first message decoded. This protobuf contains the list of links and some "opaque user data".
-- The `Data` message is the "inside" protobuf message. After the "outside" `dag-pb` (also known as `PBNode`) object is decoded, `Data` is decoded from the bytes inside the `PBNode.Data` field. This contains the rest of information.
-
-In other words, we have a serialized protobuf message stored inside another protobuf message.
-For clarity, the spec document may represents these nested protobufs as one object. In this representation, it is implied that the `PBNode.Data` field is encoded in a prototbuf.
+Summarizing, a `dag-pb` UnixFS node is an IPLD [`dag-pb`][ipld-dag-pb] protobuf,
+whose `Data` field is a UnixFSV1 Protobuf message. For clarity, the specification
+document may represent these nested Protobufs as one object. In this representation,
+it is implied that the `PBNode.Data` field is encoded as a protobuf.
-##### Different Data types
+### Data Types
-`dag-pb` nodes supports many different types, which can be found in `decodeData(PBNode.Data).Type`. Every type is handled differently.
+A `dag-pb` UnixFS node supports different types, which are defined in
+`decode(PBNode.Data).Type`. Every type is handled differently.
-###### `File` type
+#### `File` type
-####### The _sister-lists_ `PBNode.Links` and `decodeMessage(PBNode.Data).blocksizes`
+A :dfn[File] is a container for an arbitrarily sized sequence of bytes. Files can
+be said to be either single block or multi block. When multi block, a File is the
+concatenation of multiple child files.
-The _sister-lists_ are the key point of why `dag-pb` is important for files.
+##### The _sister-lists_ `PBNode.Links` and `decode(PBNode.Data).blocksizes`
-This allows us to concatenate smaller files together.
+The _sister-lists_ are the key reason why IPLD `dag-pb` is important for files. They
+allow us to concatenate smaller files together.
-Linked files would be loaded recursively with the same process following a DFS (Depth-First-Search) order.
+Linked files are loaded recursively with the same process, following a DFS
+(Depth-First Search) order.
-Child nodes must be of type file (so `dag-pb` where type is `File` or `Raw`)
+Child nodes must be of type file, so either a [`dag-pb` File](#file-type), or a
+[`raw` block](#raw-blocks).
For example, consider this pseudo-JSON block:
+
```json
{
"Links": [{"Hash":"Qmfoo"}, {"Hash":"Qmbar"}],
@@ -166,287 +174,377 @@ For example this example pseudo-json block:
This indicates that this file is the concatenation of the `Qmfoo` and `Qmbar` files.
-When reading a file represented with `dag-pb`, the `blocksizes` array gives us the size in bytes of the partial file content present in child DAGs.
-Each index in `PBNode.Links` MUST have a corresponding chunk size stored at the same index in `decodeMessage(PBNode.Data).blocksizes`.
+When reading a file represented with `dag-pb`, the `blocksizes` array gives us the
+size in bytes of the partial file content present in children DAGs. Each index in
+`PBNode.Links` MUST have a corresponding chunk size stored at the same index
+in `decode(PBNode.Data).blocksizes`.
-Implementers need to be extra careful to ensure the values in `Data.blocksizes` are calculated by following the definition from [Blocksize](#blocksize) section.
+Implementers need to be extra careful to ensure the values in `Data.blocksizes`
+are calculated by following the definition from [`Blocksize`](#decodepbnodedatablocksize).
-This allows to do fast indexing into the file, for example if someone is trying to read bytes 25 to 35 we can compute an offset list by summing all previous indexes in `blocksizes`, then do a search to find which indexes contain the range we are intrested in.
+This allows fast indexing into the file. For example, if someone is trying
+to read bytes 25 to 35, we can compute an offset list by summing all previous
+indexes in `blocksizes`, then do a search to find which indexes contain the
+range we are interested in.
For example, here the offset list would be `[0, 20]`, and thus we know we only need to download `Qmbar` to get the range we are interested in.
A UnixFS parser MUST error if `blocksizes` and `Links` are not of the same length.
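The range lookup described above can be sketched as follows (non-normative; `offsets` is the offset list computed earlier, and the half-open `[start, end)` convention matches the `[x, y)` ranging used in this document):

```python
def children_for_range(offsets, blocksizes, start, end):
    # Return indexes of children whose [offset, offset + size) content range
    # intersects the requested [start, end) byte range of the file.
    needed = []
    for i, (off, size) in enumerate(zip(offsets, blocksizes)):
        if off < end and off + size > start:
            needed.append(i)
    return needed

# With offsets [0, 20] and blocksizes [20, 20], bytes 25..35 live in child 1.
print(children_for_range([0, 20], [20, 20], 25, 35))  # [1]
```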
-####### `decodeMessage(PBNode.Data).Data`
-
-This field is an array of bytes, it is file content and is appended before the links.
-
-This must be taken into a count when doing offsets calculations (the len of the `Data.Data` field define the value of the zeroth element of the offset list when computing offsets).
-
-####### `PBNode.Links[].Name` with Files
-
-This field makes sense only in directory contexts and MUST be absent when creating a new file `PBNode`.
-For historic reasons, implementations parsing third-party data SHOULD accept empty value here.
+##### `decode(PBNode.Data).Data`
-If this field is present and non empty, the file is invalid and parser MUST error.
+This field is an array of bytes holding file content that comes before the
+content referenced by the links. It must be taken into account when doing offset
+calculations: the length of `decode(PBNode.Data).Data` defines the value of the
+zeroth element of the offset list when computing offsets.
-####### `Blocksize` of a dag-pb file
+##### `PBNode.Links[].Name`
-This is not a field present in the block directly, but rather a computable property of `dag-pb` which would be used in parent node in `decodeMessage(PBNode.Data).blocksizes`.
-It is the sum of the length of the `Data.Data` field plus the sum of all link's blocksizes.
+This field makes sense only in the context of :ref[Directories] and MUST be absent
+when creating a new file. For historical reasons, implementations parsing
+third-party data SHOULD accept empty values here.
-####### `PBNode.Data.Filesize`
+If this field is present and non-empty, the file is invalid and the parser MUST
+error.
-If present, this field must be equal to the `Blocksize` computation above, else the file is invalid.
+##### `decode(PBNode.Data).Blocksize`
-####### Path resolution
+This field is not directly present in the block, but rather a computable property
+of a `dag-pb` node, which would be used in the parent node in `decode(PBNode.Data).blocksizes`.
+It is the sum of the length of the `decode(PBNode.Data).Data` field plus the sum
+of all the links' `blocksizes`.
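As a non-normative sketch, with the decoded `Data` message represented as a plain dict (the dict shape is an assumption for the example):

```python
def blocksize(unixfs):
    # Blocksize of a dag-pb file: length of Data.Data plus the sum of
    # all the links' blocksizes.
    data = unixfs.get("Data") or b""
    return len(data) + sum(unixfs.get("Blocksizes", []))

print(blocksize({"Data": b"hello", "Blocksizes": [20, 20]}))  # 45
```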
-A file terminates UnixFS content path.
+##### `decode(PBNode.Data).filesize`
-Any attempt of path resolution on `File` type MUST error.
+If present, this field MUST be equal to the `Blocksize` computation above.
+Otherwise, this file is invalid.
-###### `Directory` Type
+##### Path Resolution
-A directory node is a named collection of nodes.
+A file terminates a UnixFS content path. Any attempt to resolve a path past a
+file MUST error.
-The minimum valid `PBNode.Data` field for a directory is (pseudo-json): `{"Type":"Directory"}`, other values are covered in Metadata.
+#### `Directory` Type
-Every link in the Links list is an entry (children) of the directory, and the `PBNode.Links[].Name` field give you the name.
+A :dfn[Directory], also known as folder, is a named collection of child :ref[Nodes]:
-####### Link ordering
+- Every link in `PBNode.Links` is an entry (child) of the directory, and
+ `PBNode.Links[].Name` gives you the name of such child.
+- Duplicate names are not allowed. Therefore, two elements of `PBNode.Links` CANNOT
+  have the same `Name`. If two identical names are present in a directory, the
+  decoder MUST error.
-The cannonical sorting order is lexicographical over the names.
+The minimum valid `PBNode.Data` field for a directory is as follows:
-In theory there is no reason an encoder couldn't use an other ordering, however this lose some of it's meaning when mapped into most file systems today (most file systems consider directories are unordered-key-value objects).
+```json
+{
+ "Type": "Directory"
+}
+```
-A decoder SHOULD if it can, preserve the order of the original files in however it consume thoses names.
+The remaining relevant values are covered in [Metadata](#metadata).
-However when some implementation decode, modify then reencode some, the orignal links order fully lose it's meaning. (given that there is no way to indicate which sorting was used originally)
+##### Link Ordering
-####### Path Resolution
+The canonical sorting order is lexicographical over the names.
-Pop the left most component of the path, and try to match it to one of the Name in Links.
+In theory, there is no reason an encoder couldn't use another ordering; however,
+this loses some of its meaning when mapped into most file systems today (most
+file systems treat directories as unordered key-value objects).
-If you find a match you can then remember the CID. You MUST continue your search, however if you find a match again you MUST error.
+A decoder SHOULD, if it can, preserve the order of the original links in however
+it consumes those names. However, when an implementation decodes, modifies, then
+re-encodes a directory, the original link order fully loses its meaning (given
+that there is no way to indicate which sorting was used originally).
-Assuming no errors were raised, you can continue to the path resolution on the mainaing component and on the CID you poped.
+##### Path Resolution
-####### Duplicate names
+Pop the left-most component of the path, and try to match it to the `Name` of
+a child under `PBNode.Links`. If you find a match, you can then remember the CID.
+You MUST continue the search. If you find another match, you MUST error since
+duplicate names are not allowed.
-Duplicate names are not allowed, if two identical names are present in an directory, the decoder MUST error.
+Assuming no errors were raised, you can continue to the path resolution on the
+remaining components and on the CID you popped.
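The resolution rule above, including the mandatory duplicate-name check, can be sketched as follows (non-normative; links are represented as plain dicts):

```python
def resolve_child(links, name):
    # Match a path component against directory entries; a second match
    # means duplicate names, which MUST be an error.
    found = None
    for link in links:
        if link.get("Name") == name:
            if found is not None:
                raise ValueError("duplicate name in directory")
            found = link["Hash"]
    return found  # None means "not found"

links = [{"Name": "a.txt", "Hash": "Qmfoo"}, {"Name": "b.txt", "Hash": "Qmbar"}]
print(resolve_child(links, "b.txt"))  # Qmbar
```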
-###### `Symlink` type
-Symlinks MUST NOT have childs.
+#### `Symlink` type
-Their Data.Data field is a POSIX path that maybe appended in front of the currently remaining path component stack.
+A :dfn[Symlink] represents a POSIX [symbolic link](https://pubs.opengroup.org/onlinepubs/9699919799/functions/symlink.html).
+A symlink MUST NOT have children.
-####### Path resolution on symlinks
+The `decode(PBNode.Data).Data` field is a POSIX path that MAY be appended in front
+of the currently remaining path component stack.
-There is no current consensus on how pathing over symlinks should behave.
-Some implementations return symlinks objects and fail if a consumer tries to follow it through.
+##### Path Resolution
-Following the POSIX spec over the current unixfs path context is probably fine.
+There is no current consensus on how pathing over symlinks should behave. Some
+implementations return symlink objects and fail if a consumer tries to follow them
+through.
-###### `HAMTDirectory`
+Following the POSIX specification over the current UnixFS path context is probably fine.
-Thoses nodes are also sometimes called sharded directories, they allow to split directories into many blocks when they are so big that they don't fit into one single block anymore.
+#### `HAMTDirectory`
-- `node.Data.hashType` indicates a multihash function to use to digest path components used for sharding.
-It MUST be murmur3-x64-64 (multihash `0x22`).
-- `node.Data.Data` is some bitfield, ones indicates whether or not the links are part of this HAMT or leaves of the HAMT.
-The usage of this field is unknown given you can deduce the same information from the links names.
-- `node.Data.fanout` MUST be a power of two. This encode the number of hash permutations that will be used on each resolution step.
-The log base 2 of the fanout indicate how wide the bitmask will be on the hash at for that step. `fanout` MUST be between 8 and probably 65536.
+A :dfn[HAMT Directory] is a [Hashed-Array-Mapped-Trie](https://en.wikipedia.org/wiki/Hash_array_mapped_trie)
+data structure representing a :ref[Directory]. It is generally used to represent
+directories that cannot fit inside a single block. They are also known as "sharded
+directories", since they allow splitting large directories into multiple blocks, the "shards".
-####### `node.Links[].Name` on HAMTs
+- `decode(PBNode.Data).hashType` indicates the [multihash] function to use to digest
+ the path components used for sharding. It MUST be `murmur3-x64-64` (`0x22`).
+- `decode(PBNode.Data).Data` is a bit field, which indicates whether or not
+  links are part of this HAMT, or its leaves. The usage of this field is unknown given
+  that you can deduce the same information from the link names.
+- `decode(PBNode.Data).fanout` MUST be a power of two. This encodes the number
+  of hash permutations that will be used on each resolution step. The log base 2
+  of the `fanout` indicates how wide the bitmask on the hash will be for that step.
+  `fanout` MUST be between 8 and probably 65536.
-They start by some uppercase hex encoded prefix which is `log2(fanout)` bits wide
+The field `Name` of an element of `PBNode.Links` for a HAMT starts with an
+uppercase hex-encoded prefix, which is `log2(fanout)` bits wide.
-####### Path resolution on HAMTs
+##### Path Resolution
-Steps:
-1. Take the current path component then hash it using the multihash id provided in `Data.hashType`.
-2. Pop the `log2(fanout)` lowest bits from the path component hash digest, then hex encode (using 0-F) thoses bits using little endian thoses bits and find the link that starts with this hex encoded path.
-3. If the link name is exactly as long as the hex encoded representation, follow the link and repeat step 2 with the child node and the remaining bit stack. The child node MUST be a hamt directory else the directory is invalid, else continue.
-4. Compare the remaining part of the last name you found, if it match the original name you were trying to resolve you successfully resolved a path component, everything past the hex encoded prefix is the name of that element (usefull when listing childs of this directory).
+To resolve a path inside a HAMT:
+1. Take the current path component, then hash it using the [multihash] represented
+   by the value of `decode(PBNode.Data).hashType`.
+2. Pop the `log2(fanout)` lowest bits from the path component hash digest, then
+   hex encode (using 0-F) those bits in little-endian order. Find the link whose
+   `Name` starts with this hex-encoded prefix.
+3. If the link `Name` is exactly as long as the hex-encoded representation, follow
+   the link and repeat step 2 with the child node and the remaining bit stack.
+   The child node MUST be a HAMT directory, else the directory is invalid; otherwise,
+   continue.
+4. Compare the remaining part of the last name you found. If it matches the original
+   name you were trying to resolve, you have successfully resolved a path component;
+   everything past the hex-encoded prefix is the name of that element
+   (useful when listing children of this directory).
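Steps 2 and 3 repeatedly consume `log2(fanout)` bits of the digest. A non-normative sketch of that bit-popping follows (the digest is given as an already-little-endian-interpreted integer; a real implementation MUST obtain it with `murmur3-x64-64`, which is not in the Python standard library):

```python
def pop_prefixes(digest_le: int, fanout: int, steps: int):
    # Pop log2(fanout) bits at a time from the hash digest, yielding the
    # uppercase hex prefix used at each HAMT level.
    bits = fanout.bit_length() - 1        # log2(fanout)
    width = max(1, bits // 4)             # hex characters per prefix
    out = []
    for _ in range(steps):
        digest_le, low = divmod(digest_le, fanout)
        out.append(format(low, f"0{width}X"))
    return out

# With the common fanout of 256, each level consumes 8 bits => 2 hex chars.
print(pop_prefixes(0x1234ABCD, 256, 2))  # ['CD', 'AB']
```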
-###### `TSize` / `DagSize`
+### `TSize` / `DagSize`
-This is an optional field for Links of `dag-pb` nodes, **it does not represent any meaningfull information of the underlying structure** and no known usage of it to this day (altho some implementation emit thoses).
+This is an optional field of `PBNode.Links[]`. It **does not** represent any
+meaningful information of the underlying structure, and there is no known
+usage of it to this day, although some implementations emit it.
-To compute the `dagsize` of a node (which would be stored in the parents) you sum the length of the dag-pb outside message binary length, plus the blocksizes of all child files.
+To compute the `DagSize` of a node, which would be stored in the parent, you
+sum the binary length of the outer `dag-pb` message with the `blocksizes` of
+all child files.
-An example of where this could be usefull is as a hint to smart download clients, for example if you are downloading a file concurrently from two sources that have radically different speeds, it would probably be more efficient to download bigger links from the fastest source, and smaller ones from the slowest source.
+An example of where this could be useful is as a hint to smart download clients,
+for example if you are downloading a file concurrently from two sources that have
+radically different speeds, it would probably be more efficient to download bigger
+links from the fastest source, and smaller ones from the slowest source.
-There is no failure mode known for this field, so your implementation should be able to decode nodes where this field is wrong (not the value you expect), partially or completely missing. This also allows smarter encoder to give a more accurate picture (for example don't count duplicate blocks, ...).
-
-### Paths
+There is no known failure mode for this field, so your implementation should be
+able to decode nodes where this field is wrong (not the value you expect), or
+partially or completely missing. This also allows smarter encoders to give a
+more accurate picture (for example, by not counting duplicate blocks).
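A non-normative sketch of the computation described above (`encoded_block` stands for the serialized outer `dag-pb` message):

```python
def dag_size(encoded_block: bytes, child_blocksizes):
    # Tsize/DagSize: length of the encoded outer dag-pb message plus the
    # blocksizes of all child files. Purely advisory; decoders must not
    # rely on it being present or correct.
    return len(encoded_block) + sum(child_blocksizes)

print(dag_size(b"\x12\x0a" * 8, [20, 20]))  # 56
```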
-Paths first start with `/`or `/ipfs//` where `` is a [multibase](https://github.com/multiformats/multibase) encoded [CID](https://github.com/multiformats/cid).
-The CID encoding MUST NOT use a multibase alphabet that have `/` (`0x2f`) unicode codepoints however CIDs may use a multibase encoding with a `/` in the alphabet if the encoded CID does not contain `/` once encoded.
-
-Everything following the CID is a collection of path component (some bytes) seperated by `/` (`0x2f`), read from left to right.
-This is inspired by POSIX paths.
+### Metadata
-- Components MUST NOT contain `/` unicode codepoints because else it would break the path into two components.
+UnixFS currently supports two optional metadata fields.
+
+#### `mode`
+
+The `mode` is for persisting the file permissions in [numeric notation](https://en.wikipedia.org/wiki/File_system_permissions#Numeric_notation)
+\[[spec](https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/sys_stat.h.html)\].
+
+- If unspecified this defaults to
+ - `0755` for directories/HAMT shards
+ - `0644` for all other types where applicable
+- The nine least significant bits represent `ugo-rwx`
+- The next three least significant bits represent `setuid`, `setgid` and the `sticky bit`
+- The remaining 20 bits are reserved for future use, and are subject to change. Spec implementations **MUST** handle bits they do not expect as follows:
+ - For future-proofing the (de)serialization layer must preserve the entire uint32 value during clone/copy operations, modifying only bit values that have a well defined meaning: `clonedValue = ( modifiedBits & 07777 ) | ( originalValue & 0xFFFFF000 )`
+ - Implementations of this spec must proactively mask off bits without a defined meaning in the implemented version of the spec: `interpretedValue = originalValue & 07777`
+
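The two masking rules above can be written directly as a non-normative sketch:

```python
def interpreted_mode(original: int) -> int:
    # Mask off bits without a defined meaning: originalValue & 07777
    return original & 0o7777

def cloned_mode(modified_bits: int, original: int) -> int:
    # Preserve the reserved upper 20 bits during clone/copy operations:
    # (modifiedBits & 07777) | (originalValue & 0xFFFFF000)
    return (modified_bits & 0o7777) | (original & 0xFFFFF000)

print(oct(interpreted_mode(0o100644)))  # 0o644
```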
+#### `mtime`
+
+A two-element structure ( `Seconds`, `FractionalNanoseconds` ) representing the
+modification time in seconds relative to the unix epoch `1970-01-01T00:00:00Z`.
+The two fields are:
+
+1. `Seconds` ( always present, signed 64bit integer ): represents the amount of seconds after **or before** the epoch.
+2. `FractionalNanoseconds` ( optional, 32bit unsigned integer ): when specified represents the fractional part of the mtime as the amount of nanoseconds. The valid range for this value are the integers `[1, 999999999]`.
+
+Implementations encoding or decoding wire-representations MUST observe the following:
+
+- An `mtime` structure with `FractionalNanoseconds` outside of the on-wire range
+ `[1, 999999999]` is **not** valid. This includes a fractional value of `0`.
+ Implementations encountering such values should consider the entire enclosing
+ metadata block malformed and abort processing the corresponding DAG.
+- The `mtime` structure is optional - its absence implies `unspecified`, rather
+ than `0`
+- For ergonomic reasons a surface API of an encoder MUST allow fractional 0 as
+ input, while at the same time MUST ensure it is stripped from the final structure
+ before encoding, satisfying the above constraints.
+
+Implementations interpreting the mtime metadata in order to apply it within a
+non-IPFS target MUST observe the following:
+
+- If the target supports a distinction between `unspecified` and `0`/`1970-01-01T00:00:00Z`,
+ the distinction must be preserved within the target. E.g. if no `mtime` structure
+ is available, a web gateway must **not** render a `Last-Modified:` header.
+- If the target requires an mtime (e.g. a FUSE interface) and no `mtime` is
+  supplied OR the supplied `mtime` falls outside of the target's accepted range:
+  - When no `mtime` is specified or the resulting `UnixTime` is negative:
+    implementations must assume `0`/`1970-01-01T00:00:00Z` (note that such values
+    are not merely academic: e.g. the OpenVMS epoch is `1858-11-17T00:00:00Z`)
+  - When the resulting `UnixTime` is larger than the target's range (e.g. 32bit
+    vs 64bit mismatch), implementations must assume the highest possible value
+    in the target's range (in most cases that would be `2038-01-19T03:14:07Z`)
+
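A non-normative sketch of an encoder surface API observing the rules above (accept a fractional `0`, strip it before encoding, reject other out-of-range values):

```python
def encode_mtime(seconds, frac_ns=None):
    # Surface APIs MUST allow a fractional 0 as input, but MUST strip it
    # from the structure before encoding; any other value outside the
    # on-wire range [1, 999999999] is invalid.
    if frac_ns == 0:
        frac_ns = None
    if frac_ns is not None and not 1 <= frac_ns <= 999_999_999:
        raise ValueError("FractionalNanoseconds out of range [1, 999999999]")
    msg = {"Seconds": seconds}
    if frac_ns is not None:
        msg["FractionalNanoseconds"] = frac_ns
    return msg

print(encode_mtime(1695213084, 0))  # {'Seconds': 1695213084}
```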
+## Paths
+
+Paths start with `/` or `/ipfs/<cid>/`, where `<cid>` is a [multibase]-encoded
+[CID]. The encoded CID MUST NOT contain the `/` (`0x2F`) unicode codepoint;
+a multibase whose alphabet includes `/` may still be used, provided the
+resulting encoded CID does not contain `/`.
+
+Everything following the CID is a collection of path components (some bytes)
+separated by `/` (`0x2F`). UnixFS paths read from left to right, and are
+inspired by POSIX paths.
+
+- Components MUST NOT contain the `/` unicode codepoint, because it would
+  otherwise break the path into two components.
- Components SHOULD be UTF8 unicode.
- Components are case sensitive.
-#### Escaping
-
-The `\` may be supposed to trigger an escape sequence.
-
-This might be a thing, but is broken and inconsistent current implementations.
-So until we agree on a new spec for this, you SHOULD NOT use any escape sequence and non ascii character.
-
-#### Relative path components
+### Escaping
-Thoses path components must be resolved before trying to work on the path.
+The `\` character is sometimes supposed to trigger an escape sequence. However,
+escaping is currently broken and inconsistent across implementations. Until we
+agree on a specification for this, you SHOULD NOT use any escape sequences or
+non-ASCII characters.
-- `.` points to the current node, those path components must be removed.
-- `..` points to the parent, they must be removed first to last however when you remove a `..` you also remove the previous component on the left. If there is no component on the left to remove leave the `..` as-is however this is an attempt for an out-of-bound path resolution which mean you MUST error.
+### Relative Path Components
-#### Restricted names
+Relative path components MUST be resolved before trying to work on the path:
-Thoses names SHOULD NOT be used:
-
-- The `.` string. This represents the self node in POSIX pathing.
-- The `..` string. This represents the parent node in POSIX pathing.
-- nothing (the empty string) We don't actually know the failure mode for this, but it really feels like this shouldn't be a thing.
-- Any string containing a NUL (0x00) byte, this is often used to signify string terminations in some systems (such as most C compatible systems), and many unix file systems don't accept this character in path components.
-
-### Glossary
-
-- Node, Block
- A node is a word from graph theory, this is the smallest unit present in the graph.
- Due to how unixfs work, there is a 1 to 1 mapping between nodes and blocks.
-- File
- A file is some container over an arbitrary sized amounts of bytes.
- Files can be said to be single block, or multi block, in the later case they are the concatenation of multiple children files.
-- Directory, Folder
- A named collection of child nodes.
-- HAMT Directory
- This is a [Hashed-Array-Mapped-Trie](https://en.wikipedia.org/wiki/Hash_array_mapped_trie) data structure representing a Directory, those may be used to split directories into multiple blocks when they get too big, and the list of children does not fit in a single block.
-- Symlink
- This represents a POSIX Symlink.
-
-### Metadata
+- `.` points to the current node; those path components MUST be removed.
+- `..` points to the parent node; they MUST be removed left to right. When removing
+  a `..`, the path component on the left MUST also be removed. If there is no path
+  component on the left, you MUST error, since it is an attempt at out-of-bounds
+  path resolution.
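These removal rules can be sketched as follows (non-normative):

```python
def resolve_relative(components):
    # "." is dropped; ".." drops the component to its left, and with
    # nothing on the left it is an out-of-bounds resolution: an error.
    out = []
    for c in components:
        if c == ".":
            continue
        elif c == "..":
            if not out:
                raise ValueError("out-of-bounds path resolution")
            out.pop()
        else:
            out.append(c)
    return out

print(resolve_relative(["a", ".", "b", "..", "c"]))  # ['a', 'c']
```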
-UnixFS currently supports two optional metadata fields:
+### Restricted Names
-* `mode` -- The `mode` is for persisting the file permissions in [numeric notation](https://en.wikipedia.org/wiki/File_system_permissions#Numeric_notation) \[[spec](https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/sys_stat.h.html)\].
- - If unspecified this defaults to
- - `0755` for directories/HAMT shards
- - `0644` for all other types where applicable
- - The nine least significant bits represent `ugo-rwx`
- - The next three least significant bits represent `setuid`, `setgid` and the `sticky bit`
- - The remaining 20 bits are reserved for future use, and are subject to change. Spec implementations **MUST** handle bits they do not expect as follows:
- - For future-proofing the (de)serialization layer must preserve the entire uint32 value during clone/copy operations, modifying only bit values that have a well defined meaning: `clonedValue = ( modifiedBits & 07777 ) | ( originalValue & 0xFFFFF000 )`
- - Implementations of this spec must proactively mask off bits without a defined meaning in the implemented version of the spec: `interpretedValue = originalValue & 07777`
+The following names SHOULD NOT be used:
-* `mtime` -- A two-element structure ( `Seconds`, `FractionalNanoseconds` ) representing the modification time in seconds relative to the unix epoch `1970-01-01T00:00:00Z`.
- - The two fields are:
- 1. `Seconds` ( always present, signed 64bit integer ): represents the amount of seconds after **or before** the epoch.
- 2. `FractionalNanoseconds` ( optional, 32bit unsigned integer ): when specified represents the fractional part of the mtime as the amount of nanoseconds. The valid range for this value are the integers `[1, 999999999]`.
+- The `.` string: represents the self node in POSIX pathing.
+- The `..` string: represents the parent node in POSIX pathing.
+- The empty string.
+- Any string containing a `NUL` (`0x00`) byte: this is often used to signify string
+ terminations in some systems (such as most C compatible systems), and many unix
+ file systems do not accept this character in path components.
- - Implementations encoding or decoding wire-representations must observe the following:
- - An `mtime` structure with `FractionalNanoseconds` outside of the on-wire range `[1, 999999999]` is **not** valid. This includes a fractional value of `0`. Implementations encountering such values should consider the entire enclosing metadata block malformed and abort processing the corresponding DAG.
- - The `mtime` structure is optional - its absence implies `unspecified`, rather than `0`
- - For ergonomic reasons a surface API of an encoder must allow fractional 0 as input, while at the same time must ensure it is stripped from the final structure before encoding, satisfying the above constraints.
+## Design Decision Rationale
- - Implementations interpreting the mtime metadata in order to apply it within a non-IPFS target must observe the following:
- - If the target supports a distinction between `unspecified` and `0`/`1970-01-01T00:00:00Z`, the distinction must be preserved within the target. E.g. if no `mtime` structure is available, a web gateway must **not** render a `Last-Modified:` header.
- - If the target requires an mtime ( e.g. a FUSE interface ) and no `mtime` is supplied OR the supplied `mtime` falls outside of the targets accepted range:
- - When no `mtime` is specified or the resulting `UnixTime` is negative: implementations must assume `0`/`1970-01-01T00:00:00Z` ( note that such values are not merely academic: e.g. the OpenVMS epoch is `1858-11-17T00:00:00Z` )
- - When the resulting `UnixTime` is larger than the targets range ( e.g. 32bit vs 64bit mismatch ) implementations must assume the highest possible value in the targets range ( in most cases that would be `2038-01-19T03:14:07Z` )
+### `mtime` and `mode` Metadata Support in UnixFSv1.5
-## Design decision rationale
+Metadata support in UnixFSv1.5 has been expanded to increase the number of possible
+use cases. These include rsync and filesystem based package managers.
-### `mtime` and `mode` metadata support in UnixFSv1.5
+Several metadata systems were evaluated, as discussed in the following sections.
-Metadata support in UnixFSv1.5 has been expanded to increase the number of possible use cases. These include rsync and filesystem based package managers.
+#### Separate Metadata Node
-Several metadata systems were evaluated:
-
-#### Separate Metadata node
-
-In this scheme, the existing `Metadata` message is expanded to include additional metadata types (`mtime`, `mode`, etc). It then contains links to the actual file data but never the file data itself.
+In this scheme, the existing `Metadata` message is expanded to include additional
+metadata types (`mtime`, `mode`, etc). It contains links to the actual file data
+but never the file data itself.
This was ultimately rejected for a number of reasons:
-1. You would always need to retrieve an additional node to access file data which limits the kind of optimizations that are possible.
-
- For example many files are under the 256KiB block size limit, so we tend to inline them into the describing UnixFS `File` node. This would not be possible with an intermediate `Metadata` node.
-
-2. The `File` node already contains some metadata (e.g. the file size) so metadata would be stored in multiple places which complicates forwards compatibility with UnixFSv2 as to map between metadata formats potentially requires multiple fetch operations
-
-#### Metadata in the directory
+1. You would always need to retrieve an additional node to access file data which
+ limits the kind of optimizations that are possible. For example many files are
+ under the 256 KiB block size limit, so we tend to inline them into the describing
+ UnixFS `File` node. This would not be possible with an intermediate `Metadata` node.
+2. The `File` node already contains some metadata (e.g. the file size) so metadata
+ would be stored in multiple places which complicates forwards compatibility with
+ UnixFSv2 as to map between metadata formats potentially requires multiple fetch
+ operations.
-Repeated `Metadata` messages are added to UnixFS `Directory` and `HAMTShard` nodes, the index of which indicates which entry they are to be applied to.
+#### Metadata in the Directory
-Where entries are `HAMTShard`s, an empty message is added.
+Repeated `Metadata` messages are added to UnixFS `Directory` and `HAMTShard` nodes,
+the index of which indicates which entry they are to be applied to. Where entries are
+`HAMTShard`s, an empty message is added.
-One advantage of this method is that if we expand stored metadata to include entry types and sizes we can perform directory listings without needing to fetch further entry nodes (excepting `HAMTShard` nodes), though without removing the storage of these datums elsewhere in the spec we run the risk of having non-canonical data locations and perhaps conflicting data as we traverse through trees containing both UnixFS v1 and v1.5 nodes.
+One advantage of this method is that if we expand stored metadata to include entry
+types and sizes we can perform directory listings without needing to fetch further
+entry nodes (excepting `HAMTShard` nodes), though without removing the storage of
+these datums elsewhere in the spec we run the risk of having non-canonical data
+locations and perhaps conflicting data as we traverse through trees containing
+both UnixFS v1 and v1.5 nodes.
This was rejected for the following reasons:
-1. When creating a UnixFS node there's no way to record metadata without wrapping it in a directory.
+1. When creating a UnixFS node there's no way to record metadata without wrapping
+ it in a directory.
+2. If you access any UnixFS node directly by its [CID], there is no way of recreating
+ the metadata which limits flexibility.
+3. In order to list the contents of a directory including entry types and sizes,
+ you have to fetch the root node of each entry anyway so the performance benefit
+ of including some metadata in the containing directory is negligible in this
+ use case.
-2. If you access any UnixFS node directly by its [CID], there is no way of recreating the metadata which limits flexibility.
-
-3. In order to list the contents of a directory including entry types and sizes, you have to fetch the root node of each entry anyway so the performance benefit of including some metadata in the containing directory is negligible in this use case.
-
-#### Metadata in the file
+#### Metadata in the File
This adds new fields to the UnixFS `Data` message to represent the various metadata fields.
-It has the advantage of being simple to implement, metadata is maintained whether the file is accessed directly via its [CID] or via an IPFS path that includes a containing directory, and by keeping the metadata small enough we can inline root UnixFS nodes into their CIDs so we can end up fetching the same number of nodes if we decide to keep file data in a leaf node for deduplication reasons.
+It has the advantage of being simple to implement, metadata is maintained whether
+the file is accessed directly via its [CID] or via an IPFS path that includes a
+containing directory, and by keeping the metadata small enough we can inline root
+UnixFS nodes into their CIDs so we can end up fetching the same number of nodes if
+we decide to keep file data in a leaf node for deduplication reasons.
Downsides to this approach are:
-1. Two users adding the same file to IPFS at different times will have different [CID]s due to the `mtime`s being different.
-
- If the content is stored in another node, its [CID] will be constant between the two users but you can't navigate to it unless you have the parent node which will be less available due to the proliferation of [CID]s.
-
-2. Metadata is also impossible to remove without changing the [CID], so metadata becomes part of the content.
-
-3. Performance may be impacted as well as if we don't inline UnixFS root nodes into [CID]s, additional fetches will be required to load a given UnixFS entry.
+1. Two users adding the same file to IPFS at different times will have different
+ [CID]s due to the `mtime`s being different. If the content is stored in another
+ node, its [CID] will be constant between the two users but you can't navigate
+ to it unless you have the parent node which will be less available due to the
+ proliferation of [CID]s.
+2. Metadata is also impossible to remove without changing the [CID], so
+   metadata becomes part of the content.
+3. Performance may be impacted as well: if we don't inline UnixFS root nodes
+   into [CID]s, additional fetches will be required to load a given UnixFS entry.
-#### Side trees
+#### Side Trees
-With this approach we would maintain a separate data structure outside of the UnixFS tree to hold metadata.
+With this approach we would maintain a separate data structure outside of the
+UnixFS tree to hold metadata.
-This was rejected due to concerns about added complexity, recovery after system crashes while writing, and having to make extra requests to fetch metadata nodes when resolving [CID]s from peers.
+This was rejected due to concerns about added complexity, recovery after system
+crashes while writing, and having to make extra requests to fetch metadata nodes
+when resolving [CID]s from peers.
-#### Side database
+#### Side Database
This scheme would see metadata stored in an external database.
-The downsides to this are that metadata would not be transferred from one node to another when syncing as [Bitswap] is not aware of the database, and in-tree metadata
+The downsides to this are that metadata would not be transferred from one node
+to another when syncing as [Bitswap] is not aware of the database, and in-tree
+metadata.
-### UnixTime protobuf datatype rationale
+### UnixTime Protobuf Datatype Rationale
#### Seconds
-The integer portion of UnixTime is represented on the wire using a varint encoding. While this is
-inefficient for negative values, it avoids introducing zig-zag encoding. Values before the year 1970
-will be exceedingly rare, and it would be handy having such cases stand out, while at the same keeping
-the "usual" positive values easy to eyeball. The varint representing the time of writing this text is
-5 bytes long. It will remain so until October 26, 3058 ( 34,359,738,367 )
+The integer portion of UnixTime is represented on the wire using a `varint` encoding.
+While this is inefficient for negative values, it avoids introducing zig-zag encoding.
+Values before the year 1970 will be exceedingly rare, and it would be handy having
+such cases stand out, while at the same time keeping the "usual" positive values
+easy to eyeball. The `varint` representing the time of writing this text is 5 bytes
+long. It will remain so until October 26, 3058 (34,359,738,367).
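The cut-off date above can be checked mechanically. As a non-normative sketch, the byte length of an unsigned LEB128 varint (the protobuf wire encoding) is:

```python
def varint_len(n: int) -> int:
    # Byte length of the unsigned LEB128 (protobuf varint) encoding of n.
    length = 1
    while n >= 0x80:
        n >>= 7
        length += 1
    return length

# 2^35 - 1 = 34,359,738,367 is the largest value that still fits in 5 bytes.
```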
#### FractionalNanoseconds
-Fractional values are effectively a random number in the range 1 ~ 999,999,999. Such values will exceed
-2^28 nanoseconds ( 268,435,456 ) in most cases. Therefore, the fractional part is represented as a 4-byte
-`fixed32`, [as per Google's recommendation](https://developers.google.com/protocol-buffers/docs/proto#scalar).
-## References
-
-[multihash]: https://tools.ietf.org/html/draft-multiformats-multihash-05
-[CID]: https://github.com/multiformats/cid/
-[Bitswap]: https://github.com/ipfs/specs/blob/master/BITSWAP.md
+Fractional values are effectively a random number in the range 1 ~ 999,999,999.
+Such values will exceed 2^28 nanoseconds (268,435,456) in most cases. Therefore,
+the fractional part is represented as a 4-byte `fixed32`,
+[as per Google's recommendation](https://developers.google.com/protocol-buffers/docs/proto#scalar).
# Notes for Implementers
@@ -471,12 +569,14 @@ This section and included subsections are not authoritative.
In this example, we will build a `Raw` file with the string `test` as its content.
1. First hash the data:
+
```console
$ echo -n "test" | sha256sum
9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08 -
```
2. Add the CID prefix:
+
```
f this is the multibase prefix, we need it because we are working with a hex CID, this is omitted for binary CIDs
01 the CID version, here one
@@ -486,17 +586,18 @@ f this is the multibase prefix, we need it because we are working with a hex CID
9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08 the digest we computed earlier
```
-3. Profit
-Assuming we stored this block in some implementation of our choice which makes it accessible to our client, we can try to decode it:
+3. Profit: assuming we stored this block in some implementation of our choice which makes it accessible to our client, we can try to decode it.
+
```console
$ ipfs cat f015512209f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08
test
```
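The steps above can be sketched in Python. This is an illustrative helper, not a spec API; it only covers the exact case shown here (hex multibase, CIDv1, `raw` codec, `sha2-256`):

```python
import hashlib

def raw_cid_hex(data: bytes) -> str:
    # f  multibase prefix for lowercase hex
    # 01 CID version 1
    # 55 the `raw` multicodec
    # 12 the sha2-256 multihash function code
    # 20 the digest length (32 bytes)
    return "f" + "01" + "55" + "12" + "20" + hashlib.sha256(data).hexdigest()
```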
+## Offset List
-### Offset list
+The offset list isn't the only way to use blocksizes and reach a correct implementation;
+it is simply a canonical one. Python pseudocode to compute it looks like this:
-The offset list isn't the only way to use blocksizes and reach a correct implementation, it is a simple cannonical one, python pseudo code to compute it looks like this:
```python
def offsetlist(node):
unixfs = decodeDataField(node.Data)
@@ -508,3 +609,10 @@ def offsetlist(node):
```
This will tell you at which offset inside this node the child at the corresponding index starts to cover (using `[x, y)` ranges).
+
+[protobuf]: https://developers.google.com/protocol-buffers/
+[CID]: https://github.com/multiformats/cid/
+[multicodec]: https://github.com/multiformats/multicodec
+[multihash]: https://github.com/multiformats/multihash
+[Bitswap]: https://github.com/ipfs/specs/blob/master/BITSWAP.md
+[ipld-dag-pb]: https://ipld.io/specs/codecs/dag-pb/spec/
From e2cf0af3b6868ae59bbb3a23318b0cb274085485 Mon Sep 17 00:00:00 2001
From: Henrique Dias
Date: Mon, 30 Oct 2023 11:02:58 +0100
Subject: [PATCH 06/13] chore: apply @ElPaisano suggestions
---
src/architecture/unixfs.md | 201 ++++++++++++++++++-------------------
1 file changed, 97 insertions(+), 104 deletions(-)
diff --git a/src/architecture/unixfs.md b/src/architecture/unixfs.md
index 0b7887f71..a9d2f4663 100644
--- a/src/architecture/unixfs.md
+++ b/src/architecture/unixfs.md
@@ -37,26 +37,25 @@ order: 1
---
UnixFS is a [protocol-buffers][protobuf]-based format for describing files,
-directories and symlinks as DAGs in IPFS.
+directories and symlinks as Directed Acyclic Graphs (DAGs) in IPFS.
## Nodes
A :dfn[Node] is the smallest unit present in a graph, and it comes from graph
-theory. In UnixFS, there is a 1 to 1 mapping between nodes and blocks. Therefore,
+theory. In UnixFS, there is a 1-to-1 mapping between nodes and blocks. Therefore,
they are used interchangeably in this document.
A node is addressed by a [CID]. In order to be able to read a node, its [CID] is
-required. A [CID] includes two important information:
+required. A [CID] includes two important pieces of information:
-1. A [multicodec], also known as simply codec.
+1. A [multicodec], also known simply as a codec.
2. A [multihash] used to specify the hashing algorithm, the hash parameters and
the hash digest.
-Thus, the block must be retrieved, that is, the bytes which when hashed using the
-hash function specified in the multihash gives us the same multihash value back.
+Thus, the block must be retrieved; that is, the bytes which, when hashed using the
+hash function specified in the multihash, give us the same multihash value back.
-In UnixFS, a node can be encoded using two different multicodecs, which we give
-more details about in the following sections:
+In UnixFS, a node can be encoded using two different multicodecs, listed below. More details are provided in the following sections:
- `raw` (`0x55`), which are single block :ref[Files].
- `dag-pb` (`0x70`), which can be of any other type.
@@ -73,8 +72,7 @@ be recognized because their CIDs are encoded using the `raw` codec:
## `dag-pb` Nodes
More complex nodes use the `dag-pb` encoding. These nodes require two steps of
-decoding. The first step is to decode the outer container of the block, which
-is encoded using the IPLD [`dag-pb`][ipld-dag-pb] specification, which can be
+decoding. The first step is to decode the outer container of the block. This is encoded using the IPLD [`dag-pb`][ipld-dag-pb] specification, which can be
summarized as follows:
```protobuf
@@ -145,9 +143,8 @@ A `dag-pb` UnixFS node supports different types, which are defined in
#### `File` type
-A :dfn[File] is a container over an arbitrary sized amount of bytes. Files can be
-said to be either single block or multi block. When multi block, a File is then a
-concatenation of multiple children files
+A :dfn[File] is a container over an arbitrary sized amount of bytes. Files are either
+single-block or multi-block. A multi-block file is a concatenation of multiple child files.
##### The _sister-lists_ `PBNode.Links` and `decode(PBNode.Data).blocksizes`
@@ -157,10 +154,10 @@ allow us to concatenate smaller files together.
Linked files would be loaded recursively with the same process following a DFS
(Depth-First-Search) order.
-Child nodes must be of type file, so either a [`dag-pb` File](#file-type), or a
+Child nodes must be of type File; either a `dag-pb` :ref[File], or a
[`raw` block](#raw-blocks).
-For example this example pseudo-json block:
+For example, consider this pseudo-json block:
```json
{
@@ -182,19 +179,19 @@ in `decode(PBNode.Data).blocksizes`.
Implementers need to be extra careful to ensure the values in `Data.blocksizes`
are calculated by following the definition from [`Blocksize`](#decodepbnodedatablocksize).
-This allows to do fast indexing into the file, for example if someone is trying
-to read bytes 25 to 35 we can compute an offset list by summing all previous
+This allows for fast indexing into the file. For example, if someone is trying
+to read bytes 25 to 35, we can compute an offset list by summing all previous
indexes in `blocksizes`, then do a search to find which indexes contain the
range we are interested in.
-For example here the offset list would be `[0, 20]` and thus we know we only need to download `Qmbar` to get the range we are intrested in.
+In the example above, the offset list would be `[0, 20]`. Thus, we know we only need to download `Qmbar` to get the range we are interested in.
UnixFS parser MUST error if `blocksizes` or `Links` are not of the same length.
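The indexing described above can be sketched as follows (non-normative, hypothetical helper names; `data_len` is the length of the node's inline `Data` field):

```python
def offsets(data_len, blocksizes):
    # The zeroth element is the length of the node's inline Data field.
    out = [data_len]
    for size in blocksizes:
        out.append(out[-1] + size)
    return out

def links_for_range(data_len, blocksizes, start, end):
    # Indices of child links whose [x, y) extent overlaps [start, end).
    offs = offsets(data_len, blocksizes)
    return [i for i in range(len(blocksizes))
            if offs[i] < end and offs[i + 1] > start]
```

With two 20-byte children and no inline data, reading bytes 25 to 35 touches only the second child, matching the example above.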
##### `decode(PBNode.Data).Data`
-This field is an array of bytes, it is the file content and is appended before
-the links. This must be taken into account when doing offset calculations, that is
+An array of bytes that is the file content and is appended before
+the links. This must be taken into account when doing offset calculations; that is,
the length of `decode(PBNode.Data).Data` defines the value of the zeroth element
of the offset list when computing offsets.
@@ -229,9 +226,9 @@ file MUST error.
A :dfn[Directory], also known as folder, is a named collection of child :ref[Nodes]:
- Every link in `PBNode.Links` is an entry (child) of the directory, and
- `PBNode.Links[].Name` gives you the name of such child.
+ `PBNode.Links[].Name` gives you the name of that child.
- Duplicate names are not allowed. Therefore, two elements of `PBNode.Link` CANNOT
- have the same `Name`. if two identical names are present in a directory, the
+ have the same `Name`. If two identical names are present in a directory, the
decoder MUST fail.
The minimum valid `PBNode.Data` field for a directory is as follows:
@@ -248,14 +245,14 @@ The remaining relevant values are covered in [Metadata](#metadata).
The canonical sorting order is lexicographical over the names.
-In theory there is no reason an encoder couldn't use an other ordering, however
-this lose some of its meaning when mapped into most file systems today (most file
-systems consider directories are unordered-key-value objects).
+In theory, there is no reason an encoder couldn't use another ordering. However,
+this loses some of its meaning when mapped into most file systems today, as most file
+systems consider directories to be unordered key-value objects.
-A decoder SHOULD, if it can, preserve the order of the original files in however
-it consume those names. However when, some implementation decode, modify then
-re-encode some, the original links order fully lose it's meaning (given that there
-is no way to indicate which sorting was used originally).
+A decoder SHOULD, if it can, preserve the order of the original files in the same way
+it consumed those names. However, when some implementations decode, modify and then
+re-encode, the original link order loses its original meaning, given that there
+is no way to indicate which sorting was used originally.
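As a minimal encoder-side sketch (hypothetical helper, assuming ASCII names so that Python string comparison matches byte-wise lexicographic order):

```python
def canonical_links(names):
    # Duplicate names MUST cause a failure; the canonical output order
    # is lexicographical over the names.
    if len(set(names)) != len(names):
        raise ValueError("duplicate name in directory")
    return sorted(names)
```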
##### Path Resolution
@@ -288,13 +285,13 @@ Following the POSIX specification over the current UnixFS path context is probab
A :dfn[HAMT Directory] is a [Hashed-Array-Mapped-Trie](https://en.wikipedia.org/wiki/Hash_array_mapped_trie)
data structure representing a :ref[Directory]. It is generally used to represent
-directories that cannot fit inside a single block. They are also known as "sharded
-directories" since they allow to split large directories into multiple blocks, the "shards".
+directories that cannot fit inside a single block. These are also known as "sharded
+directories", since they allow you to split large directories into multiple blocks, known as "shards".
- `decode(PBNode.Data).hashType` indicates the [multihash] function to use to digest
the path components used for sharding. It MUST be `murmur3-x64-64` (`0x22`).
- `decode(PBNode.Data).Data.Data` is a bit field, which indicates whether or not
- links are part of this HAMT, or its leaves. The usage of this field is unknown given
+ links are part of this HAMT, or its leaves. The usage of this field is unknown, given
that you can deduce the same information from the link names.
- `decode(PBNode.Data).Data.fanout` MUST be a power of two. This encodes the number
of hash permutations that will be used on each resolution step. The log base 2
@@ -308,39 +305,36 @@ uppercase hex-encoded prefix, which is `log2(fanout)` bits wide.
To resolve the path inside a HAMT:
-1. Take the current path component then hash it using the [multihash] represented
+1. Take the current path component, then hash it using the [multihash] represented
by the value of `decode(PBNode.Data).hashType`.
2. Pop the `log2(fanout)` lowest bits from the path component hash digest, then
hex encode (using 0-F) those bits using little endian. Find the link that starts
with this hex encoded path.
3. If the link `Name` is exactly as long as the hex encoded representation, follow
the link and repeat step 2 with the child node and the remaining bit stack.
- The child node MUST be a HAMT directory else the directory is invalid, else continue.
-4. Compare the remaining part of the last name you found, if it match the original
- name you were trying to resolve you successfully resolved a path component,
- everything past the hex encoded prefix is the name of that element
- (useful when listing children of this directory).
+ The child node MUST be a HAMT directory, or else the directory is invalid. Otherwise (the `Name` is longer than the prefix), continue to step 4.
+4. Compare the remaining part of the last name you found. If it matches the original
+ name you were trying to resolve, you have successfully resolved a path component.
+ Everything past the hex encoded prefix is the name of that element, which is useful when listing children of this directory.
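As an illustration only, popping `log2(fanout)` bits per step and formatting the uppercase hex prefix could look like this. The digest is passed as an integer for brevity; a real implementation would first hash the path component with `murmur3-x64-64`:

```python
def hamt_prefix(digest: int, fanout: int, depth: int) -> str:
    # fanout MUST be a power of two; log2(fanout) bits are consumed per step,
    # lowest bits first.
    bits = fanout.bit_length() - 1
    chunk = (digest >> (bits * depth)) & (fanout - 1)
    width = (bits + 3) // 4          # hex characters needed for `bits` bits
    return format(chunk, f"0{width}X")
```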
### `TSize` / `DagSize`
-This is an option field of `PBNode.Links[]`. It **does not** represent any
+This is an optional field in `PBNode.Links[]`. It **does not** represent any
meaningful information of the underlying structure, and there is no known
-usage of it to this day, although some implementations emit these.
+usage of it to this day, although some implementations emit these.
-To compute the `DagSize` of a node, which would be store in the parents, you have
-to sum the length of the `dag-pb` outside message binary length, plus the
-`blocksizes` of all child files.
+To compute the `DagSize` of a node, which is stored in the parents, sum the binary length of the outer `dag-pb` message and the `blocksizes` of all child files.
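As a sketch (helper name hypothetical):

```python
def dag_size(encoded_outer: bytes, child_blocksizes) -> int:
    # Serialized dag-pb message length plus every child's blocksize.
    return len(encoded_outer) + sum(child_blocksizes)
```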
-An example of where this could be useful is as a hint to smart download clients,
-for example if you are downloading a file concurrently from two sources that have
+An example of where this could be useful is as a hint to smart download clients.
+For example, if you are downloading a file concurrently from two sources that have
radically different speeds, it would probably be more efficient to download bigger
links from the fastest source, and smaller ones from the slowest source.
There is no failure mode known for this field, so your implementation should be
-able to decode nodes where this field is wrong (not the value you expect),
-partially or completely missing. This also allows smarter encoder to give a
-more accurate picture (for example don't count duplicate blocks, ...).
+able to decode nodes where this field is wrong (not the value you expect), or
+partially or completely missing. This also allows smarter encoders to give a
+more accurate picture (for example, by not counting duplicate blocks).
### Metadata
@@ -351,13 +345,13 @@ UnixFS currently supports two optional metadata fields.
The `mode` is for persisting the file permissions in [numeric notation](https://en.wikipedia.org/wiki/File_system_permissions#Numeric_notation)
\[[spec](https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/sys_stat.h.html)\].
-- If unspecified this defaults to
+- If unspecified, this defaults to
- `0755` for directories/HAMT shards
- `0644` for all other types where applicable
- The nine least significant bits represent `ugo-rwx`
- The next three least significant bits represent `setuid`, `setgid` and the `sticky bit`
- The remaining 20 bits are reserved for future use, and are subject to change. Spec implementations **MUST** handle bits they do not expect as follows:
- - For future-proofing the (de)serialization layer must preserve the entire uint32 value during clone/copy operations, modifying only bit values that have a well defined meaning: `clonedValue = ( modifiedBits & 07777 ) | ( originalValue & 0xFFFFF000 )`
+ - For future-proofing, the (de)serialization layer must preserve the entire uint32 value during clone/copy operations, modifying only bit values that have a well defined meaning: `clonedValue = ( modifiedBits & 07777 ) | ( originalValue & 0xFFFFF000 )`
- Implementations of this spec must proactively mask off bits without a defined meaning in the implemented version of the spec: `interpretedValue = originalValue & 07777`
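Both rules above can be sketched directly (helper names hypothetical):

```python
MODE_MASK = 0o7777  # bits with a defined meaning: rwx + setuid/setgid/sticky

def interpret_mode(value: int) -> int:
    # Proactively mask off bits without a defined meaning in this spec version.
    return value & MODE_MASK

def clone_mode(original: int, modified_bits: int) -> int:
    # Preserve the reserved high 20 bits; replace only the well-defined ones.
    return (modified_bits & MODE_MASK) | (original & 0xFFFFF000)
```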
#### `mtime`
@@ -367,76 +361,76 @@ modification time in seconds relative to the unix epoch `1970-01-01T00:00:00Z`.
The two fields are:
1. `Seconds` ( always present, signed 64bit integer ): represents the amount of seconds after **or before** the epoch.
-2. `FractionalNanoseconds` ( optional, 32bit unsigned integer ): when specified represents the fractional part of the mtime as the amount of nanoseconds. The valid range for this value are the integers `[1, 999999999]`.
+2. `FractionalNanoseconds` ( optional, 32bit unsigned integer ): when specified, represents the fractional part of the `mtime` as the amount of nanoseconds. The valid range for this value is the integers `[1, 999999999]`.
Implementations encoding or decoding wire-representations MUST observe the following:
- An `mtime` structure with `FractionalNanoseconds` outside of the on-wire range
`[1, 999999999]` is **not** valid. This includes a fractional value of `0`.
Implementations encountering such values should consider the entire enclosing
- metadata block malformed and abort processing the corresponding DAG.
-- The `mtime` structure is optional - its absence implies `unspecified`, rather
- than `0`
-- For ergonomic reasons a surface API of an encoder MUST allow fractional 0 as
+ metadata block malformed and abort the processing of the corresponding DAG.
+- The `mtime` structure is optional. Its absence implies `unspecified` rather
+ than `0`.
+- For ergonomic reasons, a surface API of an encoder MUST allow fractional `0` as
input, while at the same time MUST ensure it is stripped from the final structure
before encoding, satisfying the above constraints.
-Implementations interpreting the mtime metadata in order to apply it within a
+Implementations interpreting the `mtime` metadata in order to apply it within a
non-IPFS target MUST observe the following:
- If the target supports a distinction between `unspecified` and `0`/`1970-01-01T00:00:00Z`,
- the distinction must be preserved within the target. E.g. if no `mtime` structure
+ the distinction must be preserved within the target. For example, if no `mtime` structure
is available, a web gateway must **not** render a `Last-Modified:` header.
-- If the target requires an mtime ( e.g. a FUSE interface ) and no `mtime` is
+- If the target requires an `mtime` ( e.g. a FUSE interface ) and no `mtime` is
supplied OR the supplied `mtime` falls outside of the targets accepted range:
- When no `mtime` is specified or the resulting `UnixTime` is negative:
implementations must assume `0`/`1970-01-01T00:00:00Z` (note that such values
are not merely academic: e.g. the OpenVMS epoch is `1858-11-17T00:00:00Z`)
- When the resulting `UnixTime` is larger than the targets range ( e.g. 32bit
- vs 64bit mismatch) implementations must assume the highest possible value
- in the targets range (in most cases that would be `2038-01-19T03:14:07Z`)
+ vs 64bit mismatch), implementations must assume the highest possible value
+ in the targets range. In most cases, this would be `2038-01-19T03:14:07Z`.
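A sketch of these clamping rules for a hypothetical target whose accepted range is `[0, 2^31 - 1]` (e.g. a signed 32-bit filesystem):

```python
INT32_MAX = 2**31 - 1  # 2038-01-19T03:14:07Z

def mtime_for_target(seconds, target_max=INT32_MAX):
    # `seconds` is None when the mtime structure is absent (unspecified).
    if seconds is None or seconds < 0:
        return 0                     # assume 1970-01-01T00:00:00Z
    return min(seconds, target_max)
```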
## Paths
-Paths first start with `/` or `/ipfs//` where `` is a [multibase]
-encoded [CID]. The CID encoding MUST NOT use a multibase alphabet that have
-`/` (`0x2f`) unicode codepoints however CIDs may use a multibase encoding with
+Paths begin with a `/` or `/ipfs/<CID>/`, where `<CID>` is a [multibase]
+encoded [CID]. The CID encoding MUST NOT use a multibase alphabet that contains
+`/` (`0x2f`) unicode codepoints. However, CIDs may use a multibase encoding with
a `/` in the alphabet if the encoded CID does not contain `/` once encoded.
-Everything following the CID is a collection of path component (some bytes)
+Everything following the CID is a collection of path components (some bytes)
separated by `/` (`0x2F`). UnixFS paths read from left to right, and are
inspired by POSIX paths.
-- Components MUST NOT contain `/` unicode codepoints because else it would break
+- Components MUST NOT contain `/` unicode codepoints because it would break
the path into two components.
- Components SHOULD be UTF8 unicode.
-- Components are case sensitive.
+- Components are case-sensitive.
### Escaping
-The `\` may be supposed to trigger an escape sequence. However, it is currently
+The `\` may be used to trigger an escape sequence. However, it is currently
broken and inconsistent across implementations. Until we agree on a specification
-for this, you SHOULD NOT use any escape sequences and non-ASCII characters.
+for this, you SHOULD NOT use any escape sequences or non-ASCII characters.
### Relative Path Components
Relative path components MUST be resolved before trying to work on the path:
-- `.` points to the current node, those path components MUST be removed.
-- `..` points to the parent node, they MUST be removed left to right. When removing
+- `.` points to the current node and MUST be removed.
+- `..` points to the parent node and MUST be removed left to right. When removing
a `..`, the path component on the left MUST also be removed. If there is no path
- component on the left, you MUST error since it is an attempt of out-of-bounds
+ component on the left, you MUST error to avoid out-of-bounds
path resolution.
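These rules can be sketched as follows (hypothetical helper operating on already-split path components):

```python
def resolve_relative(components):
    out = []
    for c in components:
        if c == ".":
            continue                 # self reference: drop it
        elif c == "..":
            if not out:
                raise ValueError("out-of-bounds path resolution")
            out.pop()                # also removes the component on the left
        else:
            out.append(c)
    return out
```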
### Restricted Names
The following names SHOULD NOT be used:
-- The `.` string: represents the self node in POSIX pathing.
-- The `..` string: represents the parent node in POSIX pathing.
+- The `.` string, as it represents the self node in POSIX pathing.
+- The `..` string, as it represents the parent node in POSIX pathing.
- The empty string.
-- Any string containing a `NUL` (`0x00`) byte: this is often used to signify string
- terminations in some systems (such as most C compatible systems), and many unix
+- Any string containing a `NUL` (`0x00`) byte, as this is often used to signify string
+ terminations in some systems, such as C-compatible systems. Many unix
file systems do not accept this character in path components.
## Design Decision Rationale
@@ -444,25 +438,25 @@ The following names SHOULD NOT be used:
### `mtime` and `mode` Metadata Support in UnixFSv1.5
Metadata support in UnixFSv1.5 has been expanded to increase the number of possible
-use cases. These include rsync and filesystem based package managers.
+use cases. These include `rsync` and filesystem-based package managers.
Several metadata systems were evaluated, as discussed in the following sections.
#### Separate Metadata Node
In this scheme, the existing `Metadata` message is expanded to include additional
-metadata types (`mtime`, `mode`, etc). It contains links to the actual file data
+metadata types (`mtime`, `mode`, etc). It contains links to the actual file data,
but never the file data itself.
This was ultimately rejected for a number of reasons:
-1. You would always need to retrieve an additional node to access file data which
- limits the kind of optimizations that are possible. For example many files are
+1. You would always need to retrieve an additional node to access file data, which
+ limits the kind of optimizations that are possible. For example, many files are
under the 256 KiB block size limit, so we tend to inline them into the describing
UnixFS `File` node. This would not be possible with an intermediate `Metadata` node.
-2. The `File` node already contains some metadata (e.g. the file size) so metadata
- would be stored in multiple places which complicates forwards compatibility with
- UnixFSv2 as to map between metadata formats potentially requires multiple fetch
+2. The `File` node already contains some metadata (e.g. the file size), so metadata
+ would be stored in multiple places. This complicates forwards compatibility with
+ UnixFSv2, as mapping between metadata formats potentially requires multiple fetch
operations.
#### Metadata in the Directory
@@ -471,21 +465,21 @@ Repeated `Metadata` messages are added to UnixFS `Directory` and `HAMTShard` nod
the index of which indicates which entry they are to be applied to. Where entries are
`HAMTShard`s, an empty message is added.
-One advantage of this method is that if we expand stored metadata to include entry
-types and sizes we can perform directory listings without needing to fetch further
-entry nodes (excepting `HAMTShard` nodes), though without removing the storage of
-these datums elsewhere in the spec we run the risk of having non-canonical data
+One advantage of this method is that, if we expand stored metadata to include entry
+types and sizes, we can perform directory listings without needing to fetch further
+entry nodes (excepting `HAMTShard` nodes). However, without removing the storage of
+these datums elsewhere in the spec, we run the risk of having non-canonical data
locations and perhaps conflicting data as we traverse through trees containing
both UnixFS v1 and v1.5 nodes.
This was rejected for the following reasons:
-1. When creating a UnixFS node there's no way to record metadata without wrapping
+1. When creating a UnixFS node, there's no way to record metadata without wrapping
it in a directory.
2. If you access any UnixFS node directly by its [CID], there is no way of recreating
the metadata which limits flexibility.
3. In order to list the contents of a directory including entry types and sizes,
- you have to fetch the root node of each entry anyway so the performance benefit
+ you have to fetch the root node of each entry, so the performance benefit
of including some metadata in the containing directory is negligible in this
use case.
@@ -493,27 +487,27 @@ This was rejected for the following reasons:
This adds new fields to the UnixFS `Data` message to represent the various metadata fields.
-It has the advantage of being simple to implement, metadata is maintained whether
+It has the advantage of being simple to implement. Metadata is maintained whether
the file is accessed directly via its [CID] or via an IPFS path that includes a
-containing directory, and by keeping the metadata small enough we can inline root
-UnixFS nodes into their CIDs so we can end up fetching the same number of nodes if
+containing directory. In addition, metadata is kept small enough that we can inline root
+UnixFS nodes into their CIDs so that we can end up fetching the same number of nodes if
we decide to keep file data in a leaf node for deduplication reasons.
Downsides to this approach are:
1. Two users adding the same file to IPFS at different times will have different
[CID]s due to the `mtime`s being different. If the content is stored in another
- node, its [CID] will be constant between the two users but you can't navigate
- to it unless you have the parent node which will be less available due to the
+ node, its [CID] will be constant between the two users, but you can't navigate
+ to it unless you have the parent node, which will be less available due to the
proliferation of [CID]s.
1. Metadata is also impossible to remove without changing the [CID], so
metadata becomes part of the content.
2. Performance may be impacted as well as if we don't inline UnixFS root nodes
- into [CID]s, additional fetches will be required to load a given UnixFS entry.
+ into [CID]s, so additional fetches will be required to load a given UnixFS entry.
#### Side Trees
-With this approach we would maintain a separate data structure outside of the
+With this approach, we would maintain a separate data structure outside of the
UnixFS tree to hold metadata.
This was rejected due to concerns about added complexity, recovery after system
@@ -525,7 +519,7 @@ when resolving [CID]s from peers.
This scheme would see metadata stored in an external database.
The downsides to this are that metadata would not be transferred from one node
-to another when syncing as [Bitswap] is not aware of the database, and in-tree
+to another when syncing, as [Bitswap] is not aware of the database, unlike in-tree
metadata.
### UnixTime Protobuf Datatype Rationale
@@ -534,15 +528,14 @@ metadata.
The integer portion of UnixTime is represented on the wire using a `varint` encoding.
While this is inefficient for negative values, it avoids introducing zig-zag encoding.
-Values before the year 1970 will be exceedingly rare, and it would be handy having
-such cases stand out, while at the same keeping the "usual" positive values easy
-to eyeball. The `varint` representing the time of writing this text is 5 bytes
+Values before the year 1970 are exceedingly rare, and it would be handy to have
+such cases stand out, while ensuring that the "usual" positive values remain
+easily readable. The `varint` representing the time of writing this text is 5 bytes
long. It will remain so until October 26, 3058 (34,359,738,367).
#### FractionalNanoseconds
-Fractional values are effectively a random number in the range 1 ~ 999,999,999.
-Such values will exceed 2^28 nanoseconds (268,435,456) in most cases. Therefore,
+Fractional values are effectively a random number in the range 1 to 999,999,999.
+In most cases, such values will exceed 2^28 (268,435,456) nanoseconds. Therefore,
the fractional part is represented as a 4-byte `fixed32`,
[as per Google's recommendation](https://developers.google.com/protocol-buffers/docs/proto#scalar).
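The varint width claims above can be checked with a minimal base-128 varint encoder (non-normative sketch; positive values only):

```python
def encode_varint(n: int) -> bytes:
    """Encode a non-negative integer as a protobuf base-128 varint."""
    if n < 0:
        raise ValueError("negative values use a 10-byte two's-complement varint")
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | (0x80 if n else 0))
        if n == 0:
            return bytes(out)

# Five bytes carry 35 payload bits, i.e. values up to 2**35 - 1 = 34,359,738,367,
# which corresponds to October 26, 3058 as a UnixTime.
assert len(encode_varint(34_359_738_367)) == 5
assert len(encode_varint(34_359_738_368)) == 6
```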
@@ -568,7 +561,7 @@ This section and included subsections are not authoritative.
In this example, we will build a `Raw` file with the string `test` as its content.
-1. First hash the data:
+1. First, hash the data:
```console
$ echo -n "test" | sha256sum
@@ -586,7 +579,7 @@ f this is the multibase prefix, we need it because we are working with a hex CID
9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08 the digest we computed earlier
```
-3. Profit: assuming we stored this block in some implementation of our choice which makes it accessible to our client, we can try to decode it.
+3. Profit! Assuming we stored this block in some implementation of our choice, which makes it accessible to our client, we can try to decode it.
```console
$ ipfs cat f015512209f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08
@@ -615,4 +608,4 @@ This will tell you which offset inside this node the children at the correspondi
[multicodec]: https://github.com/multiformats/multicodec
[multihash]: https://github.com/multiformats/multihash
[Bitswap]: https://github.com/ipfs/specs/blob/master/BITSWAP.md
-[ipld-dag-pb]: https://ipld.io/specs/codecs/dag-pb/spec/
+[ipld-dag-pb]: https://ipld.io/specs/codecs/dag-pb/spec/
\ No newline at end of file
From 1667bd4e6d71b040d2313588d82aa04f89f4ced5 Mon Sep 17 00:00:00 2001
From: Marcin Rataj
Date: Fri, 6 Sep 2024 18:41:20 +0200
Subject: [PATCH 07/13] unixfs: Tsize suggestions from review
---
src/architecture/unixfs.md | 37 +++++++++++++++++++++++--------------
1 file changed, 23 insertions(+), 14 deletions(-)
diff --git a/src/architecture/unixfs.md b/src/architecture/unixfs.md
index a9d2f4663..9c2d5624c 100644
--- a/src/architecture/unixfs.md
+++ b/src/architecture/unixfs.md
@@ -317,24 +317,33 @@ To resolve the path inside a HAMT:
name you were trying to resolve, you have successfully resolved a path component.
Everything past the hex encoded prefix is the name of that element, which is useful when listing children of this directory.
-### `TSize` / `DagSize`
+### `Tsize` (child DAG size hint)
-This is an optional field in `PBNode.Links[]`. It **does not** represent any
-meaningful information of the underlying structure, and there is no known
-usage of it to this day, although some implementations omit these.
+`Tsize` is an optional field in `PBNode.Links[]` which represents the precomputed size of a specific child DAG. It provides a performance optimization: a hint about the total size of the child DAG can be read without having to fetch any child nodes.
-To compute the `DagSize` of a node, which is stored in the parents, sum the length of the `dag-pb` outside message binary length and the `blocksizes` of all child files.
+To compute the `Tsize` of a child DAG, sum the binary length of the outer `dag-pb` message and the `blocksizes` of all nodes in the child DAG.
-An example of where this could be useful is as a hint to smart download clients.
-For example, if you are downloading a file concurrently from two sources that have
-radically different speeds, it would probably be more efficient to download bigger
-links from the fastest source, and smaller ones from the slowest source.
+:::note
-
-There is no failure mode known for this field, so your implementation should be
-able to decode nodes where this field is wrong (not the value you expect), or
-partially or completely missing. This also allows smarter encoders to give a
-more accurate picture (Don't count duplicate blocks, etc.).
+Examples of where `Tsize` is useful:
+
+- User interfaces, where total size of a DAG needs to be displayed immediately, without having to do the full DAG walk.
+- Smart download clients, downloading a file concurrently from two sources that have radically different speeds. It may be more efficient to parallelize and download bigger
+  links from the fastest source, and smaller ones from the slower sources.
+
+:::
+
+:::warning
+
+An implementation SHOULD NOT assume the `Tsize` values are correct. The value is only a hint, provided as a performance optimization for better UX.
+
+Following the [Robustness Principle](https://specs.ipfs.tech/architecture/principles/#robustness), implementations SHOULD be
+able to decode nodes where the `Tsize` field is wrong (not matching the sizes of sub-DAGs), or
+partially or completely missing.
+
+When total data size is needed for important purposes such as accounting, billing, and cost estimation, the `Tsize` SHOULD NOT be used; instead, a full DAG walk SHOULD be performed.
+
+:::
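A toy sketch of the computation described above (non-normative; the node representation is ours, and real implementations sum actual encoded `dag-pb` message lengths and file `blocksizes`):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    block: bytes                      # encoded block body (dag-pb or raw)
    children: list["Node"] = field(default_factory=list)

def tsize(node: Node) -> int:
    """Cumulative size hint for a child DAG: this block plus everything below it."""
    return len(node.block) + sum(tsize(child) for child in node.children)

# a 20-byte root linking two 4-byte raw leaves
root = Node(block=b"r" * 20, children=[Node(b"aaaa"), Node(b"bbbb")])
assert tsize(root) == 28
```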
### Metadata
From 0559e6585c8ed49915e8df0b6e004b933f7e023b Mon Sep 17 00:00:00 2001
From: Marcin Rataj
Date: Fri, 6 Sep 2024 18:44:25 +0200
Subject: [PATCH 08/13] unxifs: suggestions from code review
Co-authored-by: Andrew Gillis <11790789+gammazero@users.noreply.github.com>
---
src/architecture/unixfs.md | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/src/architecture/unixfs.md b/src/architecture/unixfs.md
index 9c2d5624c..21c41c4d5 100644
--- a/src/architecture/unixfs.md
+++ b/src/architecture/unixfs.md
@@ -67,7 +67,7 @@ be recognized because their CIDs are encoded using the `raw` codec:
- The file content is purely the block body.
- They never have any children nodes, and thus are also known as single block files.
-- Their size (both `dagsize` and `blocksize`) is the length of the block body.
+- Their size is the length of the block body (`Tsize` in parent is equal to `blocksize`).
## `dag-pb` Nodes
@@ -270,7 +270,7 @@ remaining components and on the CID you popped.
A :dfn[Symlink] represents a POSIX [symbolic link](https://pubs.opengroup.org/onlinepubs/9699919799/functions/symlink.html).
A symlink MUST NOT have children.
-The `PBNode.Data.Data` field is a POSIX path that MAY be appended in front of the
+The `PBNode.Data.Data` field is a POSIX path that MAY be inserted in front of the
currently remaining path component stack.
##### Path Resolution
@@ -279,7 +279,7 @@ There is no current consensus on how pathing over symlinks should behave. Some
implementations return symlink objects and fail if a consumer tries to follow them
through.
-Following the POSIX specification over the current UnixFS path context is probably fine.
+Symlink path resolution SHOULD follow the POSIX specification, over the current UnixFS path context, as much as is applicable.
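As a rough, non-normative illustration of splicing a symlink target in front of the remaining component stack (the function name is ours; absolute targets and error handling are left out):

```python
def apply_symlink(target: str, remaining: list[str]) -> list[str]:
    """Insert the symlink's POSIX target path before the components
    that still need to be resolved."""
    # '.' and '..' in the result are handled afterwards by the normal
    # relative-component resolution rules.
    return target.split("/") + remaining

# resolving "link/file" where "link" points at "../docs"
assert apply_symlink("../docs", ["file"]) == ["..", "docs", "file"]
```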
#### `HAMTDirectory`
From 6c79d5bec4f6c3b704aef90d30abf6dc89c411ff Mon Sep 17 00:00:00 2001
From: Marcin Rataj
Date: Fri, 6 Sep 2024 18:44:39 +0200
Subject: [PATCH 09/13] unixfs: editorial changes
---
UNIXFS.md | 2 +-
src/data-formats/index.html | 13 ++++++++++
src/index.html | 13 ++++++++++
.../unixfs.md => unixfs-data-format.md} | 25 ++++++++++++-------
4 files changed, 43 insertions(+), 10 deletions(-)
create mode 100644 src/data-formats/index.html
rename src/{architecture/unixfs.md => unixfs-data-format.md} (97%)
diff --git a/UNIXFS.md b/UNIXFS.md
index 00444fe49..243f5f030 100644
--- a/UNIXFS.md
+++ b/UNIXFS.md
@@ -1,3 +1,3 @@
# UnixFS
-Moved to https://specs.ipfs.tech/architecture/unixfs/
+Moved to https://specs.ipfs.tech/unixfs-data-format/
diff --git a/src/data-formats/index.html b/src/data-formats/index.html
new file mode 100644
index 000000000..c34ea9654
--- /dev/null
+++ b/src/data-formats/index.html
@@ -0,0 +1,13 @@
+---
+title: Data formats
+description: |
+ The basic IPFS primitive is an opaque block of bytes identified by a CID. The CID includes a codec that informs an IPFS system about the data format: how to parse the block, and how to link from one block to another.
+---
+
+{% include 'header.html' %}
+
+
+ {% include 'list.html', posts: collections.data-formats %}
+
+
+{% include 'footer.html' %}
diff --git a/src/index.html b/src/index.html
index d2d31ad64..96f8b950e 100644
--- a/src/index.html
+++ b/src/index.html
@@ -108,6 +108,19 @@
{% include 'list.html', posts: collections.webHttpGateways %}
+
+
+
+ The basic IPFS primitive is an opaque block of bytes identified by a CID. The CID includes a codec that informs an IPFS system about the data format: how to parse the block, and how to link from one block to another.
+
+
+ The most popular data formats used by IPFS systems are RAW (an opaque block), CAR (an archive of opaque blocks), UnixFS (a filesystem abstraction built with the DAG-PB and RAW codecs), and DAG-CBOR/DAG-JSON. However, the IPFS ecosystem is not limited to them, and IPFS systems are free to choose their level of interoperability, or even implement support for their own, additional formats. CAR is a complementary, codec-agnostic archive format for transporting multiple opaque blocks.
+
+
+ Specifications:
+
+ {% include 'list.html', posts: collections.data-formats %}
+
diff --git a/src/architecture/unixfs.md b/src/unixfs-data-format.md
similarity index 97%
rename from src/architecture/unixfs.md
rename to src/unixfs-data-format.md
index 21c41c4d5..ad08a87e9 100644
--- a/src/architecture/unixfs.md
+++ b/src/unixfs-data-format.md
@@ -2,9 +2,9 @@
title: UnixFS
description: >
UnixFS is a Protocol Buffers-based format for describing files, directories,
- and symlinks as DAGs in IPFS.
-date: 2022-10-10
-maturity: reliable
+ and symlinks as dag-pb and raw DAGs in IPFS.
+date: 2024-09-06
+maturity: draft
editors:
- name: David Dias
github: daviddias
@@ -31,8 +31,13 @@ editors:
affiliation:
name: Protocol Labs
url: https://protocol.ai/
+ - name: Marcin Rataj
+ github: lidel
+ affiliation:
+ name: Interplanetary Shipyard
+ url: https://ipshipyard.com/
-tags: ['architecture']
+tags: ['data-formats']
order: 1
---
@@ -63,7 +68,7 @@ In UnixFS, a node can be encoded using two different multicodecs, listed below.
## `Raw` Nodes
The simplest nodes use `raw` encoding and are implicitly a :ref[File]. They can
-be recognized because their CIDs are encoded using the `raw` codec:
+be recognized because their CIDs are encoded using the `raw` (`0x55`) codec:
- The file content is purely the block body.
- They never have any children nodes, and thus are also known as single block files.
@@ -71,7 +76,7 @@ be recognized because their CIDs are encoded using the `raw` codec:
## `dag-pb` Nodes
-More complex nodes use the `dag-pb` encoding. These nodes require two steps of
+More complex nodes use the `dag-pb` (`0x70`) encoding. These nodes require two steps of
decoding. The first step is to decode the outer container of the block. This is encoded using the IPLD [`dag-pb`][ipld-dag-pb] specification, which can be
summarized as follows:
@@ -117,8 +122,8 @@ message Data {
repeated uint64 blocksizes = 4;
optional uint64 hashType = 5;
optional uint64 fanout = 6;
- optional uint32 mode = 7;
- optional UnixTime mtime = 8;
+ optional uint32 mode = 7; // opt-in, AKA UnixFS 1.5
+ optional UnixTime mtime = 8; // opt-in, AKA UnixFS 1.5
}
message Metadata {
@@ -176,8 +181,10 @@ size in bytes of the partial file content present in children DAGs. Each index i
`PBNode.Links` MUST have a corresponding chunk size stored at the same index
in `decode(PBNode.Data).blocksizes`.
+:::warning
Implementers need to be extra careful to ensure the values in `Data.blocksizes`
are calculated by following the definition from [`Blocksize`](#decodepbnodedatablocksize).
+:::
This allows for fast indexing into the file. For example, if someone is trying
to read bytes 25 to 35, we can compute an offset list by summing all previous
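The offset computation can be sketched as follows (non-normative; function names are ours):

```python
from itertools import accumulate

def child_offsets(blocksizes: list[int]) -> list[int]:
    """Byte offset at which each child's content starts."""
    return [0, *accumulate(blocksizes)][:-1]

def children_for_range(blocksizes: list[int], start: int, end: int) -> list[int]:
    """Indices of children overlapping the half-open byte range [start, end)."""
    offsets = child_offsets(blocksizes)
    return [i for i, (off, size) in enumerate(zip(offsets, blocksizes))
            if off < end and off + size > start]

# with blocksizes [10, 20, 30], bytes 25..35 live in children 1 and 2 only
assert children_for_range([10, 20, 30], 25, 35) == [1, 2]
```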
@@ -617,4 +624,4 @@ This will tell you which offset inside this node the children at the correspondi
[multicodec]: https://github.com/multiformats/multicodec
[multihash]: https://github.com/multiformats/multihash
[Bitswap]: https://github.com/ipfs/specs/blob/master/BITSWAP.md
-[ipld-dag-pb]: https://ipld.io/specs/codecs/dag-pb/spec/
\ No newline at end of file
+[ipld-dag-pb]: https://ipld.io/specs/codecs/dag-pb/spec/
From 02dec5aedbe5d11ed802446cb835962aecaa36bd Mon Sep 17 00:00:00 2001
From: Marcin Rataj
Date: Fri, 6 Sep 2024 18:58:20 +0200
Subject: [PATCH 10/13] unixfs: editorial, updated implementations links
---
src/unixfs-data-format.md | 21 ++++++++++++---------
1 file changed, 12 insertions(+), 9 deletions(-)
diff --git a/src/unixfs-data-format.md b/src/unixfs-data-format.md
index ad08a87e9..a657c92a3 100644
--- a/src/unixfs-data-format.md
+++ b/src/unixfs-data-format.md
@@ -19,8 +19,8 @@ editors:
- name: Alex Potsides
github: achingbrain
affiliation:
- name: Protocol Labs
- url: https://protocol.ai/
+ name: Interplanetary Shipyard
+ url: https://ipshipyard.com/
- name: Peter Rabbitson
github: ribasushi
affiliation:
@@ -449,7 +449,7 @@ The following names SHOULD NOT be used:
terminations in some systems, such as C-compatible systems. Many unix
file systems do not accept this character in path components.
-## Design Decision Rationale
+## Appendix: Design Decision Rationale
### `mtime` and `mode` Metadata Support in UnixFSv1.5
@@ -555,20 +555,23 @@ In most cases, such values will exceed 2^28 (268,435,456) nanoseconds. Therefore
the fractional part is represented as a 4-byte `fixed32`,
[as per Google's recommendation](https://developers.google.com/protocol-buffers/docs/proto#scalar).
-# Notes for Implementers
+# Appendix: Notes for Implementers
This section and included subsections are not authoritative.
## Implementations
- JavaScript
+  - [`@helia/unixfs`](https://www.npmjs.com/package/@helia/unixfs), an implementation of a UnixFS filesystem compatible with the [Helia SDK](https://github.com/ipfs/helia#readme)
- Data Formats - [unixfs](https://github.com/ipfs/js-ipfs-unixfs)
- - Importer - [unixfs-importer](https://github.com/ipfs/js-ipfs-unixfs-importer)
- - Exporter - [unixfs-exporter](https://github.com/ipfs/js-ipfs-unixfs-exporter)
+ - Importer - [unixfs-importer](https://github.com/ipfs/js-ipfs-unixfs/tree/main/packages/ipfs-unixfs-importer)
+ - Exporter - [unixfs-exporter](https://github.com/ipfs/js-ipfs-unixfs/tree/main/packages/ipfs-unixfs-exporter)
- Go
- - Protocol Buffer Definitions - [`ipfs/go-unixfs/pb`](https://github.com/ipfs/go-unixfs/blob/707110f05dac4309bdcf581450881fb00f5bc578/pb/unixfs.proto)
- - [`ipfs/go-unixfs`](https://github.com/ipfs/go-unixfs/)
- - `go-ipld-prime` implementation [`ipfs/go-unixfsnode`](https://github.com/ipfs/go-unixfsnode)
+  - The [Boxo SDK](https://github.com/ipfs/boxo#readme) includes an implementation of the UnixFS filesystem
+ - Protocol Buffer Definitions - [`ipfs/boxo/../unixfs.proto`](https://github.com/ipfs/boxo/blob/v0.23.0/ipld/unixfs/pb/unixfs.proto)
+ - [`boxo/files`](https://github.com/ipfs/boxo/tree/main/files)
+ - [`ipfs/boxo/ipld/unixfs`](https://github.com/ipfs/boxo/tree/main/ipld/unixfs/)
+ - Alternative `go-ipld-prime` implementation: [`ipfs/go-unixfsnode`](https://github.com/ipfs/go-unixfsnode)
- Rust
- [`iroh-unixfs`](https://github.com/n0-computer/iroh/tree/b7a4dd2b01dbc665435659951e3e06d900966f5f/iroh-unixfs)
- [`unixfs-v1`](https://github.com/ipfs-rust/unixfsv1)
From c70c6145b788f09d7879d57c3d53bfb42c4275a8 Mon Sep 17 00:00:00 2001
From: Marcin Rataj
Date: Fri, 6 Sep 2024 19:49:52 +0200
Subject: [PATCH 11/13] chore: remove trailing spaces
---
src/unixfs-data-format.md | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/src/unixfs-data-format.md b/src/unixfs-data-format.md
index a657c92a3..0066c6fff 100644
--- a/src/unixfs-data-format.md
+++ b/src/unixfs-data-format.md
@@ -268,7 +268,7 @@ a child under `PBNode.Links`. If you find a match, you can then remember the CID
You MUST continue the search. If you find another match, you MUST error since
duplicate names are not allowed.
-Assuming no errors were raised, you can continue to the path resolution on the
+Assuming no errors were raised, you can continue to the path resolution on the
remaining components and on the CID you popped.
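The duplicate-name rule above can be sketched as follows (non-normative; the `(name, cid)` link representation is ours):

```python
def find_child(links: list[tuple[str, str]], name: str) -> str:
    """Return the CID of the uniquely named child link."""
    matches = [cid for link_name, cid in links if link_name == name]
    if not matches:
        raise KeyError(f"no child named {name!r}")
    if len(matches) > 1:
        # the search continued after the first hit and found a duplicate
        raise ValueError(f"duplicate child name {name!r}")
    return matches[0]

assert find_child([("a", "cid-a"), ("b", "cid-b")], "b") == "cid-b"
```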
@@ -345,7 +345,7 @@ links from the fastest source, and smaller ones from the slower sources.
An implementation SHOULD NOT assume the `Tsize` values are correct. The value is only a hint, provided as a performance optimization for better UX.
Following the [Robustness Principle](https://specs.ipfs.tech/architecture/principles/#robustness), implementations SHOULD be
-able to decode nodes where the `Tsize` field is wrong (not matching the sizes of sub-DAGs), or
+able to decode nodes where the `Tsize` field is wrong (not matching the sizes of sub-DAGs), or
partially or completely missing.
When total data size is needed for important purposes such as accounting, billing, and cost estimation, the `Tsize` SHOULD NOT be used; instead, a full DAG walk SHOULD be performed.
@@ -397,7 +397,7 @@ non-IPFS target MUST observe the following:
- If the target supports a distinction between `unspecified` and `0`/`1970-01-01T00:00:00Z`,
the distinction must be preserved within the target. For example, if no `mtime` structure
is available, a web gateway must **not** render a `Last-Modified:` header.
-- If the target requires an `mtime` ( e.g. a FUSE interface ) and no `mtime` is
+- If the target requires an `mtime` ( e.g. a FUSE interface ) and no `mtime` is
supplied OR the supplied `mtime` falls outside of the targets accepted range:
- When no `mtime` is specified or the resulting `UnixTime` is negative:
implementations must assume `0`/`1970-01-01T00:00:00Z` (note that such values
@@ -513,10 +513,10 @@ Downsides to this approach are:
1. Two users adding the same file to IPFS at different times will have different
[CID]s due to the `mtime`s being different. If the content is stored in another
- node, its [CID] will be constant between the two users, but you can't navigate
- to it unless you have the parent node, which will be less available due to the
+ node, its [CID] will be constant between the two users, but you can't navigate
+ to it unless you have the parent node, which will be less available due to the
proliferation of [CID]s.
-1. Metadata is also impossible to remove without changing the [CID], so
+1. Metadata is also impossible to remove without changing the [CID], so
metadata becomes part of the content.
2. Performance may be impacted as well as if we don't inline UnixFS root nodes
into [CID]s, so additional fetches will be required to load a given UnixFS entry.
From 556a6c8e5dbc2f3def2d830cca399cd71b327ab2 Mon Sep 17 00:00:00 2001
From: Marcin Rataj
Date: Fri, 6 Sep 2024 20:17:35 +0200
Subject: [PATCH 12/13] chore: remove duplicated mention of cars
---
src/index.html | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/index.html b/src/index.html
index 78cbcd882..5a630da49 100644
--- a/src/index.html
+++ b/src/index.html
@@ -114,7 +114,7 @@
IPFS basic primitive is an opaque block of bytes identified by a CID. CID includes codec that informs IPFS System about data format: how to parse the block, and how to link from one block to another.
- The most popular data formats used by IPFS Systems are RAW (opaque block), CAR (archive of opaque blocks), UnixFS (filesystem abstraction built with DAG-PB and RAW codecs), DAG-CBOR/DAG-JSON, however IPFS ecosystem is not limited to them, and IPFS systems are free to choose the level of interoperability, or even implement support for own, additional formats. A complimentary CAR is a codec-agnostic archive format for transporting multiple opaque blocks.
+ The most popular data formats used by IPFS systems are RAW (an opaque block), UnixFS (a filesystem abstraction built with the DAG-PB and RAW codecs), and DAG-CBOR/DAG-JSON. However, the IPFS ecosystem is not limited to them, and IPFS systems are free to choose their level of interoperability, or even implement support for their own, additional formats. CAR is a complementary, codec-agnostic archive format for transporting multiple opaque blocks.
Specifications:
From 9131a7cfb56bf2ef4fb916feec53836c75ba1a05 Mon Sep 17 00:00:00 2001
From: Marcin Rataj
Date: Fri, 6 Sep 2024 20:32:56 +0200
Subject: [PATCH 13/13] chore: markdown lint
---
src/unixfs-data-format.md | 47 +++++++++++++++++++++------------------
1 file changed, 25 insertions(+), 22 deletions(-)
diff --git a/src/unixfs-data-format.md b/src/unixfs-data-format.md
index 0066c6fff..3de0ae4cd 100644
--- a/src/unixfs-data-format.md
+++ b/src/unixfs-data-format.md
@@ -271,7 +271,6 @@ duplicate names are not allowed.
Assuming no errors were raised, you can continue to the path resolution on the
remaining components and on the CID you popped.
-
#### `Symlink` type
A :dfn[Symlink] represents a POSIX [symbolic link](https://pubs.opengroup.org/onlinepubs/9699919799/functions/symlink.html).
@@ -490,14 +489,14 @@ both UnixFS v1 and v1.5 nodes.
This was rejected for the following reasons:
-1. When creating a UnixFS node, there's no way to record metadata without wrapping
- it in a directory.
-2. If you access any UnixFS node directly by its [CID], there is no way of recreating
- the metadata which limits flexibility.
-3. In order to list the contents of a directory including entry types and sizes,
- you have to fetch the root node of each entry, so the performance benefit
- of including some metadata in the containing directory is negligible in this
- use case.
+1. When creating a UnixFS node, there's no way to record metadata without
+ wrapping it in a directory.
+2. If you access any UnixFS node directly by its [CID], there is no way of
+   recreating the metadata, which limits flexibility.
+3. In order to list the contents of a directory including entry types and
+ sizes, you have to fetch the root node of each entry, so the performance
+ benefit of including some metadata in the containing directory is negligible
+ in this use case.
#### Metadata in the File
@@ -511,15 +510,16 @@ we decide to keep file data in a leaf node for deduplication reasons.
Downsides to this approach are:
-1. Two users adding the same file to IPFS at different times will have different
- [CID]s due to the `mtime`s being different. If the content is stored in another
- node, its [CID] will be constant between the two users, but you can't navigate
- to it unless you have the parent node, which will be less available due to the
- proliferation of [CID]s.
-1. Metadata is also impossible to remove without changing the [CID], so
- metadata becomes part of the content.
-2. Performance may be impacted as well as if we don't inline UnixFS root nodes
- into [CID]s, so additional fetches will be required to load a given UnixFS entry.
+1. Two users adding the same file to IPFS at different times will have
+ different [CID]s due to the `mtime`s being different. If the content is
+ stored in another node, its [CID] will be constant between the two users,
+ but you can't navigate to it unless you have the parent node, which will be
+ less available due to the proliferation of [CID]s.
+2. Metadata is also impossible to remove without changing the [CID], so
+ metadata becomes part of the content.
+3. Performance may also be impacted: if we don't inline UnixFS root nodes
+   into [CID]s, additional fetches will be required to load a given UnixFS
+   entry.
#### Side Trees
@@ -580,25 +580,28 @@ This section and included subsections are not authoritative.
In this example, we will build a `Raw` file with the string `test` as its content.
-1. First, hash the data:
+First, hash the data:
```console
$ echo -n "test" | sha256sum
9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08 -
```
-2. Add the CID prefix:
+Add the CID prefix:
```
+f01551220
+ 9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08
+
f this is the multibase prefix, we need it because we are working with a hex CID, this is omitted for binary CIDs
01 the CID version, here one
55 the codec, here we MUST use Raw because this is a Raw file
12 the hashing function used, here sha256
20 the digest length 32 bytes
- 9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08 the digest we computed earlier
+ 9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08 is the digest we computed earlier
```
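The prefix assembly above can be reproduced in a few lines (non-normative sketch, using Python's standard `hashlib`):

```python
import hashlib

digest = hashlib.sha256(b"test").hexdigest()
# multibase 'f' (hex) + CIDv1 '01' + raw codec '55' + sha2-256 '12' + length '20'
cid = "f" + "01" + "55" + "12" + "20" + digest
assert cid == "f015512209f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
```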
-3. Profit! Assuming we stored this block in some implementation of our choice, which makes it accessible to our client, we can try to decode it.
+Done. Assuming we stored this block in an implementation of our choice that makes it accessible to our client, we can try to decode it.
```console
$ ipfs cat f015512209f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08