Questions regarding LittleFS page program behavior #1062

bal-stan opened this issue Jan 9, 2025 · 3 comments
bal-stan commented Jan 9, 2025

Hi,

I am trying to answer a few questions I have regarding LittleFS page program behaviour. I would appreciate it if someone could correct or confirm my understanding.

I am running LittleFS on a bunch of MT29F16G16ADACA flash chips. The datasheet (table 33, page 109) states:

| Parameter | Symbol | Typ | Max | Unit | Notes |
| --- | --- | --- | --- | --- | --- |
| Number of partial-page programs | NOP | – | 4 | cycles | 1 |

My understanding is that each page has a program limit of 4, after which the block it is in needs to be erased before the page can be programmed (reliably) again.

I've looked at the LittleFS docs, but I didn't see any documentation stating whether LittleFS will ever program the same page more than once without first erasing the block it is in.

Arguably this is probably more trouble than it's worth, as LittleFS would need to first read the page (assuming it doesn't have a copy in RAM), check whether it can make the necessary changes by only flipping 1s to 0s, and if so write the changes; otherwise it would have to look for a new page.
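For illustration, that "only flipping 1s to 0s" check would look something like the hypothetical helper below (nothing like this exists in littlefs as far as I can tell, it's just a sketch of the test):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

// Flash programs can only flip bits 1 -> 0, so new data can overwrite
// old data without an erase only if it never needs a 0 -> 1 transition.
static bool can_program_without_erase(const uint8_t *old_data,
        const uint8_t *new_data, size_t size) {
    for (size_t i = 0; i < size; i++) {
        // new_data's set bits must be a subset of old_data's set bits
        if ((old_data[i] & new_data[i]) != new_data[i]) {
            return false;
        }
    }
    return true;
}
```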

According to my testing:

  1. For inlined files:

LittleFS will program consecutive pages of the same block during each file write. When all pages in the block are exhausted, it will find a new unused/free block, erase it, write the file to the first page of the new block and mark the old block as unused/free.

  2. For normal files:

It will program to a new block during each file write. Curiously, instead of programming consecutive blocks (like it programs consecutive pages when dealing with inlined files), it seems to split the block count in half and alternate which half it programs to. For example, for a 32-block device, it will program block 0, then 16, then 1, then 17, etc.

Questions:

  1. Am I correct in assuming that LittleFS will never program the same page more than once without first erasing the block it is in?
  2. In the case of an N-page-wide block: if I store an inlined file on a single page and then proceed to rewrite its contents N-1 times, does that mean the inlined file now effectively occupies N pages, since all of the pages of the block have been written to?
  3. If 2 is true, how can I prevent this? Run garbage collection? Disable inlined files? Rewrite the file N+1 times, so it gets moved to a new block?
  4. Can multiple files (inlined or not) share the same block? What about the same page? If yes, does this change the answers to Q2 and Q3?

Thanks!

geky added the question label Feb 3, 2025

geky commented Feb 3, 2025

Hi @bal-stan, thanks for creating an issue. Hopefully I can answer these.

> 1. Am I correct in assuming that LittleFS will never program the same page more than once without first erasing the block it is in?

Yes. LittleFS keeps track of what is erased to avoid programming unerased blocks/pages.

I believe SPIFFS uses program-masking to mark pages as deleted, which allows it to garbage-collect in $O(1)$ with bounded RAM. AFAIK it's not possible to get below $O(\log n)$ otherwise.

But in littlefs, support for more devices was more important. Program-masking has reliability issues on NAND flash, as you've noted, but it also breaks built-in ECC, and doesn't really work on non-flash storage (SD/eMMC/etc.).

> 2. In the case of an N-page-wide block: if I store an inlined file on a single page and then proceed to rewrite its contents N-1 times, does that mean the inlined file now effectively occupies N pages, since all of the pages of the block have been written to?

That's an interesting philosophical question!

When you rewrite an inline file, the older copies are considered garbage and will be cleaned up when needed. This is called metadata compaction (because we technically have two garbage-collectors), and is automatically triggered when the metadata block is full.
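For example, a rewrite loop like this sketch (assuming a mounted `lfs_t` named `lfs`; error handling omitted for brevity) just appends new copies to the metadata log until compaction kicks in:

```c
#include "lfs.h"

// Rewrite a small (inline) file over and over. Each close commits a
// new copy of the contents to the metadata log; the older copies are
// garbage until the metadata block fills up and gets compacted.
static void churn(lfs_t *lfs) {
    for (uint32_t i = 0; i < 100; i++) {
        lfs_file_t file;
        lfs_file_open(lfs, &file, "counter",
                LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC);
        lfs_file_write(lfs, &file, &i, sizeof(i));
        lfs_file_close(lfs, &file);
    }
}
```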

> 3. If 2 is true, how can I prevent this? Run garbage collection? Disable inlined files? Rewrite the file N+1 times, so it gets moved to a new block?

`lfs_fs_gc` and `cfg.compact_thresh` allow you to compact metadata blocks manually. But this is mainly for moving the expensive erase+compact operation into available idle time. If you're IO-bound, `lfs_fs_gc` can be counter-productive, as you generally want to compact as few times as possible.

You can disable/limit inline files with `cfg.inline_max`. This forces files > `cfg.inline_max` to be written as CTZ skip-list files, which have their own costs.
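For reference, a rough sketch of these knobs (assuming a recent littlefs release where `inline_max` and `compact_thresh` exist; the values are illustrative, not recommendations):

```c
#include "lfs.h"

// Block device callbacks and geometry omitted for brevity
static const struct lfs_config cfg = {
    // .read, .prog, .erase, .sync, .block_size, .block_count, ...

    // lfs_fs_gc compacts metadata blocks that exceed this many bytes
    // (0 picks a default near the block size; -1 disables gc compaction)
    .compact_thresh = 2048,

    // upper bound on inline file size in bytes; -1 disables inline
    // files entirely, pushing all file data into CTZ skip-list blocks
    .inline_max = (lfs_size_t)-1,
};

// then, during idle time:
// int err = lfs_fs_gc(&lfs);
```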

> 4. Can multiple files (inlined or not) share the same block? What about the same page? If yes, does this change the answers to Q2 and Q3?

Yes! Inline files can reside in the same metadata block. It's worth noting the metadata of files always shares blocks; inline files just take this another step by allowing file data to reside in the metadata block as well.

By default, LittleFS shoves all file metadata (and inlined data) into one metadata block until it's full, then tries to compact, and, if compaction fails to get the metadata below 1/2 the block size, splits the metadata block into two. This roughly balances metadata across all metadata blocks in the filesystem.

As far as pages, each atomic commit is aligned to a page, but since compaction is one big atomic commit, we can mostly ignore page sizes for understanding how much storage metadata uses.

LittleFS was originally written for NOR flash, where progs are often at the byte-level, so it doesn't really have the same concept of pages found in some other flash filesystems.


> It will program to a new block during each file write. Curiously, instead of programming consecutive blocks (like it programs consecutive pages when dealing with inlined files), it seems to split the block count in half and alternate which half it programs to. For example, for a 32-block device, it will program block 0, then 16, then 1, then 17, etc.

The funny thing is, I think you stopped just before the pattern diverges. I would expect something like 0 -> 16 -> 1 -> 17 -> 0 -> 18 -> 1 -> 19 -> 0.

What you're probably observing is the data writes to blocks 16->17->18, etc, followed by the metadata commits to blocks {0,1} (the initial metadata block).

A block is randomly selected during mount to help with wear-leveling, which is why the data writes start at 16 and not, say, block 2.


bal-stan commented Feb 7, 2025

Hi @geky, thanks for responding!

To be sure, can you confirm the exact behaviour when performing metadata compaction? Would it:

a) Move any present page data in a block to a separate block, so that it can erase the old one
b) Temporarily save present page data from the block in RAM, erase the block, and reprogram the data again? What if the RAM buffer is too small?
c) Leave the block unchanged until the data fits in the RAM buffer (i.e. some data gets deleted) or there is no present data in the block (all data has been marked as deleted)?

I am basically looking to confirm that there are no circumstances under which LittleFS will program a page in a block more than 4 times (whether due to data being written/erased or metadata compaction) without erasing the whole block first.


geky commented Feb 11, 2025

> I am basically looking to confirm that there are no circumstances under which LittleFS will program a page in a block more than 4 times (whether due to data being written/erased or metadata compaction) without erasing the whole block first.

Just to be clear, LittleFS will never program the same page more than once after an erase.

This is an intentional constraint and a property we test in CI:

littlefs/bd/lfs_emubd.c, lines 374 to 376 in 0494ce7:

```c
for (lfs_off_t i = 0; i < size; i++) {
    // every byte being programmed must still hold the erased value
    LFS_ASSERT(b->data[off+i] == bd->cfg->erase_value);
}
```


> a) Move any present page data in a block to a separate block, so that it can erase the old one

Ah, so metadata is always stored in pairs of blocks. When compacting, littlefs moves all the metadata from one block to the other so it can erase the full block.

This is what lets us compact metadata atomically, without needing a block allocation every compaction.

You may note this does not provide wear-leveling. To wear-level on top of this scheme, we relocate the metadata pair to a new pair of blocks every `block_cycles` compactions.

DESIGN.md has more info on how this works:
DESIGN.md#metadata-pairs
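As a rough mental model (a conceptual sketch only, not littlefs's actual code or on-disk layout, which DESIGN.md describes):

```c
#include <stdint.h>

// Toy model of a metadata pair: two blocks, one holding the active
// append-only log, the sibling kept as a spare, so compaction never
// needs to allocate a new block.
struct mdir {
    uint32_t pair[2]; // the two blocks; pair[0] is the active log
    uint32_t rev;     // revision count; the newer block wins on mount
};

// Hypothetical compaction: erase the sibling, stream only the live
// entries into it, then swap roles. Power loss at any point leaves
// one intact copy of the metadata.
static void compact(struct mdir *m) {
    uint32_t from = m->pair[0];
    uint32_t to   = m->pair[1];
    // bd_erase(to);                // give the sibling a fresh erase
    // copy_live_entries(from, to); // garbage is simply not copied
    m->rev += 1;
    m->pair[0] = to;                // sibling becomes the active log
    m->pair[1] = from;              // old log becomes the spare
}
```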

> b) Temporarily save present page data from the block in RAM, erase the block, and reprogram the data again? What if the RAM buffer is too small?

This is often called read-modify-write (RMW), and in addition to the RAM requirements/flash issues, it does not provide power-loss resilience. If you lose power after erasing your block may contain garbage.
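In sketch form (with hypothetical `bd_read`/`bd_erase`/`bd_prog` standing in for a block device driver):

```c
#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE 4096  // illustrative

// Hypothetical block device driver, for the sake of the sketch
void bd_read(uint32_t block, uint32_t off, uint8_t *buf, uint32_t size);
void bd_erase(uint32_t block);
void bd_prog(uint32_t block, uint32_t off, const uint8_t *buf, uint32_t size);

// Naive read-modify-write of one block, the pattern littlefs avoids
void rmw_block(uint32_t block, uint32_t off,
        const uint8_t *data, uint32_t size) {
    static uint8_t buf[BLOCK_SIZE];     // needs a whole block of RAM
    bd_read(block, 0, buf, BLOCK_SIZE); // 1. read the old contents
    memcpy(&buf[off], data, size);      // 2. patch in RAM
    bd_erase(block);                    // 3. erase the block
    // power loss here loses BOTH the old and the new data: the block
    // holds only erased/garbage bytes until step 4 completes
    bd_prog(block, 0, buf, BLOCK_SIZE); // 4. reprogram everything
}
```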
