
RFC: blkalgn: add block command alignment observability tool #4813

Open
wants to merge 1 commit into base: master
Conversation

@dkruces commented Nov 28, 2023

The tool observes NVMe commands and checks for LBA and block size alignment.

The tool is used as part of the Large block size (LBS) effort [1] in the kernel to validate part of the work.

[1] https://kernelnewbies.org/KernelProjects/large-block-size
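The alignment check the tool performs can be modeled in plain Python. This is a minimal sketch of the idea (the function name and defaults are illustrative, not the tool's API): scan powers of two upward from the logical block size and keep the largest one that both the request length and the starting LBA honor.

```python
def max_alignment(lba, length, lbs=4096, orders=8):
    """Largest power-of-two alignment (in bytes) honored by both the
    request length and the starting LBA, scanning `orders` powers of
    two upward from the logical block size `lbs`."""
    best = 0
    algn = lbs
    for _ in range(orders):
        lba_len = algn // lbs  # the alignment expressed in logical blocks
        if length % algn == 0 and lba % lba_len == 0:
            best = algn
        algn <<= 1
    return best
```

For example, an 8 KiB write starting at LBA 0 is 8 KiB aligned, while the same write starting at an odd LBA only achieves 4 KiB alignment.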

@yonghong-song (Collaborator) left a comment

I am not an expert on NVMe. But maybe people using or working on NVMe or IO performance can comment on why this tool is useful and how it can help troubleshoot production issues.

tools/nvmecmd.py
/* local strcmp function, max length 16 to protect instruction loops */
#define CMPMAX 16

static int local_strcmp(const char *cs, const char *ct)
Collaborator

Maybe use bpf_strncmp() bpf helper?

Author

I think I tried, but I ran into build issues. Could you indicate the correct way of using the BPF helper? Maybe I included the wrong header?

Author

This is the error I get:

bpf: Failed to load program: Permission denied
0: R1=ctx(off=0,imm=0) R10=fp0
; struct nvme_ns *ns = (struct nvme_ns *)ctx->di; struct request *req = (struct request *)ctx->si;
0: (79) r6 = *(u64 *)(r1 +112)        ; R1=ctx(off=0,imm=0) R6_w=scalar()
; struct nvme_ns *ns = (struct nvme_ns *)ctx->di; struct request *req = (struct request *)ctx->si;
1: (79) r8 = *(u64 *)(r1 +104)        ; R1=ctx(off=0,imm=0) R8_w=scalar()
2: (b7) r7 = 0                        ; R7_w=0
; struct data_t data = {};
3: (63) *(u32 *)(r10 -8) = r7         ; R7_w=0 R10=fp0 fp-8=????0000
4: (63) *(u32 *)(r10 -12) = r7        ; R7_w=0 R10=fp0 fp-16=0000????
5: (63) *(u32 *)(r10 -16) = r7        ; R7_w=0 R10=fp0 fp-16=00000000
6: (63) *(u32 *)(r10 -20) = r7        ; R7_w=0 R10=fp0 fp-24=0000????
7: (63) *(u32 *)(r10 -24) = r7        ; R7_w=0 R10=fp0 fp-24=00000000
8: (63) *(u32 *)(r10 -28) = r7        ; R7_w=0 R10=fp0 fp-32=0000????
9: (63) *(u32 *)(r10 -32) = r7        ; R7_w=0 R10=fp0 fp-32=00000000
10: (63) *(u32 *)(r10 -36) = r7       ; R7_w=0 R10=fp0 fp-40=0000????
11: (63) *(u32 *)(r10 -40) = r7       ; R7_w=0 R10=fp0 fp-40=00000000
12: (63) *(u32 *)(r10 -44) = r7       ; R7_w=0 R10=fp0 fp-48=0000????
13: (63) *(u32 *)(r10 -48) = r7       ; R7_w=0 R10=fp0 fp-48=00000000
14: (63) *(u32 *)(r10 -52) = r7       ; R7_w=0 R10=fp0 fp-56=0000????
15: (63) *(u32 *)(r10 -56) = r7       ; R7_w=0 R10=fp0 fp-56=00000000
16: (63) *(u32 *)(r10 -60) = r7       ; R7_w=0 R10=fp0 fp-64=0000????
17: (63) *(u32 *)(r10 -64) = r7       ; R7_w=0 R10=fp0 fp-64=00000000
18: (63) *(u32 *)(r10 -68) = r7       ; R7_w=0 R10=fp0 fp-72=0000????
; if (bpf_strncmp(({ typeof(struct gendisk *) _val; __builtin_memset(&_val, 0, sizeof(_val)); bpf_probe_read(&_val, sizeof(_val), (void *)&({ typeof(struct request_queue *) _val; __builtin_memset(&_val, 0, sizeof(_val)); bpf_probe_read(&_val, sizeof(_val), (void *)&req->q); _val; })->disk); _val; })->disk_name, 7, "nvme0n2"))
19: (7b) *(u64 *)(r10 -80) = r7       ; R7_w=0 R10=fp0 fp-80_w=00000000
; if (bpf_strncmp(({ typeof(struct gendisk *) _val; __builtin_memset(&_val, 0, sizeof(_val)); bpf_probe_read(&_val, sizeof(_val), (void *)&({ typeof(struct request_queue *) _val; __builtin_memset(&_val, 0, sizeof(_val)); bpf_probe_read(&_val, sizeof(_val), (void *)&req->q); _val; })->disk); _val; })->disk_name, 7, "nvme0n2"))
20: (7b) *(u64 *)(r10 -88) = r7       ; R7_w=0 R10=fp0 fp-88_w=00000000
21: (bf) r1 = r10                     ; R1_w=fp0 R10=fp0
;
22: (07) r1 += -88                    ; R1_w=fp-88
; if (bpf_strncmp(({ typeof(struct gendisk *) _val; __builtin_memset(&_val, 0, sizeof(_val)); bpf_probe_read(&_val, sizeof(_val), (void *)&({ typeof(struct request_queue *) _val; __builtin_memset(&_val, 0, sizeof(_val)); bpf_probe_read(&_val, sizeof(_val), (void *)&req->q); _val; })->disk); _val; })->disk_name, 7, "nvme0n2"))
23: (b7) r2 = 8                       ; R2_w=8
24: (bf) r3 = r8                      ; R3_w=scalar(id=1) R8_w=scalar(id=1)
25: (85) call bpf_probe_read#4        ; R0_w=scalar() fp-88_w=mmmmmmmm
; if (bpf_strncmp(({ typeof(struct gendisk *) _val; __builtin_memset(&_val, 0, sizeof(_val)); bpf_probe_read(&_val, sizeof(_val), (void *)&({ typeof(struct request_queue *) _val; __builtin_memset(&_val, 0, sizeof(_val)); bpf_probe_read(&_val, sizeof(_val), (void *)&req->q); _val; })->disk); _val; })->disk_name, 7, "nvme0n2"))
26: (79) r3 = *(u64 *)(r10 -88)       ; R3_w=scalar() R10=fp0
; if (bpf_strncmp(({ typeof(struct gendisk *) _val; __builtin_memset(&_val, 0, sizeof(_val)); bpf_probe_read(&_val, sizeof(_val), (void *)&({ typeof(struct request_queue *) _val; __builtin_memset(&_val, 0, sizeof(_val)); bpf_probe_read(&_val, sizeof(_val), (void *)&req->q); _val; })->disk); _val; })->disk_name, 7, "nvme0n2"))
27: (07) r3 += 128                    ; R3_w=scalar()
28: (bf) r1 = r10                     ; R1_w=fp0 R10=fp0
;
29: (07) r1 += -80                    ; R1_w=fp-80
; if (bpf_strncmp(({ typeof(struct gendisk *) _val; __builtin_memset(&_val, 0, sizeof(_val)); bpf_probe_read(&_val, sizeof(_val), (void *)&({ typeof(struct request_queue *) _val; __builtin_memset(&_val, 0, sizeof(_val)); bpf_probe_read(&_val, sizeof(_val), (void *)&req->q); _val; })->disk); _val; })->disk_name, 7, "nvme0n2"))
30: (b7) r2 = 8                       ; R2_w=8
31: (85) call bpf_probe_read#4        ; R0=scalar() fp-80=mmmmmmmm
; if (bpf_strncmp(({ typeof(struct gendisk *) _val; __builtin_memset(&_val, 0, sizeof(_val)); bpf_probe_read(&_val, sizeof(_val), (void *)&({ typeof(struct request_queue *) _val; __builtin_memset(&_val, 0, sizeof(_val)); bpf_probe_read(&_val, sizeof(_val), (void *)&req->q); _val; })->disk); _val; })->disk_name, 7, "nvme0n2"))
32: (79) r1 = *(u64 *)(r10 -80)       ; R1_w=scalar() R10=fp0
; if (bpf_strncmp(({ typeof(struct gendisk *) _val; __builtin_memset(&_val, 0, sizeof(_val)); bpf_probe_read(&_val, sizeof(_val), (void *)&({ typeof(struct request_queue *) _val; __builtin_memset(&_val, 0, sizeof(_val)); bpf_probe_read(&_val, sizeof(_val), (void *)&req->q); _val; })->disk); _val; })->disk_name, 7, "nvme0n2"))
33: (07) r1 += 12                     ; R1_w=scalar()
; if (bpf_strncmp(({ typeof(struct gendisk *) _val; __builtin_memset(&_val, 0, sizeof(_val)); bpf_probe_read(&_val, sizeof(_val), (void *)&({ typeof(struct request_queue *) _val; __builtin_memset(&_val, 0, sizeof(_val)); bpf_probe_read(&_val, sizeof(_val), (void *)&req->q); _val; })->disk); _val; })->disk_name, 7, "nvme0n2"))
34: (b7) r2 = 7                       ; R2_w=7
35: (18) r3 = 0x0                     ; R3_w=0
37: (85) call bpf_strncmp#182
R1 type=scalar expected=fp, pkt, pkt_meta, map_key, map_value, mem, alloc_mem, buf
processed 37 insns (limit 1000000) max_states_per_insn 0 total_states 1 peak_states 1 mark_read 1

Traceback (most recent call last):
  File "/root/./nvmecmd.py", line 190, in <module>
    bpf = BPF(text=bpf_text)
          ^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/bcc/__init__.py", line 487, in __init__
    self._trace_autoload()
  File "/usr/lib/python3/dist-packages/bcc/__init__.py", line 1455, in _trace_autoload
    fn = self.load_func(func_name, BPF.KPROBE)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/bcc/__init__.py", line 526, in load_func
    raise Exception("Failed to load BPF program %s: %s" %
Exception: Failed to load BPF program b'kprobe__nvme_setup_cmd': Permission denied

For the code:

void kprobe__nvme_setup_cmd(struct pt_regs *ctx, struct nvme_ns *ns,
                            struct request *req)
{
        struct data_t data = {};
        u32 max_algn_size = 4096, algn_size = 4096;
        u32 lba_len = algn_size / 4096;
        bool is_algn = false;
        u8 i;


        if (bpf_strncmp(req->q->disk->disk_name, 7, "nvme0n2"))
            return;


        if ((req->cmd_flags & 0xff) != 0)
            return;


        data.pid = bpf_get_current_pid_tgid() >> 32;
        bpf_get_current_comm(&data.comm, sizeof(data.comm));
        bpf_probe_read_kernel(&data.disk, sizeof(data.disk),
                              req->q->disk->disk_name);
        data.op = req->cmd_flags & 0xff;
        data.len = req->__data_len;
        data.lba = req->__sector >> (ns->lba_shift - SECTOR_SHIFT);

        for (i=0; i<8; i++) {
            is_algn = !(data.len % algn_size) && !(data.lba % lba_len);
            if (is_algn) {
                max_algn_size = algn_size;
            }
            algn_size = algn_size << 1;
            lba_len = algn_size / 4096;
        }
        data.algn = max_algn_size;

        events.ringbuf_output(&data, sizeof(data), 0);
        block_len.increment(bpf_log2l(req->__data_len));
        algn.increment(bpf_log2l(max_algn_size));
}

Any idea what I am doing wrong?

Collaborator

Currently, your program type is kprobe (kprobe__nvme_setup_cmd). Could you change it to use the kfunc style? I'm not 100% sure, but maybe that will make bpf_strncmp() work. Also, kfunc is preferred as it is better for pointer tracing, since you do not need to call bpf_probe_read. kfunc has been supported for more than 4 years, so I think we can make kfunc support the default now. If people really feel kprobe (which has been available for more than 6 years) is needed, we can add it later. WDYT?

Author

@yonghong-song
I did update the tool to use kfunc instead of kprobe. However, I still get the same issue. Here are the changes after updating the tool to monitor the block layer instead of the NVMe layer (#4813 (comment)).

--- a/tools/blkalgn.py
+++ b/tools/blkalgn.py
@@ -99,8 +99,8 @@ static int local_strcmp(const char *cs, const char *ct)
 bpf_text_disk_filter = ""
 if args.disk:
     bpf_text_disk_filter = """
-        if (local_strcmp(req->q->disk->disk_name, "{disk}"))
-            return;
+        if (bpf_strncmp(req->q->disk->disk_name, 10, "{disk}"))
+            return 0;
     """.format(
         disk=args.disk
     )
@@ -118,13 +118,13 @@ if args.ops:
     operation = blk_ops[args.ops]
     bpf_text_ops_filter = """
         if ((req->cmd_flags & 0xff) != {ops})
-            return;
+            return 0;
     """.format(
         ops=operation
     )

 bpf_text += """
-void start_request(struct pt_regs *ctx, struct request *req)
+KFUNC_PROBE(blk_mq_start_request, struct request *req)
 {{
         struct data_t data = {{}};
         u32 max_algn_size = 4096, algn_size = 4096;
@@ -158,6 +158,8 @@ void start_request(struct pt_regs *ctx, struct request *req)
         events.ringbuf_output(&data, sizeof(data), 0);
         block_len.increment(bpf_log2l(req->__data_len));
         algn.increment(bpf_log2l(max_algn_size));
+
+        return 0;
 }}
 """.format(
     disk_filter=bpf_text_disk_filter, ops_filter=bpf_text_ops_filter
@@ -176,8 +178,8 @@ if args.trace:
         % ("DISK", "OPS", "LEN", "LBA", "PID", "COMM", "ALGN")
     )

-if BPF.get_kprobe_functions(b"blk_mq_start_request"):
-    bpf.attach_kprobe(event="blk_mq_start_request", fn_name="start_request")
+if not BPF.support_kfunc():
+    exit()
bpf: Failed to load program: Permission denied
reg type unsupported for arg#0 function kfunc__blk_mq_start_request#16
0: R1=ctx(off=0,imm=0) R10=fp0
; KFUNC_PROBE(blk_mq_start_request, struct request *req)
0: (79) r7 = *(u64 *)(r1 +0)
func 'blk_mq_start_request' arg0 has btf_id 14442 type STRUCT 'request'
1: R1=ctx(off=0,imm=0) R7_w=ptr_request(off=0,imm=0)
1: (b7) r1 = 0                        ; R1_w=0
; struct data_t data = {};
2: (63) *(u32 *)(r10 -8) = r1         ; R1_w=0 R10=fp0 fp-8=????0000
3: (63) *(u32 *)(r10 -12) = r1        ; R1_w=0 R10=fp0 fp-16=0000????
4: (63) *(u32 *)(r10 -16) = r1        ; R1_w=0 R10=fp0 fp-16=00000000
5: (63) *(u32 *)(r10 -20) = r1        ; R1_w=0 R10=fp0 fp-24=0000????
6: (63) *(u32 *)(r10 -24) = r1        ; R1_w=0 R10=fp0 fp-24=00000000
7: (63) *(u32 *)(r10 -28) = r1        ; R1_w=0 R10=fp0 fp-32=0000????
8: (63) *(u32 *)(r10 -32) = r1        ; R1_w=0 R10=fp0 fp-32=00000000
9: (63) *(u32 *)(r10 -36) = r1        ; R1_w=0 R10=fp0 fp-40=0000????
10: (63) *(u32 *)(r10 -40) = r1       ; R1_w=0 R10=fp0 fp-40=00000000
11: (63) *(u32 *)(r10 -44) = r1       ; R1_w=0 R10=fp0 fp-48=0000????
12: (63) *(u32 *)(r10 -48) = r1       ; R1_w=0 R10=fp0 fp-48=00000000
13: (63) *(u32 *)(r10 -52) = r1       ; R1_w=0 R10=fp0 fp-56=0000????
14: (63) *(u32 *)(r10 -56) = r1       ; R1_w=0 R10=fp0 fp-56=00000000
15: (63) *(u32 *)(r10 -60) = r1       ; R1_w=0 R10=fp0 fp-64=0000????
16: (63) *(u32 *)(r10 -64) = r1       ; R1_w=0 R10=fp0 fp-64=00000000
17: (63) *(u32 *)(r10 -68) = r1       ; R1_w=0 R10=fp0 fp-72=0000????
; if (bpf_strncmp(req->q->disk->disk_name, 10, "nvme0n2"))
18: (79) r1 = *(u64 *)(r7 +0)         ; R1_w=ptr_request_queue(off=0,imm=0) R7_w=ptr_request(off=0,imm=0)
; if (bpf_strncmp(req->q->disk->disk_name, 10, "nvme0n2"))
19: (79) r1 = *(u64 *)(r1 +128)       ; R1_w=ptr_gendisk(off=0,imm=0)
; if (bpf_strncmp(req->q->disk->disk_name, 10, "nvme0n2"))
20: (07) r1 += 12                     ; R1_w=ptr_gendisk(off=12,imm=0)
; if (bpf_strncmp(req->q->disk->disk_name, 10, "nvme0n2"))
21: (b7) r2 = 10                      ; R2_w=10
22: (18) r3 = 0x0                     ; R3_w=0
24: (85) call bpf_strncmp#182
R1 type=ptr_ expected=fp, pkt, pkt_meta, map_key, map_value, mem, alloc_mem, buf
processed 24 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0

Traceback (most recent call last):
  File "/root/bcc/./tools/blkalgn.py", line 173, in <module>
    bpf = BPF(text=bpf_text)
          ^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/bcc/__init__.py", line 487, in __init__
    self._trace_autoload()
  File "/usr/lib/python3/dist-packages/bcc/__init__.py", line 1473, in _trace_autoload
    self.attach_kfunc(fn_name=func_name)
  File "/usr/lib/python3/dist-packages/bcc/__init__.py", line 1135, in attach_kfunc
    fn = self.load_func(fn_name, BPF.TRACING)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/bcc/__init__.py", line 526, in load_func
    raise Exception("Failed to load BPF program %s: %s" %
Exception: Failed to load BPF program b'kfunc__blk_mq_start_request': Permission denied

@yonghong-song (Collaborator)

There is a conflict with the latest master branch. Please rebase and resubmit. Thanks!

@dkruces (Author)

dkruces commented Dec 4, 2023

I am not an expert on NVMe. But maybe people using or working on NVMe or IO performance can comment on why this tool is useful and how it can help troubleshoot production issues.

Thanks @yonghong-song for the review. Just to add a bit more information on why this is important for LBS (I may add it to the commit message after fixing the conflict): instrumenting the NVMe layer with nvmecmd allows us to validate the min order concept [1] and make sure the NVMe commands are actually sent with the correct order and alignment. This would ultimately have to match the Namespace Preferred Write Granularity (NPWG), so ideally this tool can be extended with support to read that from the device and warn about the cases (commands) that don't honor the value.

[1] min order:

This extension (reading from the device) is something we haven't explored yet, but we think adding xNVMe as a dependency would help move towards that feature. However, the tool as posted is simpler and just reports the alignment values. Would this be something acceptable to upstream to the IO Visor BCC project (in case nvmecmd gets accepted)? We tried to keep it simple as indicated by the contributing guidelines [2], since our interpretation of that document suggests such an extension of the tool could conflict with them. We also have a version of nvmecmd that can record NVMe commands into a database for later inspection (tcpdump-like) [3], which we skipped in the PR for the same reason. Could you or someone else from the IO Visor Project clarify this topic?

[2] https://github.com/iovisor/bcc/blob/master/CONTRIBUTING-SCRIPTS.md#tools
[3] https://github.com/dagmcr/bcc/tree/nvmecmd-dump

@dkruces dkruces force-pushed the nvmecmd branch 2 times, most recently from 2eebe3d to 6186c77 Compare December 4, 2023 15:09
@dkruces (Author)

dkruces commented Dec 4, 2023

Hi again,

I've updated the PR. Changes are:

  • Updated the commit message to indicate the min order effort.
  • Changed the --ops argument type from int to str, to allow the user to indicate the read/write operation to instrument.
  • Replaced 0/1 with read/write when tracing.
  • Renamed the REQ column to OPS.
  • Removed an unused logging library.
  • Updated examples and documentation.
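The read/write mapping behind the --ops change can be sketched as a tiny lookup table. This is illustrative (blk_ops and op_name are hypothetical names), though REQ_OP_READ and REQ_OP_WRITE really are 0 and 1 in the kernel's req_op enum, which is what `(req->cmd_flags & 0xff)` yields in the filter:

```python
# REQ_OP_READ = 0, REQ_OP_WRITE = 1 in the kernel's req_op enum.
blk_ops = {"read": 0, "write": 1}

def op_name(cmd_flags):
    """Map the low byte of cmd_flags back to a human-readable op."""
    op = cmd_flags & 0xFF
    for name, val in blk_ops.items():
        if val == op:
            return name
    return str(op)
```

With this, a traced request whose cmd_flags low byte is 1 would be printed as "write" rather than "1".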

Note: the RFC tag is kept in the PR until I get more feedback (#4813 (comment), #4813 (review)).

TODO:

@yonghong-song (Collaborator)

@dagmcr Thanks for the detailed explanation of the current state and future vision of the nvmecmd tool. I ran it on my company's production machine with NVMe drives and it works as expected. I think the extension of the tool to validate the command seems useful for production debugging too, assuming reading from the disk won't disrupt the system.

@yonghong-song (Collaborator)

cc @brendangregg

@dkruces (Author)

dkruces commented Dec 12, 2023

A new PR update:

After presenting the tool at our latest LBS community meeting, the general feedback was to make it more generic, and not NVMe specific, so any block device can benefit from it.

This suggestion leads to the following 'big' changes from the previous PR version:

  • Update the hook point: nvme_setup_cmd -> blk_mq_start_request.
    • In addition, I now use bpf.attach_kprobe instead of the kprobe__ prefix used before with the NVMe function.
  • Rename the tool from nvmecmd to blkalgn. After all, we are recording block commands issued from the block layer to the block device for alignment purposes (LBA and size).

Note: I'm very much open to suggestions with naming of the tool. Let me know if you have better name ideas.

  • Included headers are updated/cleaned.
  • Since lba_shift is given by the block device and we are one layer above, lba_shift is now derived directly from logical_block_size (in the queue limits struct) using the logarithmic BPF helper (bpf_log2).

Note: bpf_log2l adds an additional order that would require a subtraction. That's the reason to pick bpf_log2 instead of bpf_log2l (#1649).
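The extra order can be checked with a small Python model of the two helpers (a sketch, not bcc's exact bit-twiddling implementation; in bcc's exported headers bpf_log2l is defined as bpf_log2 plus one):

```python
def bpf_log2(v):
    """Floor of log2(v) for v > 0, modeling bcc's bpf_log2 helper."""
    r = 0
    while v > 1:
        v >>= 1
        r += 1
    return r

def bpf_log2l(v):
    """bcc's bpf_log2l returns bpf_log2(v) + 1, i.e. one order higher,
    which is the extra order the note above refers to."""
    return bpf_log2(v) + 1
```

So for a 4096-byte logical block size, bpf_log2 gives the wanted shift of 12, while bpf_log2l gives 13 and would need a subtraction.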

Functionality of the tool remains the same. A tcpdump-like version can be found here: https://github.com/dagmcr/bcc/tree/blkalgn-dump. @yonghong-song, you mentioned the extension of the tool might be valuable. Would that version of the tool, with recording capabilities for later parsing, be suitable for iovisor/bcc?

@dkruces dkruces changed the title RFC: nvmecmd: add nvme command observability tool RFC: blkalgn: add block command alignment observability tool Dec 12, 2023
The tool observes block commands and checks for LBA and block size
alignment.

The tool is used as part of the Large block size (LBS) effort [1] in the
kernel to validate min order mapping [2].

[1] https://kernelnewbies.org/KernelProjects/large-block-size

[2] min order:
use of min order: linux-kdevops/linux@563cea7
add min order support: linux-kdevops/linux@27f85d8
upstream RFC: https://lore.kernel.org/all/20230915183848.1018717-2-kernel@pankajraghav.com/

Signed-off-by: Daniel Gomez <da.gomez@samsung.com>
intel-lab-lkp pushed a commit to intel-lab-lkp/linux that referenced this pull request Aug 26, 2024
Pankaj Raghav (Samsung) <kernel@pankajraghav.com> says:

This is the 13th version of the series that enables block size > page size
(Large Block Size) experimental support in XFS. Please consider this for
the inclusion in 6.12.

The context and motivation can be seen in cover letter of the RFC v1 [0].
We also recorded a talk about this effort at LPC [1], if someone would
like more context on this effort.

Thanks to David Howells, the page cache changes have also been tested on
top of AFS[2] with mapping_min_order set.

A lot of emphasis has been put on testing using kdevops, starting with an XFS
baseline [3]. The testing has been split into regression and progression.

Regression testing:
In regression testing, we ran the whole test suite to check for regressions on
existing profiles due to the page cache changes.

I also ran split_huge_page_test selftest on XFS filesystem to check for
huge page splits in min order chunks is done correctly.

No regressions were found with these patches added on top.

Progression testing:
For progression testing, we tested for 8k, 16k, 32k and 64k block sizes.  To
compare it with existing support, an ARM VM with 64k base page system (without
our patches) was used as a reference to check for actual failures due to LBS
support in a 4k base page size system.

No new failures were found with the LBS support.

We've done some preliminary performance tests with fio on XFS with a 4k block size
against pmem and NVMe, with buffered IO and Direct IO, on vanilla vs. these
patches applied, and detected no regressions.

We ran sysbench on postgres and mysql for several hours on LBS XFS
without any issues.

We also wrote an eBPF tool called blkalgn [5] to see if IO sent to the device
is aligned and at least filesystem block size in length.

For those who want this in a git tree we have this up on a kdevops
large-block-minorder-for-next-v13 tag [6].

[0] https://lore.kernel.org/lkml/20230915183848.1018717-1-kernel@pankajraghav.com/
[1] https://www.youtube.com/watch?v=ar72r5Xf7x4
[2] https://lore.kernel.org/linux-mm/3792765.1724196264@warthog.procyon.org.uk/
[3] https://github.com/linux-kdevops/kdevops/blob/master/docs/xfs-bugs.md
489 non-critical issues and 55 critical issues. We've determined and reported
that the 55 critical issues all fall into 5 common XFS asserts or hung
tasks and 2 memory management asserts.
[4] https://github.com/linux-kdevops/fstests/tree/lbs-fixes
[5] iovisor/bcc#4813
[6] https://github.com/linux-kdevops/linux/
[7] https://lore.kernel.org/linux-kernel/Zl20pc-YlIWCSy6Z@casper.infradead.org/#t

* patches from https://lore.kernel.org/r/20240822135018.1931258-1-kernel@pankajraghav.com: (5979 commits)
  xfs: enable block size larger than page size support
  xfs: make the calculation generic in xfs_sb_validate_fsb_count()
  xfs: expose block size in stat
  xfs: use kvmalloc for xattr buffers
  iomap: fix iomap_dio_zero() for fs bs > system page size
  filemap: cap PTE range to be created to allowed zero fill in folio_map_range()
  mm: split a folio in minimum folio order chunks
  readahead: allocate folios with mapping_min_order in readahead
  filemap: allocate mapping_min_order folios in the page cache
  fs: Allow fine-grained control of folio sizes
  Add linux-next specific files for 20240821
  l2tp: use skb_queue_purge in l2tp_ip_destroy_sock
  af_unix: Don't call skb_get() for OOB skb.
  dt-bindings: net: socionext,uniphier-ave4: add top-level constraints
  dt-bindings: net: renesas,etheravb: add top-level constraints
  dt-bindings: net: mediatek,net: add top-level constraints
  dt-bindings: net: mediatek,net: narrow interrupts per variants
  net: Silence false field-spanning write warning in metadata_dst memcpy
  net: hns3: Use ARRAY_SIZE() to improve readability
  selftests: net/forwarding: spawn sh inside vrf to speed up ping loop
  ...

Link: https://lore.kernel.org/r/20240822135018.1931258-1-kernel@pankajraghav.com
Signed-off-by: Christian Brauner <brauner@kernel.org>
srcres258 pushed a commit to srcres258/linux-doc that referenced this pull request Sep 5, 2024
mcgrof pushed a commit to linux-kdevops/linux that referenced this pull request Sep 5, 2024
intel-lab-lkp pushed a commit to intel-lab-lkp/linux that referenced this pull request Sep 7, 2024
Pankaj Raghav (Samsung) <kernel@pankajraghav.com> says:

This is the 13th version of the series that enables block size > page size
(Large Block Size) experimental support in XFS. Please consider this for
inclusion in 6.12.

The context and motivation can be seen in cover letter of the RFC v1 [0].
We also recorded a talk about this effort at LPC [1], if someone would
like more context on this effort.

Thanks to David Howells, the page cache changes have also been tested on
top of AFS[2] with mapping_min_order set.
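The `mapping_min_order` mentioned here is the minimum folio order the page cache will use for a mapping, and for block size > page size it follows directly from the ratio of filesystem block size to base page size. A minimal sketch of that arithmetic, assuming 4k base pages (this illustrates the relationship only, it is not the kernel's implementation):

```python
PAGE_SHIFT = 12          # 4k base pages, as on x86-64


def min_folio_order(fs_block_size: int, page_shift: int = PAGE_SHIFT) -> int:
    """Smallest folio order such that one folio covers one fs block
    (illustrative arithmetic, not the kernel helper)."""
    page_size = 1 << page_shift
    assert fs_block_size % page_size == 0, "block size must be a page multiple"
    # log2 of the pages-per-block ratio
    return (fs_block_size // page_size).bit_length() - 1


# 16k blocks on 4k pages need order-2 (4-page) folios,
# 64k blocks need order-4 (16-page) folios.
print(min_folio_order(16384), min_folio_order(65536))
```

So the 8k/16k/32k/64k block sizes tested below correspond to minimum folio orders 1 through 4 on a 4k page system.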

A lot of emphasis has been put on testing using kdevops, starting with an XFS
baseline [3]. The testing has been split into regression and progression.

Regression testing:
In regression testing, we ran the whole test suite to check for regressions on
existing profiles due to the page cache changes.

I also ran split_huge_page_test selftest on XFS filesystem to check for
huge page splits in min order chunks is done correctly.

No regressions were found with these patches added on top.

Progression testing:
For progression testing, we tested for 8k, 16k, 32k and 64k block sizes.  To
compare it with existing support, an ARM VM with 64k base page system (without
our patches) was used as a reference to check for actual failures due to LBS
support in a 4k base page size system.

No new failures were found with the LBS support.

We've done some preliminary performance tests with fio on XFS with a 4k block
size against pmem and NVMe, with buffered IO and Direct IO, on vanilla vs.
these patches applied, and detected no regressions.

We ran sysbench on postgres and mysql for several hours on LBS XFS
without any issues.

We also wrote an eBPF tool called blkalgn [5] to see if IO sent to the device
is aligned and at least filesystem block size in length.
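The two properties the tool checks — that an I/O is aligned to, and at least as long as, the filesystem block size — can be sketched in Python (the helper name and sample values here are illustrative, not blkalgn's actual code, which hooks the block layer with eBPF):

```python
def check_io(offset_bytes: int, length_bytes: int, fs_block_size: int):
    """Classify one block-layer I/O as the text describes: is its
    offset and length aligned to the fs block size, and is it at
    least one fs block long?  (Illustrative helper, not blkalgn.)"""
    aligned = (offset_bytes % fs_block_size == 0
               and length_bytes % fs_block_size == 0)
    long_enough = length_bytes >= fs_block_size
    return aligned, long_enough


# Example with a 16k fs block size: a well-formed 16k write at a
# 16k-aligned offset, and a stray 4k write at a 4k offset.
print(check_io(0x8000, 0x4000, 16384))   # (True, True)
print(check_io(0x1000, 0x1000, 16384))   # (False, False)
```

An I/O failing either check would indicate the filesystem or page cache is splitting requests below the filesystem block size, which is exactly the regression this tooling was written to catch.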

For those who want this in a git tree we have this up on a kdevops
large-block-minorder-for-next-v13 tag [6].

[0] https://lore.kernel.org/lkml/20230915183848.1018717-1-kernel@pankajraghav.com/
[1] https://www.youtube.com/watch?v=ar72r5Xf7x4
[2] https://lore.kernel.org/linux-mm/3792765.1724196264@warthog.procyon.org.uk/
[3] https://github.com/linux-kdevops/kdevops/blob/master/docs/xfs-bugs.md
489 non-critical issues and 55 critical issues. We've determined and reported
that the 55 critical issues all fall into 5 common XFS asserts or hung
tasks and 2 memory management asserts.
[4] https://github.com/linux-kdevops/fstests/tree/lbs-fixes
[5] iovisor/bcc#4813
[6] https://github.com/linux-kdevops/linux/
[7] https://lore.kernel.org/linux-kernel/Zl20pc-YlIWCSy6Z@casper.infradead.org/#t

* patches from https://lore.kernel.org/r/20240822135018.1931258-1-kernel@pankajraghav.com: (5979 commits)
  xfs: enable block size larger than page size support
  xfs: make the calculation generic in xfs_sb_validate_fsb_count()
  xfs: expose block size in stat
  xfs: use kvmalloc for xattr buffers
  iomap: fix iomap_dio_zero() for fs bs > system page size
  filemap: cap PTE range to be created to allowed zero fill in folio_map_range()
  mm: split a folio in minimum folio order chunks
  readahead: allocate folios with mapping_min_order in readahead
  filemap: allocate mapping_min_order folios in the page cache
  fs: Allow fine-grained control of folio sizes
  Add linux-next specific files for 20240821
  l2tp: use skb_queue_purge in l2tp_ip_destroy_sock
  af_unix: Don't call skb_get() for OOB skb.
  dt-bindings: net: socionext,uniphier-ave4: add top-level constraints
  dt-bindings: net: renesas,etheravb: add top-level constraints
  dt-bindings: net: mediatek,net: add top-level constraints
  dt-bindings: net: mediatek,net: narrow interrupts per variants
  net: Silence false field-spanning write warning in metadata_dst memcpy
  net: hns3: Use ARRAY_SIZE() to improve readability
  selftests: net/forwarding: spawn sh inside vrf to speed up ping loop
  ...

Link: https://lore.kernel.org/r/20240822135018.1931258-1-kernel@pankajraghav.com
Signed-off-by: Christian Brauner <brauner@kernel.org>