-
I am testing and evaluating gocryptfs for use in some filesystem encryption projects, and I have some questions about the raw I/O performance impact I am seeing. My test consists of reading and writing a large 1 GB file of random data (collected from /dev/urandom). The write performance looks heavily impacted.
Is this expected? This is very different from the published performance data (https://nuetzlich.net/gocryptfs/comparison/#performance). The VM where I tested this does have AES-NI, with a 2-core Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40GHz.
Additional environment information:
gocryptfs v2.3.2 without_openssl; go-fuse v2.3.0; 2023-04-29 go1.20.3 linux/amd64
Kernel: 5.4.17-2136.313.6.el8uek.x86_64
OS: Oracle Linux Server 8.7
I was wondering if I am missing something in the configuration that is causing this huge impact. Any advice or pointers would be much appreciated.
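For reference, the kind of benchmark described above can be sketched as follows. This is an illustrative sketch, not the original test setup: the file size is scaled down to 64 MiB to keep the run short, and the "mount" directory here is a plain tmp directory standing in for a real gocryptfs mount point.

```shell
#!/bin/sh
set -e
# Scaled-down sketch: 64 MiB instead of the 1 GB used in the real test.
# MNT would be a gocryptfs mount in a real benchmark; here it is just
# a plain directory so the sketch is self-contained.
SRC=/tmp/random.bin
MNT=/tmp/mnt-sketch
mkdir -p "$MNT"

# Source file of random data, as in the test described above.
dd if=/dev/urandom of="$SRC" bs=1M count=64 2>/dev/null

# Write benchmark: conv=fsync makes dd flush to disk before it
# reports throughput, so the number is not just page-cache speed.
dd if="$SRC" of="$MNT/random.bin" bs=128k conv=fsync

# Read benchmark.
dd if="$MNT/random.bin" of=/dev/null bs=128k
```

Note that bs matters a lot through FUSE; 128k matches gocryptfs's internal block size.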
-
The numbers at https://nuetzlich.net/gocryptfs/comparison/#performance-on-linux are with bs=128k. For small blocksizes, like the 1k you are using, the FUSE overhead comes into play quite a bit. I have benchmarked what I get for blocksizes from 1kiB up to 1MiB. Note that gocryptfs internally works with blocks up to 128kiB in size, so going larger than that should make no difference.
$ for i in 1k 2k 4k 8k 16k 32k 64k 128k 256k 512k 1M ; do echo -n "bs=$i: " ; timeout --foreground -s INT 1s dd if=/dev/zero bs=$i of=zero 2>&1 | tail -1 ; done
bs=1k: 10482688 bytes (10 MB, 10 MiB) copied, 0,472906 s, 22,2 MB/s
bs=2k: 40665088 bytes (41 MB, 39 MiB) copied, 0,996893 s, 40,8 MB/s
bs=4k: 95678464 bytes (96 MB, 91 MiB) copied, 0,989218 s, 96,7 MB/s
bs=8k: 112320512 bytes (112 MB, 107 MiB) copied, 0,976511 s, 115 MB/s
bs=16k: 189825024 bytes (190 MB, 181 MiB) copied, 0,966398 s, 196 MB/s
bs=32k: 243957760 bytes (244 MB, 233 MiB) copied, 0,950065 s, 257 MB/s
bs=64k: 276561920 bytes (277 MB, 264 MiB) copied, 0,932196 s, 297 MB/s
bs=128k: 418250752 bytes (418 MB, 399 MiB) copied, 0,930442 s, 450 MB/s
bs=256k: 428081152 bytes (428 MB, 408 MiB) copied, 0,894369 s, 479 MB/s
bs=512k: 386924544 bytes (387 MB, 369 MiB) copied, 0,895406 s, 432 MB/s
bs=1M: 420478976 bytes (420 MB, 401 MiB) copied, 0,905745 s, 464 MB/s
-
@DrDaveD I'm splitting the thread off here
I looked at this, and squashfuse_ll enables the kernel page cache with an infinite timeout. This makes sense because the squashfs image is immutable. gocryptfs does not enable the kernel page cache. So after the first read, the data is in the kernel page cache and the request does not even hit FUSE anymore. I guess the benchmark reads the same files (the Python scripts?) over and over.
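The page-cache effect described above is easy to observe by reading the same file twice. This is a self-contained sketch with an illustrative file in /tmp; in the scenario above, the file would live on the FUSE mount, and whether the second read hits FUSE depends on whether the filesystem allows kernel caching.

```shell
#!/bin/sh
set -e
# Illustrative file; in the benchmark scenario this would be a file
# on the FUSE mount being compared.
FILE=/tmp/pagecache-demo.bin
dd if=/dev/urandom of="$FILE" bs=1M count=64 2>/dev/null

# First read populates the kernel page cache; the second read is
# served from it (when the filesystem permits caching) and is
# typically much faster.
dd if="$FILE" of=/dev/null bs=128k
dd if="$FILE" of=/dev/null bs=128k

# To drop the page cache and force a truly cold read (requires root):
#   sync; echo 3 > /proc/sys/vm/drop_caches
```

This is why a benchmark that re-reads the same files over and over mostly measures the caching policy, not the filesystem's decryption speed.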