Bug in deleting files #243
During backup I delete several thousand files from old backups. My setup is described in issue 216.
Unfortunately there seems to be a bug: the server became unresponsive and in the log I see a lot of messages like:
shrink_slab: arc_shrinker_func+0x0/0xc0 [zfs] negative objects to delete nr=-(LONG NUMBER, DIFFERENT EACH TIME)
Thanks in advance for any help.
Mario

Comments
I recovered a longer error trace:
May 14 14:35:12 backup1 kernel: [ 3534.365747] shrink_slab: arc_shrinker_func+0x0/0xc0 [zfs] negative objects to delete nr=-9164955913602068391
Thanks for the bug report, I've noticed similar behavior and am currently looking into it.
Re-order initialization in spl_kmem_init to allow kmem tracking to work.

The spl_kmem_init function calls taskq_create prior to initializing the tracking (calling spl_kmem_init_tracking). Since taskq_create uses kmem_alloc, NULL dereferences occur because the global kmem_list hasn't had its next & prev pointers initialized yet.

This commit moves the call to spl_kmem_init_tracking earlier in spl_kmem_init so that the subsequent kmem_alloc calls (by taskq_create) work properly.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes openzfs#243
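To make the ordering issue concrete, here is a minimal C sketch of the pattern that commit describes. It is an illustration only, not the SPL source: the tracking is reduced to a bare doubly linked list, and the allocation tracking done by kmem_alloc (as called from taskq_create) is represented by a hypothetical track_alloc() helper.

```c
/* Simplified stand-in for the global allocation-tracking list. */
struct kmem_node { struct kmem_node *next, *prev; };
static struct kmem_node kmem_list;           /* zero-filled at module load */

/* Stand-in for spl_kmem_init_tracking(): make the list usable. */
static void spl_kmem_init_tracking(void)
{
	kmem_list.next = &kmem_list;
	kmem_list.prev = &kmem_list;
}

/* Stand-in for the tracking done inside kmem_alloc(). */
static void track_alloc(struct kmem_node *n)
{
	n->next = kmem_list.next;            /* NULL dereference here if the */
	n->prev = &kmem_list;                /* list was never initialized   */
	kmem_list.next->prev = n;
	kmem_list.next = n;
}

int spl_kmem_init(void)
{
	spl_kmem_init_tracking();            /* fix: set up tracking first,  */
	/* ...before anything (e.g. taskq_create) calls kmem_alloc(), which
	 * would invoke track_alloc() on the still-NULL list. */
	return 0;
}
```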
…s#243) Signed-off-by: mayank <mayank.patel@mayadata.io>
The performance of `zfs receive` can be bottlenecked on the CPU consumed by the `receive_writer` thread, especially when receiving streams with small compressed block sizes. Much of the CPU is spent creating and destroying dbufs and arc bufs, one for each `WRITE` record in the send stream.

This commit introduces the concept of "lightweight writes", which allows `zfs receive` to write to the DMU by providing an ABD, and instantiating only a new type of `dbuf_dirty_record_t`. The dbuf and arc buf for this "dirty leaf block" are not instantiated. Because there is no dbuf with the dirty data, this mechanism doesn't support reading from "lightweight-dirty" blocks (they would see the on-disk state rather than the dirty data). Since the dedup-receive code has been removed, `zfs receive` is write-only, so this works fine. Because there are no arc bufs for the received data, the received data is no longer cached in the ARC.

Testing a receive of a stream with an average compressed block size of 4KB, this commit improves performance by 50%, while also reducing CPU usage by 50% of a CPU. On a per-block basis, CPU consumed by receive_writer() and dbuf_evict() is now 1/7th (14%) of what it was.

Baseline: 450MB/s, CPU in receive_writer() 40% + dbuf_evict() 35%
New: 670MB/s, CPU in receive_writer() 17% + dbuf_evict() 0%

The code is also restructured in a few ways:

- Added a `dr_dnode` field to the dbuf_dirty_record_t. This simplifies some existing code that no longer needs `DB_DNODE_ENTER()` and related routines. The new field is needed by the lightweight-type dirty record.
- To ensure that the `dr_dnode` field remains valid until the dirty record is freed, we have to ensure that `dnode_move()` doesn't relocate the dnode_t. To do this we keep a hold on the dnode until its zios have completed. This is already done by the user-accounting code (`userquota_updates_task()`); this commit extends that so that it always keeps the dnode hold until zio completion (see `dnode_rele_task()`).
- `dn_dirty_txg` was previously zeroed when the dnode was synced. This was not necessary, since its meaning can be "when was this dnode last dirtied". This change simplifies the new `dnode_rele_task()` code.
- Removed some dead code related to `DRR_WRITE_BYREF` (dedup receive).

Signed-off-by: Matthew Ahrens <mahrens@delphix.com>
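For readers unfamiliar with the mechanism, the sketch below shows the general shape of a "lightweight" dirty record next to the classic one. The type and field names are invented for illustration and are not the actual ZFS structures; the point is only that the lightweight variant carries an ABD plus enough metadata to sync the block, so no dbuf or ARC buf ever has to be created or destroyed for it.

```c
#include <stdint.h>

typedef struct abd abd_t;      /* opaque scatter/gather buffer */
struct dnode;                  /* opaque; a hold is kept until the zio completes */
struct dmu_buf;                /* opaque dbuf handle */

/* Hypothetical, simplified dirty-record shape -- not the real ZFS layout. */
typedef enum {
	DR_FULL,          /* classic path: a dbuf and an ARC buf exist      */
	DR_LIGHTWEIGHT    /* receive path: only an ABD with the write data  */
} dr_kind_t;

typedef struct dirty_record {
	dr_kind_t      dr_kind;
	struct dnode  *dr_dnode;   /* kept valid for the life of the record */
	uint64_t       dr_blkid;   /* which leaf block of the object        */
	union {
		struct dmu_buf *full_dbuf;   /* DR_FULL: readable dirty data  */
		abd_t          *light_abd;   /* DR_LIGHTWEIGHT: write-only    */
	} dr_data;
} dirty_record_t;
```

Because a DR_LIGHTWEIGHT record in this sketch has no dbuf, a read of that block before it syncs would see the on-disk state, which is why (per the commit message) the mechanism is only safe for a write-only consumer like `zfs receive`.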
…ueue_depth to improve sync write performance (openzfs#243)