Support Debian/Ubuntu Style Packages #11

Closed
behlendorf opened this issue May 17, 2010 · 3 comments
Labels
Type: Feature (Feature request or new feature)

Comments

@behlendorf
Contributor

The build system should be updated to support building Debian/Ubuntu style packages.

http://www.debian.org/doc/devel-manuals#maint-guide
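
For reference, the conventional debhelper flow that the maintainer's guide walks through looks roughly like this (a sketch only; the package name, version, and use of dh_make are illustrative, not the requested build-system integration):

$ cd zfs-0.5.0                        # unpacked source tree; version is illustrative
$ dh_make --createorig -p zfs_0.5.0   # scaffold a debian/ directory (control, rules, changelog, copyright)
$ dpkg-buildpackage -us -uc           # build unsigned source and binary packages
$ ls ../*.deb                         # resulting .deb files land in the parent directory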

@behlendorf
Contributor Author

This was partly addressed by commit 0f237a4. As a short-term workaround I've automated the process of using alien to convert the rpm packages to deb and tgz packages. The long-term fix remains to add native packaging, but I just don't have time for that. This I did have time for:

$ lsb_release -d
Description:    Ubuntu 10.04.1 LTS

$ ./configure
$ make deb

$ ls *.deb
zfs_0.5.0-2_i386.deb
zfs-devel_0.5.0-2_i386.deb
zfs-modules_0.5.0-2_i386.deb
zfs-modules-devel_0.5.0-2_i386.deb
zfs-test_0.5.0-2_i386.deb

$ sudo dpkg -i *.deb
(Reading database ... 225652 files and directories currently installed.)
Preparing to replace zfs 0.5.0-2 (using zfs_0.5.0-2_i386.deb) ...
Unpacking replacement zfs ...
Preparing to replace zfs-devel 0.5.0-2 (using zfs-devel_0.5.0-2_i386.deb) ...
Unpacking replacement zfs-devel ...
Preparing to replace zfs-modules 0.5.0-2 (using zfs-modules_0.5.0-2_i386.deb) ...
Unpacking replacement zfs-modules ...
Preparing to replace zfs-modules-devel 0.5.0-2 (using zfs-modules-devel_0.5.0-2_i386.deb) ...
Unpacking replacement zfs-modules-devel ...
Preparing to replace zfs-test 0.5.0-2 (using zfs-test_0.5.0-2_i386.deb) ...
Unpacking replacement zfs-test ...
Setting up zfs (0.5.0-2) ...
Setting up zfs-devel (0.5.0-2) ...
Setting up zfs-modules (0.5.0-2) ...
Setting up zfs-modules-devel (0.5.0-2) ...
Setting up zfs-test (0.5.0-2) ...
Processing triggers for man-db ...
Processing triggers for libc-bin ...
ldconfig deferred processing now taking place

$ sudo /usr/libexec/zfs/zconfig.sh
test 1 - persistent zpool.cache: PASS
test 2 - scan disks for pools to import: PASS
test 3 - ZVOL sanity: PASS
test 4 - zpool import/export: PASS

$ sudo modprobe zfs
$ sudo /usr/libexec/zfs/zpios-sanity.sh 
status    name        id    wr-data wr-ch   wr-bw   rd-data rd-ch   rd-bw
-------------------------------------------------------------------------------
PASS:     file-raid0   0    64m 64  12.53m  64m 64  640.13m
PASS:     file-raid10  0    64m 64  43.69m  64m 64  615.40m
PASS:     file-raidz   0    64m 64  11.45m  64m 64  460.44m
PASS:     file-raidz2  0    64m 64  29.57m  64m 64  444.44m
PASS:     lo-raid0     0    64m 64  43.60m  64m 64  831.17m
PASS:     lo-raid10    0    64m 64  9.51m   64m 64  633.66m
PASS:     lo-raidz     0    64m 64  9.34m   64m 64  800.04m
PASS:     lo-raidz2    0    64m 64  21.10m  64m 64  888.92m
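
For reference, the conversion that the make deb target wraps is roughly the following alien invocation (a sketch; the rpm file name and the flags shown are assumptions rather than the exact Makefile recipe):

$ make rpm                                                            # assumes the existing rpm packaging target
$ sudo alien --to-deb --scripts --keep-version zfs-0.5.0-2.i386.rpm   # repeat for each rpm listed above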

@dajhorn
Contributor

dajhorn commented Feb 18, 2011

I did the DKMS packaging for Debian and Ubuntu here:

https://launchpad.net/~dajhorn/+archive/zfs

This packaging is a debian/ overlay for each of spl/, zfs/ and lzfs/. Nothing in the repository needs to be changed.
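
For anyone who wants to try it, installing from the PPA on Ubuntu looks roughly like this (the metapackage name is an assumption; check the PPA page for the exact package set):

$ sudo add-apt-repository ppa:dajhorn/zfs
$ sudo apt-get update
$ sudo apt-get install ubuntu-zfs     # assumed metapackage pulling in the spl/zfs DKMS packages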

@behlendorf
Contributor Author

Closing this bug; dajhorn was nice enough to take care of this for us. There is now a link to his PPA on the main zfsonlinux.org website.

FransUrbo pushed a commit to FransUrbo/zfs that referenced this issue Mar 30, 2013
Correct KeyGen, importing pools sets feature flags
ryao added a commit to ryao/zfs that referenced this issue May 28, 2013
There are a few things wrong with the existing initialization code. We
try to take zvol_state_lock before it is initialized because we call
zvol_create_minors() before any pools are imported. We also do not clean
up from failures in zvol_init particularly well. We resolve this with
changes to zvol_init() and zvol_fini().

In addition, the following error will occur on some (possibly all) kernels because
blk_init_queue() will try to take the spinlock before we initialize it.

[    5.538871] BUG: spinlock bad magic on CPU#0, zpool/4054
[    5.538885]  lock: 0xffff88021a73de60, .magic: 00000000, .owner:
<none>/-1, .owner_cpu: 0
[    5.538888] Pid: 4054, comm: zpool Not tainted 3.9.3 #11
[    5.538890] Call Trace:
[    5.538898]  [<ffffffff81478ef8>] spin_dump+0x8c/0x91
[    5.538902]  [<ffffffff81478f1e>] spin_bug+0x21/0x26
[    5.538906]  [<ffffffff812da097>] do_raw_spin_lock+0x127/0x130
[    5.538911]  [<ffffffff81253301>] ? zvol_probe+0x91/0xf0
[    5.538914]  [<ffffffff8147d851>] _raw_spin_lock_irq+0x21/0x30
[    5.538919]  [<ffffffff812c2c1e>] cfq_init_queue+0x1fe/0x350
[    5.538922]  [<ffffffff81253360>] ? zvol_probe+0xf0/0xf0
[    5.538926]  [<ffffffff812aacb8>] elevator_init+0x78/0x140
[    5.538930]  [<ffffffff812b2677>] blk_init_allocated_queue+0x87/0xb0
[    5.538933]  [<ffffffff81253360>] ? zvol_probe+0xf0/0xf0
[    5.538937]  [<ffffffff812b26d5>] blk_init_queue_node+0x35/0x70
[    5.538941]  [<ffffffff812b271e>] blk_init_queue+0xe/0x10
[    5.538944]  [<ffffffff8125211b>] __zvol_create_minor+0x24b/0x620
[    5.538947]  [<ffffffff81253264>] zvol_create_minors_cb+0x24/0x30
[    5.538952]  [<ffffffff811bd9ca>] dmu_objset_find_spa+0xea/0x510
[    5.538955]  [<ffffffff81253240>] ? zvol_free+0x60/0x60
[    5.538958]  [<ffffffff811bda71>] dmu_objset_find_spa+0x191/0x510
[    5.538962]  [<ffffffff81253240>] ? zvol_free+0x60/0x60
[    5.538965]  [<ffffffff81253ea2>] zvol_create_minors+0x92/0x180
[    5.538969]  [<ffffffff811f8d80>] spa_open_common+0x250/0x380
[    5.538973]  [<ffffffff811f8ece>] spa_open+0xe/0x10
[    5.538977]  [<ffffffff8122817e>] pool_status_check.part.22+0x1e/0x80
[    5.538980]  [<ffffffff81228a55>] zfsdev_ioctl+0x155/0x190
[    5.538984]  [<ffffffff8116a695>] do_vfs_ioctl+0x325/0x5a0
[    5.538989]  [<ffffffff81163f1d>] ? final_putname+0x1d/0x40
[    5.538992]  [<ffffffff8116a950>] sys_ioctl+0x40/0x80
[    5.538996]  [<ffffffff814812c9>] ? do_page_fault+0x9/0x10
[    5.539000]  [<ffffffff81483929>] system_call_fastpath+0x16/0x1b
[    5.541118]  zd0: unknown partition table

We fix this by calling spin_lock_init before blk_init_queue.

Signed-off-by: Richard Yao <ryao@cs.stonybrook.edu>
ryao added a commit to ryao/zfs that referenced this issue May 28, 2013
ryao added a commit to ryao/zfs that referenced this issue May 28, 2013
ryao added a commit to ryao/zfs that referenced this issue Jun 9, 2013
ryao added a commit to ryao/zfs that referenced this issue Jun 13, 2013
ryao added a commit to ryao/zfs that referenced this issue Jun 21, 2013
ryao added a commit to ryao/zfs that referenced this issue Jun 21, 2013
The following error will occur on some (possibly all) kernels because
blk_init_queue() will try to take the spinlock before we initialize it.

[    5.538871] BUG: spinlock bad magic on CPU#0, zpool/4054
[    5.538885]  lock: 0xffff88021a73de60, .magic: 00000000, .owner:
<none>/-1, .owner_cpu: 0
[    5.538888] Pid: 4054, comm: zpool Not tainted 3.9.3 #11
[    5.538890] Call Trace:
[    5.538898]  [<ffffffff81478ef8>] spin_dump+0x8c/0x91
[    5.538902]  [<ffffffff81478f1e>] spin_bug+0x21/0x26
[    5.538906]  [<ffffffff812da097>] do_raw_spin_lock+0x127/0x130
[    5.538911]  [<ffffffff81253301>] ? zvol_probe+0x91/0xf0
[    5.538914]  [<ffffffff8147d851>] _raw_spin_lock_irq+0x21/0x30
[    5.538919]  [<ffffffff812c2c1e>] cfq_init_queue+0x1fe/0x350
[    5.538922]  [<ffffffff81253360>] ? zvol_probe+0xf0/0xf0
[    5.538926]  [<ffffffff812aacb8>] elevator_init+0x78/0x140
[    5.538930]  [<ffffffff812b2677>] blk_init_allocated_queue+0x87/0xb0
[    5.538933]  [<ffffffff81253360>] ? zvol_probe+0xf0/0xf0
[    5.538937]  [<ffffffff812b26d5>] blk_init_queue_node+0x35/0x70
[    5.538941]  [<ffffffff812b271e>] blk_init_queue+0xe/0x10
[    5.538944]  [<ffffffff8125211b>] __zvol_create_minor+0x24b/0x620
[    5.538947]  [<ffffffff81253264>] zvol_create_minors_cb+0x24/0x30
[    5.538952]  [<ffffffff811bd9ca>] dmu_objset_find_spa+0xea/0x510
[    5.538955]  [<ffffffff81253240>] ? zvol_free+0x60/0x60
[    5.538958]  [<ffffffff811bda71>] dmu_objset_find_spa+0x191/0x510
[    5.538962]  [<ffffffff81253240>] ? zvol_free+0x60/0x60
[    5.538965]  [<ffffffff81253ea2>] zvol_create_minors+0x92/0x180
[    5.538969]  [<ffffffff811f8d80>] spa_open_common+0x250/0x380
[    5.538973]  [<ffffffff811f8ece>] spa_open+0xe/0x10
[    5.538977]  [<ffffffff8122817e>] pool_status_check.part.22+0x1e/0x80
[    5.538980]  [<ffffffff81228a55>] zfsdev_ioctl+0x155/0x190
[    5.538984]  [<ffffffff8116a695>] do_vfs_ioctl+0x325/0x5a0
[    5.538989]  [<ffffffff81163f1d>] ? final_putname+0x1d/0x40
[    5.538992]  [<ffffffff8116a950>] sys_ioctl+0x40/0x80
[    5.538996]  [<ffffffff814812c9>] ? do_page_fault+0x9/0x10
[    5.539000]  [<ffffffff81483929>] system_call_fastpath+0x16/0x1b
[    5.541118]  zd0: unknown partition table

We fix this by calling spin_lock_init before blk_init_queue.

The manner in which zvol_init() initializes structures is
susceptible to a race between initialization and a probe on a zvol. We
reorganize zvol_init() to prevent that.

Lastly, calling zvol_create_minors(NULL) in zvol_init() does nothing
because no pools are imported, so we remove it.

Signed-off-by: Richard Yao <ryao@gentoo.org>
ryao added a commit to ryao/zfs that referenced this issue Jul 1, 2013
behlendorf pushed a commit to behlendorf/zfs that referenced this issue Jul 2, 2013
The following error will occur on some (possibly all) kernels
because blk_init_queue() will try to take the spinlock before
we initialize it.

  BUG: spinlock bad magic on CPU#0, zpool/4054
   lock: 0xffff88021a73de60, .magic: 00000000,
   .owner: <none>/-1, .owner_cpu: 0
  Pid: 4054, comm: zpool Not tainted 3.9.3 #11
  Call Trace:
   [<ffffffff81478ef8>] spin_dump+0x8c/0x91
   [<ffffffff81478f1e>] spin_bug+0x21/0x26
   [<ffffffff812da097>] do_raw_spin_lock+0x127/0x130
   [<ffffffff8147d851>] _raw_spin_lock_irq+0x21/0x30
   [<ffffffff812c2c1e>] cfq_init_queue+0x1fe/0x350
   [<ffffffff812aacb8>] elevator_init+0x78/0x140
   [<ffffffff812b2677>] blk_init_allocated_queue+0x87/0xb0
   [<ffffffff812b26d5>] blk_init_queue_node+0x35/0x70
   [<ffffffff812b271e>] blk_init_queue+0xe/0x10
   [<ffffffff8125211b>] __zvol_create_minor+0x24b/0x620
   [<ffffffff81253264>] zvol_create_minors_cb+0x24/0x30
   [<ffffffff811bd9ca>] dmu_objset_find_spa+0xea/0x510
   [<ffffffff811bda71>] dmu_objset_find_spa+0x191/0x510
   [<ffffffff81253ea2>] zvol_create_minors+0x92/0x180
   [<ffffffff811f8d80>] spa_open_common+0x250/0x380
   [<ffffffff811f8ece>] spa_open+0xe/0x10
   [<ffffffff8122817e>] pool_status_check.part.22+0x1e/0x80
   [<ffffffff81228a55>] zfsdev_ioctl+0x155/0x190
   [<ffffffff8116a695>] do_vfs_ioctl+0x325/0x5a0
   [<ffffffff8116a950>] sys_ioctl+0x40/0x80
   [<ffffffff814812c9>] ? do_page_fault+0x9/0x10
   [<ffffffff81483929>] system_call_fastpath+0x16/0x1b
   zd0: unknown partition table

We fix this by calling spin_lock_init before blk_init_queue.

The manner in which zvol_init() initializes structures is
susceptible to a race between initialization and a probe on
a zvol. We reorganize zvol_init() to prevent that.

Signed-off-by: Richard Yao <ryao@gentoo.org>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
ryao added a commit to ryao/zfs that referenced this issue Jul 3, 2013
ryao added a commit to ryao/zfs that referenced this issue Jul 3, 2013
behlendorf pushed a commit that referenced this issue Jul 3, 2013
unya pushed a commit to unya/zfs that referenced this issue Dec 13, 2013
akatrevorjay added a commit to akatrevorjay/zfs that referenced this issue Dec 16, 2017
Squashed commit messages (1-56):

 1. Merge branch 'master' of https://github.com/zfsonlinux/zfs
    (* 'master' of https://github.com/zfsonlinux/zfs: Enable QAT support in zfs-dkms RPM)
 2. Import 0.6.5.7-0ubuntu3
 3. gbp changes
 4. Bump ver
 5. -j9 baby
 6. Up
 7. Yup
 8. Add new module
 9-10. Up
11. Bump
12. Grr
13-18. Yay
19-21. yay
22. Update ppa script
23-24. Update gbp conf with br changes
25. Bump
26. No pristine
27. Bump
28. Lol whoops
29-30. Fix name
31. rebase
32-35. Bump
36. ntrim
37. Bump
38. 9
39-41. Bump
42. Revert "9" (reverts commit de488f1)
43. Bump
44. Account for zconfig.sh being removed
45. Bump
46. Add artful
47. Add in zed.d and zpool.d scripts
48-51. Bump
52. ugh
53. fix zed upgrade
54. Bump
55. conf file zed.d
56. Bump
jkryl referenced this issue in mayadata-io/cstor Feb 27, 2018
…11)

* [CCS 59] uZFS API to call dmu from istgt

* [CCS-59] fix in travis and header for pkg-utils

* Adding other header files for make pkg build

* [CCS 59] added few header files to build pkg mod

* [CCS 59]cstyle fixes, adding cstyle check to travis

* Remove unnecessary header files,added TOTAL_TIME env

* Incorporating review comments [CCS 59]

* cstyle fix, removed getenv in uzfs_test

* Merged libuzfs_ioctl into zfs_ioctl

* cstyle fix, removed cstyle from travis

* compilation issue with non-uzfs and userspace build

Once a zvol has been created with the 'zfs create -V' command, it can be opened with uzfs_open_dataset(), and the uzfs_write_data() and uzfs_read_data() APIs can then be used to do I/O.
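
For context, creating the zvol that the uzfs_* APIs operate on looks like this (pool name, backing vdev, and size are illustrative):

$ sudo zpool create tank /dev/sdb     # illustrative pool backed by a spare disk
$ sudo zfs create -V 1G tank/vol1     # the resulting zvol is what uzfs_open_dataset() opens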
shartse pushed a commit to shartse/zfs that referenced this issue Jun 21, 2018
richardelling pushed a commit to richardelling/zfs that referenced this issue Oct 15, 2018
richardelling pushed a commit to richardelling/zfs that referenced this issue Oct 15, 2018
added callback function in uzfs_get_txg_diff
Signed-off-by: mayank <mayank.patel@cloudbyte.com>
markroper added a commit to markroper/zfs that referenced this issue Feb 12, 2020
Using zfs with Lustre, an arc_read can trigger kernel memory allocation
that in turn leads to a memory reclaim callback and a deadlock within a
single zfs process. This change uses spl_fstrans_mark and
spl_fstrans_unmark to prevent the reclaim attempt and the deadlock
(https://zfsonlinux.topicbox.com/groups/zfs-devel/T4db2c705ec1804ba).
The stack trace observed is:

     #0 [ffffc9002b98adc8] __schedule at ffffffff81610f2e
     #1 [ffffc9002b98ae68] schedule at ffffffff81611558
     #2 [ffffc9002b98ae70] schedule_preempt_disabled at ffffffff8161184a
     #3 [ffffc9002b98ae78] __mutex_lock at ffffffff816131e8
     #4 [ffffc9002b98af18] arc_buf_destroy at ffffffffa0bf37d7 [zfs]
     #5 [ffffc9002b98af48] dbuf_destroy at ffffffffa0bfa6fe [zfs]
     #6 [ffffc9002b98af88] dbuf_evict_one at ffffffffa0bfaa96 [zfs]
     #7 [ffffc9002b98afa0] dbuf_rele_and_unlock at ffffffffa0bfa561 [zfs]
     #8 [ffffc9002b98b050] dbuf_rele_and_unlock at ffffffffa0bfa32b [zfs]
     #9 [ffffc9002b98b100] osd_object_delete at ffffffffa0b64ecc [osd_zfs]
    #10 [ffffc9002b98b118] lu_object_free at ffffffffa06d6a74 [obdclass]
    #11 [ffffc9002b98b178] lu_site_purge_objects at ffffffffa06d7fc1 [obdclass]
    #12 [ffffc9002b98b220] lu_cache_shrink_scan at ffffffffa06d81b8 [obdclass]
    #13 [ffffc9002b98b278] shrink_slab at ffffffff811ca9d8
    #14 [ffffc9002b98b338] shrink_node at ffffffff811cfd94
    #15 [ffffc9002b98b3b8] do_try_to_free_pages at ffffffff811cfe63
    #16 [ffffc9002b98b408] try_to_free_pages at ffffffff811d01c4
    #17 [ffffc9002b98b488] __alloc_pages_slowpath at ffffffff811be7f2
    #18 [ffffc9002b98b580] __alloc_pages_nodemask at ffffffff811bf3ed
    #19 [ffffc9002b98b5e0] new_slab at ffffffff81226304
    #20 [ffffc9002b98b638] ___slab_alloc at ffffffff812272ab
    #21 [ffffc9002b98b6f8] __slab_alloc at ffffffff8122740c
    #22 [ffffc9002b98b708] kmem_cache_alloc at ffffffff81227578
    #23 [ffffc9002b98b740] spl_kmem_cache_alloc at ffffffffa048a1fd [spl]
    #24 [ffffc9002b98b780] arc_buf_alloc_impl at ffffffffa0befba2 [zfs]
    #25 [ffffc9002b98b7b0] arc_read at ffffffffa0bf0924 [zfs]
    #26 [ffffc9002b98b858] dbuf_read at ffffffffa0bf9083 [zfs]
    #27 [ffffc9002b98b900] dmu_buf_hold_by_dnode at ffffffffa0c04869 [zfs]

Signed-off-by: Mark Roper <markroper@gmail.com>
allanjude pushed a commit to KlaraSystems/zfs that referenced this issue Apr 28, 2020
ddt_refcount: Fix ddt entry deletion
problame added a commit to problame/zfs that referenced this issue Oct 10, 2020
This is a fixup of commit 0fdd610

See added test case for a reproducer.

Stack trace:

    panic: VERIFY3(nvlist_next_nvpair(redactnvl, pair) == NULL) failed (0xfffff80003ce5d18x == 0x)

    cpuid = 7
    time = 1602212370
    KDB: stack backtrace:
    #0 0xffffffff80c1d297 at kdb_backtrace+0x67
    #1 0xffffffff80bd05cd at vpanic+0x19d
    #2 0xffffffff828446fa at spl_panic+0x3a
    #3 0xffffffff828af85d at dmu_redact_snap+0x39d
    #4 0xffffffff829c0370 at zfs_ioc_redact+0xa0
    #5 0xffffffff829bba44 at zfsdev_ioctl_common+0x4a4
    #6 0xffffffff8284c3ed at zfsdev_ioctl+0x14d
    #7 0xffffffff80a85ead at devfs_ioctl+0xad
    #8 0xffffffff8122a46c at VOP_IOCTL_APV+0x7c
    #9 0xffffffff80cb0a3a at vn_ioctl+0x16a
    #10 0xffffffff80a8649f at devfs_ioctl_f+0x1f
    #11 0xffffffff80c3b55e at kern_ioctl+0x2be
    #12 0xffffffff80c3b22d at sys_ioctl+0x15d
    #13 0xffffffff810a88e4 at amd64_syscall+0x364
    #14 0xffffffff81082330 at fast_syscall_common+0x101

Signed-off-by: Christian Schwarz <me@cschwarz.com>
rob-wing pushed a commit to KlaraSystems/zfs that referenced this issue Feb 17, 2023
Under certain loads, the following panic is hit:

    panic: VERIFY3(vrecycle(vp) == 1) failed (0 == 1)
    cpuid = 17
    KDB: stack backtrace:
    #0 0xffffffff805e29c5 at kdb_backtrace+0x65
    #1 0xffffffff8059620f at vpanic+0x17f
    #2 0xffffffff81a27f4a at spl_panic+0x3a
    #3 0xffffffff81a3a4d0 at zfsctl_snapshot_inactive+0x40
    #4 0xffffffff8066fdee at vinactivef+0xde
    #5 0xffffffff80670b8a at vgonel+0x1ea
    #6 0xffffffff806711e1 at vgone+0x31
    #7 0xffffffff8065fa0d at vfs_hash_insert+0x26d
    #8 0xffffffff81a39069 at sfs_vgetx+0x149
    #9 0xffffffff81a39c54 at zfsctl_snapdir_lookup+0x1e4
    #10 0xffffffff80661c2c at lookup+0x45c
    #11 0xffffffff80660e59 at namei+0x259
    #12 0xffffffff8067e3d3 at kern_statat+0xf3
    #13 0xffffffff8067eacf at sys_fstatat+0x2f
    #14 0xffffffff808b5ecc at amd64_syscall+0x10c
    #15 0xffffffff8088f07b at fast_syscall_common+0xf8

A race condition can occur when allocating a new vnode and adding that
vnode to the vfs hash. If the newly created vnode loses the race when
being inserted into the vfs hash, it will not be recycled as its
usecount is greater than zero, hitting the above assertion.

Fix this by dropping the assertion.

FreeBSD-issue: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=252700

Signed-off-by:  Rob Wing <rob.wing@klarasystems.com>
Sponsored-by:   rsync.net
Sponsored-by:   Klara, Inc.
rob-wing pushed a commit to KlaraSystems/zfs that referenced this issue Feb 17, 2023
Under certain loads, the following panic is hit:

    panic: page fault
    KDB: stack backtrace:
    #0 0xffffffff805db025 at kdb_backtrace+0x65
    #1 0xffffffff8058e86f at vpanic+0x17f
    #2 0xffffffff8058e6e3 at panic+0x43
    #3 0xffffffff808adc15 at trap_fatal+0x385
    #4 0xffffffff808adc6f at trap_pfault+0x4f
    #5 0xffffffff80886da8 at calltrap+0x8
    #6 0xffffffff80669186 at vgonel+0x186
    #7 0xffffffff80669841 at vgone+0x31
    #8 0xffffffff8065806d at vfs_hash_insert+0x26d
    #9 0xffffffff81a39069 at sfs_vgetx+0x149
    #10 0xffffffff81a39c54 at zfsctl_snapdir_lookup+0x1e4
    #11 0xffffffff8065a28c at lookup+0x45c
    #12 0xffffffff806594b9 at namei+0x259
    #13 0xffffffff80676a33 at kern_statat+0xf3
    #14 0xffffffff8067712f at sys_fstatat+0x2f
    #15 0xffffffff808ae50c at amd64_syscall+0x10c
    #16 0xffffffff808876bb at fast_syscall_common+0xf8

The page fault occurs because vgonel() will call VOP_CLOSE() for active
vnodes. For this reason, define vop_close for zfsctl_ops_snapshot. While
here, define vop_open for consistency.

After adding the necessary vop, the bug progresses to the following
panic:

    panic: VERIFY3(vrecycle(vp) == 1) failed (0 == 1)
    cpuid = 17
    KDB: stack backtrace:
    #0 0xffffffff805e29c5 at kdb_backtrace+0x65
    #1 0xffffffff8059620f at vpanic+0x17f
    #2 0xffffffff81a27f4a at spl_panic+0x3a
    #3 0xffffffff81a3a4d0 at zfsctl_snapshot_inactive+0x40
    #4 0xffffffff8066fdee at vinactivef+0xde
    #5 0xffffffff80670b8a at vgonel+0x1ea
    #6 0xffffffff806711e1 at vgone+0x31
    #7 0xffffffff8065fa0d at vfs_hash_insert+0x26d
    #8 0xffffffff81a39069 at sfs_vgetx+0x149
    #9 0xffffffff81a39c54 at zfsctl_snapdir_lookup+0x1e4
    #10 0xffffffff80661c2c at lookup+0x45c
    #11 0xffffffff80660e59 at namei+0x259
    #12 0xffffffff8067e3d3 at kern_statat+0xf3
    #13 0xffffffff8067eacf at sys_fstatat+0x2f
    #14 0xffffffff808b5ecc at amd64_syscall+0x10c
    #15 0xffffffff8088f07b at fast_syscall_common+0xf8

This is caused by a race condition that can occur when allocating a new
vnode and adding that vnode to the vfs hash. If the newly created vnode
loses the race when being inserted into the vfs hash, it will not be
recycled as its usecount is greater than zero, hitting the above
assertion.

Fix this by dropping the assertion.

FreeBSD-issue: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=252700

Signed-off-by:  Rob Wing <rob.wing@klarasystems.com>
Submitted-by:   Klara, Inc.
Sponsored-by:   rsync.net
behlendorf pushed a commit that referenced this issue Feb 22, 2023
Under certain loads, the following panic is hit:

    panic: page fault
    KDB: stack backtrace:
    #0 0xffffffff805db025 at kdb_backtrace+0x65
    #1 0xffffffff8058e86f at vpanic+0x17f
    #2 0xffffffff8058e6e3 at panic+0x43
    #3 0xffffffff808adc15 at trap_fatal+0x385
    #4 0xffffffff808adc6f at trap_pfault+0x4f
    #5 0xffffffff80886da8 at calltrap+0x8
    #6 0xffffffff80669186 at vgonel+0x186
    #7 0xffffffff80669841 at vgone+0x31
    #8 0xffffffff8065806d at vfs_hash_insert+0x26d
    #9 0xffffffff81a39069 at sfs_vgetx+0x149
    #10 0xffffffff81a39c54 at zfsctl_snapdir_lookup+0x1e4
    #11 0xffffffff8065a28c at lookup+0x45c
    #12 0xffffffff806594b9 at namei+0x259
    #13 0xffffffff80676a33 at kern_statat+0xf3
    #14 0xffffffff8067712f at sys_fstatat+0x2f
    #15 0xffffffff808ae50c at amd64_syscall+0x10c
    #16 0xffffffff808876bb at fast_syscall_common+0xf8

The page fault occurs because vgonel() will call VOP_CLOSE() for active
vnodes. For this reason, define vop_close for zfsctl_ops_snapshot. While
here, define vop_open for consistency.

After adding the necessary vop, the bug progresses to the following
panic:

    panic: VERIFY3(vrecycle(vp) == 1) failed (0 == 1)
    cpuid = 17
    KDB: stack backtrace:
    #0 0xffffffff805e29c5 at kdb_backtrace+0x65
    #1 0xffffffff8059620f at vpanic+0x17f
    #2 0xffffffff81a27f4a at spl_panic+0x3a
    #3 0xffffffff81a3a4d0 at zfsctl_snapshot_inactive+0x40
    #4 0xffffffff8066fdee at vinactivef+0xde
    #5 0xffffffff80670b8a at vgonel+0x1ea
    #6 0xffffffff806711e1 at vgone+0x31
    #7 0xffffffff8065fa0d at vfs_hash_insert+0x26d
    #8 0xffffffff81a39069 at sfs_vgetx+0x149
    #9 0xffffffff81a39c54 at zfsctl_snapdir_lookup+0x1e4
    #10 0xffffffff80661c2c at lookup+0x45c
    #11 0xffffffff80660e59 at namei+0x259
    #12 0xffffffff8067e3d3 at kern_statat+0xf3
    #13 0xffffffff8067eacf at sys_fstatat+0x2f
    #14 0xffffffff808b5ecc at amd64_syscall+0x10c
    #15 0xffffffff8088f07b at fast_syscall_common+0xf8

This is caused by a race condition that can occur when allocating a new
vnode and adding that vnode to the vfs hash. If the newly created vnode
loses the race when being inserted into the vfs hash, it will not be
recycled as its usecount is greater than zero, hitting the above
assertion.

Fix this by dropping the assertion.

FreeBSD-issue: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=252700
Reviewed-by: Andriy Gapon <avg@FreeBSD.org>
Reviewed-by: Mateusz Guzik <mjguzik@gmail.com>
Reviewed-by: Alek Pinchuk <apinchuk@axcient.com>
Reviewed-by: Ryan Moeller <ryan@iXsystems.com>
Signed-off-by: Rob Wing <rob.wing@klarasystems.com>
Co-authored-by: Rob Wing <rob.wing@klarasystems.com>
Submitted-by: Klara, Inc.
Sponsored-by: rsync.net
Closes #14501
behlendorf pushed a commit to behlendorf/zfs that referenced this issue May 28, 2023
behlendorf pushed a commit that referenced this issue May 30, 2023
EchterAgo pushed a commit to EchterAgo/zfs that referenced this issue Aug 4, 2023
This issue was closed.