Quick fix for unescaped curly braces deprecation #5
Hi @offshore
Please take a look at https://wiki.qemu.org/Contribute/SubmitAPatch; it gives a good summary of how to do it right. Update: the fix from the mainline has been backported as eed6eaf. -- Max
Thanks for the detailed explanation and the quick response; I'm glad the problem is solved, though I didn't know that the mainline had already been patched.
Currently, offloads disabled by the guest via the VIRTIO_NET_CTRL_GUEST_OFFLOADS_SET command are not preserved on VM migration. Instead, all offloads reported by the guest features (via VIRTIO_PCI_GUEST_FEATURES) get enabled.

What happens is: first VirtIONet::curr_guest_offloads gets restored and the offloads are set correctly:

#0 qemu_set_offload (nc=0x555556a11400, csum=1, tso4=0, tso6=0, ecn=0, ufo=0) at net/net.c:474
#1 virtio_net_apply_guest_offloads (n=0x555557701ca0) at hw/net/virtio-net.c:720
#2 virtio_net_post_load_device (opaque=0x555557701ca0, version_id=11) at hw/net/virtio-net.c:2334
#3 vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577c80 <vmstate_virtio_net_device>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:168
#4 virtio_load (vdev=0x555557701ca0, f=0x5555569dc010, version_id=11) at hw/virtio/virtio.c:2197
#5 virtio_device_get (f=0x5555569dc010, opaque=0x555557701ca0, size=0, field=0x55555668cd00 <__compound_literal.5>) at hw/virtio/virtio.c:2036
#6 vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577ce0 <vmstate_virtio_net>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:143
#7 vmstate_load (f=0x5555569dc010, se=0x5555578189e0) at migration/savevm.c:829
#8 qemu_loadvm_section_start_full (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2211
#9 qemu_loadvm_state_main (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2395
#10 qemu_loadvm_state (f=0x5555569dc010) at migration/savevm.c:2467
#11 process_incoming_migration_co (opaque=0x0) at migration/migration.c:449

However, later on the features are restored, and the offloads get reset to everything supported by the features:

#0 qemu_set_offload (nc=0x555556a11400, csum=1, tso4=1, tso6=1, ecn=0, ufo=0) at net/net.c:474
#1 virtio_net_apply_guest_offloads (n=0x555557701ca0) at hw/net/virtio-net.c:720
#2 virtio_net_set_features (vdev=0x555557701ca0, features=5104441767) at hw/net/virtio-net.c:773
#3 virtio_set_features_nocheck (vdev=0x555557701ca0, val=5104441767) at hw/virtio/virtio.c:2052
#4 virtio_load (vdev=0x555557701ca0, f=0x5555569dc010, version_id=11) at hw/virtio/virtio.c:2220
#5 virtio_device_get (f=0x5555569dc010, opaque=0x555557701ca0, size=0, field=0x55555668cd00 <__compound_literal.5>) at hw/virtio/virtio.c:2036
#6 vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577ce0 <vmstate_virtio_net>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:143
#7 vmstate_load (f=0x5555569dc010, se=0x5555578189e0) at migration/savevm.c:829
#8 qemu_loadvm_section_start_full (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2211
#9 qemu_loadvm_state_main (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2395
#10 qemu_loadvm_state (f=0x5555569dc010) at migration/savevm.c:2467
#11 process_incoming_migration_co (opaque=0x0) at migration/migration.c:449

Fix this by preserving the state in a saved_guest_offloads field and pushing the offload initialization out to the new post-load hook.

Cc: qemu-stable@nongnu.org
Signed-off-by: Mikhail Sennikovsky <mikhail.sennikovskii@cloud.ionos.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
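For context, the post-load hook described above looks roughly like this (a sketch reconstructed from the description; field and helper names may differ slightly between trees):

    static int virtio_net_post_load_virtio(VirtIODevice *vdev)
    {
        VirtIONet *n = VIRTIO_NET(vdev);

        /* By this point virtio_net_set_features() has clobbered
         * curr_guest_offloads; restore what post_load_device saved
         * and re-apply it to the backend. */
        n->curr_guest_offloads = n->saved_guest_offloads;
        if (peer_has_vnet_hdr(n)) {
            virtio_net_apply_guest_offloads(n);
        }
        return 0;
    }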
Guests can crash QEMU when writing to PnP registers:

$ echo 'writeb 0x800ff042 69' | qemu-system-sparc -M leon3_generic -S -bios /etc/magic -qtest stdio
[I 1571938309.932255] OPENED
[R +0.063474] writeb 0x800ff042 69
Segmentation fault (core dumped)

(gdb) bt
#0 0x0000000000000000 in ()
#1 0x0000555f4bcdf0bc in memory_region_write_with_attrs_accessor (mr=0x555f4d7be8c0, addr=66, value=0x7fff07d00f08, size=1, shift=0, mask=255, attrs=...) at memory.c:503
#2 0x0000555f4bcdf185 in access_with_adjusted_size (addr=66, value=0x7fff07d00f08, size=1, access_size_min=1, access_size_max=4, access_fn=0x555f4bcdeff4 <memory_region_write_with_attrs_accessor>, mr=0x555f4d7be8c0, attrs=...) at memory.c:539
#3 0x0000555f4bce2243 in memory_region_dispatch_write (mr=0x555f4d7be8c0, addr=66, data=69, op=MO_8, attrs=...) at memory.c:1489
#4 0x0000555f4bc80b20 in flatview_write_continue (fv=0x555f4d92c400, addr=2148528194, attrs=..., buf=0x7fff07d01120 "E", len=1, addr1=66, l=1, mr=0x555f4d7be8c0) at exec.c:3161
#5 0x0000555f4bc80c65 in flatview_write (fv=0x555f4d92c400, addr=2148528194, attrs=..., buf=0x7fff07d01120 "E", len=1) at exec.c:3201
#6 0x0000555f4bc80fb0 in address_space_write (as=0x555f4d7aa460, addr=2148528194, attrs=..., buf=0x7fff07d01120 "E", len=1) at exec.c:3291
#7 0x0000555f4bc8101d in address_space_rw (as=0x555f4d7aa460, addr=2148528194, attrs=..., buf=0x7fff07d01120 "E", len=1, is_write=true) at exec.c:3301
#8 0x0000555f4bcdb388 in qtest_process_command (chr=0x555f4c2ed7e0 <qtest_chr>, words=0x555f4db0c5d0) at qtest.c:432

Instead of crashing, log the access as unimplemented.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: KONRAD Frederic <frederic.konrad@adacore.com>
Message-Id: <20191025110114.27091-2-philmd@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
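The shape of such a fix is to give the read-only PnP region a write handler that only logs (a minimal sketch using QEMU's standard unimplemented-access logging; the handler name is illustrative):

    static void grlib_apb_pnp_write(void *opaque, hwaddr addr,
                                    uint64_t value, unsigned size)
    {
        /* Previously a NULL .write callback was invoked and crashed. */
        qemu_log_mask(LOG_UNIMP, "%s: unimplemented write 0x%" HWADDR_PRIx "\n",
                      __func__, addr);
    }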
ivq/dvq/svq/free_page_vq are never cleaned up in virtio_balloon_device_unrealize; the memory leak stack is as follows:

Direct leak of 14336 byte(s) in 2 object(s) allocated from:
#0 0x7f99fd9d8560 in calloc (/usr/lib64/libasan.so.3+0xc7560)
#1 0x7f99fcb20015 in g_malloc0 (/usr/lib64/libglib-2.0.so.0+0x50015)
#2 0x557d90638437 in virtio_add_queue hw/virtio/virtio.c:2327
#3 0x557d9064401d in virtio_balloon_device_realize hw/virtio/virtio-balloon.c:793
#4 0x557d906356f7 in virtio_device_realize hw/virtio/virtio.c:3504
#5 0x557d9073f081 in device_set_realized hw/core/qdev.c:876
#6 0x557d908b1f4d in property_set_bool qom/object.c:2080
#7 0x557d908b655e in object_property_set_qobject qom/qom-qobject.c:26

Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com>
Message-Id: <1575444716-17632-2-git-send-email-pannengyuan@huawei.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
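The missing cleanup presumably mirrors the realize path; a sketch of the unrealize half (virtio_delete_queue() is the pointer-based counterpart of virtio_add_queue()):

    static void virtio_balloon_device_unrealize(DeviceState *dev, Error **errp)
    {
        VirtIOBalloon *s = VIRTIO_BALLOON(dev);
        VirtIODevice *vdev = VIRTIO_DEVICE(dev);

        /* Free the queues allocated in virtio_balloon_device_realize() */
        virtio_delete_queue(s->ivq);
        virtio_delete_queue(s->dvq);
        virtio_delete_queue(s->svq);
        if (s->free_page_vq) {
            virtio_delete_queue(s->free_page_vq);
        }
        virtio_cleanup(vdev);
    }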
ivqs/ovqs/c_ivq/c_ovq are never cleaned up in virtio_serial_device_unrealize; the memory leak stack is as follows:

Direct leak of 1290240 byte(s) in 180 object(s) allocated from:
#0 0x7fc9bfc27560 in calloc (/usr/lib64/libasan.so.3+0xc7560)
#1 0x7fc9bed6f015 in g_malloc0 (/usr/lib64/libglib-2.0.so.0+0x50015)
#2 0x5650e02b83e7 in virtio_add_queue hw/virtio/virtio.c:2327
#3 0x5650e02847b5 in virtio_serial_device_realize hw/char/virtio-serial-bus.c:1089
#4 0x5650e02b56a7 in virtio_device_realize hw/virtio/virtio.c:3504
#5 0x5650e03bf031 in device_set_realized hw/core/qdev.c:876
#6 0x5650e0531efd in property_set_bool qom/object.c:2080
#7 0x5650e053650e in object_property_set_qobject qom/qom-qobject.c:26
#8 0x5650e0533e14 in object_property_set_bool qom/object.c:1338
#9 0x5650e04c0e37 in virtio_pci_realize hw/virtio/virtio-pci.c:1801

Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com>
Cc: Laurent Vivier <lvivier@redhat.com>
Cc: Amit Shah <amit@kernel.org>
Cc: "Marc-André Lureau" <marcandre.lureau@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1575444716-17632-3-git-send-email-pannengyuan@huawei.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Currently the SLOF firmware for pseries guests will disable/re-enable a PCI device multiple times via the IO/MEM/MASTER bits of the PCI_COMMAND register after the initial probe/feature negotiation, as it tends to work with a single device at a time at various stages like probing and running block/network bootloaders without doing a full reset in-between.

In QEMU, when PCI_COMMAND_MASTER is disabled we disable the corresponding IOMMU memory region, so DMA accesses (including to vring fields like idx/flags) will no longer undergo the necessary translation. Normally we wouldn't expect this to happen since it would be misbehavior on the driver side to continue driving DMA requests. However, in the case of pseries, with iommu_platform=on, we trigger the following sequence when tearing down the virtio-blk dataplane ioeventfd in response to the guest unsetting PCI_COMMAND_MASTER:

#2 0x0000555555922651 in virtqueue_map_desc (vdev=vdev@entry=0x555556dbcfb0, p_num_sg=p_num_sg@entry=0x7fffe657e1a8, addr=addr@entry=0x7fffe657e240, iov=iov@entry=0x7fffe6580240, max_num_sg=max_num_sg@entry=1024, is_write=is_write@entry=false, pa=0, sz=0) at /home/mdroth/w/qemu.git/hw/virtio/virtio.c:757
#3 0x0000555555922a89 in virtqueue_pop (vq=vq@entry=0x555556dc8660, sz=sz@entry=184) at /home/mdroth/w/qemu.git/hw/virtio/virtio.c:950
#4 0x00005555558d3eca in virtio_blk_get_request (vq=0x555556dc8660, s=0x555556dbcfb0) at /home/mdroth/w/qemu.git/hw/block/virtio-blk.c:255
#5 0x00005555558d3eca in virtio_blk_handle_vq (s=0x555556dbcfb0, vq=0x555556dc8660) at /home/mdroth/w/qemu.git/hw/block/virtio-blk.c:776
#6 0x000055555591dd66 in virtio_queue_notify_aio_vq (vq=vq@entry=0x555556dc8660) at /home/mdroth/w/qemu.git/hw/virtio/virtio.c:1550
#7 0x000055555591ecef in virtio_queue_notify_aio_vq (vq=0x555556dc8660) at /home/mdroth/w/qemu.git/hw/virtio/virtio.c:1546
#8 0x000055555591ecef in virtio_queue_host_notifier_aio_poll (opaque=0x555556dc86c8) at /home/mdroth/w/qemu.git/hw/virtio/virtio.c:2527
#9 0x0000555555d02164 in run_poll_handlers_once (ctx=ctx@entry=0x55555688bfc0, timeout=timeout@entry=0x7fffe65844a8) at /home/mdroth/w/qemu.git/util/aio-posix.c:520
#10 0x0000555555d02d1b in try_poll_mode (timeout=0x7fffe65844a8, ctx=0x55555688bfc0) at /home/mdroth/w/qemu.git/util/aio-posix.c:607
#11 0x0000555555d02d1b in aio_poll (ctx=ctx@entry=0x55555688bfc0, blocking=blocking@entry=true) at /home/mdroth/w/qemu.git/util/aio-posix.c:639
#12 0x0000555555d0004d in aio_wait_bh_oneshot (ctx=0x55555688bfc0, cb=cb@entry=0x5555558d5130 <virtio_blk_data_plane_stop_bh>, opaque=opaque@entry=0x555556de86f0) at /home/mdroth/w/qemu.git/util/aio-wait.c:71
#13 0x00005555558d59bf in virtio_blk_data_plane_stop (vdev=<optimized out>) at /home/mdroth/w/qemu.git/hw/block/dataplane/virtio-blk.c:288
#14 0x0000555555b906a1 in virtio_bus_stop_ioeventfd (bus=bus@entry=0x555556dbcf38) at /home/mdroth/w/qemu.git/hw/virtio/virtio-bus.c:245
#15 0x0000555555b90dbb in virtio_bus_stop_ioeventfd (bus=bus@entry=0x555556dbcf38) at /home/mdroth/w/qemu.git/hw/virtio/virtio-bus.c:237
#16 0x0000555555b92a8e in virtio_pci_stop_ioeventfd (proxy=0x555556db4e40) at /home/mdroth/w/qemu.git/hw/virtio/virtio-pci.c:292
#17 0x0000555555b92a8e in virtio_write_config (pci_dev=0x555556db4e40, address=<optimized out>, val=1048832, len=<optimized out>) at /home/mdroth/w/qemu.git/hw/virtio/virtio-pci.c:613

I.e. the calling code is only scheduling a one-shot BH for virtio_blk_data_plane_stop_bh, but somehow we end up trying to process an additional virtqueue entry before we get there.
This is likely due to the following check in virtio_queue_host_notifier_aio_poll:

    static bool virtio_queue_host_notifier_aio_poll(void *opaque)
    {
        EventNotifier *n = opaque;
        VirtQueue *vq = container_of(n, VirtQueue, host_notifier);
        bool progress;

        if (!vq->vring.desc || virtio_queue_empty(vq)) {
            return false;
        }

        progress = virtio_queue_notify_aio_vq(vq);

namely the call to virtio_queue_empty(). In this case, since no new requests have actually been issued, shadow_avail_idx == last_avail_idx, so we actually try to access the vring via vring_avail_idx() to get the latest non-shadowed idx:

    int virtio_queue_empty(VirtQueue *vq)
    {
        bool empty;
        ...
        if (vq->shadow_avail_idx != vq->last_avail_idx) {
            return 0;
        }

        rcu_read_lock();
        empty = vring_avail_idx(vq) == vq->last_avail_idx;
        rcu_read_unlock();
        return empty;

but since the IOMMU region has been disabled we get a bogus value (0 usually), which causes virtio_queue_empty() to falsely report that there are entries to be processed, which causes errors such as:

"virtio: zero sized buffers are not allowed"

or

"virtio-blk missing headers"

and puts the device in an error state.

This patch works around the issue by introducing virtio_set_disabled(), which sets a 'disabled' flag to bypass checks like virtio_queue_empty() when bus-mastering is disabled. Since we'd check this flag at all the same sites as vdev->broken, we replace those checks with an inline function which checks for either vdev->broken or vdev->disabled.

The 'disabled' flag is only migrated when set, which should be fairly rare, but to maintain migration compatibility we disable its use for older machine types. Users requiring the use of the flag in conjunction with older machine types can set it explicitly as a virtio-device option.

NOTES:

- This leaves some other oddities in play, like the fact that DRIVER_OK also gets unset in response to bus-mastering being disabled, but not restored (however the device seems to continue working)
- Similarly, we disable the host notifier via virtio_bus_stop_ioeventfd(), which seems to move the handling out of virtio-blk dataplane and back into the main IO thread, and it ends up staying there till a reset (but otherwise continues working normally)

Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: Alexey Kardashevskiy <aik@ozlabs.ru>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Message-Id: <20191120005003.27035-1-mdroth@linux.vnet.ibm.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
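A sketch of the two helpers the description introduces (names taken from the text above; the machine-type compatibility gating for migration is omitted):

    /* True when the device must not process requests: either the guest
     * broke it, or bus-mastering was turned off underneath it. */
    static inline bool virtio_device_disabled(VirtIODevice *vdev)
    {
        return unlikely(vdev->disabled || vdev->broken);
    }

    void virtio_set_disabled(VirtIODevice *vdev, bool disable)
    {
        vdev->disabled = disable;
    }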
This avoids a memory leak when qom-set is called to set throttle_group limits. Here is an easy way to reproduce:

1. Run qemu-iotests as follows and check the result with asan:
   ./check -qcow2 184

Following is the asan output backtrace:

Direct leak of 912 byte(s) in 3 object(s) allocated from:
#0 0xffff8d7ab3c3 in __interceptor_calloc (/lib64/libasan.so.4+0xd33c3)
#1 0xffff8d4c31cb in g_malloc0 (/lib64/libglib-2.0.so.0+0x571cb)
#2 0x190c857 in qobject_input_start_struct /mnt/sdc/qemu-master/qemu-4.2.0-rc0/qapi/qobject-input-visitor.c:295
#3 0x19070df in visit_start_struct /mnt/sdc/qemu-master/qemu-4.2.0-rc0/qapi/qapi-visit-core.c:49
#4 0x1948b87 in visit_type_ThrottleLimits qapi/qapi-visit-block-core.c:3759
#5 0x17e4aa3 in throttle_group_set_limits /mnt/sdc/qemu-master/qemu-4.2.0-rc0/block/throttle-groups.c:900
#6 0x1650eff in object_property_set /mnt/sdc/qemu-master/qemu-4.2.0-rc0/qom/object.c:1272
#7 0x1658517 in object_property_set_qobject /mnt/sdc/qemu-master/qemu-4.2.0-rc0/qom/qom-qobject.c:26
#8 0x15880bb in qmp_qom_set /mnt/sdc/qemu-master/qemu-4.2.0-rc0/qom/qom-qmp-cmds.c:74
#9 0x157e3e3 in qmp_marshal_qom_set qapi/qapi-commands-qom.c:154

Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: PanNengyuan <pannengyuan@huawei.com>
Message-id: 1574835614-42028-1-git-send-email-pannengyuan@huawei.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
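Given frame #4, the leaked object is the ThrottleLimits allocated by the visitor; the fix shape (a hedged sketch, labels illustrative) is to release it on every exit path:

    ThrottleLimits *argp = NULL;
    Error *local_err = NULL;

    visit_type_ThrottleLimits(v, name, &argp, &local_err);
    if (local_err) {
        goto ret;
    }
    /* ... validate and apply the limits ... */
    ret:
    qapi_free_ThrottleLimits(argp);   /* previously leaked */
    error_propagate(errp, local_err);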
There is an overflow: the source 'datain.data[2]' is 100 bytes, but 'ss' is 252 bytes. This may cause a security issue, because we can access a lot of unrelated memory data. The len for the sbp data copy should take the minimum of mx_sb_len and sb_len_wr, not the maximum.

If we use an iscsi device for VM backend storage, ASAN shows this stack:

READ of size 252 at 0xfffd149dcfc4 thread T0
#0 0xaaad433d0d34 in __asan_memcpy (aarch64-softmmu/qemu-system-aarch64+0x2cb0d34)
#1 0xaaad45f9d6d0 in iscsi_aio_ioctl_cb /qemu/block/iscsi.c:996:9
#2 0xfffd1af0e2dc (/usr/lib64/iscsi/libiscsi.so.8+0xe2dc)
#3 0xfffd1af0d174 (/usr/lib64/iscsi/libiscsi.so.8+0xd174)
#4 0xfffd1af19fac (/usr/lib64/iscsi/libiscsi.so.8+0x19fac)
#5 0xaaad45f9acc8 in iscsi_process_read /qemu/block/iscsi.c:403:5
#6 0xaaad4623733c in aio_dispatch_handler /qemu/util/aio-posix.c:467:9
#7 0xaaad4622f350 in aio_dispatch_handlers /qemu/util/aio-posix.c:510:20
#8 0xaaad4622f350 in aio_dispatch /qemu/util/aio-posix.c:520
#9 0xaaad46215944 in aio_ctx_dispatch /qemu/util/async.c:298:5
#10 0xfffd1bed12f4 in g_main_context_dispatch (/lib64/libglib-2.0.so.0+0x512f4)
#11 0xaaad46227de0 in glib_pollfds_poll /qemu/util/main-loop.c:219:9
#12 0xaaad46227de0 in os_host_main_loop_wait /qemu/util/main-loop.c:242
#13 0xaaad46227de0 in main_loop_wait /qemu/util/main-loop.c:518
#14 0xaaad43d9d60c in qemu_main_loop /qemu/softmmu/vl.c:1662:9
#15 0xaaad4607a5b0 in main /qemu/softmmu/main.c:49:5
#16 0xfffd1a460b9c in __libc_start_main (/lib64/libc.so.6+0x20b9c)
#17 0xaaad43320740 in _start (aarch64-softmmu/qemu-system-aarch64+0x2c00740)

0xfffd149dcfc4 is located 0 bytes to the right of the 100-byte region [0xfffd149dcf60,0xfffd149dcfc4) allocated by thread T0 here:
#0 0xaaad433d1e70 in __interceptor_malloc (aarch64-softmmu/qemu-system-aarch64+0x2cb1e70)
#1 0xfffd1af0e254 (/usr/lib64/iscsi/libiscsi.so.8+0xe254)
#2 0xfffd1af0d174 (/usr/lib64/iscsi/libiscsi.so.8+0xd174)
#3 0xfffd1af19fac (/usr/lib64/iscsi/libiscsi.so.8+0x19fac)
#4 0xaaad45f9acc8 in iscsi_process_read /qemu/block/iscsi.c:403:5
#5 0xaaad4623733c in aio_dispatch_handler /qemu/util/aio-posix.c:467:9
#6 0xaaad4622f350 in aio_dispatch_handlers /qemu/util/aio-posix.c:510:20
#7 0xaaad4622f350 in aio_dispatch /qemu/util/aio-posix.c:520
#8 0xaaad46215944 in aio_ctx_dispatch /qemu/util/async.c:298:5
#9 0xfffd1bed12f4 in g_main_context_dispatch (/lib64/libglib-2.0.so.0+0x512f4)
#10 0xaaad46227de0 in glib_pollfds_poll /qemu/util/main-loop.c:219:9
#11 0xaaad46227de0 in os_host_main_loop_wait /qemu/util/main-loop.c:242
#12 0xaaad46227de0 in main_loop_wait /qemu/util/main-loop.c:518
#13 0xaaad43d9d60c in qemu_main_loop /qemu/softmmu/vl.c:1662:9
#14 0xaaad4607a5b0 in main /qemu/softmmu/main.c:49:5
#15 0xfffd1a460b9c in __libc_start_main (/lib64/libc.so.6+0x20b9c)
#16 0xaaad43320740 in _start (aarch64-softmmu/qemu-system-aarch64+0x2c00740)

Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Chen Qun <kuhn.chenqun@huawei.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20200418062602.10776-1-kuhn.chenqun@huawei.com
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
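In code terms the fix is essentially a one-liner in iscsi_aio_ioctl_cb() (a sketch; variable names follow the trace above):

    /* Copy at most what the caller's sense buffer can hold and what the
     * target actually returned, never the larger of the two. */
    int ss = MIN(acb->ioh->mx_sb_len, acb->ioh->sb_len_wr);
    memcpy(acb->ioh->sbp, &acb->task->datain.data[2], ss);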
virtio_net_device_realize() rejects invalid duplex and speed values. The error handling is broken:

$ ../qemu/bld-sani/x86_64-softmmu/qemu-system-x86_64 -S -display none -monitor stdio
QEMU 4.2.93 monitor - type 'help' for more information
(qemu) device_add virtio-net,duplex=x
Error: 'duplex' must be 'half' or 'full'
(qemu) c
=================================================================
==15654==ERROR: AddressSanitizer: heap-use-after-free on address 0x62e000014590 at pc 0x560b75c8dc13 bp 0x7fffdf1a6950 sp 0x7fffdf1a6940
READ of size 8 at 0x62e000014590 thread T0
#0 0x560b75c8dc12 in object_dynamic_cast_assert /work/armbru/qemu/qom/object.c:826
#1 0x560b74c38ac0 in virtio_vmstate_change /work/armbru/qemu/hw/virtio/virtio.c:3210
#2 0x560b74d9765e in vm_state_notify /work/armbru/qemu/softmmu/vl.c:1271
#3 0x560b7494ba72 in vm_prepare_start /work/armbru/qemu/cpus.c:2156
#4 0x560b7494bacd in vm_start /work/armbru/qemu/cpus.c:2162
#5 0x560b75a7d890 in qmp_cont /work/armbru/qemu/monitor/qmp-cmds.c:160
#6 0x560b75a8d70a in hmp_cont /work/armbru/qemu/monitor/hmp-cmds.c:1043
#7 0x560b75a799f2 in handle_hmp_command /work/armbru/qemu/monitor/hmp.c:1082
[...]
0x62e000014590 is located 33168 bytes inside of 42288-byte region [0x62e00000c400,0x62e000016930)
freed by thread T1 here:
#0 0x7feadd39491f in __interceptor_free (/lib64/libasan.so.5+0x10d91f)
#1 0x7feadcebcd7c in g_free (/lib64/libglib-2.0.so.0+0x55d7c)
#2 0x560b75c8fd40 in object_unref /work/armbru/qemu/qom/object.c:1128
#3 0x560b7498a625 in memory_region_unref /work/armbru/qemu/memory.c:1762
#4 0x560b74999fa4 in do_address_space_destroy /work/armbru/qemu/memory.c:2788
#5 0x560b762362fc in call_rcu_thread /work/armbru/qemu/util/rcu.c:283
#6 0x560b761c8884 in qemu_thread_start /work/armbru/qemu/util/qemu-thread-posix.c:519
#7 0x7fead9be34bf in start_thread (/lib64/libpthread.so.0+0x84bf)
previously allocated by thread T0 here:
#0 0x7feadd394d18 in __interceptor_malloc (/lib64/libasan.so.5+0x10dd18)
#1 0x7feadcebcc88 in g_malloc (/lib64/libglib-2.0.so.0+0x55c88)
#2 0x560b75c8cf8a in object_new /work/armbru/qemu/qom/object.c:699
#3 0x560b75010ad9 in qdev_device_add /work/armbru/qemu/qdev-monitor.c:654
#4 0x560b750120c2 in qmp_device_add /work/armbru/qemu/qdev-monitor.c:805
#5 0x560b75012c1b in hmp_device_add /work/armbru/qemu/qdev-monitor.c:905
[...]
==15654==ABORTING

Cause: virtio_net_device_realize() neglects to bail out after setting the error. Fix that.

Fixes: 9473939
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200422130719.28225-9-armbru@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
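The fix is the classic missing bail-out after error_setg(); sketched from the duplex-parsing branch in virtio_net_device_realize():

    if (strcmp(n->net_conf.duplex_str, "half") == 0) {
        n->net_conf.duplex = DUPLEX_HALF;
    } else if (strcmp(n->net_conf.duplex_str, "full") == 0) {
        n->net_conf.duplex = DUPLEX_FULL;
    } else {
        error_setg(errp, "'duplex' must be 'half' or 'full'");
        return;   /* previously missing: realize kept going after the error */
    }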
Similarly to commit 158b659 with the APB PnP registers, guests can crash QEMU when writing to the AHB PnP registers:

$ echo 'writeb 0xfffff042 69' | qemu-system-sparc -M leon3_generic -S -bios /etc/magic -qtest stdio
[I 1571938309.932255] OPENED
[R +0.063474] writeb 0xfffff042 69
Segmentation fault (core dumped)

(gdb) bt
#0 0x0000000000000000 in ()
#1 0x0000562999110df4 in memory_region_write_with_attrs_accessor (mr=mr@entry=0x56299aa28ea0, addr=66, value=value@entry=0x7fff6abe13b8, size=size@entry=1, shift=<optimized out>, mask=mask@entry=255, attrs=...) at memory.c:503
#2 0x000056299911095e in access_with_adjusted_size (addr=addr@entry=66, value=value@entry=0x7fff6abe13b8, size=size@entry=1, access_size_min=<optimized out>, access_size_max=<optimized out>, access_fn=access_fn@entry=0x562999110d70 <memory_region_write_with_attrs_accessor>, mr=0x56299aa28ea0, attrs=...) at memory.c:539
#3 0x0000562999114fba in memory_region_dispatch_write (mr=mr@entry=0x56299aa28ea0, addr=66, data=<optimized out>, op=<optimized out>, attrs=attrs@entry=...) at memory.c:1482
#4 0x00005629990c0860 in flatview_write_continue (fv=fv@entry=0x56299aa7d8a0, addr=addr@entry=4294963266, attrs=..., ptr=ptr@entry=0x7fff6abe1540, len=len@entry=1, addr1=<optimized out>, l=<optimized out>, mr=0x56299aa28ea0) at include/qemu/host-utils.h:164
#5 0x00005629990c0a76 in flatview_write (fv=0x56299aa7d8a0, addr=4294963266, attrs=..., buf=0x7fff6abe1540, len=1) at exec.c:3165
#6 0x00005629990c4c1b in address_space_write (as=<optimized out>, addr=<optimized out>, attrs=..., attrs@entry=..., buf=buf@entry=0x7fff6abe1540, len=len@entry=1) at exec.c:3256
#7 0x000056299910f807 in qtest_process_command (chr=chr@entry=0x5629995ee920 <qtest_chr>, words=words@entry=0x56299acfcfa0) at qtest.c:437

Instead of crashing, log the access as unimplemented.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: KONRAD Frederic <frederic.konrad@adacore.com>
Message-Id: <20200331105048.27989-3-f4bug@amsat.org>
During testing of the vhost-user-blk reconnect functionality a qemu SIGSEGV was triggered.

Start qemu as:

x86_64-softmmu/qemu-system-x86_64 -m 1024M -M q35 \
  -object memory-backend-file,id=ram-node0,size=1024M,mem-path=/dev/shm/qemu,share=on \
  -numa node,cpus=0,memdev=ram-node0 \
  -chardev socket,id=chardev0,path=./vhost.sock,noserver,reconnect=1 \
  -device vhost-user-blk-pci,chardev=chardev0,num-queues=4 --enable-kvm

Start the vhost-user-blk daemon:

./vhost-user-blk -s ./vhost.sock -b test-img.raw

If vhost-user-blk is killed during the vhost initialization process, for instance after getting the VHOST_SET_VRING_CALL command, then QEMU will fail with the following backtrace:

Thread 1 "qemu-system-x86" received signal SIGSEGV, Segmentation fault.
0x00005555559272bb in vhost_user_read (dev=0x7fffef2d53e0, msg=0x7fffffffd5b0) at ./hw/virtio/vhost-user.c:260
260         CharBackend *chr = u->user->chr;

#0 0x00005555559272bb in vhost_user_read (dev=0x7fffef2d53e0, msg=0x7fffffffd5b0) at ./hw/virtio/vhost-user.c:260
#1 0x000055555592acb8 in vhost_user_get_config (dev=0x7fffef2d53e0, config=0x7fffef2d5394 "", config_len=60) at ./hw/virtio/vhost-user.c:1645
#2 0x0000555555925525 in vhost_dev_get_config (hdev=0x7fffef2d53e0, config=0x7fffef2d5394 "", config_len=60) at ./hw/virtio/vhost.c:1490
#3 0x00005555558cc46b in vhost_user_blk_device_realize (dev=0x7fffef2d51a0, errp=0x7fffffffd8f0) at ./hw/block/vhost-user-blk.c:429
#4 0x0000555555920090 in virtio_device_realize (dev=0x7fffef2d51a0, errp=0x7fffffffd948) at ./hw/virtio/virtio.c:3615
#5 0x0000555555a9779c in device_set_realized (obj=0x7fffef2d51a0, value=true, errp=0x7fffffffdb88) at ./hw/core/qdev.c:891
...

The problem is that vhost_user_write() doesn't get an error after the disconnect, and QEMU then tries to call vhost_user_read(). The tcp_chr_write() routine should return -1 in case of disconnect. Indicate the EIO error if this routine is called in the disconnected state.

Signed-off-by: Dima Stepanov <dimastep@yandex-team.ru>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <aeb7806bfc945faadf09f64dcfa30f59de3ac053.1590396396.git.dimastep@yandex-team.ru>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
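A hedged sketch of the tcp_chr_write() change (the disconnected branch used to report success silently):

    static int tcp_chr_write(Chardev *chr, const uint8_t *buf, int len)
    {
        SocketChardev *s = SOCKET_CHARDEV(chr);

        if (s->state != TCP_CHARDEV_STATE_CONNECTED) {
            /* Indicate an error state so callers such as vhost-user
             * notice the disconnect instead of reading a dead backend. */
            errno = EIO;
            return -1;
        }
        /* ... normal connected-path write ... */
    }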
When we hotplug vcpus, cpu_update_state is added to vm_change_state_head in kvm_arch_init_vcpu(). But it is never deleted in kvm_arch_destroy_vcpu() after unplug, which then causes a use-after-free access. This patch deletes it in kvm_arch_destroy_vcpu() to fix that.

Reproducer:

virsh setvcpus vm1 4 --live
virsh setvcpus vm1 2 --live
virsh suspend vm1
virsh resume vm1

The UAF stack:

==qemu-system-x86_64==28233==ERROR: AddressSanitizer: heap-use-after-free on address 0x62e00002e798 at pc 0x5573c6917d9e bp 0x7fff07139e50 sp 0x7fff07139e40
WRITE of size 1 at 0x62e00002e798 thread T0
#0 0x5573c6917d9d in cpu_update_state /mnt/sdb/qemu/target/i386/kvm.c:742
#1 0x5573c699121a in vm_state_notify /mnt/sdb/qemu/vl.c:1290
#2 0x5573c636287e in vm_prepare_start /mnt/sdb/qemu/cpus.c:2144
#3 0x5573c6362927 in vm_start /mnt/sdb/qemu/cpus.c:2150
#4 0x5573c71e8304 in qmp_cont /mnt/sdb/qemu/monitor/qmp-cmds.c:173
#5 0x5573c727cb1e in qmp_marshal_cont qapi/qapi-commands-misc.c:835
#6 0x5573c7694c7a in do_qmp_dispatch /mnt/sdb/qemu/qapi/qmp-dispatch.c:132
#7 0x5573c7694c7a in qmp_dispatch /mnt/sdb/qemu/qapi/qmp-dispatch.c:175
#8 0x5573c71d9110 in monitor_qmp_dispatch /mnt/sdb/qemu/monitor/qmp.c:145
#9 0x5573c71dad4f in monitor_qmp_bh_dispatcher /mnt/sdb/qemu/monitor/qmp.c:234

Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200513132630.13412-1-pannengyuan@huawei.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
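Sketch of the fix: keep the VMChangeStateEntry returned at registration time and delete it on destroy (the field name is an assumption based on the patch description):

    /* kvm_arch_init_vcpu(): remember the registration */
    cpu->vmsentry = qemu_add_vm_change_state_handler(cpu_update_state, env);

    /* kvm_arch_destroy_vcpu(): unregister it, otherwise vm_state_notify()
     * writes into freed memory after the vCPU is unplugged */
    if (cpu->vmsentry) {
        qemu_del_vm_change_state_handler(cpu->vmsentry);
    }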
Trying libFuzzer on the vmport device, we get:

AddressSanitizer:DEADLYSIGNAL
=================================================================
==29476==ERROR: AddressSanitizer: SEGV on unknown address 0x000000008840 (pc 0x56448bec4d79 bp 0x7ffeec9741b0 sp 0x7ffeec9740e0 T0)
==29476==The signal is caused by a READ memory access.
#0 0x56448bec4d78 in vmport_ioport_read (qemu-fuzz-i386+0x1260d78)
#1 0x56448bb5f175 in memory_region_read_accessor (qemu-fuzz-i386+0xefb175)
#2 0x56448bb30c13 in access_with_adjusted_size (qemu-fuzz-i386+0xeccc13)
#3 0x56448bb2ea27 in memory_region_dispatch_read1 (qemu-fuzz-i386+0xecaa27)
#4 0x56448bb2e443 in memory_region_dispatch_read (qemu-fuzz-i386+0xeca443)
#5 0x56448b961ab1 in flatview_read_continue (qemu-fuzz-i386+0xcfdab1)
#6 0x56448b96336d in flatview_read (qemu-fuzz-i386+0xcff36d)
#7 0x56448b962ec4 in address_space_read_full (qemu-fuzz-i386+0xcfeec4)

This is easily reproducible using:

$ echo inb 0x5658 | qemu-system-i386 -M isapc,accel=qtest -qtest stdio
[I 1589796572.009763] OPENED
[R +0.008069] inb 0x5658
Segmentation fault (core dumped)

$ coredumpctl gdb -q
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x00005605b54d0f21 in vmport_ioport_read (opaque=0x5605b7531ce0, addr=0, size=4) at hw/i386/vmport.c:77
77          eax = env->regs[R_EAX];
(gdb) p cpu
$1 = (X86CPU *) 0x0
(gdb) bt
#0 0x00005605b54d0f21 in vmport_ioport_read (opaque=0x5605b7531ce0, addr=0, size=4) at hw/i386/vmport.c:77
#1 0x00005605b53db114 in memory_region_read_accessor (mr=0x5605b7531d80, addr=0, value=0x7ffc9d261a30, size=4, shift=0, mask=4294967295, attrs=...) at memory.c:434
#2 0x00005605b53db5d4 in access_with_adjusted_size (addr=0, value=0x7ffc9d261a30, size=1, access_size_min=4, access_size_max=4, access_fn=0x5605b53db0d2 <memory_region_read_accessor>, mr=0x5605b7531d80, attrs=...) at memory.c:544
#3 0x00005605b53de156 in memory_region_dispatch_read1 (mr=0x5605b7531d80, addr=0, pval=0x7ffc9d261a30, size=1, attrs=...) at memory.c:1396
#4 0x00005605b53de228 in memory_region_dispatch_read (mr=0x5605b7531d80, addr=0, pval=0x7ffc9d261a30, op=MO_8, attrs=...) at memory.c:1424
#5 0x00005605b537c80a in flatview_read_continue (fv=0x5605b7650290, addr=22104, attrs=..., ptr=0x7ffc9d261b4b, len=1, addr1=0, l=1, mr=0x5605b7531d80) at exec.c:3200
#6 0x00005605b537c95d in flatview_read (fv=0x5605b7650290, addr=22104, attrs=..., buf=0x7ffc9d261b4b, len=1) at exec.c:3239
#7 0x00005605b537c9e6 in address_space_read_full (as=0x5605b5f74ac0 <address_space_io>, addr=22104, attrs=..., buf=0x7ffc9d261b4b, len=1) at exec.c:3252
#8 0x00005605b53d5a5d in address_space_read (len=1, buf=0x7ffc9d261b4b, attrs=..., addr=22104, as=0x5605b5f74ac0 <address_space_io>) at include/exec/memory.h:2401
#9 0x00005605b53d5a5d in cpu_inb (addr=22104) at ioport.c:88

X86CPU is NULL because the QTest accelerator does not use a CPU. Fix by returning default values when the QTest accelerator is used.

Reported-by: Clang AddressSanitizer
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
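Sketch of the guard (qtest_enabled() comes from sysemu/qtest.h; returning -1 mirrors an open-bus ioport read):

    static uint64_t vmport_ioport_read(void *opaque, hwaddr addr, unsigned size)
    {
        X86CPU *cpu = X86_CPU(current_cpu);

        if (qtest_enabled()) {
            /* No vCPU under the QTest accelerator: return a default
             * value instead of dereferencing a NULL CPU. */
            return -1;
        }
        /* ... normal path reads cpu->env.regs[R_EAX] ... */
    }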
On restart, we were scheduling a BH to process queued requests, which would run before starting up the data plane, leading to those requests being assigned and started on coroutines on the main context. This could cause requests to be wrongly processed in parallel from different threads (the main thread and the iothread managing the data plane), potentially leading to multiple issues.

For example, stopping and resuming a VM multiple times while the guest is generating I/O on a virtio_blk device can trigger a crash with a stack trace like this one:

<------>
Thread 2 (Thread 0x7ff736765700 (LWP 1062503)):
#0 0x00005567a13b99d6 in iov_memset (iov=0x6563617073206f4e, iov_cnt=1717922848, offset=516096, fillc=0, bytes=7018105756081554803) at util/iov.c:69
#1 0x00005567a13bab73 in qemu_iovec_memset (qiov=0x7ff73ec99748, offset=516096, fillc=0, bytes=7018105756081554803) at util/iov.c:530
#2 0x00005567a12f411c in qemu_laio_process_completion (laiocb=0x7ff6512ee6c0) at block/linux-aio.c:86
#3 0x00005567a12f42ff in qemu_laio_process_completions (s=0x7ff7182e8420) at block/linux-aio.c:217
#4 0x00005567a12f480d in ioq_submit (s=0x7ff7182e8420) at block/linux-aio.c:323
#5 0x00005567a12f43d9 in qemu_laio_process_completions_and_submit (s=0x7ff7182e8420) at block/linux-aio.c:236
#6 0x00005567a12f44c2 in qemu_laio_poll_cb (opaque=0x7ff7182e8430) at block/linux-aio.c:267
#7 0x00005567a13aed83 in run_poll_handlers_once (ctx=0x5567a2b58c70, timeout=0x7ff7367645f8) at util/aio-posix.c:520
#8 0x00005567a13aee9f in run_poll_handlers (ctx=0x5567a2b58c70, max_ns=16000, timeout=0x7ff7367645f8) at util/aio-posix.c:562
#9 0x00005567a13aefde in try_poll_mode (ctx=0x5567a2b58c70, timeout=0x7ff7367645f8) at util/aio-posix.c:597
#10 0x00005567a13af115 in aio_poll (ctx=0x5567a2b58c70, blocking=true) at util/aio-posix.c:639
#11 0x00005567a109acca in iothread_run (opaque=0x5567a2b29760) at iothread.c:75
#12 0x00005567a13b2790 in qemu_thread_start (args=0x5567a2b694c0) at util/qemu-thread-posix.c:519
#13 0x00007ff73eedf2de in start_thread () at /lib64/libpthread.so.0
#14 0x00007ff73ec10e83 in clone () at /lib64/libc.so.6

Thread 1 (Thread 0x7ff743986f00 (LWP 1062500)):
#0 0x00005567a13b99d6 in iov_memset (iov=0x6563617073206f4e, iov_cnt=1717922848, offset=516096, fillc=0, bytes=7018105756081554803) at util/iov.c:69
#1 0x00005567a13bab73 in qemu_iovec_memset (qiov=0x7ff73ec99748, offset=516096, fillc=0, bytes=7018105756081554803) at util/iov.c:530
#2 0x00005567a12f411c in qemu_laio_process_completion (laiocb=0x7ff6512ee6c0) at block/linux-aio.c:86
#3 0x00005567a12f42ff in qemu_laio_process_completions (s=0x7ff7182e8420) at block/linux-aio.c:217
#4 0x00005567a12f480d in ioq_submit (s=0x7ff7182e8420) at block/linux-aio.c:323
#5 0x00005567a12f4a2f in laio_do_submit (fd=19, laiocb=0x7ff5f4ff9ae0, offset=472363008, type=2) at block/linux-aio.c:375
#6 0x00005567a12f4af2 in laio_co_submit (bs=0x5567a2b8c460, s=0x7ff7182e8420, fd=19, offset=472363008, qiov=0x7ff5f4ff9ca0, type=2) at block/linux-aio.c:394
#7 0x00005567a12f1803 in raw_co_prw (bs=0x5567a2b8c460, offset=472363008, bytes=20480, qiov=0x7ff5f4ff9ca0, type=2) at block/file-posix.c:1892
#8 0x00005567a12f1941 in raw_co_pwritev (bs=0x5567a2b8c460, offset=472363008, bytes=20480, qiov=0x7ff5f4ff9ca0, flags=0) at block/file-posix.c:1925
#9 0x00005567a12fe3e1 in bdrv_driver_pwritev (bs=0x5567a2b8c460, offset=472363008, bytes=20480, qiov=0x7ff5f4ff9ca0, qiov_offset=0, flags=0) at block/io.c:1183
#10 0x00005567a1300340 in bdrv_aligned_pwritev (child=0x5567a2b5b070, req=0x7ff5f4ff9db0, offset=472363008, bytes=20480, align=512, qiov=0x7ff72c0425b8, qiov_offset=0, flags=0) at block/io.c:1980
#11 0x00005567a1300b29 in bdrv_co_pwritev_part (child=0x5567a2b5b070, offset=472363008, bytes=20480, qiov=0x7ff72c0425b8, qiov_offset=0, flags=0) at block/io.c:2137
#12 0x00005567a12baba1 in qcow2_co_pwritev_task (bs=0x5567a2b92740, file_cluster_offset=472317952, offset=487305216, bytes=20480, qiov=0x7ff72c0425b8, qiov_offset=0, l2meta=0x0) at block/qcow2.c:2444
#13 0x00005567a12bacdb in qcow2_co_pwritev_task_entry (task=0x5567a2b48540) at block/qcow2.c:2475
#14 0x00005567a13167d8 in aio_task_co (opaque=0x5567a2b48540) at block/aio_task.c:45
#15 0x00005567a13cf00c in coroutine_trampoline (i0=738245600, i1=32759) at util/coroutine-ucontext.c:115
#16 0x00007ff73eb622e0 in __start_context () at /lib64/libc.so.6
#17 0x00007ff6626f1350 in ()
#18 0x0000000000000000 in ()
<------>

This is also known to cause crashes with this message (assertion failed):

aio_co_schedule: Co-routine was already scheduled in 'aio_co_schedule'

RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=1812765
Signed-off-by: Sergio Lopez <slp@redhat.com>
Message-Id: <20200603093240.40489-3-slp@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
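The fix direction, as a hedged sketch: schedule the restart BH in the AioContext that owns the BlockBackend rather than on the main context, holding an in-flight reference so the dataplane cannot be stopped underneath it:

    /* in virtio_blk_dma_restart_cb(); paired with blk_dec_in_flight()
     * inside virtio_blk_dma_restart_bh() */
    blk_inc_in_flight(s->conf.conf.blk);
    aio_bh_schedule_oneshot(blk_get_aio_context(s->conf.conf.blk),
                            virtio_blk_dma_restart_bh, s);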
'obj' is never freed at the end of hmp_qom_get(). Fix that.

The leak stack:

Direct leak of 40 byte(s) in 1 object(s) allocated from:
#0 0x7f4e3a779ae8 in __interceptor_malloc (/lib64/libasan.so.5+0xefae8)
#1 0x7f4e398f91d5 in g_malloc (/lib64/libglib-2.0.so.0+0x531d5)
#2 0x55c9fd9a3999 in qstring_from_substr /build/qemu/src/qobject/qstring.c:45
#3 0x55c9fd894bd3 in qobject_output_type_str /build/qemu/src/qapi/qobject-output-visitor.c:175
#4 0x55c9fd894bd3 in qobject_output_type_str /build/qemu/src/qapi/qobject-output-visitor.c:168
#5 0x55c9fd88b34d in visit_type_str /build/qemu/src/qapi/qapi-visit-core.c:308
#6 0x55c9fd59aa6b in property_get_str /build/qemu/src/qom/object.c:2064
#7 0x55c9fd5adb8a in object_property_get_qobject /build/qemu/src/qom/qom-qobject.c:38
#8 0x55c9fd4a029d in hmp_qom_get /build/qemu/src/qom/qom-hmp-cmds.c:66

Fixes: 89cf4fe
Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com>
Message-Id: <20200603070338.7922-1-pannengyuan@huawei.com>
Reviewed-by: Li Qiang <liq3ea@gmail.com>
Tested-by: Li Qiang <liq3ea@gmail.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
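Sketch of the fix (assuming that era's QString-returning JSON formatter; only the final unref is the actual change):

    QObject *obj = object_property_get_qobject(o, property, &err);
    if (err == NULL) {
        QString *json = qobject_to_json_pretty(obj);
        monitor_printf(mon, "%s\n", qstring_get_str(json));
        qobject_unref(json);
    }
    qobject_unref(obj);   /* previously missing */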
Since commit d70c996, when enabling the PMU we get:

$ qemu-system-aarch64 -cpu host,pmu=on -M virt,accel=kvm,gic-version=3
Segmentation fault (core dumped)

Thread 1 "qemu-system-aar" received signal SIGSEGV, Segmentation fault.
0x0000aaaaaae356d0 in kvm_ioctl (s=0x0, type=44547) at accel/kvm/kvm-all.c:2588
2588        ret = ioctl(s->fd, type, arg);
(gdb) bt
#0 0x0000aaaaaae356d0 in kvm_ioctl (s=0x0, type=44547) at accel/kvm/kvm-all.c:2588
#1 0x0000aaaaaae31568 in kvm_check_extension (s=0x0, extension=126) at accel/kvm/kvm-all.c:916
#2 0x0000aaaaaafce254 in kvm_arm_pmu_supported (cpu=0xaaaaac214ab0) at target/arm/kvm.c:213
#3 0x0000aaaaaafc0f94 in arm_set_pmu (obj=0xaaaaac214ab0, value=true, errp=0xffffffffe438) at target/arm/cpu.c:1111
#4 0x0000aaaaab5533ac in property_set_bool (obj=0xaaaaac214ab0, v=0xaaaaac223a80, name=0xaaaaac11a970 "pmu", opaque=0xaaaaac222730, errp=0xffffffffe438) at qom/object.c:2170
#5 0x0000aaaaab5512f0 in object_property_set (obj=0xaaaaac214ab0, v=0xaaaaac223a80, name=0xaaaaac11a970 "pmu", errp=0xffffffffe438) at qom/object.c:1328
#6 0x0000aaaaab551e10 in object_property_parse (obj=0xaaaaac214ab0, string=0xaaaaac11b4c0 "on", name=0xaaaaac11a970 "pmu", errp=0xffffffffe438) at qom/object.c:1561
#7 0x0000aaaaab54ee8c in object_apply_global_props (obj=0xaaaaac214ab0, props=0xaaaaac018e20, errp=0xaaaaabd6fd88 <error_fatal>) at qom/object.c:407
#8 0x0000aaaaab1dd5a4 in qdev_prop_set_globals (dev=0xaaaaac214ab0) at hw/core/qdev-properties.c:1218
#9 0x0000aaaaab1d9fac in device_post_init (obj=0xaaaaac214ab0) at hw/core/qdev.c:1050
...
#15 0x0000aaaaab54f310 in object_initialize_with_type (obj=0xaaaaac214ab0, size=52208, type=0xaaaaabe237f0) at qom/object.c:512
#16 0x0000aaaaab54fa24 in object_new_with_type (type=0xaaaaabe237f0) at qom/object.c:687
#17 0x0000aaaaab54fa80 in object_new (typename=0xaaaaabe23970 "host-arm-cpu") at qom/object.c:702
#18 0x0000aaaaaaf04a74 in machvirt_init (machine=0xaaaaac0a8550) at hw/arm/virt.c:1770
#19 0x0000aaaaab1e8720 in machine_run_board_init (machine=0xaaaaac0a8550) at hw/core/machine.c:1138
#20 0x0000aaaaaaf95394 in qemu_init (argc=5, argv=0xffffffffea58, envp=0xffffffffea88) at softmmu/vl.c:4348
#21 0x0000aaaaaada3f74 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at softmmu/main.c:48

This is because in frame #2, cpu->kvm_state is still NULL (the vCPU is not yet realized).

KVM has a hard requirement of all cores supporting the same feature set. We only need to check if the accelerator supports a feature, not each vCPU individually. Fix by removing the 'CPUState *cpu' argument from the kvm_arm_<FEATURE>_supported() functions.

Fixes: d70c996 ('Use CPUState::kvm_state in kvm_arm_pmu_supported')
Reported-by: Haibo Xu <haibo.xu@linaro.org>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
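Before/after sketch of the signature change:

    /* Before: dereferences cpu->kvm_state, which is NULL pre-realize */
    bool kvm_arm_pmu_supported(CPUState *cpu)
    {
        return kvm_check_extension(cpu->kvm_state, KVM_CAP_ARM_PMU_V3);
    }

    /* After: query the accelerator-wide state; no vCPU needed */
    bool kvm_arm_pmu_supported(void)
    {
        return kvm_check_extension(kvm_state, KVM_CAP_ARM_PMU_V3);
    }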
When adding the generic PCA955xClass in commit 736132e, we forgot to set the class_size field. Fill it now to avoid:

(gdb) run -machine mcimx6ul-evk -m 128M -display none -serial stdio -kernel ./OS.elf
Starting program: ../../qemu/qemu/arm-softmmu/qemu-system-arm -machine mcimx6ul-evk -m 128M -display none -serial stdio -kernel ./OS.elf
double free or corruption (!prev)

Thread 1 "qemu-system-arm" received signal SIGABRT, Aborted.
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
(gdb) where
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1 0x00007ffff75d8859 in __GI_abort () at abort.c:79
#2 0x00007ffff76433ee in __libc_message (action=action@entry=do_abort, fmt=fmt@entry=0x7ffff776d285 "%s\n") at ../sysdeps/posix/libc_fatal.c:155
#3 0x00007ffff764b47c in malloc_printerr (str=str@entry=0x7ffff776f690 "double free or corruption (!prev)") at malloc.c:5347
#4 0x00007ffff764d12c in _int_free (av=0x7ffff779eb80 <main_arena>, p=0x5555567a3990, have_lock=<optimized out>) at malloc.c:4317
#5 0x0000555555c906c3 in type_initialize_interface (ti=ti@entry=0x5555565b8f40, interface_type=0x555556597ad0, parent_type=0x55555662ca10) at qom/object.c:259
#6 0x0000555555c902da in type_initialize (ti=ti@entry=0x5555565b8f40) at qom/object.c:323
#7 0x0000555555c90d20 in type_initialize (ti=0x5555565b8f40) at qom/object.c:1028

$ valgrind --track-origins=yes qemu-system-arm -M mcimx6ul-evk -m 128M -display none -serial stdio -kernel ./OS.elf
==77479== Memcheck, a memory error detector
==77479== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==77479== Using Valgrind-3.15.0 and LibVEX; rerun with -h for copyright info
==77479== Command: qemu-system-arm -M mcimx6ul-evk -m 128M -display none -serial stdio -kernel ./OS.elf
==77479==
==77479== Invalid write of size 2
==77479==    at 0x6D8322: pca9552_class_init (pca9552.c:424)
==77479==    by 0x844D1F: type_initialize (object.c:1029)
==77479==    by 0x844D1F: object_class_foreach_tramp (object.c:1016)
==77479==    by 0x4AE1057: g_hash_table_foreach (in /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0.6400.2)
==77479==    by 0x8453A4: object_class_foreach (object.c:1038)
==77479==    by 0x8453A4: object_class_get_list (object.c:1095)
==77479==    by 0x556194: select_machine (vl.c:2416)
==77479==    by 0x556194: qemu_init (vl.c:3828)
==77479==    by 0x40AF9C: main (main.c:48)
==77479==  Address 0x583f108 is 0 bytes after a block of size 200 alloc'd
==77479==    at 0x483DD99: calloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==77479==    by 0x4AF8D30: g_malloc0 (in /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0.6400.2)
==77479==    by 0x844258: type_initialize.part.0 (object.c:306)
==77479==    by 0x844D1F: type_initialize (object.c:1029)
==77479==    by 0x844D1F: object_class_foreach_tramp (object.c:1016)
==77479==    by 0x4AE1057: g_hash_table_foreach (in /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0.6400.2)
==77479==    by 0x8453A4: object_class_foreach (object.c:1038)
==77479==    by 0x8453A4: object_class_get_list (object.c:1095)
==77479==    by 0x556194: select_machine (vl.c:2416)
==77479==    by 0x556194: qemu_init (vl.c:3828)
==77479==    by 0x40AF9C: main (main.c:48)

Fixes: 736132e ("hw/misc/pca9552: Add generic PCA955xClass")
Reported-by: Jean-Christophe DUBOIS <jcd@tribudubois.net>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Jean-Christophe DUBOIS <jcd@tribudubois.net>
Message-id: 20200629074704.23028-1-f4bug@amsat.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
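The one-line fix in context (a sketch of the TypeInfo; surrounding fields abridged):

    static const TypeInfo pca955x_info = {
        .name          = TYPE_PCA955X,
        .parent        = TYPE_I2C_SLAVE,
        .instance_size = sizeof(PCA955xState),
        .class_init    = pca955x_class_init,
        .class_size    = sizeof(PCA955xClass),   /* previously missing */
        .abstract      = true,
    };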
This tries to fix a leak detected when building with --enable-sanitizers and running:

./i386-softmmu/qemu-system-i386

Upon exit:

==13576==ERROR: LeakSanitizer: detected memory leaks

Direct leak of 1216 byte(s) in 1 object(s) allocated from:
#0 0x7f9d2ed5c628 in malloc (/usr/lib/x86_64-linux-gnu/libasan.so.5)
#1 0x7f9d2e963500 in g_malloc (/usr/lib/x86_64-linux-gnu/libglib-2.0.so.)
#2 0x55fa646d25cc in object_new_with_type /tmp/qemu/qom/object.c:686
#3 0x55fa63dbaa88 in qdev_new /tmp/qemu/hw/core/qdev.c:140
#4 0x55fa638a533f in pc_pflash_create /tmp/qemu/hw/i386/pc_sysfw.c:88
#5 0x55fa638a54c4 in pc_system_flash_create /tmp/qemu/hw/i386/pc_sysfw.c:106
#6 0x55fa646caa1d in object_init_with_type /tmp/qemu/qom/object.c:369
#7 0x55fa646d20b5 in object_initialize_with_type /tmp/qemu/qom/object.c:511
#8 0x55fa646d2606 in object_new_with_type /tmp/qemu/qom/object.c:687
#9 0x55fa639431e9 in qemu_init /tmp/qemu/softmmu/vl.c:3878
#10 0x55fa6335c1b8 in main /tmp/qemu/softmmu/main.c:48
#11 0x7f9d2cf06e0a in __libc_start_main ../csu/libc-start.c:308
#12 0x55fa6335f8e9 in _start (/tmp/qemu/build/i386-softmmu/qemu-system-i386)

Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
Message-Id: <20200701145231.19531-1-alxndr@bu.edu>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
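A hedged sketch of the fix shape in pc_pflash_create(): parent the flash device to the machine so it is released on teardown, then drop the extra reference from qdev_new() (the name and alias_prop_name parameters are assumptions taken from the trace):

    DeviceState *dev = qdev_new(TYPE_PFLASH_CFI01);

    /* Make the machine own the device... */
    object_property_add_child(OBJECT(pcms), name, OBJECT(dev));
    object_property_add_alias(OBJECT(pcms), alias_prop_name,
                              OBJECT(dev), "drive");
    /* ...and give up the reference qdev_new() handed us. */
    object_unref(OBJECT(dev));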
…tart

When the disconnect event is triggered in the connecting stage, tcp_chr_disconnect_locked() may be called twice.

The first call:

#0 qemu_chr_socket_restart_timer (chr=0x55555582ee90) at chardev/char-socket.c:120
#1 0x000055555558e38c in tcp_chr_disconnect_locked (chr=<optimized out>) at chardev/char-socket.c:490
#2 0x000055555558e3cd in tcp_chr_disconnect (chr=0x55555582ee90) at chardev/char-socket.c:497
#3 0x000055555558ea32 in tcp_chr_new_client (chr=chr@entry=0x55555582ee90, sioc=sioc@entry=0x55555582f0b0) at chardev/char-socket.c:892
#4 0x000055555558eeb8 in qemu_chr_socket_connected (task=0x55555582f300, opaque=<optimized out>) at chardev/char-socket.c:1090
#5 0x0000555555574352 in qio_task_complete (task=task@entry=0x55555582f300) at io/task.c:196
#6 0x00005555555745f4 in qio_task_thread_result (opaque=0x55555582f300) at io/task.c:111
#7 qio_task_wait_thread (task=0x55555582f300) at io/task.c:190
#8 0x000055555558f17e in tcp_chr_wait_connected (chr=0x55555582ee90, errp=0x555555802a08 <error_abort>) at chardev/char-socket.c:1013
#9 0x0000555555567cbd in char_socket_client_reconnect_test (opaque=0x5555557fe020 <client8unix>) at tests/test-char.c:1152

The second call:

#0 0x00007ffff5ac3277 in raise () from /lib64/libc.so.6
#1 0x00007ffff5ac4968 in abort () from /lib64/libc.so.6
#2 0x00007ffff5abc096 in __assert_fail_base () from /lib64/libc.so.6
#3 0x00007ffff5abc142 in __assert_fail () from /lib64/libc.so.6
#4 0x000055555558d10a in qemu_chr_socket_restart_timer (chr=0x55555582ee90) at chardev/char-socket.c:125
#5 0x000055555558df0c in tcp_chr_disconnect_locked (chr=<optimized out>) at chardev/char-socket.c:490
#6 0x000055555558df4d in tcp_chr_disconnect (chr=0x55555582ee90) at chardev/char-socket.c:497
#7 0x000055555558e5b2 in tcp_chr_new_client (chr=chr@entry=0x55555582ee90, sioc=sioc@entry=0x55555582f0b0) at chardev/char-socket.c:892
#8 0x000055555558e93a in tcp_chr_connect_client_sync (chr=chr@entry=0x55555582ee90, errp=errp@entry=0x7fffffffd178) at chardev/char-socket.c:944
#9 0x000055555558ec78 in tcp_chr_wait_connected (chr=0x55555582ee90, errp=0x555555802a08 <error_abort>) at chardev/char-socket.c:1035
#10 0x000055555556804b in char_socket_client_test (opaque=0x5555557fe020 <client8unix>) at tests/test-char.c:1023

Run tests/test-char to reproduce this issue:

test-char: chardev/char-socket.c:125: qemu_chr_socket_restart_timer: Assertion `!s->reconnect_timer' failed.

Signed-off-by: Li Feng <fengli@smartx.com>
Acked-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20200522025554.41063-1-fengli@smartx.com>
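The assertion fires because the reconnect timer is armed a second time while one is already pending; the guard is essentially a one-line check in tcp_chr_disconnect_locked() (a hedged sketch):

    if (s->reconnect_time && !s->reconnect_timer) {
        qemu_chr_socket_restart_timer(chr);
    }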
With a reconnect socket, qemu_char_open() will start a background thread. It should keep a reference on the chardev.

Fixes invalid read:

READ of size 8 at 0x6040000ac858 thread T7
#0 0x5555598d37b8 in unix_connect_saddr /home/elmarco/src/qq/util/qemu-sockets.c:954
#1 0x5555598d4751 in socket_connect /home/elmarco/src/qq/util/qemu-sockets.c:1109
#2 0x555559707c34 in qio_channel_socket_connect_sync /home/elmarco/src/qq/io/channel-socket.c:145
#3 0x5555596adebb in tcp_chr_connect_client_task /home/elmarco/src/qq/chardev/char-socket.c:1104
#4 0x555559723d55 in qio_task_thread_worker /home/elmarco/src/qq/io/task.c:123
#5 0x5555598a6731 in qemu_thread_start /home/elmarco/src/qq/util/qemu-thread-posix.c:519
#6 0x7ffff40d4431 in start_thread (/lib64/libpthread.so.0+0x9431)
#7 0x7ffff40029d2 in __clone (/lib64/libc.so.6+0x1019d2)

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20200420112012.567284-1-marcandre.lureau@redhat.com>
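Sketch of the pattern (exact placement in char-socket.c hedged): pin the chardev before handing it to the connection thread, and drop the reference when the task completes:

    /* tcp_chr_connect_client_async(): the worker thread can outlive any
     * caller scope, so it must hold its own reference on the chardev. */
    object_ref(OBJECT(chr));

    /* ... and in the completion/cleanup path: */
    object_unref(OBJECT(chr));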
"tmp.tls_hostname" and "tmp.tls_creds" allocated by migrate_params_test_apply() is forgot to free at the end of qmp_migrate_set_parameters(). Fix that. The leak stack: Direct leak of 2 byte(s) in 2 object(s) allocated from: #0 0xffffb597c20b in __interceptor_malloc (/usr/lib64/libasan.so.4+0xd320b) #1 0xffffb52dcb1b in g_malloc (/usr/lib64/libglib-2.0.so.0+0x58b1b) #2 0xffffb52f8143 in g_strdup (/usr/lib64/libglib-2.0.so.0+0x74143) #3 0xaaaac52447fb in migrate_params_test_apply (/usr/src/debug/qemu-4.1.0/migration/migration.c:1377) #4 0xaaaac52fdca7 in qmp_migrate_set_parameters (/usr/src/debug/qemu-4.1.0/qapi/qapi-commands-migration.c:192) #5 0xaaaac551d543 in qmp_dispatch (/usr/src/debug/qemu-4.1.0/qapi/qmp-dispatch.c:165) #6 0xaaaac52a0a8f in qmp_dispatch (/usr/src/debug/qemu-4.1.0/monitor/qmp.c:125) #7 0xaaaac52a1c7f in monitor_qmp_dispatch (/usr/src/debug/qemu-4.1.0/monitor/qmp.c:214) #8 0xaaaac55cb0cf in aio_bh_call (/usr/src/debug/qemu-4.1.0/util/async.c:117) #9 0xaaaac55d4543 in aio_bh_poll (/usr/src/debug/qemu-4.1.0/util/aio-posix.c:459) #10 0xaaaac55cae0f in aio_dispatch (/usr/src/debug/qemu-4.1.0/util/async.c:268) #11 0xffffb52d6a7b in g_main_context_dispatch (/usr/lib64/libglib-2.0.so.0+0x52a7b) #12 0xaaaac55d1e3b(/usr/bin/qemu-kvm-4.1.0+0x1622e3b) #13 0xaaaac4e314bb(/usr/bin/qemu-kvm-4.1.0+0xe824bb) #14 0xaaaac47f45ef(/usr/bin/qemu-kvm-4.1.0+0x8455ef) #15 0xffffb4bfef3f in __libc_start_main (/usr/lib64/libc.so.6+0x23f3f) #16 0xaaaac47ffacb(/usr/bin/qemu-kvm-4.1.0+0x850acb) Direct leak of 2 byte(s) in 2 object(s) allocated from: #0 0xffffb597c20b in __interceptor_malloc (/usr/lib64/libasan.so.4+0xd320b) #1 0xffffb52dcb1b in g_malloc (/usr/lib64/libglib-2.0.so.0+0x58b1b) #2 0xffffb52f8143 in g_strdup (/usr/lib64/libglib-2.0.so.0+0x74143) #3 0xaaaac5244893 in migrate_params_test_apply (/usr/src/debug/qemu-4.1.0/migration/migration.c:1382) #4 0xaaaac52fdca7 in qmp_migrate_set_parameters (/usr/src/debug/qemu-4.1.0/qapi/qapi-commands-migration.c:192) #5 0xaaaac551d543 in qmp_dispatch (/usr/src/debug/qemu-4.1.0/qapi/qmp-dispatch.c) #6 0xaaaac52a0a8f in qmp_dispatch (/usr/src/debug/qemu-4.1.0/monitor/qmp.c:125) #7 0xaaaac52a1c7f in monitor_qmp_dispatch (/usr/src/debug/qemu-4.1.0/monitor/qmp.c:214) #8 0xaaaac55cb0cf in aio_bh_call (/usr/src/debug/qemu-4.1.0/util/async.c:117) #9 0xaaaac55d4543 in aio_bh_poll (/usr/src/debug/qemu-4.1.0/util/aio-posix.c:459) #10 0xaaaac55cae0f in in aio_dispatch (/usr/src/debug/qemu-4.1.0/util/async.c:268) #11 0xffffb52d6a7b in g_main_context_dispatch (/usr/lib64/libglib-2.0.so.0+0x52a7b) #12 0xaaaac55d1e3b(/usr/bin/qemu-kvm-4.1.0+0x1622e3b) #13 0xaaaac4e314bb(/usr/bin/qemu-kvm-4.1.0+0xe824bb) #14 0xaaaac47f45ef (/usr/bin/qemu-kvm-4.1.0+0x8455ef) #15 0xffffb4bfef3f in __libc_start_main (/usr/lib64/libc.so.6+0x23f3f) #16 0xaaaac47ffacb(/usr/bin/qemu-kvm-4.1.0+0x850acb) Signed-off-by: Chuan Zheng <zhengchuan@huawei.com> Reviewed-by: KeQian Zhu <zhukeqian1@huawei.com> Reviewed-by: HaiLiang <zhang.zhanghailiang@huawei.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
bdrv_aio_cancel() calls aio_poll() on the AioContext for the given I/O request until it has completed. ENOMEDIUM requests are special because there is no BlockDriverState when the drive has no medium!

Define a .get_aio_context() function for BlkAioEmAIOCB requests so that bdrv_aio_cancel() can find the AioContext where the completion BH is pending. Without this function bdrv_aio_cancel() aborts on ENOMEDIUM requests!

libFuzzer triggered the following assertion:

cat << EOF | qemu-system-i386 -M pc-q35-5.0 \
  -nographic -monitor none -serial none \
  -qtest stdio -trace ide\*
outl 0xcf8 0x8000fa24
outl 0xcfc 0xe106c000
outl 0xcf8 0x8000fa04
outw 0xcfc 0x7
outl 0xcf8 0x8000fb20
write 0x0 0x3 0x2780e7
write 0xe106c22c 0xd 0x1130c218021130c218021130c2
write 0xe106c218 0x15 0x110010110010110010110010110010110010110010
EOF
ide_exec_cmd IDE exec cmd: bus 0x56170a77a2b8; state 0x56170a77a340; cmd 0xe7
ide_reset IDEstate 0x56170a77a340
Aborted (core dumped)

(gdb) bt
#1 0x00007ffff4f93895 in abort () at /lib64/libc.so.6
#2 0x0000555555dc6c00 in bdrv_aio_cancel (acb=0x555556765550) at block/io.c:2745
#3 0x0000555555dac202 in blk_aio_cancel (acb=0x555556765550) at block/block-backend.c:1546
#4 0x0000555555b1bd74 in ide_reset (s=0x555557213340) at hw/ide/core.c:1318
#5 0x0000555555b1e3a1 in ide_bus_reset (bus=0x5555572132b8) at hw/ide/core.c:2422
#6 0x0000555555b2aa27 in ahci_reset_port (s=0x55555720eb50, port=2) at hw/ide/ahci.c:650
#7 0x0000555555b29fd7 in ahci_port_write (s=0x55555720eb50, port=2, offset=44, val=16) at hw/ide/ahci.c:360
#8 0x0000555555b2a564 in ahci_mem_write (opaque=0x55555720eb50, addr=556, val=16, size=1) at hw/ide/ahci.c:513
#9 0x000055555598415b in memory_region_write_accessor (mr=0x55555720eb80, addr=556, value=0x7fffffffb838, size=1, shift=0, mask=255, attrs=...) at softmmu/memory.c:483

Looking at bdrv_aio_cancel:

2728 /* async I/Os */
2729
2730 void bdrv_aio_cancel(BlockAIOCB *acb)
2731 {
2732     qemu_aio_ref(acb);
2733     bdrv_aio_cancel_async(acb);
2734     while (acb->refcnt > 1) {
2735         if (acb->aiocb_info->get_aio_context) {
2736             aio_poll(acb->aiocb_info->get_aio_context(acb), true);
2737         } else if (acb->bs) {
2738             /* qemu_aio_ref and qemu_aio_unref are not thread-safe, so
2739              * assert that we're not using an I/O thread.  Thread-safe
2740              * code should use bdrv_aio_cancel_async exclusively.
2741              */
2742             assert(bdrv_get_aio_context(acb->bs) == qemu_get_aio_context());
2743             aio_poll(bdrv_get_aio_context(acb->bs), true);
2744         } else {
2745             abort();     <===============
2746         }
2747     }
2748     qemu_aio_unref(acb);
2749 }

Fixes: 02c50ef ("block: Add bdrv_aio_cancel_async")
Reported-by: Alexander Bulekov <alxndr@bu.edu>
Buglink: https://bugs.launchpad.net/qemu/+bug/1878255
Originally-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20200720100141.129739-1-stefanha@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
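The new callback is small (a sketch; it simply satisfies the .get_aio_context contract quoted above):

    static AioContext *blk_aio_em_aiocb_get_aio_context(BlockAIOCB *acb_)
    {
        BlkAioEmAIOCB *acb = container_of(acb_, BlkAioEmAIOCB, common);

        /* The completion BH runs in the BlockBackend's context, so that
         * is where bdrv_aio_cancel() must poll. */
        return blk_get_aio_context(acb->rwco.blk);
    }

    static const AIOCBInfo blk_aio_em_aiocb_info = {
        .aiocb_size      = sizeof(BlkAioEmAIOCB),
        .get_aio_context = blk_aio_em_aiocb_get_aio_context,
    };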
To make deallocating partially constructed objects work, the visit_type_STRUCT() functions need to succeed without doing anything when passed a null object. Commit cdd2b22 "qapi: Smooth visitor error checking in generated code" broke that.

To reproduce, run tests/test-qobject-input-visitor with AddressSanitizer:

==4353==ERROR: LeakSanitizer: detected memory leaks

Direct leak of 16 byte(s) in 1 object(s) allocated from:
#0 0x7f192d0c5d28 in __interceptor_calloc (/usr/lib/x86_64-linux-gnu/libasan.so.4+0xded28)
#1 0x7f192cd21b10 in g_malloc0 (/usr/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x51b10)
#2 0x556725f6bbee in visit_next_list qapi/qapi-visit-core.c:86
#3 0x556725f49e15 in visit_type_UserDefOneList tests/test-qapi-visit.c:474
#4 0x556725f4489b in test_visitor_in_fail_struct_in_list tests/test-qobject-input-visitor.c:1086
#5 0x7f192cd42f29 (/usr/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x72f29)

SUMMARY: AddressSanitizer: 16 byte(s) leaked in 1 allocation(s).

Test case /visitor/input/fail/struct-in-list feeds a list with a bad element to the QObject input visitor. Visiting that element duly fails, and aborts the visit with the list only partially constructed: the faulty object is null. Cleaning up the partially constructed list visits that null object, fails, and aborts the visit before the list node gets freed.

Fix the generated visit_type_STRUCT() to succeed for null objects.

Fixes: cdd2b22
Reported-by: Li Qiang <liq3ea@163.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20200716150617.4027356-1-armbru@redhat.com>
Tested-by: Li Qiang <liq3ea@gmail.com>
Reviewed-by: Li Qiang <liq3ea@gmail.com>
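After the fix, the generated visitor looks roughly like this (a sketch for one example type):

    void visit_type_UserDefOne(Visitor *v, const char *name,
                               UserDefOne **obj, Error **errp)
    {
        if (!visit_start_struct(v, name, (void **)obj,
                                sizeof(UserDefOne), errp)) {
            return;
        }
        if (!*obj) {
            /* incomplete object from an aborted visit: succeed, do nothing */
            assert(visit_is_dealloc(v));
            goto out_obj;
        }
        if (!visit_type_UserDefOne_members(v, *obj, errp)) {
            goto out_obj;
        }
        visit_check_struct(v, errp);
    out_obj:
        visit_end_struct(v, (void **)obj);
    }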
In legacy mode, virtio_pci_queue_enabled() falls back to virtio_queue_enabled() to know if the queue is enabled. But virtio_queue_enabled() calls virtio_pci_queue_enabled() again if k->queue_enabled is set. This ends in a crash after a stack overflow.

The problem can be reproduced with:

-device virtio-net-pci,disable-legacy=off,disable-modern=true -net tap,vhost=on

A look at the backtrace is very explicit:

...
#4 0x000000010029a438 in virtio_queue_enabled ()
#5 0x0000000100497a9c in virtio_pci_queue_enabled ()
...
#130902 0x000000010029a460 in virtio_queue_enabled ()
#130903 0x0000000100497a9c in virtio_pci_queue_enabled ()
#130904 0x000000010029a460 in virtio_queue_enabled ()
#130905 0x0000000100454a20 in vhost_net_start ()
...

This patch fixes the problem by introducing a new function for the legacy case and calling it from virtio_pci_queue_enabled(). It is also called from virtio_queue_enabled() to avoid code duplication.

Fixes: f19bcdf ("virtio-pci: implement queue_enabled method")
Cc: Jason Wang <jasowang@redhat.com>
Cc: Cindy Lu <lulu@redhat.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Message-Id: <20200727153319.43716-1-lvivier@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
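Sketch of the new helper and its caller, breaking the mutual recursion:

    /* Legacy virtio: a queue is enabled once the guest has written its
     * descriptor ring address. */
    bool virtio_queue_enabled_legacy(VirtIODevice *vdev, int n)
    {
        return virtio_queue_get_desc_addr(vdev, n) != 0;
    }

    static bool virtio_pci_queue_enabled(DeviceState *d, int n)
    {
        VirtIOPCIProxy *proxy = VIRTIO_PCI(d);
        VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);

        if (virtio_pci_modern(proxy)) {
            return proxy->vqs[n].enabled;
        }
        /* Call the legacy helper directly instead of
         * virtio_queue_enabled(), which would recurse back here. */
        return virtio_queue_enabled_legacy(vdev, n);
    }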
It should be safe to reenter qio_channel_yield() on io/channel read/write path, so it's safe to reduce in_flight and allow attaching new aio context. And no problem to allow drain itself: connection attempt is not a guest request. Moreover, if remote server is down, we can hang in negotiation, blocking drain section and provoking a dead lock. How to reproduce the dead lock: 1. Create nbd-fault-injector.conf with the following contents: [inject-error "mega1"] event=data io=readwrite when=before 2. In one terminal run nbd-fault-injector in a loop, like this: n=1; while true; do echo $n; ((n++)); ./nbd-fault-injector.py 127.0.0.1:10000 nbd-fault-injector.conf; done 3. In another terminal run qemu-io in a loop, like this: n=1; while true; do echo $n; ((n++)); ./qemu-io -c 'read 0 512' nbd://127.0.0.1:10000; done After some time, qemu-io will hang trying to drain, for example, like this: #3 aio_poll (ctx=0x55f006bdd890, blocking=true) at util/aio-posix.c:600 #4 bdrv_do_drained_begin (bs=0x55f006bea710, recursive=false, parent=0x0, ignore_bds_parents=false, poll=true) at block/io.c:427 #5 bdrv_drained_begin (bs=0x55f006bea710) at block/io.c:433 #6 blk_drain (blk=0x55f006befc80) at block/block-backend.c:1710 #7 blk_unref (blk=0x55f006befc80) at block/block-backend.c:498 #8 bdrv_open_inherit (filename=0x7fffba1563bc "nbd+tcp://127.0.0.1:10000", reference=0x0, options=0x55f006be86d0, flags=24578, parent=0x0, child_class=0x0, child_role=0, errp=0x7fffba154620) at block.c:3491 #9 bdrv_open (filename=0x7fffba1563bc "nbd+tcp://127.0.0.1:10000", reference=0x0, options=0x0, flags=16386, errp=0x7fffba154620) at block.c:3513 #10 blk_new_open (filename=0x7fffba1563bc "nbd+tcp://127.0.0.1:10000", reference=0x0, options=0x0, flags=16386, errp=0x7fffba154620) at block/block-backend.c:421 And connection_co stack like this: #0 qemu_coroutine_switch (from_=0x55f006bf2650, to_=0x7fe96e07d918, action=COROUTINE_YIELD) at util/coroutine-ucontext.c:302 #1 qemu_coroutine_yield () at util/qemu-coroutine.c:193 #2 qio_channel_yield (ioc=0x55f006bb3c20, condition=G_IO_IN) at io/channel.c:472 #3 qio_channel_readv_all_eof (ioc=0x55f006bb3c20, iov=0x7fe96d729bf0, niov=1, errp=0x7fe96d729eb0) at io/channel.c:110 #4 qio_channel_readv_all (ioc=0x55f006bb3c20, iov=0x7fe96d729bf0, niov=1, errp=0x7fe96d729eb0) at io/channel.c:143 #5 qio_channel_read_all (ioc=0x55f006bb3c20, buf=0x7fe96d729d28 "\300.\366\004\360U", buflen=8, errp=0x7fe96d729eb0) at io/channel.c:247 #6 nbd_read (ioc=0x55f006bb3c20, buffer=0x7fe96d729d28, size=8, desc=0x55f004f69644 "initial magic", errp=0x7fe96d729eb0) at /work/src/qemu/master/include/block/nbd.h:365 #7 nbd_read64 (ioc=0x55f006bb3c20, val=0x7fe96d729d28, desc=0x55f004f69644 "initial magic", errp=0x7fe96d729eb0) at /work/src/qemu/master/include/block/nbd.h:391 #8 nbd_start_negotiate (aio_context=0x55f006bdd890, ioc=0x55f006bb3c20, tlscreds=0x0, hostname=0x0, outioc=0x55f006bf19f8, structured_reply=true, zeroes=0x7fe96d729dca, errp=0x7fe96d729eb0) at nbd/client.c:904 #9 nbd_receive_negotiate (aio_context=0x55f006bdd890, ioc=0x55f006bb3c20, tlscreds=0x0, hostname=0x0, outioc=0x55f006bf19f8, info=0x55f006bf1a00, errp=0x7fe96d729eb0) at nbd/client.c:1032 #10 nbd_client_connect (bs=0x55f006bea710, errp=0x7fe96d729eb0) at block/nbd.c:1460 #11 nbd_reconnect_attempt (s=0x55f006bf19f0) at block/nbd.c:287 #12 nbd_co_reconnect_loop (s=0x55f006bf19f0) at block/nbd.c:309 #13 nbd_connection_entry (opaque=0x55f006bf19f0) at block/nbd.c:360 #14 coroutine_trampoline (i0=113190480, i1=22000) at 
util/coroutine-ucontext.c:173 Note that the hang may be triggered by another bug, so the whole case is fixed only together with commit "block/nbd: on shutdown terminate connection attempt". Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20200727184751.15704-3-vsementsov@virtuozzo.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Eric Blake <eblake@redhat.com>
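A rough sketch of the accounting change, using names from the backtrace above (block/nbd.c); this is illustrative of the idea, not the exact patch: the connection coroutine gives up its in_flight reference while it may block on the remote server, so a drained section can complete.

/* Sketch: a connection attempt is not a guest request, so do not
 * hold up drain (or an AioContext switch) while negotiation may
 * block on a dead server. */
static void nbd_reconnect_attempt(BDRVNBDState *s)
{
    bdrv_dec_in_flight(s->bs);     /* let drain make progress */
    nbd_client_connect(s->bs, NULL);
    bdrv_inc_in_flight(s->bs);     /* resume normal request accounting */
}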
Running the WDR opcode triggers a segfault: $ cat > foo.S << EOF > __start: > wdr > EOF $ avr-gcc -nostdlib -nostartfiles -mmcu=avr6 foo.S -o foo.elf $ qemu-system-avr -serial mon:stdio -nographic -no-reboot \ -M mega -bios foo.elf -d in_asm --singlestep IN: 0x00000000: WDR Segmentation fault (core dumped) (gdb) bt #0 0x00005555add0b23a in gdb_get_cpu_pid (cpu=0x5555af5a4af0) at ../gdbstub.c:718 #1 0x00005555add0b2dd in gdb_get_cpu_process (cpu=0x5555af5a4af0) at ../gdbstub.c:743 #2 0x00005555add0e477 in gdb_set_stop_cpu (cpu=0x5555af5a4af0) at ../gdbstub.c:2742 #3 0x00005555adc99b96 in cpu_handle_guest_debug (cpu=0x5555af5a4af0) at ../softmmu/cpus.c:306 #4 0x00005555adcc66ab in rr_cpu_thread_fn (arg=0x5555af5a4af0) at ../accel/tcg/tcg-accel-ops-rr.c:224 #5 0x00005555adefaf12 in qemu_thread_start (args=0x5555af5d9870) at ../util/qemu-thread-posix.c:521 #6 0x00007f692d940ea5 in start_thread () from /lib64/libpthread.so.0 #7 0x00007f692d6699fd in clone () from /lib64/libc.so.6 Since the watchdog peripheral is not implemented, simply log the opcode as unimplemented and keep going. Reported-by: Fred Konrad <konrad@adacore.com> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: KONRAD Frederic <frederic.konrad@adacore.com> Message-Id: <20210502190900.604292-1-f4bug@amsat.org> Signed-off-by: Laurent Vivier <laurent@vivier.eu>
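The fix is small enough to show in outline; a sketch assuming the trans_WDR naming of QEMU's decodetree-generated AVR translator:

static bool trans_WDR(DisasContext *ctx, arg_WDR *a)
{
    /* No watchdog peripheral is modelled: log the opcode and keep
     * going, instead of raising a debug exception that crashes in
     * the gdbstub path. */
    qemu_log_mask(LOG_UNIMP, "WDR opcode is not implemented\n");
    return true;
}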
This is a partial revert of commits 77542d4 and bc79c87. Usually, an error during initialisation means that the configuration was wrong. Reconnecting won't make the error go away, but will just turn the error condition into an endless loop. Avoid this and return errors again. Additionally, calling vhost_user_blk_disconnect() from the chardev event handler could result in a use-after-free because none of the initialisation code expects that the device could just go away in the middle. So removing the call fixes crashes in several places. For example, using a num-queues setting that is incompatible with the backend would result in a crash like this (dereferencing dev->opaque, which is already NULL): #0 0x0000555555d0a4bd in vhost_user_read_cb (source=0x5555568f4690, condition=(G_IO_IN | G_IO_HUP), opaque=0x7fffffffcbf0) at ../hw/virtio/vhost-user.c:313 #1 0x0000555555d950d3 in qio_channel_fd_source_dispatch (source=0x555557c3f750, callback=0x555555d0a478 <vhost_user_read_cb>, user_data=0x7fffffffcbf0) at ../io/channel-watch.c:84 #2 0x00007ffff7b32a9f in g_main_context_dispatch () at /lib64/libglib-2.0.so.0 #3 0x00007ffff7b84a98 in g_main_context_iterate.constprop () at /lib64/libglib-2.0.so.0 #4 0x00007ffff7b32163 in g_main_loop_run () at /lib64/libglib-2.0.so.0 #5 0x0000555555d0a724 in vhost_user_read (dev=0x555557bc62f8, msg=0x7fffffffcc50) at ../hw/virtio/vhost-user.c:402 #6 0x0000555555d0ee6b in vhost_user_get_config (dev=0x555557bc62f8, config=0x555557bc62ac "", config_len=60) at ../hw/virtio/vhost-user.c:2133 #7 0x0000555555d56d46 in vhost_dev_get_config (hdev=0x555557bc62f8, config=0x555557bc62ac "", config_len=60) at ../hw/virtio/vhost.c:1566 #8 0x0000555555cdd150 in vhost_user_blk_device_realize (dev=0x555557bc60b0, errp=0x7fffffffcf90) at ../hw/block/vhost-user-blk.c:510 #9 0x0000555555d08f6d in virtio_device_realize (dev=0x555557bc60b0, errp=0x7fffffffcff0) at ../hw/virtio/virtio.c:3660 Note that this removes the ability to reconnect during initialisation (but not during operation) when there is no permanent error but the backend restarts, as the implementation was buggy. This feature can be added back in a follow-up series after changing error paths to distinguish cases where retrying could help from cases with permanent errors. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-Id: <20210429171316.162022-3-kwolf@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
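In realize, the change boils down to propagating the error instead of scheduling a reconnect; a simplified sketch of the resulting shape (names from the backtrace above, not the exact diff):

ret = vhost_dev_get_config(&s->dev, (uint8_t *)&s->blkcfg,
                           sizeof(struct virtio_blk_config));
if (ret < 0) {
    /* Initialisation failure is treated as a configuration error:
     * fail realize outright, no reconnect loop. */
    error_setg(errp, "vhost-user-blk: get block config failed");
    goto vhost_err;
}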
Include the qtest reproducer provided by Alexander Bulekov in https://gitlab.com/qemu-project/qemu/-/issues/542. Without the previous commit, we get: $ make check-qtest-i386 ... Running test tests/qtest/intel-hda-test AddressSanitizer:DEADLYSIGNAL ================================================================= ==1580408==ERROR: AddressSanitizer: stack-overflow on address 0x7ffc3d566fe0 #0 0x63d297cf in address_space_translate_internal softmmu/physmem.c:356 #1 0x63d27260 in flatview_do_translate softmmu/physmem.c:499:15 #2 0x63d27af5 in flatview_translate softmmu/physmem.c:565:15 #3 0x63d4ce84 in flatview_write softmmu/physmem.c:2850:10 #4 0x63d4cb18 in address_space_write softmmu/physmem.c:2950:18 #5 0x63d4d387 in address_space_rw softmmu/physmem.c:2960:16 #6 0x62ae12f2 in dma_memory_rw_relaxed include/sysemu/dma.h:89:12 #7 0x62ae104a in dma_memory_rw include/sysemu/dma.h:132:12 #8 0x62ae6157 in dma_memory_write include/sysemu/dma.h:173:12 #9 0x62ae5ec0 in stl_le_dma include/sysemu/dma.h:275:1 #10 0x62ae5ba2 in stl_le_pci_dma include/hw/pci/pci.h:871:1 #11 0x62ad59a6 in intel_hda_response hw/audio/intel-hda.c:372:12 #12 0x62ad2afb in hda_codec_response hw/audio/intel-hda.c:107:5 #13 0x62aec4e1 in hda_audio_command hw/audio/hda-codec.c:655:5 #14 0x62ae05d9 in intel_hda_send_command hw/audio/intel-hda.c:307:5 #15 0x62adff54 in intel_hda_corb_run hw/audio/intel-hda.c:342:9 #16 0x62adc13b in intel_hda_set_corb_wp hw/audio/intel-hda.c:548:5 #17 0x62ae5942 in intel_hda_reg_write hw/audio/intel-hda.c:977:9 #18 0x62ada10a in intel_hda_mmio_write hw/audio/intel-hda.c:1054:5 #19 0x63d8f383 in memory_region_write_accessor softmmu/memory.c:492:5 #20 0x63d8ecc1 in access_with_adjusted_size softmmu/memory.c:554:18 #21 0x63d8d5d6 in memory_region_dispatch_write softmmu/memory.c:1504:16 #22 0x63d5e85e in flatview_write_continue softmmu/physmem.c:2812:23 #23 0x63d4d05b in flatview_write softmmu/physmem.c:2854:12 #24 0x63d4cb18 in address_space_write softmmu/physmem.c:2950:18 #25 0x63d4d387 in address_space_rw softmmu/physmem.c:2960:16 #26 0x62ae12f2 in dma_memory_rw_relaxed include/sysemu/dma.h:89:12 #27 0x62ae104a in dma_memory_rw include/sysemu/dma.h:132:12 #28 0x62ae6157 in dma_memory_write include/sysemu/dma.h:173:12 #29 0x62ae5ec0 in stl_le_dma include/sysemu/dma.h:275:1 #30 0x62ae5ba2 in stl_le_pci_dma include/hw/pci/pci.h:871:1 #31 0x62ad59a6 in intel_hda_response hw/audio/intel-hda.c:372:12 #32 0x62ad2afb in hda_codec_response hw/audio/intel-hda.c:107:5 #33 0x62aec4e1 in hda_audio_command hw/audio/hda-codec.c:655:5 #34 0x62ae05d9 in intel_hda_send_command hw/audio/intel-hda.c:307:5 #35 0x62adff54 in intel_hda_corb_run hw/audio/intel-hda.c:342:9 #36 0x62adc13b in intel_hda_set_corb_wp hw/audio/intel-hda.c:548:5 #37 0x62ae5942 in intel_hda_reg_write hw/audio/intel-hda.c:977:9 #38 0x62ada10a in intel_hda_mmio_write hw/audio/intel-hda.c:1054:5 #39 0x63d8f383 in memory_region_write_accessor softmmu/memory.c:492:5 #40 0x63d8ecc1 in access_with_adjusted_size softmmu/memory.c:554:18 #41 0x63d8d5d6 in memory_region_dispatch_write softmmu/memory.c:1504:16 #42 0x63d5e85e in flatview_write_continue softmmu/physmem.c:2812:23 #43 0x63d4d05b in flatview_write softmmu/physmem.c:2854:12 #44 0x63d4cb18 in address_space_write softmmu/physmem.c:2950:18 #45 0x63d4d387 in address_space_rw softmmu/physmem.c:2960:16 #46 0x62ae12f2 in dma_memory_rw_relaxed include/sysemu/dma.h:89:12 #47 0x62ae104a in dma_memory_rw include/sysemu/dma.h:132:12 #48 0x62ae6157 in dma_memory_write include/sysemu/dma.h:173:12 ... 
SUMMARY: AddressSanitizer: stack-overflow softmmu/physmem.c:356 in address_space_translate_internal ==1580408==ABORTING Broken pipe Aborted (core dumped) Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Acked-by: Thomas Huth <thuth@redhat.com> Message-Id: <20211218160912.1591633-4-philmd@redhat.com> Signed-off-by: Thomas Huth <thuth@redhat.com>
The issue reported by OSS-Fuzz produces the following backtrace: ==447470==ERROR: AddressSanitizer: heap-buffer-overflow READ of size 1 at 0x61500002a080 thread T0 #0 0x71766d47 in sdhci_read_dataport hw/sd/sdhci.c:474:18 #1 0x7175f139 in sdhci_read hw/sd/sdhci.c:1022:19 #2 0x721b937b in memory_region_read_accessor softmmu/memory.c:440:11 #3 0x72171e51 in access_with_adjusted_size softmmu/memory.c:554:18 #4 0x7216f47c in memory_region_dispatch_read1 softmmu/memory.c:1424:16 #5 0x7216ebb9 in memory_region_dispatch_read softmmu/memory.c:1452:9 #6 0x7212db5d in flatview_read_continue softmmu/physmem.c:2879:23 #7 0x7212f958 in flatview_read softmmu/physmem.c:2921:12 #8 0x7212f418 in address_space_read_full softmmu/physmem.c:2934:18 #9 0x721305a9 in address_space_rw softmmu/physmem.c:2962:16 #10 0x7175a392 in dma_memory_rw_relaxed include/sysemu/dma.h:89:12 #11 0x7175a0ea in dma_memory_rw include/sysemu/dma.h:132:12 #12 0x71759684 in dma_memory_read include/sysemu/dma.h:152:12 #13 0x7175518c in sdhci_do_adma hw/sd/sdhci.c:823:27 #14 0x7174bf69 in sdhci_data_transfer hw/sd/sdhci.c:935:13 #15 0x7176aaa7 in sdhci_send_command hw/sd/sdhci.c:376:9 #16 0x717629ee in sdhci_write hw/sd/sdhci.c:1212:9 #17 0x72172513 in memory_region_write_accessor softmmu/memory.c:492:5 #18 0x72171e51 in access_with_adjusted_size softmmu/memory.c:554:18 #19 0x72170766 in memory_region_dispatch_write softmmu/memory.c:1504:16 #20 0x721419ee in flatview_write_continue softmmu/physmem.c:2812:23 #21 0x721301eb in flatview_write softmmu/physmem.c:2854:12 #22 0x7212fca8 in address_space_write softmmu/physmem.c:2950:18 #23 0x721d9a53 in qtest_process_command softmmu/qtest.c:727:9 A DMA descriptor is first filled in RAM. An I/O access to the device (frames #22 to #16) starts the DMA engine (frame #13). The engine fetches the descriptor and executes the request, which itself accesses the SDHCI I/O registers (frames #1 and #0), triggering a re-entrancy issue. Fix this by prohibiting DMA transactions to devices; the DMA engine is thus restricted to memories. Reported-by: OSS-Fuzz (Issue 36391) Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Thomas Huth <thuth@redhat.com> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/451 Message-Id: <20211215205656.488940-3-philmd@redhat.com> Signed-off-by: Thomas Huth <thuth@redhat.com>
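The restriction can be expressed through memory transaction attributes; a sketch assuming QEMU's MemTxAttrs 'memory' flag, which requires a transaction to land on RAM rather than on an MMIO region:

/* Sketch: ADMA descriptor fetches must hit memory, so a descriptor
 * table pointing back at the SDHCI MMIO window fails cleanly instead
 * of re-entering the device. */
const MemTxAttrs attrs = { .memory = true };

if (dma_memory_read(s->dma_as, s->admasysaddr,
                    &dscr, sizeof(dscr), attrs) != MEMTX_OK) {
    /* raise an ADMA error instead of recursing */
}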
Include the qtest reproducer provided by Alexander Bulekov in https://gitlab.com/qemu-project/qemu/-/issues/451. Without the previous commit, we get: $ make check-qtest-i386 ... Running test qtest-i386/fuzz-sdcard-test ==447470==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x61500002a080 at pc 0x564c71766d48 bp 0x7ffc126c62b0 sp 0x7ffc126c62a8 READ of size 1 at 0x61500002a080 thread T0 #0 0x564c71766d47 in sdhci_read_dataport hw/sd/sdhci.c:474:18 #1 0x564c7175f139 in sdhci_read hw/sd/sdhci.c:1022:19 #2 0x564c721b937b in memory_region_read_accessor softmmu/memory.c:440:11 #3 0x564c72171e51 in access_with_adjusted_size softmmu/memory.c:554:18 #4 0x564c7216f47c in memory_region_dispatch_read1 softmmu/memory.c:1424:16 #5 0x564c7216ebb9 in memory_region_dispatch_read softmmu/memory.c:1452:9 #6 0x564c7212db5d in flatview_read_continue softmmu/physmem.c:2879:23 #7 0x564c7212f958 in flatview_read softmmu/physmem.c:2921:12 #8 0x564c7212f418 in address_space_read_full softmmu/physmem.c:2934:18 #9 0x564c721305a9 in address_space_rw softmmu/physmem.c:2962:16 #10 0x564c7175a392 in dma_memory_rw_relaxed include/sysemu/dma.h:89:12 #11 0x564c7175a0ea in dma_memory_rw include/sysemu/dma.h:132:12 #12 0x564c71759684 in dma_memory_read include/sysemu/dma.h:152:12 #13 0x564c7175518c in sdhci_do_adma hw/sd/sdhci.c:823:27 #14 0x564c7174bf69 in sdhci_data_transfer hw/sd/sdhci.c:935:13 #15 0x564c7176aaa7 in sdhci_send_command hw/sd/sdhci.c:376:9 #16 0x564c717629ee in sdhci_write hw/sd/sdhci.c:1212:9 #17 0x564c72172513 in memory_region_write_accessor softmmu/memory.c:492:5 #18 0x564c72171e51 in access_with_adjusted_size softmmu/memory.c:554:18 #19 0x564c72170766 in memory_region_dispatch_write softmmu/memory.c:1504:16 #20 0x564c721419ee in flatview_write_continue softmmu/physmem.c:2812:23 #21 0x564c721301eb in flatview_write softmmu/physmem.c:2854:12 #22 0x564c7212fca8 in address_space_write softmmu/physmem.c:2950:18 #23 0x564c721d9a53 in qtest_process_command softmmu/qtest.c:727:9 0x61500002a080 is located 0 bytes to the right of 512-byte region [0x615000029e80,0x61500002a080) allocated by thread T0 here: #0 0x564c708e1737 in __interceptor_calloc (qemu-system-i386+0x1e6a737) #1 0x7ff05567b5e0 in g_malloc0 (/lib64/libglib-2.0.so.0+0x5a5e0) #2 0x564c71774adb in sdhci_pci_realize hw/sd/sdhci-pci.c:36:5 SUMMARY: AddressSanitizer: heap-buffer-overflow hw/sd/sdhci.c:474:18 in sdhci_read_dataport Shadow bytes around the buggy address: 0x0c2a7fffd3c0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c2a7fffd3d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x0c2a7fffd3e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x0c2a7fffd3f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x0c2a7fffd400: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 =>0x0c2a7fffd410:[fa]fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c2a7fffd420: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 0x0c2a7fffd430: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 0x0c2a7fffd440: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 0x0c2a7fffd450: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 0x0c2a7fffd460: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa Shadow byte legend (one shadow byte represents 8 application bytes): Addressable: 00 Heap left redzone: fa Freed heap region: fd ==447470==ABORTING Broken pipe ERROR qtest-i386/fuzz-sdcard-test - too few tests run (expected 3, got 2) Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Acked-by: Thomas Huth <thuth@redhat.com> Message-Id: <20211215205656.488940-4-philmd@redhat.com> 
[thuth: Replaced "-m 4G" with "-m 512M"] Signed-off-by: Thomas Huth <thuth@redhat.com>
Since commit 0439c5a ("block/block-backend.c: assertions for block-backend") QEMU crashes when using Cocoa on Darwin hosts. Example on macOS: $ qemu-system-i386 Assertion failed: (qemu_in_main_thread()), function blk_all_next, file block-backend.c, line 552. Abort trap: 6 Looking with lldb: Assertion failed: (qemu_in_main_thread()), function blk_all_next, file block-backend.c, line 552. Process 76914 stopped * thread #1, queue = 'com.apple.main-thread', stop reason = hit program assert frame #4: 0x000000010057c2d4 qemu-system-i386`blk_all_next.cold.1 at block-backend.c:552:5 [opt] 549 */ 550 BlockBackend *blk_all_next(BlockBackend *blk) 551 { --> 552 GLOBAL_STATE_CODE(); 553 return blk ? QTAILQ_NEXT(blk, link) 554 : QTAILQ_FIRST(&block_backends); 555 } Target 1: (qemu-system-i386) stopped. (lldb) bt * thread #1, queue = 'com.apple.main-thread', stop reason = hit program assert frame #0: 0x00000001908c99b8 libsystem_kernel.dylib`__pthread_kill + 8 frame #1: 0x00000001908fceb0 libsystem_pthread.dylib`pthread_kill + 288 frame #2: 0x000000019083a314 libsystem_c.dylib`abort + 164 frame #3: 0x000000019083972c libsystem_c.dylib`__assert_rtn + 300 * frame #4: 0x000000010057c2d4 qemu-system-i386`blk_all_next.cold.1 at block-backend.c:552:5 [opt] frame #5: 0x00000001003c00b4 qemu-system-i386`blk_all_next(blk=<unavailable>) at block-backend.c:552:5 [opt] frame #6: 0x00000001003d8f04 qemu-system-i386`qmp_query_block(errp=0x0000000000000000) at qapi.c:591:16 [opt] frame #7: 0x000000010003ab0c qemu-system-i386`main [inlined] addRemovableDevicesMenuItems at cocoa.m:1756:21 [opt] frame #8: 0x000000010003ab04 qemu-system-i386`main(argc=<unavailable>, argv=<unavailable>) at cocoa.m:1980:5 [opt] frame #9: 0x00000001012690f4 dyld`start + 520 As we are past the 7.0 release hard freeze, disable the block backend assertion which, while valuable during development, is not helpful to users. We'll restore this assertion immediately once 7.0 is released and work on a fix. Suggested-by: Akihiko Odaki <akihiko.odaki@gmail.com> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Akihiko Odaki <akihiko.odaki@gmail.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-Id: <20220325183707.85733-1-philippe.mathieu.daude@gmail.com>
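The stopgap is simply to compile the runtime check out; a hypothetical sketch of the idea (the real change may take a different form):

/* Keep the marker for documentation, drop the assertion until a
 * proper fix lands after the 7.0 release. */
#define GLOBAL_STATE_CODE() \
    do { /* assert(qemu_in_main_thread()); */ } while (0)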
Fixes the appended use-after-free. The root cause is that during tb invalidation we use CPU_FOREACH, and therefore to safely free a vCPU we must wait for an RCU grace period to elapse. $ x86_64-linux-user/qemu-x86_64 tests/tcg/x86_64-linux-user/munmap-pthread ================================================================= ==1800604==ERROR: AddressSanitizer: heap-use-after-free on address 0x62d0005f7418 at pc 0x5593da6704eb bp 0x7f4961a7ac70 sp 0x7f4961a7ac60 READ of size 8 at 0x62d0005f7418 thread T2 #0 0x5593da6704ea in tb_jmp_cache_inval_tb ../accel/tcg/tb-maint.c:244 #1 0x5593da6704ea in do_tb_phys_invalidate ../accel/tcg/tb-maint.c:290 #2 0x5593da670631 in tb_phys_invalidate__locked ../accel/tcg/tb-maint.c:306 #3 0x5593da670631 in tb_invalidate_phys_page_range__locked ../accel/tcg/tb-maint.c:542 #4 0x5593da67106d in tb_invalidate_phys_range ../accel/tcg/tb-maint.c:614 #5 0x5593da6a64d4 in target_munmap ../linux-user/mmap.c:766 #6 0x5593da6dba05 in do_syscall1 ../linux-user/syscall.c:10105 #7 0x5593da6f564c in do_syscall ../linux-user/syscall.c:13329 #8 0x5593da49e80c in cpu_loop ../linux-user/x86_64/../i386/cpu_loop.c:233 #9 0x5593da6be28c in clone_func ../linux-user/syscall.c:6633 #10 0x7f496231cb42 in start_thread nptl/pthread_create.c:442 #11 0x7f49623ae9ff (/lib/x86_64-linux-gnu/libc.so.6+0x1269ff) 0x62d0005f7418 is located 28696 bytes inside of 32768-byte region [0x62d0005f0400,0x62d0005f8400) freed by thread T148 here: #0 0x7f49627b6460 in __interceptor_free ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:52 #1 0x5593da5ac057 in cpu_exec_unrealizefn ../cpu.c:180 #2 0x5593da81f851 (/home/cota/src/qemu/build/qemu-x86_64+0x484851) Signed-off-by: Emilio Cota <cota@braap.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20230111151628.320011-2-cota@braap.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20230124180127.1881110-27-alex.bennee@linaro.org>
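The corresponding reclamation pattern, as a hedged sketch: assuming the jump cache struct carries an rcu_head member named rcu, QEMU's g_free_rcu() defers the free past a grace period so concurrent CPU_FOREACH() walkers never see freed memory:

/* Sketch: instead of g_free(cpu->tb_jmp_cache) ... */
g_free_rcu(cpu->tb_jmp_cache, rcu);  /* freed after an RCU grace period */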
Fixes this tsan crash, easy to reproduce with any large enough program: $ tests/unit/test-qht 1..2 ThreadSanitizer: CHECK failed: sanitizer_deadlock_detector.h:67 "((n_all_locks_)) < (((sizeof(all_locks_with_contexts_)/sizeof((all_locks_with_contexts_)[0]))))" (0x40, 0x40) (tid=1821568) #0 __tsan::CheckUnwind() ../../../../src/libsanitizer/tsan/tsan_rtl.cpp:353 (libtsan.so.2+0x90034) #1 __sanitizer::CheckFailed(char const*, int, char const*, unsigned long long, unsigned long long) ../../../../src/libsanitizer/sanitizer_common/sanitizer_termination.cpp:86 (libtsan.so.2+0xca555) #2 __sanitizer::DeadlockDetectorTLS<__sanitizer::TwoLevelBitVector<1ul, __sanitizer::BasicBitVector<unsigned long> > >::addLock(unsigned long, unsigned long, unsigned int) ../../../../src/libsanitizer/sanitizer_common/sanitizer_deadlock_detector.h:67 (libtsan.so.2+0xb3616) #3 __sanitizer::DeadlockDetectorTLS<__sanitizer::TwoLevelBitVector<1ul, __sanitizer::BasicBitVector<unsigned long> > >::addLock(unsigned long, unsigned long, unsigned int) ../../../../src/libsanitizer/sanitizer_common/sanitizer_deadlock_detector.h:59 (libtsan.so.2+0xb3616) #4 __sanitizer::DeadlockDetector<__sanitizer::TwoLevelBitVector<1ul, __sanitizer::BasicBitVector<unsigned long> > >::onLockAfter(__sanitizer::DeadlockDetectorTLS<__sanitizer::TwoLevelBitVector<1ul, __sanitizer::BasicBitVector<unsigned long> > >*, unsigned long, unsigned int) ../../../../src/libsanitizer/sanitizer_common/sanitizer_deadlock_detector.h:216 (libtsan.so.2+0xb3616) #5 __sanitizer::DD::MutexAfterLock(__sanitizer::DDCallback*, __sanitizer::DDMutex*, bool, bool) ../../../../src/libsanitizer/sanitizer_common/sanitizer_deadlock_detector1.cpp:169 (libtsan.so.2+0xb3616) #6 __tsan::MutexPostLock(__tsan::ThreadState*, unsigned long, unsigned long, unsigned int, int) ../../../../src/libsanitizer/tsan/tsan_rtl_mutex.cpp:200 (libtsan.so.2+0xa3382) #7 __tsan_mutex_post_lock ../../../../src/libsanitizer/tsan/tsan_interface_ann.cpp:384 (libtsan.so.2+0x76bc3) #8 qemu_spin_lock /home/cota/src/qemu/include/qemu/thread.h:259 (test-qht+0x44a97) #9 qht_map_lock_buckets ../util/qht.c:253 (test-qht+0x44a97) #10 do_qht_iter ../util/qht.c:809 (test-qht+0x45f33) #11 qht_iter ../util/qht.c:821 (test-qht+0x45f33) #12 iter_check ../tests/unit/test-qht.c:121 (test-qht+0xe473) #13 qht_do_test ../tests/unit/test-qht.c:202 (test-qht+0xe473) #14 qht_test ../tests/unit/test-qht.c:240 (test-qht+0xe7c1) #15 test_default ../tests/unit/test-qht.c:246 (test-qht+0xe828) #16 <null> <null> (libglib-2.0.so.0+0x7daed) #17 <null> <null> (libglib-2.0.so.0+0x7d80a) #18 <null> <null> (libglib-2.0.so.0+0x7d80a) #19 g_test_run_suite <null> (libglib-2.0.so.0+0x7dfe9) #20 g_test_run <null> (libglib-2.0.so.0+0x7e055) #21 main ../tests/unit/test-qht.c:259 (test-qht+0xd2c6) #22 __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58 (libc.so.6+0x29d8f) #23 __libc_start_main_impl ../csu/libc-start.c:392 (libc.so.6+0x29e3f) #24 _start <null> (test-qht+0xdb44) Signed-off-by: Emilio Cota <cota@braap.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20230111151628.320011-5-cota@braap.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20230124180127.1881110-30-alex.bennee@linaro.org>
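TSAN's deadlock detector tracks at most 64 locks held simultaneously, while qht_map_lock_buckets() takes one spinlock per bucket. A sketch of a fix in this direction, with hypothetical names: under TSAN, stripe the buckets over a small fixed pool of locks so a lock-everything operation stays under the limit:

#ifdef CONFIG_TSAN
#define QHT_TSAN_BUCKET_LOCKS 16   /* hypothetical pool size */

/* Hypothetical accessor: many buckets share one lock under TSAN. */
static inline QemuSpin *qht_bucket_lock(struct qht_map *map, size_t i)
{
    return &map->tsan_locks[i % QHT_TSAN_BUCKET_LOCKS];
}
#endif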
After live migration with a virtio block device, QEMU crashes at: #0 0x000055914f46f795 in object_dynamic_cast_assert (obj=0x559151b7b090, typename=0x55914f80fbc4 "qio-channel", file=0x55914f80fb90 "/images/testvfe/sw/qemu.gerrit/include/io/channel.h", line=30, func=0x55914f80fcb8 <__func__.17257> "QIO_CHANNEL") at ../qom/object.c:872 #1 0x000055914f480d68 in QIO_CHANNEL (obj=0x559151b7b090) at /images/testvfe/sw/qemu.gerrit/include/io/channel.h:29 #2 0x000055914f4812f8 in qio_net_listener_set_client_func_full (listener=0x559151b7a720, func=0x55914f580b97 <tcp_chr_accept>, data=0x5591519f4ea0, notify=0x0, context=0x0) at ../io/net-listener.c:166 #3 0x000055914f580059 in tcp_chr_update_read_handler (chr=0x5591519f4ea0) at ../chardev/char-socket.c:637 #4 0x000055914f583dca in qemu_chr_be_update_read_handlers (s=0x5591519f4ea0, context=0x0) at ../chardev/char.c:226 #5 0x000055914f57b7c9 in qemu_chr_fe_set_handlers_full (b=0x559152bf23a0, fd_can_read=0x0, fd_read=0x0, fd_event=0x0, be_change=0x0, opaque=0x0, context=0x0, set_open=false, sync_state=true) at ../chardev/char-fe.c:279 #6 0x000055914f57b86d in qemu_chr_fe_set_handlers (b=0x559152bf23a0, fd_can_read=0x0, fd_read=0x0, fd_event=0x0, be_change=0x0, opaque=0x0, context=0x0, set_open=false) at ../chardev/char-fe.c:304 #7 0x000055914f378caf in vhost_user_async_close (d=0x559152bf21a0, chardev=0x559152bf23a0, vhost=0x559152bf2420, cb=0x55914f2fb8c1 <vhost_user_blk_disconnect>) at ../hw/virtio/vhost-user.c:2725 #8 0x000055914f2fba40 in vhost_user_blk_event (opaque=0x559152bf21a0, event=CHR_EVENT_CLOSED) at ../hw/block/vhost-user-blk.c:395 #9 0x000055914f58388c in chr_be_event (s=0x5591519f4ea0, event=CHR_EVENT_CLOSED) at ../chardev/char.c:61 #10 0x000055914f583905 in qemu_chr_be_event (s=0x5591519f4ea0, event=CHR_EVENT_CLOSED) at ../chardev/char.c:81 #11 0x000055914f581275 in char_socket_finalize (obj=0x5591519f4ea0) at ../chardev/char-socket.c:1083 #12 0x000055914f46f073 in object_deinit (obj=0x5591519f4ea0, type=0x5591519055c0) at ../qom/object.c:680 #13 0x000055914f46f0e5 in object_finalize (data=0x5591519f4ea0) at ../qom/object.c:694 #14 0x000055914f46ff06 in object_unref (objptr=0x5591519f4ea0) at ../qom/object.c:1202 #15 0x000055914f4715a4 in object_finalize_child_property (obj=0x559151b76c50, name=0x559151b7b250 "char3", opaque=0x5591519f4ea0) at ../qom/object.c:1747 #16 0x000055914f46ee86 in object_property_del_all (obj=0x559151b76c50) at ../qom/object.c:632 #17 0x000055914f46f0d2 in object_finalize (data=0x559151b76c50) at ../qom/object.c:693 #18 0x000055914f46ff06 in object_unref (objptr=0x559151b76c50) at ../qom/object.c:1202 #19 0x000055914f4715a4 in object_finalize_child_property (obj=0x559151b6b560, name=0x559151b76630 "chardevs", opaque=0x559151b76c50) at ../qom/object.c:1747 #20 0x000055914f46ef67 in object_property_del_child (obj=0x559151b6b560, child=0x559151b76c50) at ../qom/object.c:654 #21 0x000055914f46f042 in object_unparent (obj=0x559151b76c50) at ../qom/object.c:673 #22 0x000055914f58632a in qemu_chr_cleanup () at ../chardev/char.c:1189 #23 0x000055914f16c66c in qemu_cleanup () at ../softmmu/runstate.c:830 #24 0x000055914eee7b9e in qemu_default_main () at ../softmmu/main.c:38 #25 0x000055914eee7bcc in main (argc=86, argv=0x7ffc97cb8d88) at ../softmmu/main.c:48 In char_socket_finalize(), after s->listener is freed, the event callback vhost_user_blk_event() is called to handle CHR_EVENT_CLOSED. vhost_user_blk_event() calls qio_net_listener_set_client_func_full(), which still uses s->listener. 
Setting s->listener = NULL after object_unref(OBJECT(s->listener)) can solve this issue. Signed-off-by: Yajun Wu <yajunw@nvidia.com> Acked-by: Jiri Pirko <jiri@nvidia.com> Message-Id: <20230214021430.3638579-1-yajunw@nvidia.com> Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
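The fix is exactly as stated; for clarity, a sketch of the shape of the change in char_socket_finalize() (the surrounding calls are shown for context only):

if (s->listener) {
    qio_net_listener_set_client_func_full(s->listener, NULL, NULL,
                                          NULL, chr->gcontext);
    object_unref(OBJECT(s->listener));
    s->listener = NULL;  /* CHR_EVENT_CLOSED handlers may still run */
}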
This leak can be seen through test-io-channel-tls: $ ../configure --target-list=aarch64-softmmu --enable-sanitizers $ make ./tests/unit/test-io-channel-tls $ ./tests/unit/test-io-channel-tls Indirect leak of 104 byte(s) in 1 object(s) allocated from: #0 0x7f81d1725808 in __interceptor_malloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cc:144 #1 0x7f81d135ae98 in g_malloc (/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x57e98) #2 0x55616c5d4c1b in object_new_with_propv ../qom/object.c:795 #3 0x55616c5d4a83 in object_new_with_props ../qom/object.c:768 #4 0x55616c5c5415 in test_tls_creds_create ../tests/unit/test-io-channel-tls.c:70 #5 0x55616c5c5a6b in test_io_channel_tls ../tests/unit/test-io-channel-tls.c:158 #6 0x7f81d137d58d (/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x7a58d) Indirect leak of 32 byte(s) in 1 object(s) allocated from: #0 0x7f81d1725a06 in __interceptor_calloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cc:153 #1 0x7f81d1472a20 in gnutls_dh_params_init (/lib/x86_64-linux-gnu/libgnutls.so.30+0x46a20) #2 0x55616c6485ff in qcrypto_tls_creds_x509_load ../crypto/tlscredsx509.c:634 #3 0x55616c648ba2 in qcrypto_tls_creds_x509_complete ../crypto/tlscredsx509.c:694 #4 0x55616c5e1fea in user_creatable_complete ../qom/object_interfaces.c:28 #5 0x55616c5d4c8c in object_new_with_propv ../qom/object.c:807 #6 0x55616c5d4a83 in object_new_with_props ../qom/object.c:768 #7 0x55616c5c5415 in test_tls_creds_create ../tests/unit/test-io-channel-tls.c:70 #8 0x55616c5c5a6b in test_io_channel_tls ../tests/unit/test-io-channel-tls.c:158 #9 0x7f81d137d58d (/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x7a58d) ... SUMMARY: AddressSanitizer: 49143 byte(s) leaked in 184 allocation(s). The docs for `g_source_add_child_source(source, child_source)` say "source will hold a reference on child_source while child_source is attached to it." Therefore, we should unreference the child source at `qio_channel_tls_read_watch()` after attaching it to `source`. With this change, ./tests/unit/test-io-channel-tls shows no leaks. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Matheus Tavares Bernardino <quic_mathbern@quicinc.com> Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
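A sketch of the resulting pattern in qio_channel_tls_read_watch(), with the watch-creation call abbreviated; the key line is the g_source_unref() after attaching:

GSource *child = qio_channel_create_watch(tioc->master, condition);

g_source_add_child_source(source, child);
g_source_unref(child);  /* 'source' now holds its own reference */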
xbzrle_encode_buffer_avx512() checks for overflows only once per outer-loop iteration, causing out-of-bounds writes: $ ../configure --target-list=aarch64-softmmu --enable-sanitizers --enable-avx512bw $ make tests/unit/test-xbzrle && ./tests/unit/test-xbzrle ==5518==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x62100000b100 at pc 0x561109a7714d bp 0x7ffed712a440 sp 0x7ffed712a430 WRITE of size 1 at 0x62100000b100 thread T0 #0 0x561109a7714c in uleb128_encode_small ../util/cutils.c:831 #1 0x561109b67f6a in xbzrle_encode_buffer_avx512 ../migration/xbzrle.c:275 #2 0x5611099a7428 in test_encode_decode_overflow ../tests/unit/test-xbzrle.c:153 #3 0x7fb2fb65a58d (/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x7a58d) #4 0x7fb2fb65a333 (/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x7a333) #5 0x7fb2fb65aa79 in g_test_run_suite (/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x7aa79) #6 0x7fb2fb65aa94 in g_test_run (/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x7aa94) #7 0x5611099a3a23 in main ../tests/unit/test-xbzrle.c:218 #8 0x7fb2fa78c082 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x24082) #9 0x5611099a608d in _start (/qemu/build/tests/unit/test-xbzrle+0x28408d) 0x62100000b100 is located 0 bytes to the right of 4096-byte region [0x62100000a100,0x62100000b100) allocated by thread T0 here: #0 0x7fb2fb823a06 in __interceptor_calloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cc:153 #1 0x7fb2fb637ef0 in g_malloc0 (/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x57ef0) Fix that by performing the overflow check in the inner loop instead. Signed-off-by: Matheus Tavares Bernardino <quic_mathbern@quicinc.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
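The corrected structure, as a simplified scalar sketch (not the AVX-512 code itself): the bound is checked inside the loop that emits bytes, so a run straddling the end of the output buffer is caught before the write:

/* Sketch: dst/dlen, src/slen and the counters stand in for the
 * encoder's existing locals; check space per emitted byte pair,
 * not once per outer iteration. */
while (i < slen) {
    if (d + 2 > dlen) {        /* uleb128 length byte + 1 data byte */
        return -1;             /* overflow: caller sends the raw page */
    }
    d += uleb128_encode_small(&dst[d], 1);
    dst[d++] = src[i++];
}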
For example, when resetting the xlnx-zcu102 machine: (lldb) bt * thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x50) * frame #0: 0x10020a740 gd_vc_send_chars(vc=0x000000000) at gtk.c:1759:41 [opt] frame #1: 0x100636264 qemu_chr_fe_accept_input(be=<unavailable>) at char-fe.c:159:9 [opt] frame #2: 0x1000608e0 cadence_uart_reset_hold [inlined] uart_rx_reset(s=0x10810a960) at cadence_uart.c:158:5 [opt] frame #3: 0x1000608d4 cadence_uart_reset_hold(obj=0x10810a960) at cadence_uart.c:530:5 [opt] frame #4: 0x100580ab4 resettable_phase_hold(obj=0x10810a960, opaque=0x000000000, type=<unavailable>) at resettable.c:0 [opt] frame #5: 0x10057d1b0 bus_reset_child_foreach(obj=<unavailable>, cb=(resettable_phase_hold at resettable.c:162), opaque=0x000000000, type=RESET_TYPE_COLD) at bus.c:97:13 [opt] frame #6: 0x1005809f8 resettable_phase_hold [inlined] resettable_child_foreach(rc=0x000060000332d2c0, obj=0x0000600002c1c180, cb=<unavailable>, opaque=0x000000000, type=RESET_TYPE_COLD) at resettable.c:96:9 [opt] frame #7: 0x1005809d8 resettable_phase_hold(obj=0x0000600002c1c180, opaque=0x000000000, type=RESET_TYPE_COLD) at resettable.c:173:5 [opt] frame #8: 0x1005803a0 resettable_assert_reset(obj=0x0000600002c1c180, type=<unavailable>) at resettable.c:60:5 [opt] frame #9: 0x10058027c resettable_reset(obj=0x0000600002c1c180, type=RESET_TYPE_COLD) at resettable.c:45:5 [opt] While the chardev is created early, the VirtualConsole is associated later, during qemu_init_displays(). Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com> Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-Id: <20230220072251.3385878-1-marcandre.lureau@redhat.com>
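One plausible shape of the fix, as a hypothetical guard (the actual patch may instead defer handler registration until the console is attached):

static void gd_vc_send_chars(VirtualConsole *vc)
{
    if (!vc) {
        return;  /* reset ran before qemu_init_displays() attached the VC */
    }
    /* ... original flushing logic ... */
}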
blk_get_geometry() eventually calls bdrv_nb_sectors(), which is a co_wrapper_mixed_bdrv_rdlock. This means that when it is called from coroutine context, it already assumes that the graph is locked. However, virtio_blk_sect_range_ok() in block/export/virtio-blk-handler.c (used by vhost-user-blk and VDUSE exports) runs in a coroutine, but doesn't take the graph lock - blk_*() functions are generally expected to do that internally. This causes an assertion failure when accessing an export for the first time if it runs in an iothread. This is an example of the crash: $ ./storage-daemon/qemu-storage-daemon --object iothread,id=th0 --blockdev file,filename=/home/kwolf/images/hd.img,node-name=disk --export vhost-user-blk,addr.type=unix,addr.path=/tmp/vhost.sock,node-name=disk,id=exp0,iothread=th0 qemu-storage-daemon: ../block/graph-lock.c:268: void assert_bdrv_graph_readable(void): Assertion `qemu_in_main_thread() || reader_count()' failed. (gdb) bt #0 0x00007ffff6eafe5c in __pthread_kill_implementation () from /lib64/libc.so.6 #1 0x00007ffff6e5fa76 in raise () from /lib64/libc.so.6 #2 0x00007ffff6e497fc in abort () from /lib64/libc.so.6 #3 0x00007ffff6e4971b in __assert_fail_base.cold () from /lib64/libc.so.6 #4 0x00007ffff6e58656 in __assert_fail () from /lib64/libc.so.6 #5 0x00005555556337a3 in assert_bdrv_graph_readable () at ../block/graph-lock.c:268 #6 0x00005555555fd5a2 in bdrv_co_nb_sectors (bs=0x5555564c5ef0) at ../block.c:5847 #7 0x00005555555ee949 in bdrv_nb_sectors (bs=0x5555564c5ef0) at block/block-gen.c:256 #8 0x00005555555fd6b9 in bdrv_get_geometry (bs=0x5555564c5ef0, nb_sectors_ptr=0x7fffef7fedd0) at ../block.c:5884 #9 0x000055555562ad6d in blk_get_geometry (blk=0x5555564cb200, nb_sectors_ptr=0x7fffef7fedd0) at ../block/block-backend.c:1624 #10 0x00005555555ddb74 in virtio_blk_sect_range_ok (blk=0x5555564cb200, block_size=512, sector=0, size=512) at ../block/export/virtio-blk-handler.c:44 #11 0x00005555555dd80d in virtio_blk_process_req (handler=0x5555564cbb98, in_iov=0x7fffe8003830, out_iov=0x7fffe8003860, in_num=1, out_num=0) at ../block/export/virtio-blk-handler.c:189 #12 0x00005555555dd546 in vu_blk_virtio_process_req (opaque=0x7fffe8003800) at ../block/export/vhost-user-blk-server.c:66 #13 0x00005555557bf4a1 in coroutine_trampoline (i0=-402635264, i1=32767) at ../util/coroutine-ucontext.c:177 #14 0x00007ffff6e75c20 in ?? () from /lib64/libc.so.6 #15 0x00007fffefffa870 in ?? () #16 0x0000000000000000 in ?? () Fix this by creating a new blk_co_get_geometry() that takes the lock, and changing blk_get_geometry() to be a co_wrapper_mixed around it. To make the resulting code cleaner, virtio-blk-handler.c can directly call the coroutine version now (though that wouldn't be necessary for fixing the bug; taking the lock in blk_co_get_geometry() is what fixes it). Fixes: 8ab8140 Reported-by: Lukáš Doktor <ldoktor@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-Id: <20230327113959.60071-1-kwolf@redhat.com> Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
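A sketch of the split, in QEMU's co_wrapper notation; the coroutine version acquires the reader lock itself, and the generated mixed wrapper keeps the old entry point for non-coroutine callers (illustrative shape, not the exact patch):

/* block/block-backend.c (sketch) */
void coroutine_fn blk_co_get_geometry(BlockBackend *blk,
                                      uint64_t *nb_sectors_ptr)
{
    BlockDriverState *bs = blk_bs(blk);

    GRAPH_RDLOCK_GUARD();                  /* take the reader lock here */
    *nb_sectors_ptr = bs ? bdrv_co_nb_sectors(bs) : 0;
}

/* header (sketch): the generator emits the non-coroutine wrapper */
void co_wrapper_mixed blk_get_geometry(BlockBackend *blk,
                                       uint64_t *nb_sectors_ptr);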
If there is a pending DMA operation during ide_bus_reset(), the fact that the IDEState is already reset before the operation is canceled can be problematic. In particular, ide_dma_cb() might be called and then use the reset IDEState which contains the signature after the reset. When used to construct the IO operation this leads to ide_get_sector() returning 0 and nsector being 1. This is particularly bad, because a write command will thus destroy the first sector which often contains a partition table or similar. Traces showing the unsolicited write happening with IDEState 0x5595af6949d0 being used after reset: > ahci_port_write ahci(0x5595af6923f0)[0]: port write [reg:PxSCTL] @ 0x2c: 0x00000300 > ahci_reset_port ahci(0x5595af6923f0)[0]: reset port > ide_reset IDEstate 0x5595af6949d0 > ide_reset IDEstate 0x5595af694da8 > ide_bus_reset_aio aio_cancel > dma_aio_cancel dbs=0x7f64600089a0 > dma_blk_cb dbs=0x7f64600089a0 ret=0 > dma_complete dbs=0x7f64600089a0 ret=0 cb=0x5595acd40b30 > ahci_populate_sglist ahci(0x5595af6923f0)[0] > ahci_dma_prepare_buf ahci(0x5595af6923f0)[0]: prepare buf limit=512 prepared=512 > ide_dma_cb IDEState 0x5595af6949d0; sector_num=0 n=1 cmd=DMA WRITE > dma_blk_io dbs=0x7f6420802010 bs=0x5595ae2c6c30 offset=0 to_dev=1 > dma_blk_cb dbs=0x7f6420802010 ret=0 > (gdb) p *qiov > $11 = {iov = 0x7f647c76d840, niov = 1, {{nalloc = 1, local_iov = {iov_base = 0x0, > iov_len = 512}}, {__pad = "\001\000\000\000\000\000\000\000\000\000\000", > size = 512}}} > (gdb) bt > #0 blk_aio_pwritev (blk=0x5595ae2c6c30, offset=0, qiov=0x7f6420802070, flags=0, > cb=0x5595ace6f0b0 <dma_blk_cb>, opaque=0x7f6420802010) > at ../block/block-backend.c:1682 > #1 0x00005595ace6f185 in dma_blk_cb (opaque=0x7f6420802010, ret=<optimized out>) > at ../softmmu/dma-helpers.c:179 > #2 0x00005595ace6f778 in dma_blk_io (ctx=0x5595ae0609f0, > sg=sg@entry=0x5595af694d00, offset=offset@entry=0, align=align@entry=512, > io_func=io_func@entry=0x5595ace6ee30 <dma_blk_write_io_func>, > io_func_opaque=io_func_opaque@entry=0x5595ae2c6c30, > cb=0x5595acd40b30 <ide_dma_cb>, opaque=0x5595af6949d0, > dir=DMA_DIRECTION_TO_DEVICE) at ../softmmu/dma-helpers.c:244 > #3 0x00005595ace6f90a in dma_blk_write (blk=0x5595ae2c6c30, > sg=sg@entry=0x5595af694d00, offset=offset@entry=0, align=align@entry=512, > cb=cb@entry=0x5595acd40b30 <ide_dma_cb>, opaque=opaque@entry=0x5595af6949d0) > at ../softmmu/dma-helpers.c:280 > #4 0x00005595acd40e18 in ide_dma_cb (opaque=0x5595af6949d0, ret=<optimized out>) > at ../hw/ide/core.c:953 > #5 0x00005595ace6f319 in dma_complete (ret=0, dbs=0x7f64600089a0) > at ../softmmu/dma-helpers.c:107 > #6 dma_blk_cb (opaque=0x7f64600089a0, ret=0) at ../softmmu/dma-helpers.c:127 > #7 0x00005595ad12227d in blk_aio_complete (acb=0x7f6460005b10) > at ../block/block-backend.c:1527 > #8 blk_aio_complete (acb=0x7f6460005b10) at ../block/block-backend.c:1524 > #9 blk_aio_write_entry (opaque=0x7f6460005b10) at ../block/block-backend.c:1594 > #10 0x00005595ad258cfb in coroutine_trampoline (i0=<optimized out>, > i1=<optimized out>) at ../util/coroutine-ucontext.c:177 Signed-off-by: Fiona Ebner <f.ebner@proxmox.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Tested-by: simon.rowe@nutanix.com Message-ID: <20230906130922.142845-1-f.ebner@proxmox.com> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
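The commit message stops at the analysis, but it points at the fix: the pending DMA operation must be dealt with before the per-drive state is clobbered. A hypothetical sketch of that reordering in ide_bus_reset() (illustrative only; the actual patch may differ):

void ide_bus_reset(IDEBus *bus)
{
    /* Cancel any in-flight async DMA first, while IDEState still
     * describes the original request ... */
    if (bus->dma->aiocb) {
        blk_aio_cancel(bus->dma->aiocb);
        bus->dma->aiocb = NULL;
    }
    /* ... and only then reset the drives. */
    ide_reset(&bus->ifs[0]);
    ide_reset(&bus->ifs[1]);
}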