Support RPC server and client functions #3
Merged: kailiangz1 merged 2 commits into Mellanox:main from LiZhang-2020:lizh-dpdk-vfe-rpc-0511 on Jul 19, 2022
Conversation
LiZhang-2020 force-pushed the lizh-dpdk-vfe-rpc-0511 branch 2 times, most recently from 23e7211 to d46ec9b on May 17, 2022 08:34
kailiangz1 requested changes on May 17, 2022
kailiangz1 reviewed on May 17, 2022
LiZhang-2020 force-pushed the lizh-dpdk-vfe-rpc-0511 branch 2 times, most recently from 08e5d66 to d976d48 on May 18, 2022 08:43
kailiangz1 requested changes on May 19, 2022
LiZhang-2020 force-pushed the lizh-dpdk-vfe-rpc-0511 branch 3 times, most recently from b6d0afd to e78c7c8 on May 20, 2022 10:28
GavinLi-NV reviewed on May 23, 2022
LiZhang-2020 force-pushed the lizh-dpdk-vfe-rpc-0511 branch 7 times, most recently from a7cfc09 to 0a32e80 on May 26, 2022 09:32
GavinLi-NV reviewed on May 27, 2022
GavinLi-NV reviewed on May 27, 2022
GavinLi-NV reviewed on May 27, 2022
LiZhang-2020 force-pushed the lizh-dpdk-vfe-rpc-0511 branch 2 times, most recently from adbf4ba to ea4afee on June 13, 2022 02:06
GavinLi-NV reviewed on Jun 13, 2022
lib/jsonrpc/rte_rpc.h (Outdated)
enum vdpa_vf_modify_flags {
	VDPA_VF_MSIX_NUM,
	VDPA_VF_QUEUE_NUM,
	VDPA_VF_QUEUE_SZIE,
Typo
Done
LiZhang-2020 force-pushed the lizh-dpdk-vfe-rpc-0511 branch 4 times, most recently from b77d691 to c7c16ee on June 16, 2022 11:21
LiZhang-2020 force-pushed the lizh-dpdk-vfe-rpc-0511 branch 2 times, most recently from ffac404 to db2bdb4 on June 21, 2022 04:23
LiZhang-2020 force-pushed the lizh-dpdk-vfe-rpc-0511 branch 4 times, most recently from a91f5f0 to 3a179c3 on June 29, 2022 08:32
LiZhang-2020 force-pushed the lizh-dpdk-vfe-rpc-0511 branch 4 times, most recently from fdd7c34 to 4291972 on July 12, 2022 01:55
LiZhang-2020 force-pushed the lizh-dpdk-vfe-rpc-0511 branch 5 times, most recently from c91c7ce to ec15ce5 on July 14, 2022 06:43
Add basic JSON RPC functions, such as an RPC server and client. The following files are ported from the open-source project by hmg (https://github.com/hmng/jsonrpc-c): cJSON.c, cJSON.h, jsonrpc-c.c, jsonrpc-c.h. These new files depend on libev (http://software.schmorp.de/pkg/libev.html), so libev must be installed on the system.

Signed-off-by: Li Zhang <lizh@nvidia.com>
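For orientation, here is a minimal sketch of how a procedure is registered and served with the upstream hmng/jsonrpc-c API that these files are ported from. The identifiers (jrpc_server_init, jrpc_register_procedure, jrpc_server_run) are the upstream library's, quoted from its README example; the port under lib/jsonrpc may wrap or rename them.

```c
/* Sketch of upstream hmng/jsonrpc-c usage, not this PR's rte_rpc API. */
#include "jsonrpc-c.h"

static struct jrpc_server my_server;

/* Called when a client sends {"jsonrpc": "2.0", "method": "sayHello", ...}. */
static cJSON *say_hello(jrpc_context *ctx, cJSON *params, cJSON *id)
{
	(void)ctx; (void)params; (void)id;
	return cJSON_CreateString("Hello!");
}

int main(void)
{
	jrpc_server_init(&my_server, 1234);                               /* listen on TCP port 1234 */
	jrpc_register_procedure(&my_server, say_hello, "sayHello", NULL); /* expose the method */
	jrpc_server_run(&my_server);                                      /* blocks in the libev event loop */
	jrpc_server_destroy(&my_server);
	return 0;
}
```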
Support RPC messages from the application user to configure VDPA. Create an RPC thread to handle RPC messages from the application, and call the relevant rte_vdpa APIs to trigger driver configuration of the HW. The python file vhostmgmt serves as an RPC client example. Example commands:

python vhostmgmt -h
python vhostmgmt mgmtpf -h
python vhostmgmt mgmtpf -a 000:04:00.0
python vhostmgmt mgmtpf -r 000:04:00.0
python vhostmgmt mgmtpf -l
python vhostmgmt vf -p 000:04:00.3 -n 24 -q 256 -s 1024 -f 200000002 -t 2048 -m 00:11:22:33:44:55
python vhostmgmt vf -a 000:04:00.3 -v /tmp/sock1
python vhostmgmt vf -l 000:04:00.3
python vhostmgmt vf -i 000:04:00.3
python vhostmgmt vf -r 000:04:00.3
python vhostmgmt vf -h
python vhostmgmt vf -d 000:04:00.3 -o 1
python vhostmgmt vf -d 000:04:00.3 -o 4 -b 2 -e 4096 -g 4096

Signed-off-by: Li Zhang <lizh@nvidia.com>
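The "RPC thread" described above can be pictured as a plain worker thread driving the JSON-RPC server loop. The sketch below assumes the jsonrpc-c style server from the previous example; vdpa_rpc_server, vdpa_rpc_thread, and vdpa_rpc_start are illustrative names, not the PR's actual symbols.

```c
/* Hypothetical sketch: serve RPC requests on a dedicated thread. */
#include <pthread.h>
#include "jsonrpc-c.h"

static struct jrpc_server vdpa_rpc_server;

static void *vdpa_rpc_thread(void *arg)
{
	(void)arg;
	jrpc_server_run(&vdpa_rpc_server); /* blocks in the libev loop, dispatching requests */
	return NULL;
}

static int vdpa_rpc_start(int port)
{
	pthread_t tid;

	if (jrpc_server_init(&vdpa_rpc_server, port) != 0)
		return -1;
	/* Procedures registered here would map each RPC method (add/remove PF,
	 * provision VF, query info, ...) onto the corresponding rte_vdpa call. */
	return pthread_create(&tid, NULL, vdpa_rpc_thread, NULL);
}
```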
LiZhang-2020 force-pushed the lizh-dpdk-vfe-rpc-0511 branch from ec15ce5 to 66b63b5 on July 19, 2022 01:48
yajwu added a commit to yajwu/dpdk-vhost-vfe that referenced this pull request on Aug 28, 2023
In rte_vhost_driver_unregister, which is called from the vdpa-rpc thread, vsocket should be removed from reconn_list again after removing vsocket from conn_list, because vhost_user_read_cb, running in the vhost-events thread, can add vsocket back to reconn_list. When qemu closes the domain socket server, vhost_user_read_cb is called to clean up the vhost device, and vsocket->path is NULL:

#0  0x00007f07665834d1 in __strnlen_sse2 () from /lib64/libc.so.6
#1  0x00007f076aee79da in vhost_user_add_connection (fd=160, vsocket=0x7f070406d160) at ../lib/vhost/socket.c:226
#2  0x00007f076aee7d63 in vhost_user_client_reconnect (arg=<optimized out>) at ../lib/vhost/socket.c:481
#3  0x00007f07668cbdd5 in start_thread () from /lib64/libpthread.so.0
#4  0x00007f07665f4ead in clone () from /lib64/libc.so.6

RM: 3585558
Signed-off-by: Yajun Wu <yajunw@nvidia.com>
kailiangz1 pushed a commit that referenced this pull request on Aug 28, 2023
In rte_vhost_driver_unregister, which is called from the vdpa-rpc thread, vsocket should be removed from reconn_list again after removing vsocket from conn_list, because vhost_user_read_cb, running in the vhost-events thread, can add vsocket back to reconn_list. When qemu closes the domain socket server, vhost_user_read_cb is called to clean up the vhost device, and vsocket->path is NULL:

#0  0x00007f07665834d1 in __strnlen_sse2 () from /lib64/libc.so.6
#1  0x00007f076aee79da in vhost_user_add_connection (fd=160, vsocket=0x7f070406d160) at ../lib/vhost/socket.c:226
#2  0x00007f076aee7d63 in vhost_user_client_reconnect (arg=<optimized out>) at ../lib/vhost/socket.c:481
#3  0x00007f07668cbdd5 in start_thread () from /lib64/libpthread.so.0
#4  0x00007f07665f4ead in clone () from /lib64/libc.so.6

RM: 3585558
Signed-off-by: Yajun Wu <yajunw@nvidia.com>
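A compilable toy model of the ordering this fix introduces is shown below; the struct and helper names are hypothetical stand-ins, not the real lib/vhost/socket.c code.

```c
/* Toy model of the race described above, with hypothetical helpers. */
#include <stdbool.h>
#include <stdlib.h>

struct toy_vsocket { bool in_reconn_list; bool in_conn_list; };

static void reconn_list_remove(struct toy_vsocket *s) { s->in_reconn_list = false; }
static void conn_list_remove(struct toy_vsocket *s)   { s->in_conn_list = false; }

static void toy_driver_unregister(struct toy_vsocket *s)
{
	reconn_list_remove(s);   /* first removal */
	conn_list_remove(s);
	/*
	 * Between these steps the vhost-events thread may run vhost_user_read_cb()
	 * and queue the socket for reconnection again, so it must be taken off the
	 * reconnect list once more before it is freed -- the extra step this commit adds.
	 */
	reconn_list_remove(s);   /* second removal, added by the fix */
	free(s);
}
```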
yajwu added a commit to yajwu/dpdk-vhost-vfe that referenced this pull request on Sep 11, 2023
When MSIX is configured with fewer vectors than the queue number, quitting testpmd in the VM causes a vDPA crash:

#3  0x00007fbc8421b489 in _int_free () from /lib64/libc.so.6
#4  0x0000000001a471c5 in virtio_vdpa_virtq_doorbell_relay_disable (vq_idx=vq_idx@entry=11, priv=<optimized out>, priv=<optimized out>) at ../drivers/vdpa/virtio/virtio_vdpa.c:349
#5  0x0000000001a47275 in virtio_vdpa_virtq_disable () at ../drivers/vdpa/virtio/virtio_vdpa.c:413
#6  0x0000000001a47a5a in virtio_vdpa_vring_state_set () at ../drivers/vdpa/virtio/virtio_vdpa.c:588
#7  0x00000000005ad8af in vhost_user_notify_queue_state (dev=0x17ffcd000, index=11, enable=0) at ../lib/vhost/vhost_user.c:283
#8  0x00000000005b0414 in vhost_user_msg_handler (vid=<optimized out>, fd=<optimized out>) at ../lib/vhost/vhost_user.c:3164
#9  0x00000000012f812f in vhost_user_read_cb () at ../lib/vhost/socket.c:310

When callfd == -1, virtio_pci_dev_interrupt_enable is skipped, but virtio_vdpa_virtq_disable has no such check to skip virtio_pci_dev_interrupt_disable, so it returns an error without changing the queue state to disabled. This wrong queue state then causes a double free.

The fix is to add and check a vector_enable variable before calling virtio_pci_dev_interrupt_disable, and to remove the error return in virtio_vdpa_virtq_disable.

RM: 3587409
Signed-off-by: Yajun Wu <yajunw@nvidia.com>
yajwu added a commit to yajwu/dpdk-vhost-vfe that referenced this pull request on Sep 11, 2023
When MSIX is configured with fewer vectors than the queue number, quitting testpmd in the VM causes a vDPA crash:

#3  0x00007fbc8421b489 in _int_free () from /lib64/libc.so.6
#4  0x0000000001a471c5 in virtio_vdpa_virtq_doorbell_relay_disable (vq_idx=vq_idx@entry=11, priv=<optimized out>, priv=<optimized out>) at ../drivers/vdpa/virtio/virtio_vdpa.c:349
#5  0x0000000001a47275 in virtio_vdpa_virtq_disable () at ../drivers/vdpa/virtio/virtio_vdpa.c:413
#6  0x0000000001a47a5a in virtio_vdpa_vring_state_set () at ../drivers/vdpa/virtio/virtio_vdpa.c:588
#7  0x00000000005ad8af in vhost_user_notify_queue_state (dev=0x17ffcd000, index=11, enable=0) at ../lib/vhost/vhost_user.c:283
#8  0x00000000005b0414 in vhost_user_msg_handler (vid=<optimized out>, fd=<optimized out>) at ../lib/vhost/vhost_user.c:3164
#9  0x00000000012f812f in vhost_user_read_cb () at ../lib/vhost/socket.c:310

When callfd == -1, virtio_pci_dev_interrupt_enable is skipped, but virtio_vdpa_virtq_disable has no such check to skip virtio_pci_dev_interrupt_disable, so it returns an error without changing the queue state to disabled. This wrong queue state then causes a double free.

The fix is to add and check a vector_enable variable before calling virtio_pci_dev_interrupt_disable, and to remove the error return in virtio_vdpa_virtq_disable.

RM: 3587409
Signed-off-by: Yajun Wu <yajunw@nvidia.com>
kailiangz1 pushed a commit that referenced this pull request on Sep 18, 2023
When MSIX is configured with fewer vectors than the queue number, quitting testpmd in the VM causes a vDPA crash:

#3  0x00007fbc8421b489 in _int_free () from /lib64/libc.so.6
#4  0x0000000001a471c5 in virtio_vdpa_virtq_doorbell_relay_disable (vq_idx=vq_idx@entry=11, priv=<optimized out>, priv=<optimized out>) at ../drivers/vdpa/virtio/virtio_vdpa.c:349
#5  0x0000000001a47275 in virtio_vdpa_virtq_disable () at ../drivers/vdpa/virtio/virtio_vdpa.c:413
#6  0x0000000001a47a5a in virtio_vdpa_vring_state_set () at ../drivers/vdpa/virtio/virtio_vdpa.c:588
#7  0x00000000005ad8af in vhost_user_notify_queue_state (dev=0x17ffcd000, index=11, enable=0) at ../lib/vhost/vhost_user.c:283
#8  0x00000000005b0414 in vhost_user_msg_handler (vid=<optimized out>, fd=<optimized out>) at ../lib/vhost/vhost_user.c:3164
#9  0x00000000012f812f in vhost_user_read_cb () at ../lib/vhost/socket.c:310

When callfd == -1, virtio_pci_dev_interrupt_enable is skipped, but virtio_vdpa_virtq_disable has no such check to skip virtio_pci_dev_interrupt_disable, so it returns an error without changing the queue state to disabled. This wrong queue state then causes a double free.

The fix is to add and check a vector_enable variable before calling virtio_pci_dev_interrupt_disable, and to remove the error return in virtio_vdpa_virtq_disable.

RM: 3587409
Signed-off-by: Yajun Wu <yajunw@nvidia.com>
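A minimal sketch of the guard this commit describes follows; the structure and helpers are hypothetical stand-ins for the real drivers/vdpa/virtio code, kept only to illustrate why gating the interrupt-disable call and always marking the queue disabled avoids the double free.

```c
/* Toy model of the fix, with hypothetical types and helpers. */
#include <stdbool.h>

struct toy_virtq {
	bool vector_enable; /* set only when the interrupt vector was actually enabled (callfd != -1) */
	bool enabled;
};

static void toy_interrupt_disable(struct toy_virtq *vq)
{
	vq->vector_enable = false;
}

static void toy_virtq_disable(struct toy_virtq *vq)
{
	/* Skip interrupt teardown when it was never set up. */
	if (vq->vector_enable)
		toy_interrupt_disable(vq);
	/* Always mark the queue disabled so later teardown does not free it twice. */
	vq->enabled = false;
}
```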
Ch3n60x pushed a commit to Ch3n60x/dpdk-vhost-vfe that referenced this pull request on Mar 27, 2024
[ upstream commit 1c80a40 ]

The net/vhost pmd currently provides a -1 vid when disabling interrupt after a virtio port got disconnected. This can be caught when running with ASan.

First, start dpdk-l3fwd-power in interrupt mode with a net/vhost port.

$ ./build-clang/examples/dpdk-l3fwd-power -l0,1 --in-memory \
    -a 0000:00:00.0 \
    --vdev net_vhost0,iface=plop.sock,client=1 \
    -- \
    -p 0x1 \
    --interrupt-only \
    --config '(0,0,1)' \
    --parse-ptype 0

Then start testpmd with virtio-user.

$ ./build-clang/app/dpdk-testpmd -l0,2 --single-file-segment --in-memory \
    -a 0000:00:00.0 \
    --vdev net_virtio_user0,path=plop.sock,server=1 \
    -- \
    -i

Finally stop testpmd. ASan then splats in dpdk-l3fwd-power:

=================================================================
==3641005==ERROR: AddressSanitizer: global-buffer-overflow on address 0x000005ed0778 at pc 0x000001270f81 bp 0x7fddbd2eee20 sp 0x7fddbd2eee18
READ of size 8 at 0x000005ed0778 thread T2
    #0 0x1270f80 in get_device .../lib/vhost/vhost.h:801:27
    #1 0x1270f80 in rte_vhost_get_vhost_vring .../lib/vhost/vhost.c:951:8
    #2 0x3ac95cb in eth_rxq_intr_disable .../drivers/net/vhost/rte_eth_vhost.c:647:8
    #3 0x170e0bf in rte_eth_dev_rx_intr_disable .../lib/ethdev/rte_ethdev.c:5443:25
    #4 0xf72ba7 in turn_on_off_intr .../examples/l3fwd-power/main.c:881:4
    #5 0xf71045 in main_intr_loop .../examples/l3fwd-power/main.c:1061:6
    #6 0x17f9292 in eal_thread_loop .../lib/eal/common/eal_common_thread.c:210:9
    #7 0x18373f5 in eal_worker_thread_loop .../lib/eal/linux/eal.c:915:2
    #8 0x7fddc16ae12c in start_thread (/lib64/libc.so.6+0x8b12c) (BuildId: 81daba31ee66dbd63efdc4252a872949d874d136)
    #9 0x7fddc172fbbf in __GI___clone3 (/lib64/libc.so.6+0x10cbbf) (BuildId: 81daba31ee66dbd63efdc4252a872949d874d136)

0x000005ed0778 is located 8 bytes to the left of global variable 'vhost_devices' defined in '.../lib/vhost/vhost.c:24' (0x5ed0780) of size 8192
0x000005ed0778 is located 20 bytes to the right of global variable 'vhost_config_log_level' defined in '.../lib/vhost/vhost.c:2174' (0x5ed0760) of size 4
SUMMARY: AddressSanitizer: global-buffer-overflow .../lib/vhost/vhost.h:801:27 in get_device
Shadow bytes around the buggy address:
  0x000080bd2090: f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9
  0x000080bd20a0: f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9
  0x000080bd20b0: f9 f9 f9 f9 00 f9 f9 f9 00 f9 f9 f9 00 f9 f9 f9
  0x000080bd20c0: 00 00 00 00 00 00 00 f9 f9 f9 f9 f9 04 f9 f9 f9
  0x000080bd20d0: 00 00 00 00 00 f9 f9 f9 f9 f9 f9 f9 00 00 00 00
=>0x000080bd20e0: 00 f9 f9 f9 f9 f9 f9 f9 04 f9 f9 f9 04 f9 f9[f9]
  0x000080bd20f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x000080bd2100: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x000080bd2110: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x000080bd2120: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x000080bd2130: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
Thread T2 created by T0 here:
    #0 0xe98996 in __interceptor_pthread_create (.examples/dpdk-l3fwd-power+0xe98996) (BuildId: d0b984a3b0287b9e0f301b73426fa921aeecca3a)
    #1 0x1836767 in eal_worker_thread_create .../lib/eal/linux/eal.c:952:6
    #2 0x1834b83 in rte_eal_init .../lib/eal/linux/eal.c:1257:9
    #3 0xf68902 in main .../examples/l3fwd-power/main.c:2496:8
    #4 0x7fddc164a50f in __libc_start_call_main (/lib64/libc.so.6+0x2750f) (BuildId: 81daba31ee66dbd63efdc4252a872949d874d136)
==3641005==ABORTING

More generally, any application passing an incorrect vid would trigger such an OOB access.

Fixes: 4796ad6 ("examples/vhost: import userspace vhost application")

Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
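The overflow comes from indexing the vhost device table with vid = -1. Below is a hedged illustration of the kind of bounds check that prevents it; the array name, bound, and function are assumptions for illustration only, not the actual upstream patch.

```c
/* Illustrative guard only, not the real lib/vhost code or the upstream fix. */
#include <stddef.h>

#define TOY_MAX_VHOST_DEVICE 1024

struct toy_virtio_net { int vid; };
static struct toy_virtio_net *toy_vhost_devices[TOY_MAX_VHOST_DEVICE];

static struct toy_virtio_net *toy_get_device(int vid)
{
	/* A vid of -1 (or any out-of-range value) must never be used as an index;
	 * otherwise the lookup reads outside the device table, which is exactly
	 * the global-buffer-overflow ASan reports above. */
	if (vid < 0 || vid >= TOY_MAX_VHOST_DEVICE)
		return NULL;
	return toy_vhost_devices[vid];
}
```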
jsonrpc: support RPC server and client functions
Support basic JSON RPC functions, such as an RPC server and client. Depends on libev: http://software.schmorp.de/pkg/libev.html
examples/vdpa: support RPC message and handle
Create a VDPA RPC thread to handle RPC messages from the application.