vsock does not support --mmap #3810

Closed
pingberlin opened this issue Mar 29, 2023 · 12 comments
Labels
bug Something isn't working

Comments

@pingberlin

pingberlin commented Mar 29, 2023

Describe the bug
xpra attach vsock://3:14500/ --mmap=/dev/shm/linux-alpine-v3.17.2-xpra-plasma-2023-03-15 -d mmap
leads to:

...
2023-03-29 18:41:34,262 debug enabled for xpra.net.mmap_pipe / ('mmap',)
2023-03-29 18:41:34,293 debug enabled for xpra.client.mixins.mmap / ('mmap',)
...
2023-03-29 18:41:34,999 init_mmap(/dev/shm/linux-alpine-v3.17.2-xpra-plasma-2023-03-15, auto, host)
2023-03-29 18:41:35,000 init_mmap('auto', 'host', 536870912, '/dev/shm/linux-alpine-v3.17.2-xpra-plasma-2023-03-15')
2023-03-29 18:41:35,000 Using existing mmap file '/dev/shm/linux-alpine-v3.17.2-xpra-plasma-2023-03-15': 512MB
2023-03-29 18:41:35,000 xpra_group() group(xpra)=972, groups=[10, 975, 1000]
2023-03-29 18:41:35,000 Warning: missing valid socket filename to set mmap group
2023-03-29 18:41:35,001 using mmap file /dev/shm/linux-alpine-v3.17.2-xpra-plasma-2023-03-15, fd=20, size=536870912
2023-03-29 18:41:35,001 write_mmap_token(<mmap.mmap closed=False, access=ACCESS_DEFAULT, length=536870912, pos=0, offset=0>, 0x45c950d1150e416d9f199d733657c16c, 0x13a6d108, 0x80)
...
2023-03-29 18:41:35,457 parse_server_capabilities(..) mmap_enabled=True
2023-03-29 18:41:35,457 read_mmap_token(<mmap.mmap closed=False, access=ACCESS_DEFAULT, length=536870912, pos=0, offset=0>, 0x13b7a72c, 0x80)=0xcc6b36d8ffa9462a8aee0dea93d80196
2023-03-29 18:41:35,457 enabled fast mmap transfers using 512MB shared memory area
2023-03-29 18:41:35,457 XpraClient.clean_mmap() mmap_filename=/dev/shm/linux-alpine-v3.17.2-xpra-plasma-2023-03-15
2023-03-29 18:41:35,457 enabled remote logging
2023-03-29 18:41:35,458 Xpra X11 seamless server version 4.4
2023-03-29 18:41:35,470 Attached to xpra server at vsock://3:14500
2023-03-29 18:41:35,470  (press Control-C to detach)

2023-03-29 18:41:35,506 running

Specifically: the warning "Warning: missing valid socket filename to set mmap group" appears, resulting in very laggy performance, because neither mmap nor compression seems to be used.

To Reproduce
Steps to reproduce the behavior:

  1. server command:
server.argv=('/usr/bin/xpra', 'start', ':14500', '--bind-vsock=auto:14500', '--debug=', '--daemon=yes', '--log-file=xpra.log', '--log-dir=/home/user/.xpra', '--pidfile=/home/user/.xpra/proxy.pid', '--mmap=/sys/devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/resource2')
  2. client command: xpra attach vsock://3:14500/ --mmap=/dev/shm/linux-alpine-v3.17.2-xpra-plasma-2023-03-15 -d mmap

System Information (please complete the following information):

  • Server OS: Alpine Linux v3.17
  • Client OS: Fedora Linux 36
  • Xpra Server Version 4.4.3-r1
  • Xpra Client Version 5.0 revision 32985 commit gdd9b8b007 from master branch

Additional context
Last time, we talked about this here: #1387

Thanks! :)

@pingberlin pingberlin added the bug (Something isn't working) label on Mar 29, 2023
@pingberlin
Author

It seems the performance of the IVSHMEM device is very low on AMD, which I switched to:
https://community.amd.com/t5/server-gurus-discussions/amd-terrible-performance-with-qemu-ivshmem/m-p/518643
https://www.reddit.com/r/VFIO/comments/trjmuf/ivshmem_is_very_slow_on_ryzen_5900x_and_possibly/?rdt=52326

Using --mmap=/sys/devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/resource2_wc, as recommended in those links, resolved the issue.

I was debugging this the whole day before finding these links...

The ticket can be closed.

@totaam
Collaborator

totaam commented Mar 30, 2023

@pingberlin would you mind helping to document how you used ivshmem, and where you got the --mmap=/sys/devices/... syntax from?

@pingberlin
Author

@totaam Would you like this information here or in the wiki?

On the host, I used the tool virt-install to build the qemu command line:

VM_NAME="alpine-linux"
VM_VERSION="1.32"
virt-install --osinfo alpinelinux3.17 \
...$further_disk_and_graphics_and_vsock_options_here... \
--shmem name="${VM_NAME}-${VM_VERSION}",model.type=ivshmem-plain,size.unit=M,size=512

That would create the shared memory file at /dev/shm/alpine-linux-1.32 with a size of 512MB.
(This is a simpler procedure than the one discussed in the ticket 7 years ago.)
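For those driving QEMU directly, the equivalent raw command-line options should look roughly like this (a sketch, not taken verbatim from my setup; the memory-backend id is an arbitrary example and the mem-path matches the file created above):

# Sketch: expose a 512MB host file as an ivshmem-plain device (names are examples)
qemu-system-x86_64 ... \
  -object memory-backend-file,id=ivshmem0,mem-path=/dev/shm/alpine-linux-1.32,size=512M,share=on \
  -device ivshmem-plain,memdev=ivshmem0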


In the guest, I used lspci to get the device PCI-ID:

localhost:/$ lspci | grep shared
02:01.0 RAM memory: Red Hat, Inc. Inter-VM shared memory (rev 01)

Since everything is a file under Linux, this results in the following path:

/sys/devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0

We need to enable the device:

echo 1 > /sys/devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/enable

We then use the shared memory path (don't use the file resource2 - use resource2_wc instead):

xpra start ... --mmap=/sys/devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/resource2_wc

If you have further questions, don't hesitate to ask.
But I have one myself:
Do you think xpra should find the path for the user, or should the user write a script to find it and supply the result to xpra start --mmap=...? Remember: the PCI ID can be different for each VM, depending on the PCI devices attached.

@totaam
Collaborator

totaam commented Mar 30, 2023

I think this deserves its own wiki page under https://github.com/Xpra-org/xpra/tree/master/docs/Network

I guess we should be able to figure out the path? Maybe something like --mmap=ivshmem?
Is there any validation we could do with it?
What does your script look like?

@pingberlin
Author

pingberlin commented Mar 30, 2023

Well, I currently have no script, but the following command would suffice to find suitable devices:

/ # find /sys/devices/ -name "resource2_wc"
/sys/devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/resource2_wc

If validation is desired, I would check for the proper device IDs in that directory.

Also note: more than one IVSHMEM device may be attached. So instead of --mmap=ivshmem, I would use e.g. --mmap=ivshmem:2 for the second attached IVSHMEM device, and --mmap=ivshmem:1 for the first. Omitting the index, as in --mmap=ivshmem, could also refer to the first one.
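A minimal sketch of how such an index could be resolved, building on the find command above (ivshmem:N meaning the N-th match, sorted by path; the index argument is a proposal, not existing xpra syntax):

#!/bin/sh
# Sketch: print the resource2_wc path of the N-th IVSHMEM device (default: the first one found)
N="${1:-1}"
find /sys/devices/ -name "resource2_wc" | sort | sed -n "${N}p"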


Concerning the documentation:

  1. Would you like to see this under the "Examples" section here? https://github.com/Xpra-org/xpra/tree/master/docs/Network#examples
  2. For ease of use, I would like to note a one-liner using the tool virt-install, as well as the QEMU command line.

@totaam
Collaborator

totaam commented Mar 30, 2023

If validation is desired, I would check for the proper device IDs in that directory.

What would the "proper device IDs" be?

Would you like to see this under the "Examples" section here?

Probably not.
virsh is too specialized; the examples can go on the new dedicated page.

For ease of use, I would like to note a oneliner using the tool virt-install, as well as the qemu commandline.

Sounds good.

@pingberlin
Author

The proper device ID is 1af4:1110.
Also see: https://pci-ids.ucw.cz/read/PC/1af4/1110
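A sketch of the kind of check this enables, reading the sysfs vendor and device files of a candidate directory (path as in the earlier example):

# Sketch: confirm the candidate PCI device really is an ivshmem device (1af4:1110)
DEV=/sys/devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0
if [ "$(cat "$DEV/vendor")" = "0x1af4" ] && [ "$(cat "$DEV/device")" = "0x1110" ]; then
    echo "ivshmem device found at $DEV"
fi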

I'll give you an update, once I have the documentation done.

@totaam
Collaborator

totaam commented Jul 4, 2023

@pingberlin ping!

@pingberlin
Author

Hi totaam!
Sorry for the long delay. It's on my list and I'll document it by the end of August.
Please leave the ticket open.
Thanks for your patience!

@pingberlin
Author

Hi @totaam
Some corrections:

  1. Since the IVSHMEM device only exists in the guest, you cannot enable it from the host.
    So please change "enable the shared memory device on the host:"
    to "enable the shared memory device on the guest:".
  2. Since only the host sees the mmap file in its /dev/shm directory,
    please change "from the guest, connect to the same device:"
    to "from the host, connect to the same device:".

Only then will it work. Please also verify this yourself, if you can.
Thanks.
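Putting the corrected roles together with the commands from this ticket (a sketch; paths, CID and port as in the original report):

# Guest (server): enable the ivshmem PCI device and point xpra at its write-combining BAR2
echo 1 > /sys/devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/enable
xpra start :14500 --bind-vsock=auto:14500 --mmap=/sys/devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/resource2_wc

# Host (client): attach over vsock, mapping the same memory via the /dev/shm file created by QEMU/libvirt
xpra attach vsock://3:14500/ --mmap=/dev/shm/linux-alpine-v3.17.2-xpra-plasma-2023-03-15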

totaam added a commit that referenced this issue Oct 4, 2023
@totaam
Collaborator

totaam commented Oct 4, 2023

@pingberlin the commit above should do that.
FYI: you can just edit the files using the GitHub GUI and it will create a PR from them.
I really don't have the time to try this right now. It is what it is.

@totaam
Collaborator

totaam commented Nov 17, 2024

Some more ivshmem documentation links:
