
MelonDS appimage crashes when a specific joystick is selected #2261

Closed
lestcape opened this issue Jan 17, 2025 · 14 comments

@lestcape

I use the 1.0 RC AppImage on Debian 13 Trixie (uncompressed). I have 3 controllers, and only the Logitech G29 Driving Force Racing Wheel causes melonDS to crash instantly after being selected. The Flathub version 0.9.5 works without problems with that controller, although of course melonDS doesn't see it for what it really is, nor do I expect it to. I think it would be pointless to detect it as what it is anyway, because games wouldn't be able to take advantage of this controller.

Debian 13 is missing dependencies needed to compile melonDS directly from source, so I can't rule out whether the error is caused by the AppImage packaging, by melonDS itself, or by an upstream bug in SDL.

The segmentation fault:

$ ./squashfs-root/AppRun
melonDS 1.0 RC
https://melonds.kuribo64.net
Opened "/home/lestcape/.config/melonDS/melonDS.toml" with FileMode 0x1 (effective mode "rb")
Opened "/home/lestcape/.config/melonDS/melonDS.toml" with FileMode 0x20 (effective mode "ab")
Opened "/home/lestcape/.config/melonDS/melonDS.toml" with FileMode 0x1 (effective mode "rb")
MP comm init OK
Audio output frequency: 48000 Hz
Violación de segmento  [Spanish locale: "Segmentation fault"]
@nadiaholmquist
Collaborator

Does it happen with the 1.0 RC flatpak (from the flathub beta repo)?

@lestcape
Author

I didn't know I could install that one, let me try.

gdb logs:

$ gdb ./squashfs-root/usr/bin/melonDS
GNU gdb (Debian 15.2-1+b1) 15.2
Copyright (C) 2024 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later http://gnu.org/licenses/gpl.html
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
https://www.gnu.org/software/gdb/bugs/.
Find the GDB manual and other documentation resources online at:
http://www.gnu.org/software/gdb/documentation/.

For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from ./squashfs-root/usr/bin/melonDS...
(No debugging symbols found in ./squashfs-root/usr/bin/melonDS)
(gdb) star
Ambiguous command "star": start, starti.
(gdb) start
Temporary breakpoint 1 at 0x80a90
Starting program: /Datos/Games/Emuladores-Soft/Emuladores/melonDS-NDS/melonDS-appimage-x86_64/squashfs-root/usr/bin/melonDS
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".

Temporary breakpoint 1, 0x00005555555d4a90 in main ()
(gdb) continue
Continuing.
melonDS 1.0 RC
https://melonds.kuribo64.net
[New Thread 0x7fffe59ff6c0 (LWP 230578)]
[New Thread 0x7fffe51fe6c0 (LWP 230579)]
[New Thread 0x7fffe48d06c0 (LWP 230586)]
[New Thread 0x7fffd3fff6c0 (LWP 230587)]
[New Thread 0x7fffd37fe6c0 (LWP 230588)]
[New Thread 0x7fffd2ffd6c0 (LWP 230589)]
[New Thread 0x7fffd27fc6c0 (LWP 230590)]
[New Thread 0x7fffd1ffb6c0 (LWP 230591)]
[New Thread 0x7fffd17fa6c0 (LWP 230592)]
[New Thread 0x7fffd0ff96c0 (LWP 230593)]
[New Thread 0x7fffb3fff6c0 (LWP 230594)]
[New Thread 0x7fffb37fe6c0 (LWP 230595)]
[Thread 0x7fffb37fe6c0 (LWP 230595) exited]
[Thread 0x7fffb3fff6c0 (LWP 230594) exited]
[Thread 0x7fffd0ff96c0 (LWP 230593) exited]
[New Thread 0x7fffd0ff96c0 (LWP 230596)]
[New Thread 0x7fffb3fff6c0 (LWP 230597)]
[New Thread 0x7fffb37fe6c0 (LWP 230598)]
[Thread 0x7fffb37fe6c0 (LWP 230598) exited]
[Thread 0x7fffb3fff6c0 (LWP 230597) exited]
[Thread 0x7fffd0ff96c0 (LWP 230596) exited]
Opened "/home/lestcape/.config/melonDS/melonDS.toml" with FileMode 0x1 (effective mode "rb")
Opened "/home/lestcape/.config/melonDS/melonDS.toml" with FileMode 0x20 (effective mode "ab")
Opened "/home/lestcape/.config/melonDS/melonDS.toml" with FileMode 0x1 (effective mode "rb")
MP comm init OK
[New Thread 0x7fffd0ff96c0 (LWP 230600)]
Audio output frequency: 48000 Hz
[New Thread 0x7fffb3fff6c0 (LWP 230608)]
[New Thread 0x7fffb37fe6c0 (LWP 230609)]
[New Thread 0x7fffb2ffd6c0 (LWP 230610)]
[New Thread 0x7fffb27fc6c0 (LWP 230611)]
[New Thread 0x7fffb1ffb6c0 (LWP 230612)]
[New Thread 0x7fffb13816c0 (LWP 230613)]
[New Thread 0x7fffb0b806c0 (LWP 230614)]
[New Thread 0x7fff8ffff6c0 (LWP 230615)]

Thread 1 "melonDS" received signal SIGSEGV, Segmentation fault.
0x00007ffff78ab613 in ?? ()
from /Datos/Games/Emuladores-Soft/Emuladores/melonDS-NDS/melonDS-appimage-x86_64/squashfs-root/usr/bin/../lib/libSDL2-2.0.so.0
(gdb) quit
A debugging session is active.

Inferior 1 [process 230538] will be killed.

Quit anyway? (y or n) y
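Since the stripped binary yields no symbols, a useful follow-up (a sketch, not something actually run in this thread) is to let gdb run to the crash non-interactively and print a backtrace, which would at least show the call chain inside libSDL2:

```shell
# Run melonDS under gdb in batch mode and print a backtrace at the crash.
# The binary path matches the session above; guard so this is a no-op
# when gdb or the binary is not present.
if command -v gdb >/dev/null 2>&1 && [ -x ./squashfs-root/usr/bin/melonDS ]; then
    gdb -batch -ex run -ex bt ./squashfs-root/usr/bin/melonDS
else
    echo "gdb or the melonDS binary is not available here" >&2
fi
```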

@lestcape
Author

Does it happen with the 1.0 RC flatpak (from the flathub beta repo)?

The flatpak version of the flathub beta repo works without any issue.

@nadiaholmquist
Collaborator

I guess it must be some weird issue with the SDL2 from Ubuntu 22.04 that the appimages are built with. Probably not much that can be done other than hoping they update it or switching to a newer distro for the builds.

@lestcape
Author

Ohh, well, yes, I have Debian 13 installed because SDL2 works really badly on Debian 12; on Debian 13 all my devices work. I will close the issue in the hope that the switch to a newer distro for the builds comes sooner rather than later.

@Samueru-sama

The current AppImage is built using linuxdeploy.

I don't like linuxdeploy and a lot of the old AppImage tooling; they basically make AppImages that depend on the host libraries and want you to build on Ubuntu 20.04 to prevent glibc incompatibility issues, which just complicates things, since a lot of modern build dependencies are not available on 20.04.

For example, I maintain this AppImage of dolphin-emu, which I recently switched to bundling all dependencies and building on Artix Linux; this is the build part on Artix Linux: https://github.com/pkgforge-dev/Dolphin-emu-AppImage/blob/main/.github/workflows/Dolphin_sharun.yml#L22-L72

And this is what it used to be when built on Ubuntu, which needed a zillion PPAs and hacks to get it to build: https://github.com/pkgforge-dev/Dolphin-emu-AppImage/blob/main/.github/workflows/blank.yml#L22-L79


Because we bundle all the dependencies it means the appimage will be bigger, so in this case the appimage ends up being ~160 MiB instead of the 100 MiB it used to be. The good news is that the AppImage is truly portable as it works on any linux system, even on musl distros and distros that have namespaces disabled.

iirc there are only two tools that deploy AppImages with all dependencies: go-appimage and sharun. The latter is the one I like to use, since it has a mode with strace that finds most of the dlopened libraries; however, I often still need to bundle several libs manually.

@nadiaholmquist
Collaborator

nadiaholmquist commented Jan 17, 2025

Yeah I'm not entirely happy with the way that is set up either. Having to build on some old distro to get sufficiently old libraries and then downloading and running some binaries that may change at any time feels like a very fragile solution to me.

I have considered switching the Linux builds to using vcpkg like the Windows and macOS ones; that way all the dependencies can be built and linked statically into the binary, technically removing the need for an AppImage at all, though maybe we should still provide one if that's what people are used to, perhaps built with some other tooling.

Changing over to the arm64 runners to reuse the same build setup as the x86_64 builds again reminded me of wanting to do that, because linuxdeploy just dies with some filesystem exception on the arm64 runner, and I can't reproduce it locally or figure out what to do about it.

@Samueru-sama

@nadiaholmquist Yeah, building statically is the perfect solution, but if I'm not mistaken you can't statically link libraries like Vulkan and OpenGL. Still, even statically linking some libraries helps a ton.

For example, this mpv AppImage that bundles all the libs as well used to be 80 MiB; I later switched to using the mpv build script, which links FFmpeg statically, and that reduced the size of the AppImage to 22 MiB.


And yes, even if you make a fully static binary, shipping it in an AppImage is still ideal: you can, for example, have working delta updates with AppImageUpdate, and if you make a directory next to the AppImage with the name of the AppImage + .home, the AppImage runtime sets that as $HOME, so you can carry the application along with all its config files anywhere, etc.
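The .home mechanism described above can be sketched like this (the directory and AppImage names are illustrative; a placeholder file stands in for the real AppImage):

```shell
#!/bin/sh
# Sketch of the AppImage ".home" trick: a sibling directory named
# "<AppImage name>.home" is used as $HOME by the AppImage runtime,
# so each copy of an application keeps its own config files.
set -e
mkdir -p demo
touch demo/melonDS-x86_64.AppImage          # stand-in for the real AppImage
chmod +x demo/melonDS-x86_64.AppImage
mkdir -p demo/melonDS-x86_64.AppImage.home  # this becomes $HOME when run
ls demo
```

With the real AppImage in place, its config would land under `demo/melonDS-x86_64.AppImage.home/.config/` instead of the user's home directory.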

@lestcape
Author

lestcape commented Jan 17, 2025

I use the AppImage only for games because:

  • I need/want a setting per game, not one global setting. So I have a different copy of the emulator per game (with the AppImage + .home folder to isolate each copy's settings from the others).
  • I need/want to port the settings (or at least the keybindings) easily to another machine (or to the same machine after a fresh installation).
  • I need/want an individual launcher per game that starts the game in full screen with all the predefined settings applied and doesn't force me to see the emulator again after it's configured the first time.

In summary: I don't want to go through the trouble of configuring the emulator more than once. I want to configure the emulator once for each game and then just launch the game, without changing anything again.

If all that is provided, I don't care what this mechanism is called. Today, the closest thing to what I want is called an AppImage.
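A per-game launcher along those lines might look like the following hypothetical sketch (the ROM path is illustrative; full-screen and keybindings come from the per-copy config saved by the .home trick, not from a command-line flag):

```shell
#!/bin/sh
# Hypothetical launcher kept next to a dedicated AppImage copy.
# The sibling .home directory keeps this copy's settings independent,
# so melonDS starts with whatever was configured the first time.
cd "$(dirname "$0")" || exit 1
if [ -x ./melonDS-x86_64.AppImage ]; then
    # melonDS accepts a ROM path on the command line
    exec ./melonDS-x86_64.AppImage "roms/game.nds"
fi
echo "AppImage not found next to this launcher" >&2
```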

@nadiaholmquist
Collaborator

melonDS-x86_64.zip

Here's an AppImage built using vcpkg dependencies, the way I was considering doing it for the CI builds. I would appreciate it if one of you would give it a try and tell me whether it works fine on your systems.

@Samueru-sama

melonDS-x86_64.zip

Here's an AppImage built using vcpkg dependencies, the way I was considering doing it for the CI builds. I would appreciate it if one of you would give it a try and tell me whether it works fine on your systems.

Trying on ubuntu 20.04:

./AppRun: /lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.30' not found (required by ./AppRun)
./AppRun: /lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.29' not found (required by ./AppRun)
./AppRun: /lib/x86_64-linux-gnu/libstdc++.so.6: version `CXXABI_1.3.13' not found (required by ./AppRun)
./AppRun: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.35' not found (required by ./AppRun)
./AppRun: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by ./AppRun)
./AppRun: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by ./AppRun)
./AppRun: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.33' not found (required by ./AppRun)

This is because the melonDS binary links against several libraries, including glibc, and the version used is newer than the one that comes with 20.04.
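One way to see which glibc symbol versions a binary actually requires (and hence the oldest distro it can run on unmodified) is to scan its version-reference strings. Here `/bin/ls` stands in for the melonDS binary; `objdump -T` from binutils gives the same information more precisely:

```shell
# Quick, portable check of the GLIBC symbol versions a binary references.
# Substitute the melonDS binary for /bin/ls; the highest version listed
# is the minimum glibc the target system must provide.
grep -aoe 'GLIBC_[0-9]\+\.[0-9]\+' /bin/ls | sort -Vu
```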

The fix is easy, and I could hopefully PR this within the next week, since all that needs to be done is bundle what's missing and set a few env variables.

Also, using patchelf I find that the melonDS binary already contains an rpath of $ORIGIN/../lib, which makes the dynamic linker look for libraries in the ../lib dir relative to the binary.

This actually isn't needed. A better method is to bundle the dynamic linker and call it directly: pass the location of the bundled libraries with --library-path, then pass the binary as an argument to the dynamic linker. Other people use LD_LIBRARY_PATH, but that's an env variable that propagates to child processes and often leads to issues as a result.
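That approach (invoking the bundled glibc dynamic linker directly, instead of relying on rpath or LD_LIBRARY_PATH) might look like this hypothetical AppRun sketch; all paths here are illustrative, not the actual melonDS layout:

```shell
#!/bin/sh
# Hypothetical AppRun: call the bundled dynamic linker, point it at the
# bundled libraries with --library-path, and hand it the real binary.
# Nothing is exported, so nothing leaks into child processes.
HERE="$(dirname "$(readlink -f "$0")")"
if [ -x "$HERE/lib/ld-linux-x86-64.so.2" ]; then
    exec "$HERE/lib/ld-linux-x86-64.so.2" \
        --library-path "$HERE/lib" \
        "$HERE/usr/bin/melonDS" "$@"
fi
echo "bundled dynamic linker not found; nothing to run" >&2
```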

@lestcape
Author

lestcape commented Jan 17, 2025

The image works OK on Debian 13, no problem detected on my system. However, @Samueru-sama is a kind of AppImage guru, and he can tell you where the AppImage will and won't work just by knowing which system and which tool you generated it with. He has already tested many mechanisms and generated many images. He simply has the advantage of experience, and that is very important when it comes to predicting possible anomalous behaviors.

@Samueru-sama

The image works OK on Debian 13, no problem detected on my system. However, @Samueru-sama is a kind of AppImage guru, and he can tell you where the AppImage will and won't work just by knowing which system and which tool you generated it with. He has already tested many mechanisms and generated many images. He simply has the advantage of experience, and that is very important when it comes to predicting possible anomalous behaviors.

Yeah, I mean, the AppImage I tested will work for about 85% of Linux users, since most Linux users are on Ubuntu 24.04, Fedora 40+, Arch Linux, etc.

This will not work on Ubuntu 20.04, which iirc is the equivalent of Debian stable. It also won't work on distros that don't have glibc at all, like Alpine or Chimera, which doing what I described above would fix.

@Samueru-sama

@nadiaholmquist I'm looking forward to having the vcpkg builds set up; that way I can gladly do the rest of the steps to make the AppImage fully able to run anywhere, like here.
