libatomic.so.1: cannot open shared object file: No such file or directory #30
Hmmm, that’s an interesting predicament.
Thanks for looking into this issue @leafac 👍 I was able to reproduce the error with the […]. Produce a […]
Then execute […]
Note how executing […]
If it worked, perhaps! We would not want to interfere with the shared libraries of the target / destination system (e.g. different versions of the same shared library), which means that the bundled shared libraries would have to be located in a non-standard folder, with env vars (such as LD_LIBRARY_PATH) used to tell the embedded Node.js executable to look for the libraries in that alternative location. It sounds to me like it would be cleaner to embed a fully statically compiled Node.js binary in the caxa executable (assuming it would work and avoid the `libatomic.so.1` error in the first place).
@maxb2: Did you run into this issue with the Raspberry Pi builds of Dungeon Revealer? How did you fix it?
If we can’t find a pre-compiled statically linked Node.js for ARM, then I guess the solution would be to come up with one ourselves. I believe that’s outside the scope of caxa, as caxa’s job is just to package the Node.js you brought to the party. But we could take on such a project. We could use Docker to emulate ARM, use GitHub Actions to run the tasks, and GitHub Releases to distribute. Pretty much the infrastructure we have to compile the stubs. The only part we’d have to figure out is the incantations necessary to statically compile Node.js. Also, the builds will take forever. But it sounds doable…
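For reference, a sketch of how the emulation part could look on an x86 host. This assumes Docker with buildx and the tonistiigi/binfmt image to register QEMU handlers; the image tag is made up for illustration:

```sh
# Register QEMU binfmt handlers so the x86 host can run ARM binaries:
docker run --privileged --rm tonistiigi/binfmt --install arm
# Then build an ARM v7 image on the x86 host:
docker buildx build --platform linux/arm/v7 -t node-armv7-static .
```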
I did occasionally run into this on the raspi at runtime. I just installed libatomic the same as @pdcastro. It really just depends on the distro and what the user has already installed. It's not ideal, but we are talking about Linux users. They probably are fine with installing an extra library. I'm guessing that libatomic is left out of "slim" images.
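For reference, a sketch of the install commands (package names assumed from the usual distro repositories; check yours):

```sh
# Debian / Ubuntu / Raspberry Pi OS:
sudo apt-get update && sudo apt-get install -y libatomic1
# Alpine:
apk add libatomic
```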
This is actually pretty easy; there's a Stack Overflow answer describing how to statically compile Node.js.
Actions have a time limit, unfortunately. Emulating ARM will also make it take even longer.
I'm going to time how long it takes to statically compile node for armv7 on my desktop. I'll check back in when it is done.
Related issue: nodejs/node#37219
Thanks for the information! On my laptop it took around 2 hours to compile Node.js. I suppose that we’d stay under the 6-hour time limit if we were to compile using ARM on GitHub Actions. Worst-case scenario I guess we could run it on one of our machines… I just hope that it’s as simple as that Stack Overflow answer seems to indicate. With these things the devil is always in the details… Are y’all interested in taking over this project? I probably won’t have the opportunity to work on this in the near future…
I tried both emulated and cross-compiling for a static arm build of node. I kept running into new issues. It turned into whack-a-mole. I also don't have time to work on this in the near future.
Yeah, that’s how I thought it’d turn out. I’ll keep the issue open for when someone steps up.
After whacking enough moles and waiting enough hours, :-) I can share some early, encouraging results. With Dockerfiles very similar to the one from Stack Overflow (linked in an earlier comment), I've got static builds of Node.js v12 (a version I wanted to use) and also, "accidentally," the very latest Node.js v17.0.0-pre.

Dockerfile for Node.js v12:

```dockerfile
FROM alpine:3.11.3
RUN apk add git python gcc g++ linux-headers make
WORKDIR /usr/src/app
ENV NODE_VERSION=v12.22.3
RUN git clone https://github.com/nodejs/node && cd node && git checkout ${NODE_VERSION}
RUN cd node && ./configure --fully-static --enable-static
RUN cd node && make
```

Dockerfile for Node.js' master branch (v17.0.0-pre on 06 July 2021):

```dockerfile
FROM alpine:3.11.3
RUN apk add git python3 gcc g++ linux-headers make
WORKDIR /usr/src/app
RUN git clone https://github.com/nodejs/node
RUN cd node && ./configure --fully-static --enable-static
RUN cd node && make
```

Note that the two Dockerfiles use different versions of Python and check out different branches of Node.js. They were built with Docker v20.10.7, with a command line similar to:

```sh
docker build -t node12-armv7-static-alpine --platform linux/arm/v7 .
```

In both cases, an ARM […]. Above, the Node.js binary that produced the […]. Note also the file sizes in the […]. I haven't yet tested using the statically compiled Node.js versions with caxa.
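In case it helps others reproduce this, a sketch of how the built binary can be pulled out of the image and checked for static linking (out/Release/node is the standard Node.js build output path; the image tag matches the build command above):

```sh
# Copy the compiled binary out of the image:
docker create --name tmp-node node12-armv7-static-alpine
docker cp tmp-node:/usr/src/app/node/out/Release/node ./node-armv7-static
docker rm tmp-node
# A statically linked binary should be reported as such:
file ./node-armv7-static
```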
Other notes:
That sounds like users would not only have to install shared libraries, but also match the version used during Node.js compilation. If so, it would be much worse than a dynamically linked Node.js binary! Googling it, I found this other Node.js issue and comment: […]
Well that's good to know!
@pdcastro, I adapted what you've done so far at maxb2/static-node-binaries. I've compiled v12, v14, and v16 on my local machine and created releases with the binaries. I've also pushed the final Docker images to Docker Hub. I doubt that we'll be able to compile these on GitHub Actions due to the usage limits.
It may be possible to set up a self-hosted runner to do the compilation; however, it also has limitations: […]
Someone would have to volunteer some hardware for that, though. I do have a crazy idea to get around the job time limit. We could use […]
That's clever! Related to this idea: […]
But the […]
I think I like this better. It would go something like: […]
I also need to figure out some stopping logic.
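A rough sketch of what a chunked build with stopping logic might look like, assuming ccache is available and the build directory is cached between CI jobs (none of this is settled; just an illustration):

```sh
#!/bin/sh
# Each CI job resumes the incremental build and stops before the job time limit.
export CC="ccache gcc" CXX="ccache g++"
cd node
# Let make run for at most 5 hours, then stop; already-compiled objects are
# kept in the cache, so the next job picks up where this one left off.
timeout 5h make -j"$(nproc)" && echo "build complete" || echo "to be continued"
```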
We'd have to do that manually or specify the make targets. I'd rather not do that.
Granted, this is assuming that the […]
Y’all are doing some awesome work here. I love the hack to work around GitHub Actions time limits. You’re really pushing the envelope of what the tool’s supposed to do. A few questions: […]
Absolutely!
Briefly. It's surprisingly difficult to find the time limits for each service.
Breaking the compilation into chunks may be the only option that doesn't cost money. Might as well keep it on GitHub, then.
Quoting myself, I often say, "if it's not tested, it's broken," and so it is: the statically compiled Node.js binaries for armv7, built using the Dockerfiles proposed earlier in this issue, fail to execute […]
A new chapter in this saga. @maxb2 managed to fix the openssl issue on ARMv7 (maxb2/static-node-binaries#6), 🎉 and I went on to test it further. Caxa's code executes all right with the statically compiled Node.js, and my app (experimentally, the balena CLI) gets extracted all right. But when I run certain balena CLI commands, I get: 💥
This error happens on Intel / amd64 as well, not just on ARM. What happens, I gather, is that native node modules cannot be dynamically loaded when Node.js is compiled statically. Native node modules are files with the `.node` extension. Talking of […]
ARMv7 is important for the balena CLI. I assume, but I don't know for sure, that it is not possible to enable the dynamic loading of native node modules when Node.js is compiled statically. In that case, the approach of using a statically compiled Node.js binary is fundamentally flawed for apps that make use of native node modules, like the balena CLI. To me, it now sounds like going back to square one: […]
This approach could have complications: the libraries are definitely different between Debian and Alpine. And even if we discarded Alpine and considered only glibc-based distros like Debian and Ubuntu, I wonder if we would have to match the version of glibc (?), or some other library, installed in the system. It might be possible; we'd have to investigate. I've just had a related idea: instead of bundling the libraries, caxa's Go stub could offer to install them, e.g. by automatically executing […]. Also: if we found that […]
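As an aside, a quick way to check whether a given app would be affected by the native-modules limitation at all (just a sketch; run from the app's root directory):

```sh
# Native node modules are compiled .node files; if this prints nothing,
# the app has no native modules and a static Node.js would be fine:
find node_modules -name '*.node'
```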
Thank y’all for the amazing investigative work. I’m learning so much from you! It’s too bad that statically linked Node can’t load native modules… But I’m sure we’ll come up with something that works! I like the idea of using the dynamically linked Node and just installing the missing dependencies as a courtesy to the user. But I propose that we don’t do it in the stub. I believe the stubs should be as simple as possible. First, because I’m trying to avoid writing Go 😛 But also because we have multiple packaging strategies: the all-popular Go stub, the macOS application bundle (.app) (which probably wouldn’t be affected by the change we’re discussing here), and the brand-new Shell Stub. The simpler the stubs, the easier it is to keep them all in sync. Here’s what I propose instead: running Node happens after the extraction, at which point we could run a shell script that prompts the user to install the missing libraries. The beauty of this solution is that it’s all in user-land from caxa’s perspective: you can get it working today. The downside is that it relies on whatever shell there is on the user’s machine, but for something as simple as what we need, the lowest common denominator, sh, will probably suffice. Think of it as an addendum to the Shell Stub. Of course, once we get something working, we can include such a script in caxa as a courtesy to the packagers.
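A minimal sketch of what such a post-extraction script could look like, in plain sh as proposed. The paths and entry point are hypothetical, not something caxa defines:

```sh
#!/bin/sh
# Hypothetical layout: the caxa-extracted directory contains the bundled
# Node.js binary and the app's entry point.
NODE_BIN="./node_modules/.bin/node"   # hypothetical path to the bundled Node.js
# Ask the dynamic linker which shared libraries it cannot resolve:
missing="$(ldd "$NODE_BIN" 2>/dev/null | grep 'not found')"
if [ -n "$missing" ]; then
  echo "This application needs shared libraries that are not installed:" >&2
  echo "$missing" >&2
  echo "On Debian/Ubuntu, for example: sudo apt-get install libatomic1" >&2
  exit 1
fi
exec "$NODE_BIN" ./index.js "$@"      # hypothetical entry point
```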
I agree that a post-extraction shell script is the right way to go. Would […]
I’m happy to host and distribute some scripts with caxa for the packager’s convenience, but ultimately it’s their responsibility to make sure the scripts work for them: the scripts become part of their application.
Nice ideas. 👍 Just adding / emphasising that there are still alternatives to be explored further: […]
Note also that, in a fresh environment like […]. Of course, we can also have multiple solutions: the convenience shell script that detects missing shared libraries (*) and installs them, which could work "today," and static Node.js binaries if/when we find a solution to make them work with native node modules, or for apps that don't use native node modules. (*) Detecting whether shared libraries are missing before attempting […]
Of course. I believe that fixing this issue closer to the source would be ideal, either by addressing nodejs/node#37219 or by finding the right combination of compilation options. Meanwhile, the workaround script we’ve been talking about doesn’t necessarily have to […]
How distro-specific is […]
So that's a certain version of […]. Not saying it couldn't / shouldn't be done, just pointing out some complications and things to consider.
Then the user would have to manually run […]
I had mentioned this earlier, but it is not to say that I think we shouldn't do it, just that it is a disadvantage compared to an ideal static Node.js binary that worked with native Node modules. But an even bigger disadvantage is an ideal static Node.js binary that did not exist, :-) or a static Node.js binary that did not work with my application (no support for native Node modules).
Hi y’all, Thanks for using caxa and for the conversation here. I’ve been thinking about the broad strategy employed by caxa and concluded that there is a better way to solve the problem. It doesn’t address the issue of statically linked Node.js binaries, but I wonder if that’s still a use case that you need to support 🤔 It’s a different enough approach that I think it deserves a new name, and it’s part of a bigger toolset that I’m building, which I call Radically Straightforward · Package. I’m deprecating caxa and archiving this repository. I invite you to continue the conversation in Radically Straightforward’s issues. Best.
I don't think this is a bug with caxa (it depends on one's point of view!), yet I think it is an issue that caxa users will come across. For one, I am still looking for a convenient solution (one that doesn't require me to compile Node.js from source).
I have created a caxa-based executable for my app, for ARM v7. Then, when trying to run it in a Docker container for the ARM platform, I got the following error regarding a missing shared library: `libatomic.so.1: cannot open shared object file: No such file or directory`
I believe that the problem is that, when creating the caxa executable, I was using a standard Node.js installation that uses shared / dynamically linked libraries. Then caxa bundled in the base Node.js executable, but not the shared libraries. For the `libatomic.so.1` library in particular, the error above can be avoided if the end users of my app install the library before running the caxa-based executable. However, at least my use case for caxa is to simplify end users' life by avoiding pre-requisites like installing Node.js (#20), "just download this executable and run it", and if I had to ask end users to install shared libraries before running the caxa executable, it would spoil the experience.
I assume that the solution is to use a fully statically compiled version of Node.js (one that includes `libatomic.so.1`) when creating the caxa executable. Where to find that, though? For all architectures supported by caxa: x64, ARM v6, ARM v7, ARM 64. I gather that the standard Node.js builds offered for download are dynamically linked: https://nodejs.org/en/download/
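As a quick sanity check of that assumption, the dynamic linker itself can tell whether a given Node.js binary is static or not:

```sh
# Prints the shared-library dependencies of the installed node binary;
# a fully static binary would instead print "not a dynamic executable":
ldd "$(command -v node)"
```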