ImportError: No module named catkin.environment_cache #378
So what's strange is that all of those errors are from the
Hence why I'm inclined to suspect an issue with the cowbuilder environment, which makes it that much less likely that anyone else has seen anything like this (or indeed, that there's much of anything catkin_tools can do about it). Still, I had to ask. :)
@mikepurvis why not an environment variable: export CATKIN_TOOLS_DEFINITION_OF_INSANITY=n  # retry failed build stages n times
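As a rough sketch of what such a retry knob could look like (the variable name comes from the comment above; the wrapper function itself is hypothetical and not part of catkin_tools):

```python
import os

def run_with_retries(stage, default_retries=0):
    """Run a build-stage callable, retrying it on failure up to the number
    of times given by the (hypothetical) CATKIN_TOOLS_DEFINITION_OF_INSANITY
    environment variable. Re-raises the last failure if all attempts fail."""
    retries = int(os.environ.get("CATKIN_TOOLS_DEFINITION_OF_INSANITY",
                                 default_retries))
    last_exc = None
    for _attempt in range(retries + 1):
        try:
            return stage()
        except Exception as exc:  # a real version would catch a narrower error
            last_exc = exc
    raise last_exc
```

A real implementation would likely hook this around individual job stages rather than a bare callable, but the control flow would be the same.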
(In the meantime, I'm experimenting with switching my build environment to use pbuilder rather than cowbuilder; it's a longer startup time, but obviously worth it if it eliminates this kind of flakiness.)
@wjwwood Would you be open to the addition of a stage retry hack as described above?
Well, perhaps somewhat obviously, I'd prefer to fix the underlying issue. However, if the problem is actually a user's flaky build, then it might be useful to have some options to restart catkin build from where it failed. That said, I'm wary of having the looping logic internal to catkin build, to prevent that code from getting too complicated. So, I was thinking of something like
I guess

Alas, I cannot make
Based on my findings and @davetcoleman's report on the other ticket (ros/catkin#806 (comment)), it doesn't seem like re-building would actually fix the problem anyway, without cleaning the build folder of the failed package. Does anyone have any other ideas what could be going on here to intermittently cause this issue? Digging a bit further on the catkin side, the
The next step here is probably dumping
Okay, so I had switched our infrastructure to use pbuilder rather than cowbuilder, on the hunch that it was an issue with the copy-on-write FS, but the issue struck again today, so I'm back to square one as far as looking for a root cause in catkin/catkin_tools. (FYI @jjekircp)
To start the process of grinding toward an MWE of this issue, I set up a VM last night to repeatedly build a ros_base workspace. Setup like so:
And then running the following script:
The error which I eventually caught (on iteration 228!) was this:
This isn't the exact
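A repeated-build harness of the sort described above can be sketched as follows. This is a hypothetical reproduction loop, not the original (elided) script from the thread, and the command lists are the caller's to supply:

```python
import subprocess

def stress_build(build_cmd, clean_cmd=None, iterations=1000):
    """Run an optional clean command and then a build command repeatedly,
    returning the 1-based iteration on which the build first fails, or
    None if every iteration succeeds. Example command lists might be
    ["catkin", "build"] and ["catkin", "clean", "--yes"]."""
    for i in range(1, iterations + 1):
        if clean_cmd is not None:
            subprocess.run(clean_cmd, check=True)
        if subprocess.run(build_cmd).returncode != 0:
            return i
    return None
```

Capturing stdout/stderr of the failing iteration (e.g. with capture_output=True) would preserve the intermittent error for later inspection.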
Another possible fix is putting a mutex on reads of the setup files. It's a rabbit hole here, but it looks like the magic happens with job.getenv(os.environ), which finally calls the load_env function returned here (which in turn calls through to get_resultspace_environment, which uses repeated
The catkin build Job gets built up here. However, the
IMO, it should be moved into a stage, and each stage passed a common dict() for the env argument, so that when the getenv stage completes, it can mutate that dict and all successive stages will be automatically updated. If we don't like the common dict, then a new GetEnvironmentStage could be created, with another block in
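The shared-dict design proposed above can be illustrated with a minimal sketch. The Stage and run_job names here are hypothetical stand-ins, not the actual catkin_tools classes:

```python
class Stage:
    """Minimal stand-in for a catkin_tools execution stage (the class and
    attribute names here are hypothetical, not the real catkin_tools API)."""
    def __init__(self, name, func):
        self.name = name
        self.func = func

def run_job(stages):
    """Execute stages in order, passing one shared env dict to each stage.
    When an early "getenv" stage mutates the dict, every later stage
    automatically sees the updated environment, with no re-reading of
    setup files between stages."""
    env = {}
    for stage in stages:
        stage.func(env)
    return env
```

The key property is that the environment is loaded exactly once, inside a stage, instead of each stage independently sourcing the result-space setup files, which is where the suspected race on setup-file reads would occur.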
Closed via #391. |
For future visitors: I hit a similar (presumably the same) error (in a private Docker container) with catkin_tools 0.4.4. The solution for me was to
@130s I have the same issue when using CLion and just solved it, but I don't think our problem is the same as the one mentioned above.
/home/*******/APPlication/clion-2019.1.4/bin/cmake/linux/bin/cmake -DCMAKE_BUILD_TYPE=Debug -DCATKIN_DEVEL_PREFIX:PATH=/home/vickylzy/WorkSPacesROS/catkin_ws/devel -G "CodeBlocks - Unix Makefiles" /home/vickylzy/WorkSPacesROS/catkin_ws/src
-- Using CATKIN_DEVEL_PREFIX: /home/vickylzy/WorkSPacesROS/catkin_ws/devel
-- Using CMAKE_PREFIX_PATH:
-- Using PYTHON_EXECUTABLE: /usr/bin/python
-- Using Debian Python package layout
-- Using empy: /usr/bin/empy
-- Using CATKIN_ENABLE_TESTING: ON
-- Call enable_testing()
-- Using CATKIN_TEST_RESULTS_DIR: /home/vickylzy/WorkSPacesROS/catkin_ws/build/test_results
-- Found gmock sources under '/usr/src/gmock': gmock will be built
-- Found gtest sources under '/usr/src/gmock': gtests will be built
-- Using Python nosetests: /usr/bin/nosetests-2.7
-- catkin 0.7.18
-- BUILD_SHARED_LIBS is on
Traceback (most recent call last):
File "/home/*******/WorkSPacesROS/catkin_ws/build/catkin_generated/generate_cached_setup.py", line 20, in <module>
from catkin.environment_cache import generate_environment_script
ImportError: No module named catkin.environment_cache
CMake Error at /opt/ros/kinetic/share/catkin/cmake/safe_execute_process.cmake:11 (message):
execute_process(/usr/bin/python
"/home/*******/WorkSPacesROS/catkin_ws/build/catkin_generated/generate_cached_setup.py")
returned error code 1
Call Stack (most recent call first):
/opt/ros/kinetic/share/catkin/cmake/all.cmake:207 (safe_execute_process)
/opt/ros/kinetic/share/catkin/cmake/catkinConfig.cmake:20 (include)
CMakeLists.txt:56 (find_package)
-- Configuring incomplete, errors occurred!
See also "/home/*******/WorkSPacesROS/catkin_ws/build/CMakeFiles/CMakeOutput.log".
See also "/home/*******/WorkSPacesROS/catkin_ws/build/CMakeFiles/CMakeError.log".
[Failed to reload]
Thanks for any advice on why this problem occurs!
Hello everyone, I am getting this same error, but I am using Windows. After reading the discussion above, I couldn't figure out how to get rid of it. I am a newbie at this, so could someone please help me with this issue?
-- Using CATKIN_DEVEL_PREFIX: E:/trac_ik_git/trac_ik/trac_ik_csharp/build/devel
-- Configuring incomplete, errors occurred!
System Info
Build / Run Issue
I have a large build which runs via Jenkins inside a cowbuilder environment. I periodically have this build fail with an error which looks like the following:
The package which triggers the issue is arbitrary; it seems to happen at random, on various different packages. It looks like some kind of race condition, but it's unclear to me whether the root cause is in catkin_tools or catkin itself.
I don't believe I ever saw this with 0.3.x, but we also switched to 0.4.x pretty early on.
I'm not using eatmydata, so all writes to disk should be fully synced, but it's certainly possible this is coming from an interaction between catkin and the cowbuilder's copy-on-write fs overlay.
I don't have concrete steps to reproduce, but please let me know if there are ways I could instrument my build to supply more meaningful diagnostic information.
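One possible piece of instrumentation (a hypothetical diagnostic helper, not something catkin_tools provides) would be to check, after each package's configure step, whether the module the failing import needs is actually findable with the search paths the build environment produced:

```python
import importlib.util
import sys

def module_visible(name, extra_paths):
    """Check whether a dotted module name (e.g. "catkin.environment_cache")
    can be found when the given extra entries are prepended to sys.path,
    approximating what the failing import inside the build would see.
    Restores sys.path afterwards."""
    saved = list(sys.path)
    try:
        sys.path[:0] = extra_paths
        return importlib.util.find_spec(name) is not None
    finally:
        sys.path[:] = saved
```

Logging this alongside the PYTHONPATH entries in effect at failure time would show whether the race makes the catkin module transiently invisible, or whether the path itself is momentarily wrong.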