Quote paths in docker invocation #622

Open
juliocartier opened this issue May 26, 2024 · 4 comments · May be fixed by #670
Labels
bug Something isn't working

Comments

@juliocartier

When I try to run it on Windows using Docker, I get the error below. I have already updated the Python I'm running (it is currently version 3.11.4), and the error still occurs. The Docker process dies.

Running cmd docker kill tpot.test.test.docker.20240526T200606.fSf0WHzreua0GuH_Z7N4ZA__
Error response from daemon: cannot kill container: tpot.test.test.docker.20240526T200606.fSf0WHzreua0GuH_Z7N4ZA__: No such container: tpot.test.test.docker.20240526T200606.fSf0WHzreua0GuH_Z7N4ZA__

Job docker.test.test.all_tasks.all_folds.TPOT failed with error: Command 'docker run --name tpot.test.test.docker.20240526T200606.fSf0WHzreua0GuH_Z7N4ZA__ --shm-size=2048M -v C:\Users\Maicon Santos.openml:/input -v C:\projetos-pythons\autobenchmark\automlbenchmark\results\tpot.test.test.docker.20240526T200606:/output -v C:\Users\Maicon Santos.config\automlbenchmark:/custom --rm automlbenchmark/tpot:stable-v2.1.7 tpot test test -Xseed=auto -i /input -o /output -u /custom -s skip -Xrun_mode=docker --session=' returned non-zero exit status 125.
Traceback (most recent call last):
  File "C:\projetos-pythons\autobenchmark\automlbenchmark\amlb\job.py", line 120, in start
    result = self._run()
             ^^^^^^^^^^^
  File "C:\projetos-pythons\autobenchmark\automlbenchmark\amlb\runners\container.py", line 109, in _run
    self.start_container("{framework} {benchmark} {constraint} {task_param} {folds_param} -Xseed={seed}".format(
  File "C:\projetos-pythons\autobenchmark\automlbenchmark\amlb\runners\docker.py", line 76, in start_container
    run_cmd(cmd, capture_error=False) # console logs are written on stderr by default: not capturing allows live display
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\projetos-pythons\autobenchmark\automlbenchmark\amlb\utils\process.py", line 281, in run_cmd
    raise e
  File "C:\projetos-pythons\autobenchmark\automlbenchmark\amlb\utils\process.py", line 255, in run_cmd
    completed = run_subprocess(str_cmd if params.shell else full_cmd,
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\projetos-pythons\autobenchmark\automlbenchmark\amlb\utils\process.py", line 98, in run_subprocess
    raise subprocess.CalledProcessError(retcode, process.args, output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command 'docker run --name tpot.test.test.docker.20240526T200606.fSf0WHzreua0GuH_Z7N4ZA__ --shm-size=2048M -v C:\Users\Maicon Santos.openml:/input -v C:\projetos-pythons\autobenchmark\automlbenchmark\results\tpot.test.test.docker.20240526T200606:/output -v C:\Users\Maicon Santos.config\automlbenchmark:/custom --rm automlbenchmark/tpot:stable-v2.1.7 tpot test test -Xseed=auto -i /input -o /output -u /custom -s skip -Xrun_mode=docker --session=' returned non-zero exit status 125

@PGijsbers
Collaborator

Looking up the error code 125 for docker run indicates that the command is invalid/contains invalid values. Based on your log, my suspicion is that the space in the mount path (between Maicon and Santos) is invalid. This can likely be resolved on our end by making sure the file paths are quoted.
On your end, you could change the benchmark configuration user_dir and input_dir in resources\config.yaml to point to paths without spaces.

If you can try a run with that configuration and confirm whether it works, I would appreciate it. If my hunch is correct, I'll set up a patch branch that quotes the provided paths for you to try out when I can find the time.
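
To make the failure mode concrete, here is a minimal Python sketch (not amlb's actual command construction; the image name and variable names are placeholders) of why an unquoted mount path containing a space breaks `docker run`, and how quoting the `-v` value avoids it:

```python
# Minimal sketch, not amlb's actual code; "some/placeholder-image" is a placeholder.
input_dir = r"C:\Users\Maicon Santos\.openml"  # host path containing a space

# Unquoted: the command line is split at the space, so docker receives
# "Santos\.openml:/input" as a stray token and misreads an argument as the
# image reference -> "invalid reference format", exit status 125.
bad_cmd = f"docker run -v {input_dir}:/input --rm some/placeholder-image"

# Quoted: the whole host-path:container-path pair stays a single argument.
good_cmd = f'docker run -v "{input_dir}:/input" --rm some/placeholder-image'

print(bad_cmd)
print(good_cmd)
```

This matches the `docker: invalid reference format` message in the follow-up log below.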

@PGijsbers added the bug (Something isn't working) label on May 27, 2024
@juliocartier
Author

@PGijsbers I configured it in two ways and it still resolves to the folder that has a space, which is why I find it strange. I set the configuration to the path C:\projetos-pythons\autobenchmark\automlbenchmark and also to '{root}'. At first, while the image is being pulled, it recognizes the folder paths, but when the framework runs, the error persists.

Starting job docker.test.test.all_tasks.all_folds.RandomForest.
[MONITORING] [docker.test.test.all_tasks.all_folds.RandomForest] Disk Usage: 71.8%
Starting docker: docker run --name randomforest.test.test.docker.20240528T002142.V0ZVrW1FfJgeYEim_PCNnQ__ --shm-size=2048M -v C:\Users\Maicon Santos\.openml:/input -v C:\projetos-pythons\autobenchmark\automlbenchmark\results\randomforest.test.test.docker.20240528T002142:/output -v C:\Users\Maicon Santos\.config\automlbenchmark:/custom --rm automlbenchmark/randomforest:stable randomforest test test -Xseed=auto -i /input -o /output -u /custom -s skip -Xrun_mode=docker --session=.
Datasets are loaded by default from folder C:\Users\Maicon Santos\.openml.
Generated files will be available in folder C:\projetos-pythons\autobenchmark\automlbenchmark\results.
Running cmd `docker run --name randomforest.test.test.docker.20240528T002142.V0ZVrW1FfJgeYEim_PCNnQ__ --shm-size=2048M -v C:\Users\Maicon Santos.openml:/input -v C:\projetos-pythons\autobenchmark\automlbenchmark\results\randomforest.test.test.docker.20240528T002142:/output -v C:\Users\Maicon Santos.config\automlbenchmark:/custom --rm automlbenchmark/randomforest:stable randomforest test test -Xseed=auto -i /input -o /output -u /custom -s skip -Xrun_mode=docker --session=`
docker: invalid reference format.
See 'docker run --help'.
Running cmd `docker kill randomforest.test.test.docker.20240528T002142.V0ZVrW1FfJgeYEim_PCNnQ__`
Error response from daemon: cannot kill container: randomforest.test.test.docker.20240528T002142.V0ZVrW1FfJgeYEim_PCNnQ__: No such container: randomforest.test.test.docker.20240528T002142.V0ZVrW1FfJgeYEim_PCNnQ__

Job docker.test.test.all_tasks.all_folds.RandomForest failed with error: Command 'docker run --name randomforest.test.test.docker.20240528T002142.V0ZVrW1FfJgeYEim_PCNnQ__ --shm-size=2048M -v C:\Users\Maicon Santos.openml:/input -v C:\projetos-pythons\autobenchmark\automlbenchmark\results\randomforest.test.test.docker.20240528T002142:/output -v C:\Users\Maicon Santos.config\automlbenchmark:/custom --rm automlbenchmark/randomforest:stable randomforest test test -Xseed=auto -i /input -o /output -u /custom -s skip -Xrun_mode=docker --session=' returned non-zero exit status 125.
Traceback (most recent call last):
  File "C:\projetos-pythons\autobenchmark\automlbenchmark\amlb\job.py", line 120, in start
    result = self._run()
             ^^^^^^^^^^^
  File "C:\projetos-pythons\autobenchmark\automlbenchmark\amlb\runners\container.py", line 109, in _run
    self.start_container("{framework} {benchmark} {constraint} {task_param} {folds_param} -Xseed={seed}".format(
  File "C:\projetos-pythons\autobenchmark\automlbenchmark\amlb\runners\docker.py", line 76, in start_container
    run_cmd(cmd, capture_error=False) # console logs are written on stderr by default: not capturing allows live display
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\projetos-pythons\autobenchmark\automlbenchmark\amlb\utils\process.py", line 281, in run_cmd
    raise e
  File "C:\projetos-pythons\autobenchmark\automlbenchmark\amlb\utils\process.py", line 255, in run_cmd
    completed = run_subprocess(str_cmd if params.shell else full_cmd,
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\projetos-pythons\autobenchmark\automlbenchmark\amlb\utils\process.py", line 98, in run_subprocess
    raise subprocess.CalledProcessError(retcode, process.args, output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command 'docker run --name randomforest.test.test.docker.20240528T002142.V0ZVrW1FfJgeYEim_PCNnQ__ --shm-size=2048M -v C:\Users\Maicon Santos.openml:/input -v C:\projetos-pythons\autobenchmark\automlbenchmark\results\randomforest.test.test.docker.20240528T002142:/output -v C:\Users\Maicon Santos.config\automlbenchmark:/custom --rm automlbenchmark/randomforest:stable randomforest test test -Xseed=auto -i /input -o /output -u /custom -s skip -Xrun_mode=docker --session=' returned non-zero exit status 125.

C:\projetos-pythons\autobenchmark\automlbenchmark\venv\Lib\site-packages\_distutils_hack\__init__.py:33: UserWarning: Setuptools is replacing distutils.
  warnings.warn("Setuptools is replacing distutils.")
Running benchmark `randomforest` on `test` framework in `docker` mode.
Loading frameworks definitions from ['C:\\projetos-pythons\\autobenchmark\\automlbenchmark\\resources\\frameworks.yaml'].
Loading benchmark constraint definitions from ['C:\\projetos-pythons\\autobenchmark\\automlbenchmark\\resources\\constraints.yaml'].
Loading benchmark definitions from C:\projetos-pythons\autobenchmark\automlbenchmark\resources\benchmarks\test.yaml.
Running cmd `docker images -q automlbenchmark/randomforest:stable`
Running cmd `docker pull automlbenchmark/randomforest:stable`
stable: Pulling from automlbenchmark/randomforest

@juliocartier
Author

Is it possible to run this on Ubuntu 22.04 in Docker?

@PGijsbers
Collaborator

Sorry for the slow reply, somehow this issue fell off the radar. In case it is still relevant to you or anyone else:
What seems to be happening is that the input path defaults to the openml cache directory, which lives inside your user directory; in this case that directory contains a space, which leads to the error.

Specifying the input directory explicitly as a quoted path (or one without spaces) may work. I'll try to patch the underlying issue of unquoted mount paths.
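
For reference, a minimal sketch of one possible way to quote mount paths when assembling the command. This is only an illustration: `volume_arg` is a hypothetical helper and the output path is a placeholder, not the actual patch.

```python
# Illustrative sketch only; volume_arg is a hypothetical helper, not amlb API.
def volume_arg(host_dir: str, container_dir: str) -> str:
    mount = f"{host_dir}:{container_dir}"
    # Double-quote the whole host:container pair so a space in the host path
    # (e.g. C:\Users\Maicon Santos\.openml) survives command-line splitting.
    return f'-v "{mount}"'

cmd = " ".join([
    "docker run --shm-size=2048M",
    volume_arg(r"C:\Users\Maicon Santos\.openml", "/input"),
    volume_arg(r"C:\projetos-pythons\autobenchmark\automlbenchmark\results\example-run", "/output"),
    volume_arg(r"C:\Users\Maicon Santos\.config\automlbenchmark", "/custom"),
    "--rm automlbenchmark/randomforest:stable",
])
print(cmd)
```

On POSIX shells, shlex.quote would be the more robust choice; plain double quotes are shown here because the failing runs are on Windows.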

@PGijsbers changed the title from "Error when running on Windows" to "Quote paths in docker invocation" on Dec 9, 2024
@PGijsbers linked a pull request on Dec 12, 2024 that will close this issue