Some problems with the evaluation on leaderboard #138

Closed
Sou0602 opened this issue Dec 29, 2022 · 4 comments
Sou0602 commented Dec 29, 2022

I was trying to submit the transfuser model to the CARLA Leaderboard, and the submission stays pending while it uses compute and never progresses. I tested the image in a local docker container and found that run_evaluation.sh times out because the client cannot connect to the server. Normally I run the server in a different terminal, but when I try to do this in the container, make_docker.sh does not copy ./CarlaUE4.sh into the image, so there is no usual way to run the CARLA server in the container. Can someone share more details on how to connect to or run the server with the docker container, or has anyone faced a similar issue?

Kait0 (Collaborator) commented Dec 29, 2022

You simply run CARLA outside of the docker container.
The server can communicate with the client running inside the docker container.
You can see how to configure the docker start properly here.
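The setup described above can be sketched roughly as follows (the port, image name, and flags are assumptions for illustration, not taken from the thread):

```shell
# On the host (outside docker): start the CARLA server.
# CarlaUE4.sh lives in the CARLA installation on the host, not in the
# leaderboard image, which is why make_docker.sh does not copy it.
./CarlaUE4.sh --world-port=2000 -opengl &

# In a second terminal: start the agent container. With host networking,
# the client inside the container can reach the server on localhost:2000.
# (image name "transfuser-agent:latest" is a hypothetical placeholder)
docker run --net=host --gpus all transfuser-agent:latest
```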

Sou0602 (Author) commented Jan 6, 2023

I tried running run_docker.sh in the tools folder of the transfuser directory, and I get this error:
docker: Error response from daemon: invalid mount config for type "bind": bind source path does not exist: /home/transfuser/results/. See 'docker run --help'.
The folder already exists, but the error says the source path does not exist.
When I change --mount to -v, I get this error instead:
docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].

I am not sure how to get around this.
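For reference, the bind-mount error above usually means Docker cannot resolve the source directory on the host (unlike -v, --mount type=bind refuses to create a missing source). A minimal sketch of the usual workaround, using a hypothetical path since the actual host layout isn't shown in the thread:

```shell
# --mount type=bind fails if the source directory does not exist on the
# host, while -v silently creates it. Create the directory up front:
RESULTS_DIR=/tmp/transfuser_results   # hypothetical; the error above refers to /home/transfuser/results/
mkdir -p "$RESULTS_DIR"
# then, e.g.: docker run --mount type=bind,source=$RESULTS_DIR,target=/results ...
```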

Another question I had: run_docker.sh only ensures that the local version of the docker image can be tested. My submission to the leaderboard server ran for ~150 hours with a status of "finished". However, I cannot see any values in the log, and this is the data shown on the leaderboard:

{
  "values": [],
  "labels": ["Driving score", "Route completion", "Infraction penalty", "Collisions pedestrians", "Collisions vehicles", "Collisions layout", "Red light infractions", "Stop sign infractions", "Off-road infractions", "Route deviations", "Route timeouts", "Agent blocked"],
  "progress": [4, 100],
  "icons": ["carla_camera", "carla_camera", "carla_camera", "carla_imu", "carla_gnss", "carla_speedometer", "carla_lidar"],
  "entry_status": "Started"
}

Do you have any suggestions, or have you faced a similar problem?

Kait0 (Collaborator) commented Jan 9, 2023

You might need to install nvidia support for docker, see here.
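On Ubuntu, installing NVIDIA docker support looks roughly like this (a sketch assuming the NVIDIA apt repository is already configured; see NVIDIA's Container Toolkit documentation for the exact repository setup):

```shell
# The "could not select device driver ... [[gpu]]" error typically means
# the NVIDIA Container Toolkit is not installed on the host.
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker

# Verify that containers can see the GPU:
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```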

For problems with the online leaderboard you should talk to the organizers; they will usually help you.
You can contact Guillermo López, Joel Moriana, or German Ros via e-mail or the CARLA Discord.

Sou0602 (Author) commented Jan 9, 2023

@Kait0, thanks for clarifying that. I am now able to run the evaluation locally, but I keep running into other problems with the submission to the leaderboard.

Sou0602 closed this as completed Jan 9, 2023