feat(perception_benchmark_tool): add perception benchmark tool #603
Conversation
@1222-takeshi I approved your review request, but this pull request is still a draft.
Force-pushed from e9c2deb to c920d44
Signed-off-by: Kaan Colak <kcolak@leodrive.ai>
Force-pushed from c920d44 to a342974
…enchmark tool package
Signed-off-by: Kaan Çolak <kaancolak95@gmail.com>
I shared the initial 3D tracking benchmark results in the README file linked in the PR. It contains only the results of the lidar-only pipeline. Running the camera-lidar fusion pipeline requires multiple GPUs, because the Waymo dataset contains 5 cameras; after PR #736 is merged, I will add its results. For vehicles there is no blocker, but for pedestrians we have a problem. In Autoware.Universe we assign a constant length and width of 1 meter to pedestrian bounding boxes, while the Waymo dataset uses very strict 3D IoU thresholds for matching tracked ground-truth objects with tracked predictions (vehicle: 0.7, pedestrian and cyclist: 0.5). With a fixed pedestrian size, the IoU falls below the cutoff, so pedestrian scores are almost zero. We could use the bounding boxes coming directly from the 3D detection nodes just for evaluation, but that requires an external change to the perception stack and would make the benchmark results hard to reproduce; or we could change the score threshold in the dataset config file, but then we couldn't compare our results with others. I think these are the only two options. If you have any suggestions or advice, please share them.
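A tiny axis-aligned 3D IoU sketch (my own illustration, not code from this PR; the ground-truth pedestrian size and the 0.3 m center offset are assumed numbers) showing how the fixed 1 m × 1 m footprint can drop a pedestrian match below the 0.5 cutoff:

```python
# Illustration only (not from the PR): axis-aligned 3D IoU between a
# ground-truth pedestrian box and a prediction with the fixed 1 m x 1 m
# footprint used by the tracker. Sizes and offset are assumed values.

def iou_3d(a, b):
    """Axis-aligned 3D IoU. Boxes are (cx, cy, cz, length, width, height)."""
    def overlap(c1, s1, c2, s2):
        lo = max(c1 - s1 / 2.0, c2 - s2 / 2.0)
        hi = min(c1 + s1 / 2.0, c2 + s2 / 2.0)
        return max(0.0, hi - lo)

    inter = 1.0
    for axis in range(3):
        inter *= overlap(a[axis], a[3 + axis], b[axis], b[3 + axis])
    vol_a = a[3] * a[4] * a[5]
    vol_b = b[3] * b[4] * b[5]
    return inter / (vol_a + vol_b - inter)

# Assumed ground-truth pedestrian: 0.9 m x 0.85 m footprint, 1.7 m tall.
gt = (0.0, 0.0, 0.0, 0.9, 0.85, 1.7)
# Fixed 1 m x 1 m footprint, same height, center off by 0.3 m in x.
pred = (0.3, 0.0, 0.0, 1.0, 1.0, 1.7)

print(round(iou_3d(gt, pred), 3))  # ~0.456: below Waymo's 0.5 pedestrian cutoff
```

The real Waymo metric pipeline is more involved (heading-aware boxes, assignment between tracks), but the arithmetic above is enough to show why a fixed footprint plus a small center error misses a 0.5 IoU threshold.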
If the current stack gets 0 points for pedestrians, that's OK; we can work on improving the detection pipeline to increase it later. @1222-takeshi, when can you review this? Will you be able to reproduce the results by following the instructions in the README file?
@kaancolak I believe this benchmark should be placed under the "perception" directory. Is there any reason for putting the benchmark directly at the top level?
Thanks @miursh for your feedback. Actually, I talked to Fatih when I first started this tool, and he advised this directory structure. It could also live under perception, but some developers from Robotec are working on generic evaluation tools that will contain multiple packages for metric calculation (localization, control, etc.). We could collect all benchmarking and evaluation tools under a single top-level folder, but this is optional. For now, I think reproducing the benchmark results of this package is enough; our goal was to compare our tracking results with other submissions in the Waymo 3D Tracking Challenge. Also, after the generic evaluation tools are finished, I plan to connect them with the perception benchmark tool to build a more generic evaluator/benchmarking tool.
@ktro2828 Are you done with the review of the PR? If so, I would like you to give approval so that we can merge.
@mitsudome-r Sorry, not yet. As mentioned above (his reply from 15 days ago), @kaancolak is updating the code. After he is ready, I will re-review.
Force-pushed from fa0c511 to 6e39032
Force-pushed from 6e39032 to 2c3ff5d
@ktro2828, sorry for the delay; I was dealing with other issues around the BUS ODD field tests. I made the updates, and the package is now ready for review.
@kaancolak Sorry for the late reaction. I'm reviewing, but build-and-test is failing in CI/CD. Can you fix it?
Thanks, I added the license agreement. Currently, the Autoware docker image doesn't contain TensorFlow, but all data in the Waymo dataset is in TensorFlow's "tfrecord" format. For this reason, it doesn't pass the CI/CD pipeline at the moment.
@kenji-miyake What is the right way to have pip packages like … ?
https://github.com/autowarefoundation/autoware.universe/actions/runs/3088532428/jobs/5011962436 — here I suspect …
@xmfcx Yes, it is possible. You can send pull requests to add your dependencies here. If you have some troubles with … And I believe skipping tests is only acceptable as a tentative workaround; in this case, you should create a follow-up issue.
I created a PR for waymo_open_dataset (related link). After this PR is merged, waymo_open_dataset will install the relevant TensorFlow version.
@kaancolak It seems the PR has been merged, so please update package.xml to install waymo-open-dataset with rosdep.
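A sketch of what that could look like in package.xml, assuming rosdep exposes the SDK under a pip key (the key name below is hypothetical; the real one is defined by the merged rosdistro PR):

```xml
<!-- Hypothetical rosdep key: check the merged rosdistro PR for the real name -->
<test_depend>python3-waymo-open-dataset-tf-pip</test_depend>
```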
Force-pushed from caf3cc5 to 036c873
And this causes Python unit tests to fail. I couldn't find what installs the … But if I install waymo with … (without …), … @kenji-miyake Is there a clean way to skip …? If so, I can update the README instructions to install it that way. If not, I will close this PR and add it as a separate repository in …
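On the "clean way to skip" question: one common pattern (a hedged sketch of an option, not necessarily what was adopted here) is to guard the TensorFlow/Waymo-dependent unit tests with a skip condition, so environments without waymo-open-dataset skip them instead of failing CI:

```python
# Sketch (an assumption, not this PR's actual solution): skip tests that need
# the Waymo SDK when it is not importable, instead of failing the whole suite.
import importlib.util
import unittest

HAS_WAYMO = importlib.util.find_spec("waymo_open_dataset") is not None


@unittest.skipUnless(HAS_WAYMO, "waymo-open-dataset not installed")
class TestTfrecordReading(unittest.TestCase):
    def test_import(self):
        # Only runs in environments where the pip package is available.
        from waymo_open_dataset import dataset_pb2  # noqa: F401
```

The trade-off, as noted above, is that skipped tests are only a tentative workaround and should be tracked in a follow-up issue.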
Force-pushed from 036c873 to adc7951
@kaancolak I guess the dependency of … is the cause:

```
$ docker run --rm -it ubuntu:20.04 /bin/bash
$ apt update && apt install -y python3-pip
$ pip3 install -U -q waymo-open-dataset-tf-2-6-0
$ pip list | grep protobuf
protobuf 4.21.7
```

Also, the build fails on Humble. In that case, you can't merge this PR.

```
$ docker run --rm -it ubuntu:20.04 /bin/bash
$ apt update && apt install -y python3-pip
$ pip3 install -U -q waymo-open-dataset-tf-2-6-0
ERROR: Could not find a version that satisfies the requirement waymo-open-dataset-tf-2-6-0 (from versions: none)
ERROR: No matching distribution found for waymo-open-dataset-tf-2-6-0
```
You can move the contents of this package to https://github.com/autowarefoundation/perception_benchmark_tool
This PR moved to autowarefoundation/perception_benchmark_tool#1
Signed-off-by: kaancolak <kcolak@leodrive.ai>
Description
Resolves #565
Builds on top of #565
Related links
Tests performed
Notes for reviewers
Pre-review checklist for the PR author
The PR author must check the checkboxes below when creating the PR.
In-review checklist for the PR reviewers
The PR reviewers must check the checkboxes below before approval.
Post-review checklist for the PR author
The PR author must check the checkboxes below before merging.
After all checkboxes are checked, anyone who has write access can merge the PR.