
[pull] main from autowarefoundation:main #276

Merged 7 commits on Jul 20, 2023
@@ -232,6 +232,8 @@
<arg name="input/object1" value="clustering/camera_lidar_fusion/objects"/>
<arg name="output/object" value="$(var lidar_detection_model)_roi_cluster_fusion/objects"/>
<arg name="priority_mode" value="0"/>
<arg name="data_association_matrix_path" value="$(var object_recognition_detection_object_merger_data_association_matrix_param_path)"/>
<arg name="distance_threshold_list_path" value="$(var object_recognition_detection_object_merger_distance_threshold_list_path)"/>
</include>
</group>

@@ -242,6 +244,8 @@
<arg name="input/object0" value="$(var lidar_detection_model)_roi_cluster_fusion/objects"/>
<arg name="input/object1" value="detection_by_tracker/objects"/>
<arg name="output/object" value="$(var merger/output/objects)"/>
<arg name="data_association_matrix_path" value="$(var object_recognition_detection_object_merger_data_association_matrix_param_path)"/>
<arg name="distance_threshold_list_path" value="$(var object_recognition_detection_object_merger_distance_threshold_list_path)"/>
</include>
</group>

@@ -120,6 +120,8 @@
<arg name="input/object0" value="$(var merger/input/objects)"/>
<arg name="input/object1" value="clustering/objects"/>
<arg name="output/object" value="temporary_merged_objects"/>
<arg name="data_association_matrix_path" value="$(var object_recognition_detection_object_merger_data_association_matrix_param_path)"/>
<arg name="distance_threshold_list_path" value="$(var object_recognition_detection_object_merger_distance_threshold_list_path)"/>
</include>
</group>

@@ -130,6 +132,8 @@
<arg name="input/object0" value="temporary_merged_objects"/>
<arg name="input/object1" value="detection_by_tracker/objects"/>
<arg name="output/object" value="$(var merger/output/objects)"/>
<arg name="data_association_matrix_path" value="$(var object_recognition_detection_object_merger_data_association_matrix_param_path)"/>
<arg name="distance_threshold_list_path" value="$(var object_recognition_detection_object_merger_distance_threshold_list_path)"/>
</include>
</group>

2 changes: 2 additions & 0 deletions launch/tier4_perception_launch/launch/perception.launch.xml
@@ -9,6 +9,8 @@
<arg name="object_recognition_detection_object_position_filter_param_path"/>
<arg name="object_recognition_detection_pointcloud_map_filter_param_path"/>
<arg name="object_recognition_prediction_map_based_prediction_param_path"/>
<arg name="object_recognition_detection_object_merger_data_association_matrix_param_path"/>
<arg name="object_recognition_detection_object_merger_distance_threshold_list_path"/>
<arg name="object_recognition_tracking_multi_object_tracker_data_association_matrix_param_path"/>
<arg name="obstacle_segmentation_ground_segmentation_param_path"/>
<arg name="obstacle_segmentation_ground_segmentation_elevation_map_param_path"/>
1 change: 0 additions & 1 deletion localization/yabloc/yabloc_pose_initializer/CMakeLists.txt
Expand Up @@ -50,7 +50,6 @@ target_link_libraries(${TARGET} ${PCL_LIBRARIES} Sophus::Sophus)
# Semantic segmentation
install(PROGRAMS
src/semantic_segmentation/semantic_segmentation_core.py
src/semantic_segmentation/semantic_segmentation_node.py
src/semantic_segmentation/semantic_segmentation_server.py
DESTINATION lib/${PROJECT_NAME}
)
14 changes: 14 additions & 0 deletions localization/yabloc/yabloc_pose_initializer/README.md
@@ -5,6 +5,20 @@ This package contains some nodes related to initial pose estimation.
- [camera_pose_initializer](#camera_pose_initializer)
- [semantic_segmentation_server](#semantic_segmentation_server)

Ideally, this package downloads a pre-trained semantic segmentation model during the build and loads it at runtime for initialization.
However, to handle cases where network connectivity is not available at build time, **the default behavior is to skip the model download during the build.**
Even if the model is not downloaded, initialization still completes, but its accuracy may be degraded.

<!-- cspell: ignore DDOWNLOAD_ARTIFACTS -->

To download the model, pass `--cmake-args -DDOWNLOAD_ARTIFACTS=ON` to the build command.

```bash
colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release -DDOWNLOAD_ARTIFACTS=ON --packages-select yabloc_pose_initializer
```

For detailed information about the downloaded contents, please consult the `download.cmake` file in this package.
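The sketch below illustrates, in rough terms, how a build-time download can be gated behind such a CMake option. It is not the actual contents of `download.cmake`; only the `DOWNLOAD_ARTIFACTS` option name comes from the flag above, and the URL, file names, and destination path are placeholders.

```cmake
# Hypothetical sketch: gate an artifact download behind the DOWNLOAD_ARTIFACTS option.
# The URL and destination below are placeholders, not the paths used by this package.
option(DOWNLOAD_ARTIFACTS "Download pre-trained artifacts at build time" OFF)

if(DOWNLOAD_ARTIFACTS)
  set(ARTIFACT_URL "https://example.com/path/to/model.onnx")                 # placeholder URL
  set(ARTIFACT_DEST "${CMAKE_CURRENT_SOURCE_DIR}/data/model.onnx")           # placeholder path
  if(NOT EXISTS "${ARTIFACT_DEST}")
    message(STATUS "Downloading semantic segmentation model...")
    file(DOWNLOAD "${ARTIFACT_URL}" "${ARTIFACT_DEST}" SHOW_PROGRESS)
  endif()
else()
  message(STATUS "Skipping model download (DOWNLOAD_ARTIFACTS=OFF)")
endif()
```

With a default of `OFF`, a plain `colcon build` succeeds without network access, which matches the behavior described above.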

## Note

This package makes use of external code. The trained files are provided by apollo. The trained files are automatically downloaded when you build.
@@ -70,7 +70,8 @@ class CameraPoseInitializer : public rclcpp::Node
PoseCovStamped create_rectified_initial_pose(
const Eigen::Vector3f & pos, double yaw_angle_rad, const PoseCovStamped & src_msg);

bool estimate_pose(const Eigen::Vector3f & position, double & yaw_angle_rad, double yaw_std_rad);
std::optional<double> estimate_pose(
const Eigen::Vector3f & position, const double yaw_angle_rad, const double yaw_std_rad);
};
} // namespace yabloc


This file was deleted.

@@ -20,10 +20,6 @@
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl_conversions/pcl_conversions.h>

namespace yabloc
{
CameraPoseInitializer::CameraPoseInitializer()
@@ -74,7 +70,7 @@ cv::Mat bitwise_and_3ch(const cv::Mat src1, const cv::Mat src2)
return merged;
}

int count_nonzero(cv::Mat image_3ch)
int count_non_zero(cv::Mat image_3ch)
{
std::vector<cv::Mat> images;
cv::split(image_3ch, images);
@@ -85,20 +81,20 @@ int count_nonzero(cv::Mat image_3ch)
return count;
}

bool CameraPoseInitializer::estimate_pose(
const Eigen::Vector3f & position, double & yaw_angle_rad, double yaw_std_rad)
std::optional<double> CameraPoseInitializer::estimate_pose(
const Eigen::Vector3f & position, const double yaw_angle_rad, const double yaw_std_rad)
{
if (!projector_module_->define_project_func()) {
return false;
return std::nullopt;
}
if (!lane_image_) {
RCLCPP_WARN_STREAM(get_logger(), "vector map is not ready ");
return false;
return std::nullopt;
}
// TODO(KYabuuchi) check latest_image_msg's time stamp, too
if (!latest_image_msg_.has_value()) {
RCLCPP_WARN_STREAM(get_logger(), "source image is not ready");
return false;
return std::nullopt;
}

Image segmented_image;
@@ -110,10 +106,22 @@ bool CameraPoseInitializer::estimate_pose(
using namespace std::literals::chrono_literals;
std::future_status status = result_future.wait_for(1000ms);
if (status == std::future_status::ready) {
segmented_image = result_future.get()->dst_image;
const auto response = result_future.get();
if (response->success) {
segmented_image = response->dst_image;
} else {
RCLCPP_WARN_STREAM(get_logger(), "segmentation service failed unexpectedly");
// NOTE: Even if the segmentation service fails, the function will still return the
// yaw_angle_rad as it is and complete the initialization. The service fails
// when the DNN model is not downloaded. Ideally, initialization should rely on
// segmentation, but this implementation allows initialization even in cases where network
// connectivity is not available.
return yaw_angle_rad;
}
} else {
RCLCPP_ERROR_STREAM(get_logger(), "segmentation service exited unexpectedly");
return false;
RCLCPP_ERROR_STREAM(
get_logger(), "segmentation service did not return within the expected time");
return std::nullopt;
}
}

@@ -149,30 +157,18 @@ bool CameraPoseInitializer::estimate_pose(
if (lane_angle_rad) {
gain = 2 + std::cos((lane_angle_rad.value() - angle_rad) / 2.0);
}
const float score = gain * count_nonzero(dst);

// DEBUG:
constexpr bool imshow = false;
if (imshow) {
cv::Mat show_image;
cv::hconcat(std::vector<cv::Mat>{rotated_image, vector_map_image, dst}, show_image);
cv::imshow("and operator", show_image);
cv::waitKey(50);
}
// If count_non_zero() returns 0 everywhere, the orientation is chosen by the gain alone
const float score = gain * (1 + count_non_zero(dst));

scores.push_back(score);
angles_rad.push_back(angle_rad);
}

{
size_t max_index =
std::distance(scores.begin(), std::max_element(scores.begin(), scores.end()));
yaw_angle_rad = angles_rad.at(max_index);
}

marker_module_->publish_marker(scores, angles_rad, position);

return true;
const size_t max_index =
std::distance(scores.begin(), std::max_element(scores.begin(), scores.end()));
return angles_rad.at(max_index);
}

void CameraPoseInitializer::on_map(const HADMapBin & msg)
Expand All @@ -193,8 +189,6 @@ void CameraPoseInitializer::on_service(
{
RCLCPP_INFO_STREAM(get_logger(), "CameraPoseInitializer on_service");

response->success = false;

const auto query_pos_with_cov = request->pose_with_covariance;
const auto query_pos = request->pose_with_covariance.pose.pose.position;
const auto orientation = request->pose_with_covariance.pose.pose.orientation;
@@ -203,12 +197,14 @@ void CameraPoseInitializer::on_service(
RCLCPP_INFO_STREAM(get_logger(), "Given initial position " << pos_vec3f.transpose());

// Estimate orientation
const auto header = request->pose_with_covariance.header;
double yaw_angle_rad = 2 * std::atan2(orientation.z, orientation.w);
if (estimate_pose(pos_vec3f, yaw_angle_rad, yaw_std_rad)) {
const double initial_yaw_angle_rad = 2 * std::atan2(orientation.z, orientation.w);
const auto yaw_angle_rad_opt = estimate_pose(pos_vec3f, initial_yaw_angle_rad, yaw_std_rad);
if (yaw_angle_rad_opt.has_value()) {
response->success = true;
response->pose_with_covariance =
create_rectified_initial_pose(pos_vec3f, yaw_angle_rad, query_pos_with_cov);
create_rectified_initial_pose(pos_vec3f, yaw_angle_rad_opt.value(), query_pos_with_cov);
} else {
response->success = false;
}
}

@@ -32,7 +32,7 @@ void MarkerModule::publish_marker(
auto minmax = std::minmax_element(scores.begin(), scores.end());
auto normalize = [minmax](int score) -> float {
return static_cast<float>(score - *minmax.first) /
static_cast<float>(*minmax.second - *minmax.first);
std::max(1e-4f, static_cast<float>(*minmax.second - *minmax.first));
};

MarkerArray array;

This file was deleted.
