The Mobile Pick and Place project involves a robotic arm mounted on an Autonomous Mobile Robot (AMR). The system performs pick-and-place operations autonomously by navigating to a target location, detecting and estimating the pose of objects using a YOLO-based object detection module, and executing robotic arm movements. The navigation stack and the robotic arm software are implemented using the ROS 2 Humble framework.
- Navigation (`roverrobotics_ros2`): The AMR navigates from its current location to the designated pick and drop locations using the Nav2 stack.
- Object Detection and 3D Pose Estimation (`pose_estimation_pkg`): A YOLO-based object detection module identifies the target object, and its detections are used to estimate the object's pose relative to the camera.
- Pose Transformation: The `rcar_communication` package transforms the pose from the camera frame to the base frame of the robotic arm (see the transform check after this list).
- Pick-and-Place Operation: Joint angles are calculated, and the robotic arm executes the pick-and-place task.
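A quick way to check that the camera-to-arm transform is actually available at runtime is to echo it with the standard tf2 command-line tool. The frame names `camera_link` and `arm_base` below are placeholders, not names confirmed by this project; substitute the frame IDs broadcast in your TF tree:

# Print the latest transform between the two frames (frame names are placeholders)
ros2 run tf2_ros tf2_echo camera_link arm_base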
- ROS 2 Humble installed on your system (see the check after this list).
- Correct hardware connections for the AMR and robotic arm.
- A YOLO-based object detection model.
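You can confirm that the ROS 2 Humble environment is active before building:

# Should print "humble" if the ROS 2 environment is sourced
printenv ROS_DISTRO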
mkdir -p ~/ros2_ws/src
cd ~/ros2_ws/src
git clone https://github.com/Ignitarium-Renesas/R-car_Mobile_Arm.git
cd ~/ros2_ws
pip install -r src/R-car_Mobile_Arm/pose_estimation_pkg/pose_estimation_pkg/libs/requirements.txt
pip install pymycobot --upgrade
cd ~/ros2_ws
colcon build
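After the build completes, source the workspace overlay so the newly built packages are visible (repeat this in every new terminal):

source ~/ros2_ws/install/setup.bash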
- Ensure all hardware connections are correctly set up.
- Turn on the robotic arm.
- Connect to the Wi-Fi access point:
- SSID: ElephantRobotics_AP
- Log in to the robotic arm controller using SSH:
ssh er@10.42.0.1
# Password: Elephant
- Start the server:
./start_server.sh
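Before launching anything on the workstation, you can confirm the arm controller is reachable over the access point:

# Check connectivity to the arm controller
ping -c 3 10.42.0.1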
- Open a terminal and launch the AMR and ARM controller:
ros2 launch roverrobotics_driver rover_controller.launch.py
- Open another terminal and launch the navigation stack:
ros2 launch roverrobotics_driver navigation_launch.py
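To verify the navigation stack independently of the demo, you can send a test goal through Nav2's NavigateToPose action from the command line. The action name and frame assume a stock Nav2 configuration, and the coordinates are placeholders:

# Send a test goal one meter ahead in the map frame (coordinates are placeholders)
ros2 action send_goal /navigate_to_pose nav2_msgs/action/NavigateToPose "{pose: {header: {frame_id: map}, pose: {position: {x: 1.0, y: 0.0, z: 0.0}, orientation: {w: 1.0}}}}"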
- Open another terminal.
- Navigate to the `lib` folder inside the `pose_estimation_pkg` package:
cd ~/ros2_ws/src/R-car_Mobile_Arm/pose_estimation_pkg/pose_estimation_pkg/lib
- Run the camera activation script:
python3 send.py
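If the camera stream is exposed as a ROS 2 topic (this depends on how `send.py` publishes the frames, so treat it as an assumption), you can check that images are flowing:

# Look for camera image topics (topic names vary by driver)
ros2 topic list | grep -i camera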
Use the following ROS 2 service to execute the demo:
ros2 service call /run_demo mecharm_interfaces/srv/PickObject "{}"
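If the call hangs or fails, confirm the service is advertised and uses the expected interface:

# The service should be listed with type mecharm_interfaces/srv/PickObject
ros2 service list | grep run_demo
ros2 service type /run_demo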
- Ensure all components are powered on and properly configured before running the project.
- The navigation stack must be correctly tuned to ensure accurate AMR movement.
- Verify the YOLO object detection module is properly trained for the target objects.
- Add a Dockerfile.
- Add images of the different stages.
For troubleshooting, refer to the system logs and debug messages in the ROS 2 terminals.