AARDK is a project aimed at easing the development of cross-platform robotics solutions for various environments. The following subprojects are currently being implemented:
- Autonomous Underwater Vehicle
- Autonomous Surface Vehicle
- Install Ubuntu 22.04 LTS (x86_64) or JetPack 6.1 GA (aarch64)
- Install the NVIDIA CUDA Toolkit (x86_64 or aarch64)
- Install the NVIDIA Container Toolkit link
- Install Docker Engine for Ubuntu link
- Complete Docker Post-Install Steps for Linux link
- Install the ZED SDK V4.2 link
- Clone the AARDK and its subprojects by running git clone with the --recursive or --recurse-submodules flag (see the example after the kernel-tuning notes below)
- Run the following commands to modify kernel parameters for a larger message buffer, shorter fragment timeouts, etc.
echo "net.ipv4.ipfrag_time=5" | sudo tee --append /etc/sysctl.d/10-cyclone-max.conf && \
echo "net.ipv4.ipfrag_high_thresh=134217728" | sudo tee --append /etc/sysctl.d/10-cyclone-max.conf && \
echo "net.core.rmem_max=2147483647" | sudo tee --append /etc/sysctl.d/10-cyclone-max.conf
These commands change the IP fragment timeout from 30 seconds to 5 seconds, reducing the time that invalid messages continue to pollute the buffer. The second command expands the memory the kernel can use to reassemble IP fragments to 128MiB. The third command expands the kernel's receive buffer size to 2GiB, which allows more messages to be kept in the queue. This may still require some tweaking to work well with Jetson models with low RAM (<=8GiB).
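For reference, a minimal sketch of the clone and kernel-tuning steps is shown below. The repository URL is a placeholder (substitute the actual AARDK remote); the sysctl commands simply reload the settings written above and print one value back for verification.
# Clone AARDK together with its subproject submodules (URL is a placeholder)
git clone --recurse-submodules https://github.com/<your-org>/AARDK.git
# Apply the settings from /etc/sysctl.d/10-cyclone-max.conf without rebooting
sudo sysctl --system
# Confirm the receive buffer value took effect
sysctl net.core.rmem_max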
The project is designed to be interacted with via the CC.sh script. The following options are supported as of the current release; an example workflow is shown after this list.
- -b → Builds the container specified as an argument. Valid choices are:
- iceberg-asv-analysis
- iceberg-asv-deployment
- auv-analysis
- auv-deployment
- -c → Cross-builds the container for the other target architecture. Same options as -b. Currently not working.
- -d → Destroys and removes all containers.
- -e → Exports selected container.
- -g → Grabs the system dependency (Isaac ROS Common).
- -h → Displays the help menu.
- -i → Installs prerequisite libraries (not yet implemented).
- -n → Spawns a new interactive window for the specified container.
- -s → Starts the selected container. During the start process, the ./CC.sh script should overwrite package configuration files with the appropriate /dev/bus/ directory to access the device (symlinks won't work). Valid choices are:
- iceberg-asv-analysis
- iceberg-asv-deployment
- auv-analysis
- auv-deployment
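As an illustration, a typical build-then-start workflow with the options above might look like this (any of the four container names listed can be substituted):
# Build the AUV deployment container, then start it and attach an interactive window
./CC.sh -b auv-deployment
./CC.sh -s auv-deployment
./CC.sh -n auv-deployment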
When running -s, the appropriate Visual Studio Code files will be imported into the container. It is recommended to run the -s command outside of Visual Studio Code; if it is run inside and Visual Studio Code freezes or crashes, undesired results can occur (this has happened many times in testing, mostly on devices with low RAM). To develop, use the Dev Containers extension, navigate to the Remote Explorer tab, and click the arrow that appears when hovering over the container's name. The workspace root should be at ${HOME} to ensure IntelliSense can function properly.
To add package-based tokens, API keys, etc., use the corresponding .env file, and do not push updates to the repository unless the file contains general environment information ONLY. A sketch of such a file is shown below.
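As a rough sketch, such a .env file might look like the following; the variable names here are illustrative placeholders only, not keys the project is known to read:
# Example .env - placeholder names, never commit real secrets
EXAMPLE_API_KEY=replace-me
EXAMPLE_SERVICE_TOKEN=replace-me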
base → ros2_humble → realsense → opencv_nv → user → asv_analysis
base → ros2_humble → opencv_nv → user → asv_deployment
base → ros2_humble → opencv_nv → user → auv_analysis
base → ros2_humble → opencv_nv → user → auv_deployment
base → Contains build instructions for configuring a base environment to build upon. Most CUDA dependencies are installed here, and most platform-specific instructions will be executed here. The base Dockerfile is property of and maintained by NVIDIA Corporation.
ros2_humble → Contains build instructions relating to the installation of ROS 2 and interfacing frameworks, such as MoveIt. The ros2_humble Dockerfile is property of and maintained by NVIDIA Corporation.
realsense → Contains build instructions for setting up Intel RealSense camera support.
opencv_nv → Contains platform-specific instructions for building and installing OpenCV with support for NVIDIA CUDA, cuDNN, etc.
user → Contains instructions for setting up the user in the container and granting them the appropriate permissions to access the hardware. This is a maintained version of a previous NVIDIA Corporation Dockerfile that is no longer supported, but essential for the project. Project-specific changes have been implemented.
asv_analysis → WIP
asv_deployment → Contains instructions for building Iceberg ASV's project stack.
auv_analysis → WIP
auv_deployment → Contains instructions for setting up the deployment environment for the AUV.
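To make the layer chain above concrete, the sketch below shows how the stages could be built manually, one on top of the other. It assumes each stage's Dockerfile accepts a BASE_IMAGE build argument and uses made-up file names and image tags; in practice CC.sh (via Isaac ROS Common) drives this process.
# Hypothetical manual build of the auv_deployment chain (file names and tags are placeholders)
docker build -f Dockerfile.base -t aardk/base .
docker build -f Dockerfile.ros2_humble --build-arg BASE_IMAGE=aardk/base -t aardk/ros2_humble .
docker build -f Dockerfile.opencv_nv --build-arg BASE_IMAGE=aardk/ros2_humble -t aardk/opencv_nv .
docker build -f Dockerfile.user --build-arg BASE_IMAGE=aardk/opencv_nv -t aardk/user .
docker build -f Dockerfile.auv_deployment --build-arg BASE_IMAGE=aardk/user -t aardk/auv-deployment .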
- Docker Compose compatibility is still a work-in-progress; it is unclear whether it is entirely possible, perhaps with the buildx plugin (see the sketch after this list).
- opencv_nv SFM is currently disabled on x86_64.
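If buildx does turn out to be the way forward for cross-building (see the note above), a minimal sketch under that assumption might look like the following; the Dockerfile and tag names are placeholders, and this is not something CC.sh currently does.
# Register QEMU emulation, create a builder, and cross-build an aarch64 image from an x86_64 host
docker run --privileged --rm tonistiigi/binfmt --install arm64
docker buildx create --use --name aardk-cross
docker buildx build --platform linux/arm64 -f Dockerfile.base -t aardk/base:arm64 --load .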
This is not a production-level release, and due to the one-man development team, only limited testing is done. If using this for any mission-critical technology, ensure the fork is compliant with the applicable ISO standards.
This is currently an open-source project maintained by Robert Fudge, 2024 - Present
Pull requests are welcome.
To create a new project, six components are needed. It is recommended to develop them in this order to allow for a natural progression and to facilitate testing of the current step against the completed components of the previous stage.
Custom Build
- AMD Ryzen 9 9950X 16C/32T
- 64GB 6000MHz DDR5 RAM
- NVIDIA RTX 3060 Ti LHR (8GB)
Acer Predator Helios 300 (2019)
- Intel i7-9750H
- 16GB 2666MHz DDR4 RAM
- NVIDIA RTX 2060 Mobile (6GB)
NVIDIA Jetson AGX Orin 64GB Developer Kit
- ARM Cortex A78 x 12
- 64GB 3200MHz DDR5 RAM
- 2048 CUDA Cores (SM Version 8.7)
NVIDIA Jetson Orin NX 8GB Engineering Reference Kit
- ARM Cortex A78 x 6
- 8GB 3200MHz DDR5 RAM
- 1024 CUDA Cores (SM Version 8.7)
*In the Orin NX, the RAM, CPU, and GPU are on the same module, meaning the memory is shared between the CPU and GPU, and none of them can be upgraded.
Please see the associated References.bib file for academic references to technologies used in this project. This is currently a work-in-progress, and if you notice that a project used here isn't referenced properly, please reach out to rnfudge@mun.ca and corrections will be made.
This project is currently licensed under the Apache 2.0 license.