Welcome to the official GitHub repository for the open-source software and hardware associated with our article. This repository provides all the necessary code, designs, and documentation to reproduce the results and experiments described in our paper.
In this study, we introduce a novel shared-control system for keyhole docking operations, combining a commercial camera with occlusion-robust pose estimation and a hand-eye information fusion technique. The system enhances docking precision and force-compliance safety. To train the hand-eye information fusion network, we generated a self-supervised dataset using this docking system. After training, our pose estimation method achieved higher accuracy than traditional methods, including observation-only approaches, hand-eye calibration, and conventional state estimation filters. In real-world phantom experiments, our approach demonstrated its effectiveness with reduced position dispersion (1.23 ± 0.81 mm vs. 2.47 ± 1.22 mm) and force dispersion (0.78 ± 0.57 N vs. 1.15 ± 0.97 N) compared to the control group. These advancements enhance interaction and stability in semi-autonomous co-manipulation scenarios. The study presents an interference-resistant, stable, and precise solution with potential applications extending beyond laparoscopic surgery to other minimally invasive procedures.
For a detailed explanation of the methods and results, please refer to our paper:
- Title: Semi-Autonomous Laparoscopic Robot Docking with Learned Hand-Eye Information Fusion
- Authors: Huanyu Tian, Martin Huber, Christopher E. Mower, Zhe Han, Changsheng Li, Xingguang Duan, and Christos Bergeles
- Journal: IEEE Transactions on Biomedical Engineering (T-BME), accepted
- arXiv Link: https://arxiv.org/abs/2405.05817
A video demonstration of the project is available at the following link:
- Video Link: https://youtu.be/M_uZHY2E7gY
Before you begin, ensure you have met the following requirements:
- Python 3.x
- OpTaS
- CasADi
- OpenCV
- ROS2 Humble
- LBR_FRI_LIB
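Before installing, you can sanity-check which Python-facing dependencies are already importable with a short script like the one below; the module names (`optas`, `casadi`, `cv2`, and `torch` for the training scripts) are assumptions based on the list above rather than names pinned by this repository:

```python
# Minimal environment check (illustrative; module names are assumptions,
# e.g. OpTaS -> optas, OpenCV -> cv2).
import importlib

for module in ("optas", "casadi", "cv2", "torch"):
    try:
        importlib.import_module(module)
        print(f"{module}: OK")
    except ImportError:
        print(f"{module}: missing")
```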
- Clone this repository:

  ```bash
  git clone https://github.com/yourusername/learned-heyfusion.git
  cd learned-heyfusion
  ```
- Install the required dependencies:

  ```bash
  pip install -r requirements.txt
  ```
- Training with KalmanNet: to train the network using the KalmanNet model, run the following command from the repository root:

  ```bash
  python train/trainModel.py
  ```
- Training with KalmanNetOrigin: alternatively, to train using the original variant (`KalmanNetOrigin`), run the command below (a conceptual sketch of the KalmanNet idea follows this list):

  ```bash
  python train/trainOrigin.py
  ```
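For orientation, here is a conceptual PyTorch sketch of the KalmanNet idea behind these training scripts: a recurrent network predicts the Kalman gain instead of computing it analytically. This is a minimal illustration with assumed names and dimensions, not the actual interface of `models/nnkf.py`:

```python
import torch
import torch.nn as nn

class LearnedGainFilter(nn.Module):
    """Conceptual KalmanNet-style sketch: a GRU maps filter features to a
    Kalman gain, replacing the analytic gain of a classical Kalman filter.
    All names and dimensions here are illustrative."""

    def __init__(self, state_dim: int, obs_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.gru = nn.GRU(state_dim + obs_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, state_dim * obs_dim)
        self.state_dim, self.obs_dim = state_dim, obs_dim

    def forward(self, x_prior, innovation, hidden=None):
        # Features: prior state estimate and innovation (y - H @ x_prior).
        feats = torch.cat([x_prior, innovation], dim=-1).unsqueeze(1)
        out, hidden = self.gru(feats, hidden)
        gain = self.head(out[:, -1]).view(-1, self.state_dim, self.obs_dim)
        # Measurement update with the learned gain.
        x_post = x_prior + (gain @ innovation.unsqueeze(-1)).squeeze(-1)
        return x_post, hidden
```

Training such a model would typically minimize a loss (e.g., MSE) between the posterior state estimates and ground-truth states over recorded trajectories.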
The repository is organized into the following folders:
- `config`: Configuration and parameter settings (e.g., `config/config.py`).
- `dataset`: Data processing and dataset utilities (e.g., `dataset/dataset_alb.py` and `dataset/finding_ground_truth.py`).
  Note: The actual data files should be stored in a folder named `data` (at the same level as this README) with subfolders named from `0` to `18`; see the layout-check sketch after this list.
- `models`: Model definitions and filtering implementations (e.g., `models/nnkf.py`, `models/SystemModel.py`, and `models/UKF.py`).
- `pipeline`: The training and evaluation pipeline (e.g., `pipeline/pipeline.py`).
- `train`: Training scripts (e.g., `train/trainModel.py` and `train/trainOrigin.py`).
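As referenced in the note above, a quick way to verify the expected `data` layout is a sketch like the following; the subfolder names `0` through `18` come from the note, while the per-folder file format is intentionally left open:

```python
from pathlib import Path

# Check for a `data` folder next to this README with subfolders 0..18,
# as described in the repository-structure note above.
data_root = Path(__file__).resolve().parent / "data"

for idx in range(19):
    subdir = data_root / str(idx)
    status = "OK" if subdir.is_dir() else "missing"
    print(f"data/{idx}: {status}")
```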
For any inquiries, please contact your.email@example.com.