Official repository for "RefRec: Pseudo-labels Refinement via Shape Reconstruction for Unsupervised 3D Domain Adaptation"
Adriano Cardace - Riccardo Spezialetti - Pierluigi Zama Ramirez - Samuele Salti - Luigi Di Stefano
We rely on several libraries: PyTorch Lightning, Weights & Biases, and Hesiod.
To run the code, please follow the instructions below.
- Install the required dependencies:
python -m venv env
source env/bin/activate
python -m pip install --upgrade pip
pip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 -f https://download.pytorch.org/whl/torch_stable.html
pip install -r requirements.txt
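As an optional sanity check (not part of the original instructions), you can verify that the CUDA build of PyTorch was installed correctly before proceeding:

import torch
print(torch.__version__)          # expected: 1.8.1+cu111
print(torch.cuda.is_available())  # should print True on a correctly configured GPU machine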
- Install PyTorchEMD following https://github.com/daerduoCarey/PyTorchEMD; see daerduoCarey/PyTorchEMD#6 for compatibility with recent versions of PyTorch.
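For reference, a minimal sketch of how the EMD loss from that repository is typically invoked; the exact import path and function signature depend on the version you build, so treat the names below as assumptions:

import torch
from emd import earth_mover_distance  # module and function names assumed from the PyTorchEMD repository

# two batches of point clouds with shape (batch, num_points, 3)
p1 = torch.rand(8, 1024, 3).cuda()
p2 = torch.rand(8, 1024, 3).cuda()

# transpose=False because the points are laid out as (batch, num_points, 3)
loss = earth_mover_distance(p1, p2, transpose=False)
print(loss.shape)  # one distance value per batch element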
Request dataset access at https://drive.google.com/file/d/14mNtQTPA-b9_qzfHiadUIc_RWvPxfGX_/view?usp=sharing.
The dataset is the same as the one provided by the original authors at https://github.com/canqin001/PointDAN. For convenience, we provide the preprocessed version used in this work. To train the reconstruction network, we first need to merge the two datasets, and then upload all the required datasets to the wandb server:
mkdir data
unzip PointDA_aligned.zip -d data/
cd data
cp -r modelnet modelnet_scannet
rsync -av scannet/ modelnet_scannet/
./load_data.sh
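load_data.sh takes care of uploading the datasets to Weights & Biases. If you prefer to upload a folder manually, a minimal sketch using the standard wandb artifact API looks like the following; the project and artifact names here are placeholders, not necessarily the ones used by the script:

import wandb

# start a run in your wandb project (placeholder project name)
run = wandb.init(project="refrec", job_type="upload-data")

# create a dataset artifact and add the merged folder to it
artifact = wandb.Artifact("modelnet_scannet", type="dataset")
artifact.add_dir("modelnet_scannet")

# upload the artifact to the wandb server
run.log_artifact(artifact)
run.finish()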
To train the modelnet -> scannet adaptation, simply execute the following command:
./train_pipeline_m2sc.sh