This is the code of Learning Dynamic Point Cloud Compression via Hierarchical Inter-frame Block Matching.
Link of the paper: https://dl.acm.org/doi/abs/10.1145/3581783.3613793
Please refer to `Requirements.md`.
The LDPCC environment is compatible with that of D-DPCC.
Training is very unstable! You must load a pretrained model from `pretrain_ckpts` rather than train from scratch. A sample command:
nohup python -u trainer.py --pretrained=./pretrain_ckpts/I15_best_model.pth --lamb=10 --exp_name=I10 --gpu=3 >3.out 2>&1 &
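For reference, a minimal sketch of loading such a checkpoint in PyTorch; the helper name `load_pretrained` and the checkpoint layout are assumptions for illustration, since the actual loading logic lives in `trainer.py`:

```python
import torch

def load_pretrained(model, ckpt_path, device="cuda"):
    """Initialize a model from a checkpoint such as pretrain_ckpts/I15_best_model.pth."""
    state = torch.load(ckpt_path, map_location=device)
    # Some checkpoints wrap the weights, e.g. {"model": state_dict}; unwrap if needed.
    state_dict = state.get("model", state) if isinstance(state, dict) else state
    model.load_state_dict(state_dict, strict=False)  # strict=False tolerates minor key mismatches
    return model.to(device)
```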
The 10-bit 8iVFB sequences (soldier, redandblack, loot, longdress) are used for training, and results for 32/96 frames of the 10-bit Owlii sequences (basketball, dancer, exercise, model) are provided.
Detailed information is given in the MPEG proposal M65061, "[AI-3DGC][EE5.3] Results modification of LDPCC: Learning Dynamic Point Cloud Compression via Hierarchical Inter-frame Block Matching", 2023/10.
Note that in this paper the 10-bit Owlii dataset is quantized with the `floor` operation rather than `round` after downsampling. In the inference stage of point cloud compression networks, using `floor` instead of `round` for coordinate integralization yields significantly better results.
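A minimal sketch of this quantization step, assuming hypothetical names (`quantize_coords`, an (N, 3) float coordinate array, and a `scale` factor) that are not taken from the actual code:

```python
import numpy as np

def quantize_coords(points, scale, use_floor=True):
    """Downsample float coordinates and integralize them with floor (or round)."""
    scaled = points * scale                 # downsampling / rescaling
    if use_floor:
        coords = np.floor(scaled)           # floor: used for the 10-bit Owlii data here
    else:
        coords = np.round(scaled)           # round: the alternative that performs worse
    # Merge points that fall into the same voxel after quantization.
    return np.unique(coords.astype(np.int32), axis=0)
```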
Generated by `new_test_owlii_mpeg.py`.
A sample command:
nohup python -u new_test_owlii_mpeg.py --log_name=96frames-3 --tmp_dir=tmp-3 --gpu=6 --results_dir=mpeg-results-96-3 > 6.out 2>&1 &
And the detailed output logs are `96frames-0.txt`, `96frames-1.txt`, `96frames-2.txt`, and `96frames-3.txt`.
Generated by `read_log_32frames.py`, which produces 32-frame results from the detailed per-frame results (`96frames-0/1/2/3.txt`) output by `new_test_owlii_mpeg.py`.
A sample command:
nohup python -u read_log_32frames.py > 32.out 2>&1 &
Of course, you can also obtain 32-frame results by setting `--frame_count=32` in `new_test_owlii_mpeg.py`.
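For reference, a minimal sketch of the kind of aggregation `read_log_32frames.py` performs; the log-line pattern below is a hypothetical placeholder and must be adapted to the actual format of `96frames-*.txt`:

```python
import re

def average_first_n_frames(log_path, n=32):
    """Average per-frame bpp and D1 PSNR over the first n frames of a detailed log."""
    # Hypothetical line format, e.g. 'frame 12: bpp=0.123, d1_psnr=71.23';
    # adapt the pattern to the real 96frames-*.txt layout.
    pattern = re.compile(r"bpp=([\d.]+).*?d1_psnr=([\d.]+)")
    bpps, psnrs = [], []
    with open(log_path) as f:
        for line in f:
            match = pattern.search(line)
            if match:
                bpps.append(float(match.group(1)))
                psnrs.append(float(match.group(2)))
            if len(bpps) >= n:
                break
    if not bpps:
        raise ValueError(f"no per-frame entries matched in {log_path}")
    return sum(bpps) / len(bpps), sum(psnrs) / len(psnrs)
```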
Generated by `plot-mpeg-proposal-10bit-32fs.py` and `plot-mpeg-proposal-10bit-96fs.py`.
- If you see `../GPCC/tmc3: Permission denied`, run `chmod -R 777 ./`.