This repo contains the code for our ICLR 2023 paper (Spotlight):
Domain-Indexing Variational Bayes: Interpretable Domain Index for Domain Adaptation
Zihao Xu*, Guang-Yuan Hao*, Hao He, Hao Wang
Eleventh International Conference on Learning Representations, 2023
[Paper] [OpenReview] [PPT] [Talk (YouTube)] [Talk (Bilibili)]
- Brief Introduction for VDI
- Sample Visualization of Inferred Domain Indices
- Domain Index Definition (Informal)
- Method Overview
- Installation
- Code for Different Datasets
- Quantitative Result
- More Visualization of Inferred Domain Indices
- Related Works
- Reference
Previous studies have shown that leveraging domain indices can significantly boost domain adaptation performance [1,2]. However, such domain indices are not always available. VDI aims to address this challenge: we first formally define the "domain index" from a probabilistic perspective, and then infer domain indices from multi-domain data using an adversarial variational Bayesian framework. These inferred domain indices provide additional insight into domain relations and improve domain adaptation performance. Our theoretical analysis shows that VDI finds the optimal domain index at equilibrium.
Below are sample visualizations of the inferred domain indices.
We require the domain index to:
- Be independent of the data's encoding (i.e., the encoding is domain-invariant).
- Retain as much information about the data as possible.
- Maximize adaptation performance.

A rough information-theoretic reading of these requirements is sketched below.
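Writing $\beta$ for the domain index, $z$ for the encoding of the data $x$, and $y$ for the label, one possible paraphrase (ours, not necessarily the paper's exact formal definition) is:

$$
\beta \perp z, \qquad \max \; I(\beta; x), \qquad \max \; \text{adaptation performance (i.e., } z \text{ stays predictive of } y\text{)}.
$$

See the paper for the precise definition and the accompanying analysis.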
We propose a hierarchical Bayesian deep learning model for domain index inference, shown below. Left: probabilistic graphical model for VDI's generative model. Right: probabilistic graphical model for VDI's inference model. See our paper for a detailed explanation.
Our theoretical analysis shows that maximizing our model's evidence lower bound (ELBO) while adversarially training an additional discriminator is equivalent to inferring the optimal domain indices according to our definition. This gives rise to our final network structure, shown below.
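For intuition, here is a minimal, schematic PyTorch sketch of this adversarial-ELBO idea (a toy VAE plus a domain discriminator). All module names, dimensions, and loss weights below are illustrative placeholders, not the repo's actual architecture or training code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder dimensions (not the paper's settings).
x_dim, z_dim, n_domains = 64, 8, 10

encoder = nn.Sequential(nn.Linear(x_dim, 32), nn.ReLU(), nn.Linear(32, 2 * z_dim))  # outputs [mu, logvar]
decoder = nn.Sequential(nn.Linear(z_dim, 32), nn.ReLU(), nn.Linear(32, x_dim))
discriminator = nn.Sequential(nn.Linear(z_dim, 32), nn.ReLU(), nn.Linear(32, n_domains))

opt_vae = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
opt_disc = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

def neg_elbo(x):
    """Negative ELBO of a toy Gaussian VAE (reconstruction + KL), plus the sampled encoding."""
    mu, logvar = encoder(x).chunk(2, dim=-1)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)          # reparameterization trick
    rec = F.mse_loss(decoder(z), x)                                   # reconstruction term
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())     # KL to a standard normal prior
    return rec + kl, z

def train_step(x, domain_label, lambda_adv=0.1):
    # 1) Discriminator tries to recover the domain label from the encoding.
    with torch.no_grad():
        _, z = neg_elbo(x)
    d_loss = F.cross_entropy(discriminator(z), domain_label)
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # 2) Encoder/decoder maximize the ELBO while fooling the discriminator,
    #    pushing the encoding toward domain invariance.
    loss, z = neg_elbo(x)
    adv = -F.cross_entropy(discriminator(z), domain_label)            # negated: confuse the discriminator
    total = loss + lambda_adv * adv
    opt_vae.zero_grad(); total.backward(); opt_vae.step()
    return total.item(), d_loss.item()

# Hypothetical usage with a random batch:
# x = torch.randn(32, x_dim); d = torch.randint(0, n_domains, (32,))
# train_step(x, d)
```

The point mirrored here is the alternation: the discriminator learns to predict the domain from the encoding, while the encoder/decoder maximize the ELBO and are penalized whenever the discriminator succeeds.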
```bash
conda create -n VDI python=3.8
conda activate VDI
conda install pytorch==1.11.0 torchvision==0.12.0 cudatoolkit=11.3 -c pytorch
pip install -r requirements.txt
```
Each dataset's directory contains detailed steps on how to train VDI and how to visualize the inferred domain indices.
Inferred domain indices (reduced to 1 dimension by PCA) plotted against the true domain indices for the Circle dataset. VDI's inferred indices have a correlation of 0.97 with the true indices.
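As an illustration of how such a correlation could be computed (this is not the repo's evaluation script; the arrays below are random stand-ins for the real inferred and true indices):

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical inputs: inferred_indices has shape (n_domains, d), true_indices has shape (n_domains,).
inferred_indices = np.random.rand(30, 8)     # stand-in for VDI's inferred per-domain indices
true_indices = np.arange(30).astype(float)   # stand-in for the ground-truth 1-D domain indices

# Reduce the inferred indices to 1 dimension with PCA, then compare with the ground truth.
reduced = PCA(n_components=1).fit_transform(inferred_indices).ravel()
corr = np.corrcoef(reduced, true_indices)[0, 1]
print(f"Pearson correlation with true indices: {abs(corr):.2f}")  # abs(): PCA sign is arbitrary
```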
Left: Ground-truth domain graph for DG-15. We use 'red' and 'blue' to roughly indicate positive and negative data points in a domain. Right: VDI's inferred domain graph for DG-15, with an AUC of 0.83.
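One plausible way to compute such an edge-recovery AUC is to score each pair of domains by how close their inferred indices are and check how well these scores rank true edges above non-edges (a sketch under that assumption, not necessarily the paper's exact protocol):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical inputs: 'adjacency' is the ground-truth 15x15 domain graph (0/1),
# 'indices' are VDI's inferred domain indices, one row per domain.
n = 15
upper = (np.random.rand(n, n) > 0.7).astype(int)
adjacency = np.triu(upper, 1) + np.triu(upper, 1).T          # symmetric, no self-loops
indices = np.random.rand(n, 4)

# Score each candidate edge by the negative distance between the two domains' indices.
dist = np.linalg.norm(indices[:, None, :] - indices[None, :, :], axis=-1)
iu = np.triu_indices(n, k=1)                                 # unique domain pairs
auc = roc_auc_score(adjacency[iu], -dist[iu])
print(f"Edge-recovery AUC: {auc:.2f}")
```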
Inferred domain indices for the 30 domains in CompCars. We color the inferred domain indices according to two types of ground-truth indices: viewpoints (first figure) and years of manufacture (YOMs, second figure). Observations are consistent with intuition: (1) domains with the same viewpoint or YOM have similar domain indices; (2) domains with "front-side" and "rear-side" viewpoints have similar domain indices; (3) domains with "front" and "rear" viewpoints have similar domain indices.
[1] Graph-Relational Domain Adaptation
Zihao Xu, Hao He, Guang-He Lee, Yuyang Wang, Hao Wang
Tenth International Conference on Learning Representations (ICLR), 2022
[Paper] [Code] [Talk] [Slides]
[2] Continuously Indexed Domain Adaptation
Hao Wang*, Hao He*, Dina Katabi
Thirty-Seventh International Conference on Machine Learning (ICML), 2020
[Paper] [Code] [Talk] [Blog] [Slides] [Website]
Domain-Indexing Variational Bayes: Interpretable Domain Index for Domain Adaptation
```bibtex
@inproceedings{VDI,
  title={Domain-Indexing Variational Bayes: Interpretable Domain Index for Domain Adaptation},
  author={Xu, Zihao and Hao, Guang-Yuan and He, Hao and Wang, Hao},
  booktitle={International Conference on Learning Representations},
  year={2023}
}
```