
Graph-less Neural Networks (GLNN)

Code for Graph-less Neural Networks: Teaching Old MLPs New Tricks via Distillation by Shichang Zhang, Yozen Liu, Yizhou Sun, and Neil Shah.

Overview

Figure: Distillation framework.

Figure: Accuracy vs. inference time on the ogbn-products dataset.



Getting Started

Setup Environment

We use conda for environment setup. You can use

bash ./prepare_env.sh

which will create a conda environment named glnn and install the relevant requirements (from requirements.txt). For simplicity, this guide uses CPU-only torch and dgl versions, as specified in requirements.txt. To run experiments with CUDA, please install torch and dgl with proper CUDA support, remove them from requirements.txt, and set the --device argument in the scripts accordingly. See https://pytorch.org/ and https://www.dgl.ai/pages/start.html for more installation details.

Be sure to activate the environment with

conda activate glnn

before running experiments as described below.
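
To confirm the environment is usable, a quick check like the following (a throwaway sketch, not part of the repo) verifies that torch and dgl import cleanly and shows whether a CUDA build is active, which determines what to pass as --device:

```python
# Quick sanity check for the glnn conda environment.
import torch
import dgl

print("torch:", torch.__version__)
print("dgl:", dgl.__version__)
# False under the CPU-only install from requirements.txt; True means
# you can pass a CUDA device to --device in the training scripts.
print("CUDA available:", torch.cuda.is_available())
```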

Preparing datasets

To run experiments on the datasets used in the paper, please download them from the following links and put them under data/ (see below for instructions on organizing the datasets).

  • CPF data (cora, citeseer, pubmed, a-computer, and a-photo): Download the '.npz' files from here. Rename amazon_electronics_computers.npz and amazon_electronics_photo.npz to a-computer.npz and a-photo.npz respectively.

  • OGB data (ogbn-arxiv and ogbn-products): Datasets will be downloaded automatically when running the load_data function in dataloader.py (see the loading sketch after this list). More details here.

  • BGNN data (house_class and vk_class): Follow the instructions here and download dataset pre-processed in DGL format from here.

  • NonHom data (penn94 and pokec): Follow the instructions here to download the penn94 dataset and its splits. The pokec dataset will be automatically downloaded when running the load_data function in dataloader.py.

  • Your favourite datasets: download and add to the load_data function in dataloader.py.
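
For example, the OGB datasets mentioned above can be fetched with the official OGB loader; the sketch below uses the public DglNodePropPredDataset API directly (the repo's load_data wraps this, though its exact signature may differ):

```python
# Sketch: download/load ogbn-arxiv into data/ with the official OGB API.
from ogb.nodeproppred import DglNodePropPredDataset

dataset = DglNodePropPredDataset(name="ogbn-arxiv", root="data")
graph, labels = dataset[0]            # a DGLGraph and an (N, 1) label tensor
split_idx = dataset.get_idx_split()   # "train" / "valid" / "test" node indices

print(graph)
print("train nodes:", len(split_idx["train"]))
```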

Usage

To quickly train a teacher model, run train_teacher.py and specify the experiment setting, i.e. transductive (tran) or inductive (ind), the teacher model, e.g. GCN, and the dataset, e.g. cora, as in the example below.

python train_teacher.py --exp_setting tran --teacher GCN --dataset cora

To quickly train a student model with a pretrained teacher, run train_student.py and specify the experiment setting, teacher model, student model, and dataset, as in the example below. Make sure you first train the teacher with train_teacher.py and that its result is stored at the path specified by --out_t_path.

python train_student.py --exp_setting ind --teacher SAGE --student MLP --dataset citeseer --out_t_path outputs

For more examples, and to reproduce the results in the paper, please refer to the scripts in experiments/, e.g.

bash experiments/sage_cpf.sh

To extend GLNN to your own model, you may do one of the following.

  • Add your favourite model architectures to the Model class in model.py, then follow the examples above.
  • Train your teacher model and store its output (log-probabilities), then train the student with train_student.py and the correct --out_t_path (see the sketch of the distillation objective below).
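
Conceptually, the student in the second option is trained on a mix of the true labels and the teacher's soft targets. A minimal PyTorch sketch of such an objective (an illustration of the idea, not the repo's exact implementation; the function name and the lamb weight are hypothetical):

```python
# Sketch of a GLNN-style distillation objective, assuming the stored
# teacher outputs are log-probabilities (as described above).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, labels, teacher_log_probs, lamb=0.5):
    # Supervised cross-entropy on labeled nodes.
    ce = F.cross_entropy(student_logits, labels)
    # KL divergence between the student distribution and the fixed teacher
    # soft targets; log_target=True because the teacher gives log-probs.
    kd = F.kl_div(
        F.log_softmax(student_logits, dim=1),
        teacher_log_probs,
        reduction="batchmean",
        log_target=True,
    )
    # Trade off the two terms: lamb=1 recovers a plain MLP, lamb=0 is pure KD.
    return lamb * ce + (1.0 - lamb) * kd
```

In the transductive setting the distillation term can cover all nodes while the cross-entropy term uses only the labeled training nodes, which is what lets the MLP student benefit from the graph without ever seeing edges at inference time.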

Results

GraphSAGE vs. MLP vs. GLNN under the production setting described in the paper (transductive and inductive combined). Delta_MLP (Delta_GNN) is the difference between GLNN and the MLP (GNN); the percentage in parentheses is relative to the MLP (GNN) accuracy, e.g. for Cora, 78.28 - 58.98 = 19.30, which is 32.72% of 58.98. Results show classification accuracy (higher is better); Delta_GNN > 0 means GLNN outperforms the GNN. We observe that GLNNs always improve over MLPs by large margins and achieve results competitive with the GNN on 6/7 datasets. Please see Table 3 in the paper for more details.

| Datasets   | GNN (SAGE) | MLP   | GLNN  | Delta_MLP      | Delta_GNN      |
|------------|------------|-------|-------|----------------|----------------|
| Cora       | 79.29      | 58.98 | 78.28 | 19.30 (32.72%) | -1.01 (-1.28%) |
| Citeseer   | 68.38      | 59.81 | 69.27 | 9.46 (15.82%)  | 0.89 (1.30%)   |
| Pubmed     | 74.88      | 66.80 | 74.71 | 7.91 (11.83%)  | -0.17 (-0.22%) |
| A-computer | 82.14      | 67.38 | 82.29 | 14.90 (22.12%) | 0.15 (0.19%)   |
| A-photo    | 91.08      | 79.25 | 92.38 | 13.13 (16.57%) | 1.30 (1.42%)   |
| Arxiv      | 70.73      | 55.30 | 65.09 | 9.79 (17.70%)  | -5.64 (-7.97%) |
| Products   | 76.60      | 63.72 | 75.77 | 12.05 (18.91%) | -0.83 (-1.09%) |

Citation

If you find our work useful, please cite the following:

@inproceedings{zhang2021graphless,
      title={Graph-less Neural Networks: Teaching Old MLPs New Tricks via Distillation}, 
      author={Shichang Zhang and Yozen Liu and Yizhou Sun and Neil Shah},
      booktitle={International Conference on Learning Representations},
      year={2022},
      url={https://arxiv.org/abs/2110.08727}
}

Contact Us

Please open an issue or contact shichang@cs.ucla.edu if you have any questions.
