tailintalent/MP_Neural_PDE_Solvers

Baseline commands:

Training:

Below are the commands for training the baselines on the 1D experiments for the lamp repo.

First, cd into MP_Neural_PDE_Solvers/. Then run one of the following commands (a short loop for running all of them follows the list):

MP-PDE with initial 25 nodes:

python experiments/train.py --device=cuda:0 --experiment=E2 --model=GNN --base_resolution=250,25 --time_window=25 --uniform_sample=4 --id=0

MP-PDE with initial 50 nodes:

python experiments/train.py --device=cuda:0 --experiment=E2 --model=GNN --base_resolution=250,50 --time_window=25 --uniform_sample=2 --id=0

MP-PDE with initial 100 nodes:

python experiments/train.py --device=cuda:0 --experiment=E2 --model=GNN --base_resolution=250,100 --time_window=25 --uniform_sample=-1 --id=0

CNN with initial 25 nodes:

python experiments/train.py --device=cuda:0 --experiment=E2 --model=BaseCNN --base_resolution=250,100 --time_window=25 --uniform_sample=4 --id=0

CNN with initial 50 nodes:

python experiments/train.py --device=cuda:0 --experiment=E2 --model=BaseCNN --base_resolution=250,100 --time_window=25 --uniform_sample=2 --id=0

CNN with initial 100 nodes:

python experiments/train.py --device=cuda:0 --experiment=E2 --model=BaseCNN --base_resolution=250,100 --time_window=25 --uniform_sample=-1 --id=0

FNO with initial 25 nodes:

python experiments/train.py --device=cuda:0 --experiment=E2 --model=FNO --base_resolution=250,100 --time_window=25 --uniform_sample=4 --id=0

FNO with initial 50 nodes:

python experiments/train.py --device=cuda:0 --experiment=E2 --model=FNO --base_resolution=250,100 --time_window=25 --uniform_sample=2 --id=0

FNO with initial 100 nodes:

python experiments/train.py --device=cuda:0 --experiment=E2 --model=FNO --base_resolution=250,100 --time_window=25 --uniform_sample=-1 --id=0
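To launch all nine baseline runs back to back, here is a minimal shell sketch. The flag values mirror the individual commands above (GNN varies --base_resolution with the node count, while BaseCNN and FNO keep 250,100 and only change --uniform_sample); adjust --device and --id as needed:

# Run all nine 1D baselines sequentially; values copied from the commands above.
for spec in "GNN 250,25 4" "GNN 250,50 2" "GNN 250,100 -1" \
            "BaseCNN 250,100 4" "BaseCNN 250,100 2" "BaseCNN 250,100 -1" \
            "FNO 250,100 4" "FNO 250,100 2" "FNO 250,100 -1"; do
  set -- $spec   # splits spec into: model, base_resolution, uniform_sample
  python experiments/train.py --device=cuda:0 --experiment=E2 --model=$1 \
    --base_resolution=$2 --time_window=25 --uniform_sample=$3 --id=0
done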

Analysis:

Use analysis.ipynb to analyze the baseline results.
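For example, the notebook can be opened from the repository root (assuming Jupyter is installed in the environment):

jupyter notebook analysis.ipynb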

The following is the original README:

Message Passing Neural PDE Solvers

Johannes Brandstetter*, Daniel Worrall*, Max Welling

Paper: https://arxiv.org/abs/2202.03376

ICLR 2022 Spotlight Paper

If you find our work and/or our code useful, please cite us via:

@article{brandstetter2022message,
  title={Message Passing Neural PDE Solvers},
  author={Brandstetter, Johannes and Worrall, Daniel and Welling, Max},
  journal={arXiv preprint arXiv:2202.03376},
  year={2022}
}

Set up conda environment

source environment.sh

Produce datasets for tasks E1, E2, E3, WE1, WE2, WE3

python generate/generate_data.py --experiment={E1, E2, E3, WE1, WE2, WE3} --train_samples=2048 --valid_samples=128 --test_samples=128 --log=True --device=cuda:0

Note: to generate the dataset, you need to go to line 13 of /temporal/solvers.py and change that line to torch.set_default_dtype(torch.float32).
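Once the float32 change above has been made, a minimal shell sketch for generating all six datasets in sequence, using the same flags as the command above:

# Generate train/valid/test splits for every experiment.
for exp in E1 E2 E3 WE1 WE2 WE3; do
  python generate/generate_data.py --experiment=$exp --train_samples=2048 \
    --valid_samples=128 --test_samples=128 --log=True --device=cuda:0
done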

Train MP-PDE solvers for tasks E1, E2, E3

python experiments/train.py --device=cuda:0 --experiment={E1, E2, E3} --model={GNN, ResCNN, Res1DCNN} --base_resolution=250,{100,50,40} --time_window=25 --log=True
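The braces denote the available choices, not literal arguments; for instance, one concrete instantiation (GNN on E1 at the 250,100 resolution) would be:

python experiments/train.py --device=cuda:0 --experiment=E1 --model=GNN --base_resolution=250,100 --time_window=25 --log=True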

Train MP-PDE solvers for tasks WE1, WE2

python experiments/train.py --device=cuda:0 --experiment={WE1, WE2} --base_resolution=250,{100,50,40} --neighbors=6 --time_window=25 --log=True
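Similarly, a concrete example for WE1 at the 250,50 resolution would be:

python experiments/train.py --device=cuda:0 --experiment=WE1 --base_resolution=250,50 --neighbors=6 --time_window=25 --log=True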

Train MP-PDE solvers for task WE3

python experiments/train.py --device=cuda:0 --experiment=WE3 --base_resolution=250,100 --neighbors=20 --time_window=25 --log=True

python experiments/train.py --device=cuda:0 --experiment=WE3 --base_resolution=250,50 --neighbors=12 --time_window=25 --log=True

python experiments/train.py --device=cuda:0 --experiment=WE3 --base_resolution=250,40 --neighbors=10 --time_window=25 --log=True

python experiments/train.py --device=cuda:0 --experiment=WE3 --base_resolution=250,40 --neighbors=6 --time_window=25 --log=True
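To sweep all four WE3 configurations, here is a minimal shell sketch using the resolution/neighbor pairs listed above:

# Run the four WE3 settings sequentially.
for spec in "250,100 20" "250,50 12" "250,40 10" "250,40 6"; do
  set -- $spec   # splits spec into: base_resolution, neighbors
  python experiments/train.py --device=cuda:0 --experiment=WE3 \
    --base_resolution=$1 --neighbors=$2 --time_window=25 --log=True
done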
