From c96c99ab67ef90f768289bcf1a39e7eff9c29e83 Mon Sep 17 00:00:00 2001
From: zhaoli2023 <43864970+Zhaoli2042@users.noreply.github.com>
Date: Mon, 8 Jul 2024 14:55:16 -0400
Subject: [PATCH 1/3] Update readme.md

---
 Cluster-Setup/NERSC/readme.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/Cluster-Setup/NERSC/readme.md b/Cluster-Setup/NERSC/readme.md
index 507a6fc..d456b3e 100644
--- a/Cluster-Setup/NERSC/readme.md
+++ b/Cluster-Setup/NERSC/readme.md
@@ -4,8 +4,8 @@
 * If you don't need the ML potential, download the code, `cd src_clean/`, and start from [step 6](#Step-6).
 * Installation on other clusters are similar to on Perlmutter of NERSC. Follow the instructions here, and if you encounter issues, please consult with your institution's IT.
   * [Northwestern Quest IT](https://www.it.northwestern.edu/departments/it-services-support/research/computing/quest/)
-* Check the [nvhpc](https://developer.nvidia.com/hpc-sdk) version. Currently the code works for **22.5 and 22.7**
-  * on NERSC, you can do ```module load nvhpc/22.7``` to use the 22.7 version of nvhpc.
+* Check the [nvhpc](https://developer.nvidia.com/hpc-sdk) version.
+  * on NERSC, you can do ```module load PrgEnv-nvhpc``` to use the latest version of nvhpc/cuda.
   * Also check out the [issue on this topic](https://github.com/snurr-group/gRASPA/issues/9)
 # Step 1
 We download TensorFlow2 C++ API to a local directory: (assuming in the HOME directory)

From 649e10ebfc5888678763201f1d3596c02a59f54b Mon Sep 17 00:00:00 2001
From: zhaoli2023 <43864970+Zhaoli2042@users.noreply.github.com>
Date: Mon, 8 Jul 2024 14:57:50 -0400
Subject: [PATCH 2/3] Update NVC_COMPILE_NERSC

---
 Cluster-Setup/NERSC/NVC_COMPILE_NERSC | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/Cluster-Setup/NERSC/NVC_COMPILE_NERSC b/Cluster-Setup/NERSC/NVC_COMPILE_NERSC
index f240a2c..34f72c0 100644
--- a/Cluster-Setup/NERSC/NVC_COMPILE_NERSC
+++ b/Cluster-Setup/NERSC/NVC_COMPILE_NERSC
@@ -1,15 +1,16 @@
 #!/bin/bash
 
+# The following line uses latest nvhpc/cuda version
 module load PrgEnv-nvhpc
 
 # The following lines are added since March 2024
 # NERSC changed their default nvhpc version
-
-module load nvhpc/22.7
-module unload nvhpc/22.7
+# Commented since latest gRASPA code now works
+#module load nvhpc/22.7
+#module unload nvhpc/22.7
 #module load cudatoolkit/11.7
-module load cudatoolkit/11.5
-module load nvhpc/22.7
+#module load cudatoolkit/11.5
+#module load nvhpc/22.7
 
 
 rm *.o nvc_main.x

From 6c644ce527d0b2d6ed80b28088263bec147ba060 Mon Sep 17 00:00:00 2001
From: zhaoli2023 <43864970+Zhaoli2042@users.noreply.github.com>
Date: Mon, 8 Jul 2024 14:58:08 -0400
Subject: [PATCH 3/3] Update NVC_COMPILE_NERSC_VANILLA

---
 Cluster-Setup/NERSC/NVC_COMPILE_NERSC_VANILLA | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/Cluster-Setup/NERSC/NVC_COMPILE_NERSC_VANILLA b/Cluster-Setup/NERSC/NVC_COMPILE_NERSC_VANILLA
index 228d005..5c95849 100644
--- a/Cluster-Setup/NERSC/NVC_COMPILE_NERSC_VANILLA
+++ b/Cluster-Setup/NERSC/NVC_COMPILE_NERSC_VANILLA
@@ -1,15 +1,16 @@
 #!/bin/bash
 
+# The following line uses latest nvhpc/cuda version
 module load PrgEnv-nvhpc
 
 # The following lines are added since March 2024
 # NERSC changed their default nvhpc version
-
-module load nvhpc/22.7
-module unload nvhpc/22.7
+# Commented since latest gRASPA code now works
+#module load nvhpc/22.7
+#module unload nvhpc/22.7
 #module load cudatoolkit/11.7
-module load cudatoolkit/11.5
-module load nvhpc/22.7
+#module load cudatoolkit/11.5
+#module load nvhpc/22.7
 
 
 rm *.o nvc_main.x
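
For reference, a minimal sketch of how this patch series might be applied and the new module setup checked on Perlmutter; the checkout location and mbox filename below are illustrative assumptions, not part of the patches themselves.

# Apply the three patches to a local gRASPA checkout (path and filename are assumptions).
cd gRASPA
git am nersc-module-update.mbox

# The updated compile scripts rely on PrgEnv-nvhpc to provide nvhpc/cuda;
# loading it manually first confirms the module is available on your cluster.
module load PrgEnv-nvhpc
module list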