add pre-commit, other development setup stuff #113

Open · wants to merge 1 commit into base: main
13 changes: 13 additions & 0 deletions .github/workflows/pr.yaml
@@ -0,0 +1,13 @@
name: pr

on:
push:
pull_request:

jobs:
checks:
name: "pre-commit hooks"
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: pre-commit/action@v3.0.1
60 changes: 60 additions & 0 deletions .gitignore
@@ -0,0 +1,60 @@
*.7zip
*.a
*.bak
*.bz
*.bz2
*.conda
*.core
*.coverage
*.css
*.csv
*.dat
*.db
dist/
*.dll
*.doc
*.docx
*.docm
.DS_Store
*.dylib
*.egg-info/
*.env
*.exe
*.feather
*.html
htmlcov/
.idea/
.ipynb_checkpoints/
*.js
*.json
*.lzma
.mypy_cache/
*.npy
*.o
*.pdf
*.pem
*.ppt
*.pptx
*.pptm
*.pq
*.pub
*.pyc
__pycache__/
.pytest_cache/
*.rda
*.rds
*.Rdata
*.rsa
.ruff_cache/
*.snappy-*.tar.gz
*.so
*.sqlite
*.tar.gz
*.tgz
*.tmp
*.whl
*.xls
*.xlsx
*.xlsm
*.zip
*.zstd
19 changes: 19 additions & 0 deletions .pre-commit-config.yaml
@@ -0,0 +1,19 @@
# Copyright (c) 2024, NVIDIA CORPORATION.

repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v5.0.0
hooks:
- id: trailing-whitespace
- id: end-of-file-fixer
- repo: https://github.com/codespell-project/codespell
rev: v2.3.0
hooks:
- id: codespell
- repo: https://github.com/shellcheck-py/shellcheck-py
rev: v0.10.0.1
hooks:
- id: shellcheck

default_language_version:
python: python3
10 changes: 10 additions & 0 deletions CONTRIBUTING.md
@@ -0,0 +1,10 @@
# contributing

## Linting

This repo uses `pre-commit` to statically analyze code.
Run the following to run those checks locally.

```shell
pre-commit run --all-files
```
8 changes: 4 additions & 4 deletions colab/env-check.py
@@ -2,10 +2,10 @@
import subprocess
from pathlib import Path

try:
import pynvml
except:
output = subprocess.Popen(["pip install pynvml"], shell=True, stderr=subprocess.STDOUT,
stdout=subprocess.PIPE)
for line in io.TextIOWrapper(output.stdout, encoding="utf-8"):
if(line == ""):
@@ -20,7 +20,7 @@
Unfortunately you're in a Colab instance that doesn't have a GPU.

Please make sure you've configured Colab to request a GPU Instance Type.

Go to 'Runtime -> Change Runtime Type --> under the Hardware Accelerator, select GPU', then try again."""
)
gpu_name = pynvml.nvmlDeviceGetName(pynvml.nvmlDeviceGetHandleByIndex(0))
@@ -36,4 +36,4 @@
Unfortunately Colab didn't give you a RAPIDS compatible GPU (P4, P100, T4, or V100), but gave you a """+ gpu_name +""".

Please use 'Runtime -> Factory Reset Runtimes...', which will allocate you a different GPU instance, to try again."""
)
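The `subprocess.Popen` + `io.TextIOWrapper` streaming pattern above is repeated throughout these scripts. A minimal self-contained sketch of a helper that captures the pattern (not part of this PR; the function name is hypothetical) might look like:

```python
import io
import subprocess


def run_and_stream(cmd: str) -> None:
    """Run a shell command and echo its output line by line as it arrives.

    This mirrors the Popen/TextIOWrapper loop used in the Colab scripts.
    """
    proc = subprocess.Popen(
        cmd, shell=True, stderr=subprocess.STDOUT, stdout=subprocess.PIPE
    )
    assert proc.stdout is not None  # stdout=PIPE guarantees a stream
    for line in io.TextIOWrapper(proc.stdout, encoding="utf-8"):
        if line == "":
            break
        print(line.rstrip())
    proc.wait()


run_and_stream("echo hello")  # prints "hello"
```

Factoring the loop out this way would remove the near-identical copies of it that appear in `env-check.py`, `install_rapids.py`, and `pip-install.py`.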
18 changes: 9 additions & 9 deletions colab/install_rapids.py
@@ -3,10 +3,10 @@
import subprocess
from pathlib import Path

try:
import pynvml
except:
output = subprocess.Popen(["pip install pynvml"], shell=True, stderr=subprocess.STDOUT,
stdout=subprocess.PIPE)
for line in io.TextIOWrapper(output.stdout, encoding="utf-8"):
if(line == ""):
@@ -21,27 +21,27 @@
Unfortunately you're in a Colab instance that doesn't have a GPU.

Please make sure you've configured Colab to request a GPU Instance Type.

Go to 'Runtime -> Change Runtime Type --> under the Hardware Accelerator, select GPU', then try again."""
)
gpu_name = pynvml.nvmlDeviceGetName(pynvml.nvmlDeviceGetHandleByIndex(0))

# CFFI fix with pip
output = subprocess.Popen(["pip uninstall --yes cffi"], shell=True, stderr=subprocess.STDOUT,
stdout=subprocess.PIPE)
for line in io.TextIOWrapper(output.stdout, encoding="utf-8"):
if(line == ""):
break
else:
print(line.rstrip())
output = subprocess.Popen(["pip uninstall --yes cryptography"], shell=True, stderr=subprocess.STDOUT,
stdout=subprocess.PIPE)
for line in io.TextIOWrapper(output.stdout, encoding="utf-8"):
if(line == ""):
break
else:
print(line.rstrip())
output = subprocess.Popen(["pip install cffi==1.15.0"], shell=True, stderr=subprocess.STDOUT,
stdout=subprocess.PIPE)
for line in io.TextIOWrapper(output.stdout, encoding="utf-8"):
if(line == ""):
@@ -72,15 +72,15 @@
pkg = "rapids"
print("Starting the RAPIDS install on Colab. This will take about 15 minutes.")

output = subprocess.Popen(["conda install -y --prefix /usr/local -c conda-forge mamba"], shell=True, stderr=subprocess.STDOUT,
stdout=subprocess.PIPE)
for line in io.TextIOWrapper(output.stdout, encoding="utf-8"):
if(line == ""):
break
else:
print(line.rstrip())

output = subprocess.Popen(["mamba install -y --prefix /usr/local -c "+release[0]+" -c conda-forge -c nvidia python=3.10 cuda-version=12.0 "+pkg+"="+release[1]+" llvmlite gcsfs openssl dask-sql"], shell=True, stderr=subprocess.STDOUT,
stdout=subprocess.PIPE)
for line in io.TextIOWrapper(output.stdout, encoding="utf-8"):
if(line == ""):
24 changes: 12 additions & 12 deletions colab/pip-install.py
@@ -3,10 +3,10 @@
from pathlib import Path

# Install RAPIDS -- we're doing this in one file, for now, due to ease of use
try:
import pynvml
except:
output = subprocess.Popen(["pip install pynvml"], shell=True, stderr=subprocess.STDOUT,
stdout=subprocess.PIPE)
for line in io.TextIOWrapper(output.stdout, encoding="utf-8"):
if(line == ""):
@@ -21,7 +21,7 @@
Unfortunately you're in a Colab instance that doesn't have a GPU.

Please make sure you've configured Colab to request a GPU Instance Type.

Go to 'Runtime -> Change Runtime Type --> under the Hardware Accelerator, select GPU', then try again."""
)
gpu_name = pynvml.nvmlDeviceGetName(pynvml.nvmlDeviceGetHandleByIndex(0))
@@ -33,27 +33,27 @@
if(sys.argv[1]=="legacy"):
rapids_version = "24.6.*"
print("Installing the rest of the RAPIDS " + rapids_version + " libraries")
output = subprocess.Popen([f"pip install cudf-cu12=={rapids_version} cuml-cu12=={rapids_version} cugraph-cu12=={rapids_version} cuspatial-cu12=={rapids_version} cuproj-cu12=={rapids_version} cuxfilter-cu12=={rapids_version} cucim-cu12=={rapids_version} pylibraft-cu12=={rapids_version} raft-dask-cu12=={rapids_version} nx-cugraph-cu12=={rapids_version} aiohttp --extra-index-url=https://pypi.nvidia.com"], shell=True, stderr=subprocess.STDOUT,
stdout=subprocess.PIPE)
elif(sys.argv[1] == "latest"):
print(f"Installing RAPIDS Stable {LATEST_RAPIDS_VERSION}.*")
output = subprocess.Popen([f"pip install cudf-cu12=={LATEST_RAPIDS_VERSION}.* cuml-cu12=={LATEST_RAPIDS_VERSION}.* cugraph-cu12=={LATEST_RAPIDS_VERSION}.* cuspatial-cu12=={rapids_version} cuproj-cu12=={rapids_version} cuxfilter-cu12=={rapids_version} cucim-cu12=={rapids_version} pylibraft-cu12=={rapids_version} raft-dask-cu12=={LATEST_RAPIDS_VERSION}.* nx-cugraph-cu12=={LATEST_RAPIDS_VERSION}.* aiohttp --extra-index-url=https://pypi.nvidia.com"], shell=True, stderr=subprocess.STDOUT,
stdout=subprocess.PIPE)
elif(sys.argv[1] == "nightlies"):
print(f"Installing RAPIDS {NIGHTLY_RAPIDS_VERSION}.*")
output = subprocess.Popen([f'pip install "cudf-cu12=={NIGHTLY_RAPIDS_VERSION}.*,>=0.0.0a0" "cuml-cu12=={NIGHTLY_RAPIDS_VERSION}.*,>=0.0.0a0" "cugraph-cu12=={NIGHTLY_RAPIDS_VERSION}.*,>=0.0.0a0" "cuspatial-cu12=={NIGHTLY_RAPIDS_VERSION}.*,>=0.0.0a0" "cuproj-cu12=={NIGHTLY_RAPIDS_VERSION}.*,>=0.0.0a0" "cuxfilter-cu12=={NIGHTLY_RAPIDS_VERSION}.*,>=0.0.0a0" "cucim-cu12=={NIGHTLY_RAPIDS_VERSION}.*,>=0.0.0a0" "pylibraft-cu12=={NIGHTLY_RAPIDS_VERSION}.*,>=0.0.0a" "raft-dask-cu12=={NIGHTLY_RAPIDS_VERSION}.*,>=0.0.0a0" "nx-cugraph-cu12=={NIGHTLY_RAPIDS_VERSION}.*,>=0.0.0a0" aiohttp --extra-index-url=https://pypi.anaconda.org/rapidsai-wheels-nightly/simple'], shell=True, stderr=subprocess.STDOUT,
stdout=subprocess.PIPE)
else:
rapids_version = "24.8.*"
print("Installing RAPIDS Stable " + rapids_version)
output = subprocess.Popen([f"pip install cudf-cu12=={rapids_version} cuml-cu12=={rapids_version} cugraph-cu12=={rapids_version} cuspatial-cu12=={rapids_version} cuproj-cu12=={rapids_version} cuxfilter-cu12=={rapids_version} cucim-cu12=={rapids_version} pylibraft-cu12=={rapids_version} raft-dask-cu12=={rapids_version} nx-cugraph-cu12=={rapids_version} aiohttp --extra-index-url=https://pypi.nvidia.com"], shell=True, stderr=subprocess.STDOUT,
stdout=subprocess.PIPE)
else:
rapids_version = "24.4.*"
print("Installing RAPIDS remaining " + rapids_version + " libraries")
output = subprocess.Popen([f"pip install cudf-cu12=={rapids_version} cuml-cu12=={rapids_version} cugraph-cu12=={rapids_version} cuspatial-cu12=={rapids_version} cuproj-cu12=={rapids_version} cuxfilter-cu12=={rapids_version} cucim-cu12=={rapids_version} pylibraft-cu12=={rapids_version} raft-dask-cu12=={rapids_version} nx-cugraph-cu12=={rapids_version} aiohttp --extra-index-url=https://pypi.nvidia.com"], shell=True, stderr=subprocess.STDOUT,
stdout=subprocess.PIPE)

for line in io.TextIOWrapper(output.stdout, encoding="utf-8"):
if(line == ""):
break
@@ -62,13 +62,13 @@
print("""
***********************************************************************
The pip install of RAPIDS is complete.

Please do not run any further installation from the conda based installation methods, as they may cause issues!

Please ensure that you're pulling from the git repo to remain updated with the latest working install scripts.

Troubleshooting:
- If there is an installation failure, please check back on RAPIDSAI owned templates/notebooks to see how to update your personal files.
- If an installation failure persists when using the latest script, please make an issue on https://github.com/rapidsai-community/rapidsai-csp-utils
***********************************************************************
"""
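The argv-driven branching in `pip-install.py` above could be isolated into a pure function, which makes the version selection easy to test in isolation. A sketch (not part of this PR; the `LATEST`/`NIGHTLY` values below are assumptions standing in for constants defined elsewhere in the script):

```python
# Hypothetical sketch of the version-selection branches in pip-install.py.
LATEST_RAPIDS_VERSION = "24.8"    # assumed; defined elsewhere in the script
NIGHTLY_RAPIDS_VERSION = "24.10"  # assumed; defined elsewhere in the script


def pick_rapids_version(arg: str) -> str:
    """Map the command-line argument to a RAPIDS version specifier."""
    if arg == "legacy":  # compare the string itself, not len(arg)
        return "24.6.*"
    if arg == "latest":
        return f"{LATEST_RAPIDS_VERSION}.*"
    if arg == "nightlies":
        return f"{NIGHTLY_RAPIDS_VERSION}.*"
    return "24.8.*"  # default: current stable
```

Comparing the string directly also sidesteps the pitfall of `len(sys.argv[1]) == "legacy"`, which compares an int to a string and can never be true.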
3 changes: 2 additions & 1 deletion colab/rapids-colab.sh
@@ -1,3 +1,5 @@
#!/bin/sh

echo "PLEASE READ FOR 21.06"
echo "********************************************************************************************************"
echo "Another release, another script change. We had to revise the script, which now:"
@@ -43,4 +45,3 @@ echo ""
echo "********************************************************************************************************"
echo ""
echo "Enjoy using RAPIDS! If you have any issues with or suggestions for RAPIDSAI on Colab, please create an issue on https://github.com/rapidsai/rapidsai-csp-utils/issues/new."