Remove some duplicated files and run RAPIDS CI locally #26

Merged 1 commit on Jun 12, 2023
4 changes: 2 additions & 2 deletions LICENSE
@@ -186,7 +186,7 @@
same "printed page" as the copyright notice for easier
identification within third-party archives.

-Copyright 2022 NVIDIA CORPORATION
+Copyright 2023 NVIDIA CORPORATION

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -198,4 +198,4 @@
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
-limitations under the License.
+limitations under the License.
16 changes: 16 additions & 0 deletions README.md
@@ -6,3 +6,19 @@ WholeMemory is a Tensor like storage and provide multi-GPU support.
It is optimized for NVLink systems like DGX A100 servers.
By working together with cuGraph, cuGraph-Ops, cuGraph-DGL, cuGraph-PyG,
and upstream DGL and PyG, it will be easy to build GNN applications.

+## Table of content
+- Installation
+  - [Getting WholeGraph Packages](./docs/wholegraph/source/installation/getting_wholegraph.md)
+  - [Building from Source](./docs/wholegraph/source/installation/source_build.md)
+- General
+  - [WholeGraph Introduction](./docs/wholegraph/source/basics/wholegraph_intro.md)
+- Packages
+  - libwholegraph (C/CUDA)
+  - pylibwholegraph
+- API Docs
+  - Python
+  - C
+- Reference
+  - [RAPIDS](https://rapids.ai)
+  - [cuGraph](https://github.com/rapidsai/cugraph)
5 changes: 3 additions & 2 deletions build.sh
@@ -29,6 +29,7 @@ VALIDARGS="
-v
-g
-n
+--allgpuarch
--native
--cmake-args
--compile-cmd
@@ -81,7 +82,7 @@ CMAKE_VERBOSE_OPTION=""
BUILD_TYPE=Release
BUILD_ALL_GPU_ARCH=0
INSTALL_TARGET="--target install"
-PYTHON="python"
+PYTHON=${PYTHON:-python}

# Set defaults for vars that may not have been defined externally
# FIXME: if INSTALL_PREFIX is not set, check PREFIX, then check
@@ -299,4 +300,4 @@ if hasArg docs; then
cmake --build "${LIBWHOLEGRAPH_BUILD_DIR}" -j${PARALLEL_LEVEL} --target docs_wholegraph ${VERBOSE_FLAG}
cd ${REPODIR}/docs/wholegraph
make html
-fi
+fi
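
A note on the PYTHON default added in build.sh above: ${PYTHON:-python} is shell default-value expansion, so an interpreter exported by the caller (for example by a conda build environment) takes precedence and plain python is only the fallback. A minimal sketch of the pattern; the interpreter name below is purely illustrative:

    # Use the caller's PYTHON if it is set and non-empty, otherwise fall back to "python"
    PYTHON=${PYTHON:-python}
    ${PYTHON} --version

    # Callers can pin a specific interpreter without editing the script
    PYTHON=python3.10 ./build.sh pylibwholegraph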
6 changes: 3 additions & 3 deletions ci/test_clang_tidy.sh
@@ -28,12 +28,12 @@ env PATH=${PATH}:/usr/local/cuda/bin
# library in the second run.
CMAKE_EXTRA_ARGS="--cmake-args=\"-DBUILD_OPS_WITH_TORCH_C10_API=OFF\""
rapids-logger "Generate compilation databases for C++ library and tests"
-./build.sh clean libwholegraph tests pylibwholegraph --compile-cmd ${CMAKE_EXTRA_ARGS}
+./build.sh clean libwholegraph tests pylibwholegraph --allgpuarch --compile-cmd ${CMAKE_EXTRA_ARGS}

# -git_modified_only -v
rapids-logger "Run clang-tidy"
python scripts/checks/run-clang-tidy.py \
-ignore wholememory_binding \
-build/compile_commands.json \
-pylibwholegraph/_skbuild/build/compile_commands.json \
+cpp/build/compile_commands.json \
+python/pylibwholegraph/_skbuild/build/compile_commands.json \
-v
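
The clang-tidy step now reads both compilation databases from their new locations: cpp/build for the C++ library and python/pylibwholegraph/_skbuild/build for the Python extension. For reference, a database like these can also be produced straight from CMake's standard export option (the CI above uses build.sh --compile-cmd for the equivalent step); a rough local sketch with illustrative paths:

    # Configure the C++ tree and export compile_commands.json into the build directory
    cmake -S cpp -B cpp/build -DCMAKE_EXPORT_COMPILE_COMMANDS=ON
    # Point the repo's clang-tidy driver at the resulting database
    python scripts/checks/run-clang-tidy.py \
        -ignore wholememory_binding \
        cpp/build/compile_commands.json \
        -v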
2 changes: 1 addition & 1 deletion ci/test_python.sh
@@ -50,7 +50,7 @@ PYTEST_PATH=${PYLIBWHOLEGRAPH_INSTALL_PATH}/tests
pytest \
--cache-clear \
--forked \
-${PYTEST_PATH}/pylibwholegraph/ ${PYTEST_PATH}/wholegraph_torch/ops/test_wholegraph_gather_scatter.py
+${PYTEST_PATH}

echo "test_python is exiting with value: ${EXITCODE}"
exit ${EXITCODE}
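
With the paths collapsed to ${PYTEST_PATH}, pytest now discovers every test under the installed tests directory, so the gather/scatter test that was listed explicitly before is still exercised. For quicker local runs the standard pytest selection flags still apply; a small sketch, where the -k expression is only an example:

    # Full run, as in CI
    pytest --cache-clear --forked "${PYTEST_PATH}"

    # Narrowed run matching a keyword expression
    pytest --cache-clear --forked "${PYTEST_PATH}" -k gather_scatter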
2 changes: 1 addition & 1 deletion conda/recipes/libwholegraph/build.sh
@@ -1,4 +1,4 @@
#!/usr/bin/env bash
# Copyright (c) 2021-2023, NVIDIA CORPORATION.

-./build.sh -n libwholegraph tests -v
+./build.sh -n libwholegraph tests -v --allgpuarch
2 changes: 1 addition & 1 deletion conda/recipes/libwholegraph/install_libwholegraph.sh
@@ -1,4 +1,4 @@
#!/bin/bash
# Copyright (c) 2022-2023, NVIDIA CORPORATION.

-cmake --install build
+cmake --install cpp/build
@@ -1,4 +1,4 @@
#!/bin/bash
# Copyright (c) 2022-2023, NVIDIA CORPORATION.

-cmake --install build --component testing
+cmake --install cpp/build --component testing
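
Both install scripts above now target cpp/build, matching the relocated C++ build tree, and the second one restricts installation to the CMake install component named testing (presumably the test binaries built alongside the library). A hedged sketch of the two invocations with an explicit prefix; the prefix value is only illustrative:

    # Install the default component of the build tree (the library itself)
    cmake --install cpp/build --prefix /opt/wholegraph

    # Install only the artifacts registered under the "testing" component
    cmake --install cpp/build --component testing --prefix /opt/wholegraph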
2 changes: 1 addition & 1 deletion conda/recipes/pylibwholegraph/build.sh
@@ -3,4 +3,4 @@

CMAKE_EXTRA_ARGS="--cmake-args=\"-DBUILD_OPS_WITH_TORCH_C10_API=OFF\""

-./build.sh pylibwholegraph -v ${CMAKE_EXTRA_ARGS}
+./build.sh pylibwholegraph --allgpuarch -v ${CMAKE_EXTRA_ARGS}
58 changes: 0 additions & 58 deletions cpp/cmake/thirdparty/nanobind.cmake

This file was deleted.

16 changes: 6 additions & 10 deletions cpp/include/wholememory/wholegraph_op.h
@@ -87,26 +87,22 @@ wholememory_error_code_t wholegraph_csr_weighted_sample_without_replacement(
* raft_pcg_generator_random_int cpu op
* @param random_seed : random seed
* @param subsequence : subsequence for generating random value
-* @param output : Wholememory Tensor of output
+* @param output : Wholememory Tensor of output
* @return : wholememory_error_code_t
*/
-wholememory_error_code_t generate_random_positive_int_cpu(
-  int64_t random_seed,
-  int64_t subsequence,
-  wholememory_tensor_t output
-);
+wholememory_error_code_t generate_random_positive_int_cpu(int64_t random_seed,
+                                                          int64_t subsequence,
+                                                          wholememory_tensor_t output);

/**
* raft_pcg_generator_random_float cpu op
* @param random_seed : random seed
* @param subsequence : subsequence for generating random value
-* @param output : Wholememory Tensor of output
+* @param output : Wholememory Tensor of output
* @return : wholememory_error_code_t
*/
wholememory_error_code_t generate_exponential_distribution_negative_float_cpu(
-  int64_t random_seed,
-  int64_t subsequence,
-  wholememory_tensor_t output);
+  int64_t random_seed, int64_t subsequence, wholememory_tensor_t output);

#ifdef __cplusplus
}