Forward-merge branch-24.06 into branch-24.08
Contributes to rapidsai/build-planning#31
Contributes to rapidsai/dependency-file-generator#89

Proposes introducing `rapids-build-backend` as this project's build backend, to reduce the complexity of various CI/build scripts.

Authors:
  - James Lamb (https://github.com/jameslamb)

Approvers:
  - Bradley Dice (https://github.com/bdice)

URL: #181
Authors:
  - https://github.com/zhuofan1123

Approvers:
  - https://github.com/linhu-nv
  - Brad Rees (https://github.com/BradReesWork)

URL: #180
This PR refactors the embedding creation interface, decoupling it from the optimizer dependency. Users can now designate the embeddings for optimization during optimizer initialization.

cpp:
```
wholememory_create_embedding(&wm_embedding, ...);
wholememory_create_embedding_optimizer(&optimizer, ...);
wholememory_embedding_set_optimizer(wm_embedding, optimizer);
```

python:
```
wm_embedding = wgth.create_embedding(...)
wm_optimizer = wgth.create_wholememory_optimizer(wm_embedding, "adam", {})
```

Authors:
  - https://github.com/zhuofan1123

Approvers:
  - https://github.com/linhu-nv
  - Brad Rees (https://github.com/BradReesWork)

URL: #186
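For context, a minimal sketch of the decoupled Python flow; the `pylibwholegraph.torch` import alias and all `create_embedding` arguments are assumptions for illustration, since the PR elides them:

```
# Sketch only: the create_embedding() arguments below are illustrative
# placeholders, not the exact signature from this PR.
import torch
import pylibwholegraph.torch as wgth  # assumed import alias

# Create the embedding first, with no optimizer attached yet.
# `comm` is an existing WholeMemoryCommunicator provided by the application.
wm_embedding = wgth.create_embedding(
    comm,
    "distributed",      # memory type (placeholder value)
    "cuda",             # memory location (placeholder value)
    torch.float32,      # embedding dtype (example)
    [1_000_000, 128],   # table rows x embedding dim (example)
)

# Designate the embedding for optimization when the optimizer is created,
# instead of passing an optimizer into create_embedding().
wm_optimizer = wgth.create_wholememory_optimizer(wm_embedding, "adam", {})
```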
Support split_comm and get_local_mnnvl_comm.

split_comm:
```
def split_communicator(comm: WholeMemoryCommunicator, color: int, key: int = 0):
    """Split Communicator.

    Creates a set of new communicators from an existing one. Ranks that pass
    the same color value will be part of the same group; color must be a
    non-negative value. The value of key determines the rank order: a smaller
    key means a smaller rank in the new communicator. If keys are equal
    between ranks, the rank in the original communicator is used to order
    ranks.
    """
```

Authors:
  - Chuang Zhu (https://github.com/chuangz0)

Approvers:
  - https://github.com/linhu-nv
  - Brad Rees (https://github.com/BradReesWork)

URL: #185
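A hypothetical usage sketch of the signature above, splitting a global communicator into per-node groups; `global_comm`, `node_id`, and `local_rank` are assumed to come from the application, and the module alias is the same assumption as above:

```
import pylibwholegraph.torch as wgth  # assumed import alias

# Ranks that pass the same color land in the same new communicator;
# key orders ranks within it (smaller key -> smaller new rank).
node_comm = wgth.split_communicator(global_comm, color=node_id, key=local_rank)
```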
…t memory (#187)

1. The default shm option is still SYSTEMV, but users can choose the POSIX API via the environment variable `export WG_USE_POSIX_SHM=1`.
2. `unlink` shm files immediately after `shm_open` to avoid leftover memory in `/dev/shm` in case of a WholeGraph crash.

Authors:
  - https://github.com/linhu-nv

Approvers:
  - Chuang Zhu (https://github.com/chuangz0)
  - Brad Rees (https://github.com/BradReesWork)

URL: #187
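The unlink-after-open trick in item 2 is a general POSIX pattern rather than anything WholeGraph-specific. A minimal sketch using Python's standard library (not WholeGraph internals) shows why it prevents `/dev/shm` leftovers:

```
# Illustration of the unlink-immediately pattern from item 2, using
# Python's stdlib POSIX shared memory instead of WholeGraph code.
from multiprocessing import shared_memory

shm = shared_memory.SharedMemory(create=True, size=1 << 20)  # lives in /dev/shm
shm.unlink()  # remove the name right away; existing mappings stay valid

# The segment is still usable through this handle, but since its name is
# already gone, a crash from here on leaves nothing behind in /dev/shm.
shm.buf[:4] = b"abcd"
shm.close()  # the kernel frees the memory once the last mapping closes
```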
With the deployment of rapids-build-backend, we need to make sure our dependencies have alpha specs.

Contributes to rapidsai/build-planning#31

Authors:
  - Kyle Edwards (https://github.com/KyleFromNVIDIA)

Approvers:
  - James Lamb (https://github.com/jameslamb)

URL: #188
Contributes to rapidsai/build-planning#80

Adds constraints to avoid pulling in CMake 3.30.0, for the reasons described in that issue.

Authors:
  - James Lamb (https://github.com/jameslamb)

Approvers:
  - Bradley Dice (https://github.com/bdice)

URL: #189
Usage of the CUDA math libraries is independent of the CUDA runtime. Make their static/shared status separately controllable.

Contributes to rapidsai/build-planning#35

Authors:
  - Kyle Edwards (https://github.com/KyleFromNVIDIA)

Approvers:
  - Robert Maynard (https://github.com/robertmaynard)
  - Vyas Ramasubramani (https://github.com/vyasr)

URL: #190
#190 was supposed to separate the static CUDA math libraries from the static CUDA runtime library, but accidentally pulled the runtime along with the math libraries. The way we'd normally fix this is by creating a separate variable for the runtime. However, since this project doesn't actually use any math libraries, we can just revert the whole thing.

Contributes to rapidsai/build-planning#35

Authors:
  - Kyle Edwards (https://github.com/KyleFromNVIDIA)

Approvers:
  - Vyas Ramasubramani (https://github.com/vyasr)

URL: #192
This PR updates the latest CUDA build/test version from 12.2.2 to 12.5.1.

Contributes to rapidsai/build-planning#73

Authors:
  - Kyle Edwards (https://github.com/KyleFromNVIDIA)

Approvers:
  - James Lamb (https://github.com/jameslamb)

URL: #191
After updating everything to CUDA 12.5.1, use `shared-workflows@branch-24.08` again.

Contributes to rapidsai/build-planning#73

Authors:
  - Kyle Edwards (https://github.com/KyleFromNVIDIA)

Approvers:
  - James Lamb (https://github.com/jameslamb)

URL: #193
This project has some dependencies in `dependencies.yaml` which are part of groups that have `output_types: requirements`, but which use version specifiers that aren't recognized by `pip`.

For example, the use of a secondary build component matching a build string (a conda-specific pattern):

https://github.com/rapidsai/wholegraph/blob/f85ee4356f2e3d42195a2e0a6c7f195154c47091/dependencies.yaml#L247

And the use of a single `=` pin (not recognized by `pip`):

https://github.com/rapidsai/wholegraph/blob/f85ee4356f2e3d42195a2e0a6c7f195154c47091/dependencies.yaml#L288

I believe these were intended to only affect `conda` outputs from `rapids-dependency-file-generator`. This marks them that way.

## Notes for Reviewers

I discovered this while running the following in the `cuda11.8-pip` unified devcontainer from https://github.com/rapidsai/devcontainers.

```shell
rapids-make-pip-env --force
```

That resulted in an error like this when `wholegraph` was included:

```text
ERROR: Invalid requirement: 'pytorch-cuda=11.8': Expected end or semicolon (after name and no valid version specifier)
    pytorch-cuda=11.8
                ^ (from line 75 of /tmp/rapids.requirements.txt)
Hint: = is not a valid operator. Did you mean == ?
```

Authors:
  - James Lamb (https://github.com/jameslamb)

Approvers:
  - Kyle Edwards (https://github.com/KyleFromNVIDIA)

URL: #195
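As a hedged illustration of the kind of change described (not copied from this repository's `dependencies.yaml`), `rapids-dependency-file-generator` lets a dependency list be restricted to conda output via its `output_types`, so conda-only pins never reach `requirements.txt`:

```
# Illustrative fragment only; the group name and package are placeholders.
depends_on_pytorch:
  common:
    - output_types: [conda]   # previously also included 'requirements'
      packages:
        - pytorch-cuda=11.8   # single '=' pin is valid for conda, not pip
```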
❄️ Code freeze for branch-24.08 and v24.08 release

What does this mean?

Only critical/hotfix level issues should be merged into branch-24.08 until release (merging of this PR).

What is the purpose of this PR?

It merges branch-24.08 into main for the release.