This repository has been archived by the owner on Nov 25, 2024. It is now read-only.

[RELEASE] wholegraph v24.08 #202

Merged: 21 commits from branch-24.08 into main on Aug 8, 2024

Conversation

raydouglass
Member

❄️ Code freeze for branch-24.08 and v24.08 release

What does this mean?

Only critical/hotfix level issues should be merged into branch-24.08 until release (merging of this PR).

What is the purpose of this PR?

  • Update documentation
  • Allow testing for the new release
  • Enable a means to merge branch-24.08 into main for the release

raydouglass and others added 21 commits May 20, 2024 17:40
Forward-merge branch-24.06 into branch-24.08
Forward-merge branch-24.06 into branch-24.08
Forward-merge branch-24.06 into branch-24.08
Forward-merge branch-24.06 into branch-24.08
Forward-merge branch-24.06 into branch-24.08
Forward-merge branch-24.06 into branch-24.08
Contributes to rapidsai/build-planning#31
Contributes to rapidsai/dependency-file-generator#89

Proposes introducing `rapids-build-backend` as this project's build backend, to reduce the complexity of various CI/build scripts.

Authors:
  - James Lamb (https://github.com/jameslamb)

Approvers:
  - Bradley Dice (https://github.com/bdice)

URL: #181
This PR refactors the embedding creation interface, decoupling it from the optimizer: embeddings are created without an optimizer, and users designate the embeddings to optimize during optimizer initialization.
C++:
```cpp
wholememory_create_embedding(&wm_embedding, ...);
wholememory_create_embedding_optimizer(&optimizer, ...);
wholememory_embedding_set_optimizer(wm_embedding, optimizer);
```
Python:
```python
wm_embedding = wgth.create_embedding(...)
wm_optimizer = wgth.create_wholememory_optimizer(wm_embedding, "adam", {})
```

Authors:
  - https://github.com/zhuofan1123

Approvers:
  - https://github.com/linhu-nv
  - Brad Rees (https://github.com/BradReesWork)

URL: #186
Support `split_comm` and `get_local_mnnvl_comm`

`split_comm`:
```python
def split_communicator(comm: WholeMemoryCommunicator, color: int, key: int = 0):
    """Split a communicator.

    Creates a set of new communicators from an existing one. Ranks that pass
    the same color value become part of the same group; color must be
    non-negative. The key determines the rank order: a smaller key yields a
    smaller rank in the new communicator. When keys are equal, ranks are
    ordered by their rank in the original communicator.
    """
```
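The color/key semantics above follow the usual MPI-style `comm_split` rule. As a sanity check, here is a small pure-Python model of that ordering rule (an illustrative sketch only, not the wholegraph implementation; `split_ranks` is a hypothetical helper):

```python
# Model of the split_communicator ordering rule: ranks passing the same
# color form one group; within a group, new ranks are ordered by
# (key, original_rank), so equal keys fall back to original-rank order.

def split_ranks(colors, keys):
    """colors[i] and keys[i] are the values rank i passes.
    Returns {original_rank: (color, new_rank)}."""
    groups = {}
    for rank, color in enumerate(colors):
        groups.setdefault(color, []).append(rank)
    result = {}
    for color, members in groups.items():
        # Ties on key are broken by the rank in the original communicator.
        members.sort(key=lambda r: (keys[r], r))
        for new_rank, r in enumerate(members):
            result[r] = (color, new_rank)
    return result

# Four ranks split into two groups by color; all keys equal, so each
# group keeps the original relative order.
print(split_ranks(colors=[0, 1, 0, 1], keys=[0, 0, 0, 0]))
# → {0: (0, 0), 2: (0, 1), 1: (1, 0), 3: (1, 1)}
```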

Authors:
  - Chuang Zhu (https://github.com/chuangz0)

Approvers:
  - https://github.com/linhu-nv
  - Brad Rees (https://github.com/BradReesWork)

URL: #185
…t memory (#187)

1. The default shm option is still SYSTEMV, but users can opt into the POSIX API by setting the environment variable `WG_USE_POSIX_SHM=1`.
2. `unlink` shm files immediately after `shm_open` to avoid leftover memory in `/dev/shm` in case of a wholegraph crash.
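The unlink-right-after-open pattern in point 2 is a standard POSIX idiom: once the name is removed, the memory lives only as long as the open descriptor, so a crash cannot leave segments behind. A minimal Python sketch of the same idea (using a plain file rather than wholegraph's C++ shm code; the path name is made up):

```python
import mmap
import os
import tempfile

def unlink_after_open_demo():
    # Prefer /dev/shm (as wholegraph does on Linux) but fall back to a
    # temp dir so the sketch also runs elsewhere.
    base = "/dev/shm" if os.path.isdir("/dev/shm") else tempfile.gettempdir()
    path = os.path.join(base, f"wg_demo_{os.getpid()}")
    fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o600)
    os.unlink(path)            # remove the name immediately...
    os.ftruncate(fd, 4096)
    buf = mmap.mmap(fd, 4096)  # ...the memory stays usable via the fd
    buf[:5] = b"hello"
    data = bytes(buf[:5])
    leftover = os.path.exists(path)  # nothing left to clean up on crash
    buf.close()
    os.close(fd)
    return data, leftover
```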

Authors:
  - https://github.com/linhu-nv

Approvers:
  - Chuang Zhu (https://github.com/chuangz0)
  - Brad Rees (https://github.com/BradReesWork)

URL: #187
With the deployment of rapids-build-backend, we need to make sure our dependencies have alpha specs.

Contributes to rapidsai/build-planning#31

Authors:
  - Kyle Edwards (https://github.com/KyleFromNVIDIA)

Approvers:
  - James Lamb (https://github.com/jameslamb)

URL: #188
Contributes to rapidsai/build-planning#80

Adds constraints to avoid pulling in CMake 3.30.0, for the reasons described in that issue.
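In requirements-style metadata, such an exclusion is typically expressed by combining a lower bound with a `!=` pin. The fragment below is illustrative only; the exact bounds used in this repository may differ:

```text
cmake>=3.26.4,!=3.30.0
```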

Authors:
  - James Lamb (https://github.com/jameslamb)

Approvers:
  - Bradley Dice (https://github.com/bdice)

URL: #189
Usage of the CUDA math libraries is independent of the CUDA runtime. Make their static/shared status separately controllable.

Contributes to rapidsai/build-planning#35

Authors:
  - Kyle Edwards (https://github.com/KyleFromNVIDIA)

Approvers:
  - Robert Maynard (https://github.com/robertmaynard)
  - Vyas Ramasubramani (https://github.com/vyasr)

URL: #190
#190 was supposed to separate static CUDA math libraries from static CUDA runtime library, but accidentally pulled the runtime along with the math libraries. The way we'd normally fix this is by creating a separate variable for the runtime. However, since this project doesn't actually use any math libraries, we can just revert the whole thing.

Contributes to rapidsai/build-planning#35

Authors:
  - Kyle Edwards (https://github.com/KyleFromNVIDIA)

Approvers:
  - Vyas Ramasubramani (https://github.com/vyasr)

URL: #192
This PR updates the latest CUDA build/test version from 12.2.2 to 12.5.1.

Contributes to rapidsai/build-planning#73

Authors:
  - Kyle Edwards (https://github.com/KyleFromNVIDIA)

Approvers:
  - James Lamb (https://github.com/jameslamb)

URL: #191
After updating everything to CUDA 12.5.1, use `shared-workflows@branch-24.08` again.

Contributes to rapidsai/build-planning#73

Authors:
  - Kyle Edwards (https://github.com/KyleFromNVIDIA)

Approvers:
  - James Lamb (https://github.com/jameslamb)

URL: #193
This project has some dependencies in `dependencies.yaml` which are part of groups that have `output_types: requirements`, but which use version specifiers that aren't recognized by `pip`.

For example, the use of a secondary build component matching a build string (a conda-specific pattern):

https://github.com/rapidsai/wholegraph/blob/f85ee4356f2e3d42195a2e0a6c7f195154c47091/dependencies.yaml#L247

And the use of a single `=` pin (not recognized by `pip`):

https://github.com/rapidsai/wholegraph/blob/f85ee4356f2e3d42195a2e0a6c7f195154c47091/dependencies.yaml#L288

I believe these were intended to only affect `conda` outputs from `rapids-dependency-file-generator`. This marks them that way.
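In `rapids-dependency-file-generator` metadata, restricting an entry to conda outputs means attaching `output_types` to the list that contains the conda-only specifier, roughly like the sketch below (the surrounding keys are abbreviated and illustrative):

```yaml
# Conda-only specifier restricted to conda outputs, so it is never
# written into a pip requirements.txt:
- output_types: [conda]
  packages:
    - pytorch-cuda=11.8
```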

## Notes for Reviewers

I discovered this while running the following `cuda11.8-pip` unified devcontainer from https://github.com/rapidsai/devcontainers.

```shell
rapids-make-pip-env --force
```

That resulted in an error like this when `wholegraph` was included.

```text
ERROR: Invalid requirement: 'pytorch-cuda=11.8': Expected end or semicolon (after name and no valid version specifier)
    pytorch-cuda=11.8
                ^ (from line 75 of /tmp/rapids.requirements.txt)
Hint: = is not a valid operator. Did you mean == ?
```

Authors:
  - James Lamb (https://github.com/jameslamb)

Approvers:
  - Kyle Edwards (https://github.com/KyleFromNVIDIA)

URL: #195
@raydouglass raydouglass requested a review from a team as a code owner August 1, 2024 17:27

copy-pr-bot bot commented Aug 1, 2024

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@raydouglass raydouglass merged commit b3ee744 into main Aug 8, 2024
905 of 955 checks passed
8 participants