Add .wordlist.txt file and spellcheck fix
* Add markdownlint configuration files
* The .markdownlint.yaml will come from rocm-docs-core; it is only temporarily added here
* PR findings: Update README.md
* The spellchecker and markdown checker are fixed
* Minor fix
neon60 committed Nov 27, 2023
1 parent bcdd2ac commit 690caa1
Showing 12 changed files with 69 additions and 50 deletions.
2 changes: 2 additions & 0 deletions .markdownlint-cli2.yaml
@@ -0,0 +1,2 @@
ignores:
- docs/api-library.md
17 changes: 17 additions & 0 deletions .wordlist.txt
@@ -0,0 +1,17 @@
asynchronicity
bfloat
BFLOAT
ClippedReLU
csv
cuSPARSELt
GeLU
Gemm
GoogleTest
hipSPARSELt
LeakyReLU
ReLU
rocSPARSELt
sudo
SpMM
Tanh
tensorfloat
10 changes: 7 additions & 3 deletions CHANGELOG.md
@@ -1,21 +1,25 @@
# Change Log for hipSPARSELt

## (Unreleased) hipSPARSELt 0.2.0

### Added

- Support for Matrix B as a structured sparsity matrix.

## (Unreleased) hipSPARSELt 0.1.0

### Added

- Enable hipSPARSELt APIs
- Support platform: gfx940, gfx941, gfx942
- Support platform: gfx940, gfx941, gfx942
- Support problem type: fp16, bf16, int8
- Support activation: relu, gelu, abs, sigmoid, tanh
- Support gelu scaling
- Support bias vector
- Support batched computation (single sparse x multiple dense, multiple sparse x single dense)
- Support batched computation (single sparse x multiple dense, multiple sparse x
single dense)
- Support cuSPARSELt v0.4 backend
- Integrate with tensilelite kernel generator
- Add Gtest: hipsparselt-test
- Add benchmarking tool: hipsparselt-bench
- Add sample app: example_spmm_strided_batched, example_prune, example_compress

File renamed without changes.
37 changes: 29 additions & 8 deletions README.md
@@ -1,10 +1,18 @@
# hipSPARSELt

hipSPARSELt is a SPARSE marshalling library, with multiple supported backends. It sits between the application and a 'worker' SPARSE library, marshalling inputs into the backend library and marshalling results back to the application. hipSPARSELt exports an interface that does not require the client to change, regardless of the chosen backend. Currently, hipSPARSELt supports [rocSPARSELt](library/src/hcc_detial/rocsparselt) and [cuSPARSELt v0.4](https://docs.nvidia.com/cuda/cusparselt) as backends.
hipSPARSELt is a SPARSE marshalling library, with multiple supported backends.
It sits between the application and a 'worker' SPARSE library, marshalling
inputs into the backend library and marshalling results back to the
application. hipSPARSELt exports an interface that does not require the client
to change, regardless of the chosen backend. Currently, hipSPARSELt supports
[rocSPARSELt](library/src/hcc_detial/rocsparselt) and [cuSPARSELt v0.4](https://docs.nvidia.com/cuda/cusparselt)
as backends.
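
As a rough, backend-agnostic illustration of the marshalling idea described above (not part of this commit), the sketch below only initializes and destroys a hipSPARSELt handle; the include path and the status check are assumptions based on the public API, and the real descriptor and plan setup is elided.

```cpp
// Minimal sketch: the same client code runs on either backend. The library
// dispatches to rocSPARSELt on AMD GPUs or cuSPARSELt on NVIDIA GPUs.
#include <hipsparselt/hipsparselt.h>
#include <cstdio>

int main()
{
    hipsparseLtHandle_t handle;

    // Initialize the library context for the active backend.
    if (hipsparseLtInit(&handle) != HIPSPARSE_STATUS_SUCCESS) {
        std::fprintf(stderr, "hipSPARSELt initialization failed\n");
        return 1;
    }

    // ... create matrix descriptors, build a matmul plan, call hipsparseLtMatmul ...

    hipsparseLtDestroy(&handle);
    return 0;
}
```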

## Installing pre-built packages

Download pre-built packages either from the [ROCm package servers](https://rocm.github.io/install.html#installing-from-amd-rocm-repositories) or by clicking the GitHub releases tab and manually downloading, which could be newer. Release notes are available for each release on the releases tab.
Download pre-built packages either from the [ROCm package servers](https://rocm.github.io/install.html#installing-from-amd-rocm-repositories)
or by clicking the GitHub releases tab and manually downloading, which could be
newer. Release notes are available for each release on the releases tab.

* `sudo apt update && sudo apt install hipsparselt`

@@ -24,7 +32,12 @@ Download pre-built packages either from the [ROCm package servers](https://rocm.

### Bash helper build script

The root of this repository has a helper bash script `install.sh` to build and install hipSPARSELt on Ubuntu with a single command. It does not take a lot of options and hard-codes configuration that can be specified through invoking cmake directly, but it's a great way to get started quickly and can serve as an example of how to build/install. A few commands in the script need sudo access, so it may prompt you for a password.
The root of this repository has a helper bash script `install.sh` to build and
install hipSPARSELt on Ubuntu with a single command. It does not take a lot of
options and hard-codes configuration that can be specified through invoking
CMake directly, but it's a great way to get started quickly and can serve as an
example of how to build/install. A few commands in the script need sudo access,
so it may prompt you for a password.

```bash
# Run install.sh script
@@ -64,7 +77,8 @@ The root of this repository has a helper bash script `install.sh` to build and i
* Add support for Mixed-precision computation
* FP8 input/output, FP32 Matrix Core accumulate
* BF8 input/output, FP32 Matrix Core accumulate
* Add kernel selection and generator, used to provide the appropriate solution for the specific problem.
* Add kernel selection and generator, used to provide the appropriate
solution for the specific problem.
* CUDA
* Support cusparseLt v0.4

@@ -84,7 +98,10 @@ python3 -m sphinx -T -E -b html -d _build/doctrees -D language=en . _build/html

## hipSPARSELt interface examples

The hipSPARSELt interface is compatible with cuSPARSELt APIs. Porting a CUDA application which originally calls the cuSPARSELt API to an application calling hipSPARSELt API should be relatively straightforward. For example, the hipSPARSELt matmul interface is
The hipSPARSELt interface is compatible with cuSPARSELt APIs. Porting a CUDA
application which originally calls the cuSPARSELt API to an application calling
hipSPARSELt API should be relatively straightforward. For example, the
hipSPARSELt matmul interface is

### matmul API

@@ -103,13 +120,16 @@ hipsparseStatus_t hipsparseLtMatmul(const hipsparseLtHandle_t* handle,

```
hipSPARSELt assumes matrix A, B, C, D and workspace are allocated in GPU memory space filled with data. Users are responsible for copying data from/to the host and device memory.
hipSPARSELt assumes matrix A, B, C, D and workspace are allocated in GPU memory
space filled with data. Users are responsible for copying data from/to the host
and device memory.
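
As an illustration of that requirement (a sketch, not text from the README), the snippet below stages the input matrices on the device with the HIP runtime before the matmul and copies the result back afterwards; the 256x256x256 fp16 problem size is an arbitrary assumption, and the descriptor, plan, and hipsparseLtMatmul call are elided.

```cpp
// Sketch: host code is responsible for all host <-> device transfers.
#include <hip/hip_runtime.h>
#include <hip/hip_fp16.h>
#include <vector>

int main()
{
    const size_t m = 256, n = 256, k = 256;
    std::vector<__half> hA(m * k), hB(k * n), hC(m * n), hD(m * n);

    __half *dA, *dB, *dC, *dD;
    hipMalloc((void**)&dA, hA.size() * sizeof(__half));
    hipMalloc((void**)&dB, hB.size() * sizeof(__half));
    hipMalloc((void**)&dC, hC.size() * sizeof(__half));
    hipMalloc((void**)&dD, hD.size() * sizeof(__half));

    // Stage the inputs on the device; hipSPARSELt does not copy data for you.
    hipMemcpy(dA, hA.data(), hA.size() * sizeof(__half), hipMemcpyHostToDevice);
    hipMemcpy(dB, hB.data(), hB.size() * sizeof(__half), hipMemcpyHostToDevice);
    hipMemcpy(dC, hC.data(), hC.size() * sizeof(__half), hipMemcpyHostToDevice);

    // ... handle/descriptor/plan setup and hipsparseLtMatmul(...) go here ...

    // Copy the result back once the computation has finished.
    hipMemcpy(hD.data(), dD, hD.size() * sizeof(__half), hipMemcpyDeviceToHost);

    hipFree(dA); hipFree(dB); hipFree(dC); hipFree(dD);
    return 0;
}
```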
## Running tests and benchmark tool
### Unit tests
To run unit tests, hipSPARSELt has to be built with option -DBUILD_CLIENTS_TESTS=ON (or using ./install.sh -c)
To run unit tests, hipSPARSELt has to be built with option
-DBUILD_CLIENTS_TESTS=ON (or using ./install.sh -c)
```bash
# Go to hipSPARSELt build directory
@@ -121,7 +141,8 @@ cd hipSPARSELt; cd build/release

### Benchmarks

To run benchmarks, hipSPARSELt has to be built with option -DBUILD_CLIENTS_BENCHMARKS=ON (or using ./install.sh -c).
To run benchmarks, hipSPARSELt has to be built with option
-DBUILD_CLIENTS_BENCHMARKS=ON (or using ./install.sh -c).

```bash
# Go to hipSPARSELt build directory
2 changes: 1 addition & 1 deletion docs/conf.py
@@ -15,7 +15,7 @@

extensions = ['sphinx_design', 'sphinx.ext.intersphinx']

exclude_patterns = ['reference/api-library.md', 'reference/_functions.rst']
exclude_patterns = ['reference/api-library.md']

external_toc_path = "./sphinx/_toc.yml"

27 changes: 0 additions & 27 deletions docs/reference/_functions.rst

This file was deleted.

9 changes: 6 additions & 3 deletions docs/reference/api-library.md
@@ -1,4 +1,5 @@
The hipSPARSELt library is organized as follows:
<!-- spellcheck-disable -->

* @ref types_module
* @ref library_module
@@ -9,6 +10,8 @@ The hipSPARSELt library is organized as follows:
* @ref helper_module
* @ref aux_module

Note that all hipSPARSELt library functions, unless otherwise stated, are non-blocking and are run
asynchronously with respect to the host. They may return before the actual computation has finished.
To force synchronization, use `hipDeviceSynchronize` or `hipStreamSynchronize`.
<!-- spellcheck-enable -->
Note that all hipSPARSELt library functions, unless otherwise stated, are
non-blocking and are run asynchronously with respect to the host. They may
return before the actual computation has finished. To force synchronization, use
`hipDeviceSynchronize` or `hipStreamSynchronize`.
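
A short sketch of what that note means in practice (an illustration only, not text from api-library.md): the matmul call is shown as a comment because its setup is omitted, while the synchronization calls are the HIP runtime functions the note refers to.

```cpp
// Sketch: hipSPARSELt calls are asynchronous, so synchronize before reading results.
#include <hip/hip_runtime.h>

int main()
{
    hipStream_t stream;
    hipStreamCreate(&stream);

    // hipsparseLtMatmul(&handle, &plan, &alpha, dA, dB, &beta, dC, dD,
    //                   workspace, &stream, 1);  // enqueues work and returns early

    hipStreamSynchronize(stream);   // wait for the work queued on this stream
    hipDeviceSynchronize();         // or wait for all outstanding device work

    hipStreamDestroy(stream);
    return 0;
}
```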
4 changes: 2 additions & 2 deletions docs/reference/porting.rst
@@ -12,9 +12,9 @@ The hipSPARSELt interface is compatible with cuSPARSELt APIs. Porting a CUDA app
originally calls the cuSPARSELt API to an application that calls the hipSPARSELt API should be relatively
straightforward.

For example, the hipSPARSELt matmul interface is:
For example, the hipSPARSELt matrix multiplication interface is:

Matmul API
Matrix multiplication API

.. code-block:: c
4 changes: 2 additions & 2 deletions docs/reference/supported-functions.rst
@@ -26,7 +26,7 @@ ROCm & CUDA supported functions
* Single sparse matrix/Multiple dense matrices (Broadcast)
* Multiple sparse and dense matrices

* Activation function fuse in spmm kernel support:
* Activation function fuse in SpMM kernel support:

* ReLU
* ClippedReLU (ReLU with upper bound and threshold setting)
@@ -46,4 +46,4 @@

* CUDA

* Support cusparseLt v0.4
* Support cuSPARSELt v0.4
5 changes: 2 additions & 3 deletions docs/tutorials/install/linux.rst
@@ -30,7 +30,6 @@ Using Ubuntu as an example, hipSPARSELt can be installed using
sudo apt-get update
sudo apt-get install hipsparselt
Once installed, hipSPARSELt can be used just like any other library with a C API.

The header file must be included in the user code in order to make calls into hipSPARSELt. The
@@ -157,7 +156,7 @@ After successfully compiling the library with clients, you can test the installa
Running benchmarks & unit tests
----------------------------------------------------------------------------

To run **benchmarks**, hipSPARSELt has to be built with option -DBUILD_CLIENTS_BENCHMARKS=ON (or using ./install.sh -c).
To run **benchmarks**, hipSPARSELt has to be built with option ``-DBUILD_CLIENTS_BENCHMARKS=ON`` (or using ``./install.sh -c``).

.. code-block:: bash
@@ -167,7 +166,7 @@ To run **benchmarks**, hipSPARSELt has to be built with option -DBUILD_CLIENTS_B
# Run benchmark, e.g.
./clients/staging/hipsparselt-bench -f spmm -i 200 -m 256 -n 256 -k 256
To run **unit tests**, hipSPARSELt has to be built with option -DBUILD_CLIENTS_TESTS=ON (or using ./install.sh -c)
To run **unit tests**, hipSPARSELt has to be built with option ``-DBUILD_CLIENTS_TESTS=ON`` (or using ``./install.sh -c``)

.. code-block:: bash
2 changes: 1 addition & 1 deletion docs/tutorials/quick-start/linux.rst
@@ -12,7 +12,7 @@ The root of the
`hipSPARSELt GitHub repository <https://github.com/ROCmSoftwarePlatform/hipSPARSELt>`_ has a
helper bash script `install.sh` to build and install hipSPARSELt on Ubuntu with a single command. It
doesn't take a lot of options and hard-codes configuration that can be specified through invoking
cmake directly, but it's a great way to get started quickly and can serve as an example of how to build
CMake directly, but it's a great way to get started quickly and can serve as an example of how to build
and install. A few commands in the script need sudo access, so it may prompt you for a password.

.. code-block:: bash
