update sm doc
FantasyVR committed Nov 23, 2022
1 parent 91b6735 commit c7bb890
Showing 1 changed file with 12 additions and 11 deletions.
23 changes: 12 additions & 11 deletions docs/lang/articles/math/sparse_matrix.md
@@ -4,25 +4,25 @@ sidebar_position: 2

# Sparse Matrix

Sparse matrices are frequently used when solving linear systems in science and engineering. Taichi provides programmers with useful APIs for sparse matrices.
Sparse matrices are frequently used when solving linear systems in science and engineering. Taichi provides programmers with useful APIs for sparse matrices on the CPU and CUDA backends.

To use sparse matrices in Taichi programs, you should follow these three steps:

1. Create a `builder` using `ti.linalg.SparseMatrixBuilder()`.
2. Fill the `builder` with your matrices' data.
3. Create sparse matrices from the `builder`.
2. Fill the `builder` with your matrices' data using a `ti.kernel` function.
3. Build sparse matrices from the `builder`.

:::caution WARNING
Sparse matrix support is still under development. There are some limitations:
- On the CPU backend, sparse matrices only support the `float32` and `double` data types.
- On the CUDA backend, sparse matrices only support the `float32` data type.

- Only the CPU backend is supported.
- The data type of sparse matrix is float32.
- The storage format is column-major
:::
Here's an example:
```python
import taichi as ti
ti.init(arch=ti.x64) # only CPU backend is supported for now
arch = ti.cpu # or ti.cuda
ti.init(arch=arch)

n = 4
# step 1: create sparse matrix builder
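# --- What follows is a minimal sketch of steps 1-3; the original snippet is
# --- truncated at this point in the diff. The builder name `K`, the diagonal
# --- fill pattern, and `max_num_triplets=100` are illustrative assumptions.
K = ti.linalg.SparseMatrixBuilder(n, n, max_num_triplets=100)

# step 2: fill the builder with the matrix data inside a kernel
@ti.kernel
def fill(A: ti.types.sparse_matrix_builder()):
    for i in range(n):
        A[i, i] += 1.0  # only += and -= are supported on builder entries

fill(K)

# step 3: build the sparse matrix from the builder
A = K.build()
print(A)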
@@ -135,22 +135,23 @@ print(f">>>> Element Access: A[0,0] = {A[0,0]}")
## Sparse linear solver
You may want to solve some linear equations using sparse matrices. The following steps could help:
1. Create a `solver` using `ti.linalg.SparseSolver(solver_type, ordering)`. Currently, the sparse solver supports `LLT`, `LDLT` and `LU` factorization types, and orderings including `AMD`, `COLAMD`.
1. Create a `solver` using `ti.linalg.SparseSolver(solver_type, ordering)`. Currently, the sparse solver on the CPU supports `LLT`, `LDLT`, and `LU` factorization types, and orderings including `AMD` and `COLAMD`. The sparse solver on CUDA only supports the `LLT` factorization type.
2. Analyze and factorize the sparse matrix you want to solve using `solver.analyze_pattern(sparse_matrix)` and `solver.factorize(sparse_matrix)`.
3. Call `solver.solve(b)` to get your solutions, where `b` is a numpy array or taichi filed representing the right-hand side of the linear system.
3. Call `x = solver.solve(b)`, where `x` is the solution and `b` is the right-hand side of the linear system. On the CPU backend, `x` and `b` can be numpy arrays, taichi ndarrays, or taichi fields. On the CUDA backend, `x` and `b` can only be taichi ndarrays.
4. Call `solver.info()` to check if the solving process succeeds.

Here's a full example:

```python
import taichi as ti

ti.init(arch=ti.x64)
arch = ti.cpu # or ti.cuda
ti.init(arch=arch)

n = 4

K = ti.linalg.SparseMatrixBuilder(n, n, max_num_triplets=100)
b = ti.field(ti.f32, shape=n)
b = ti.ndarray(ti.f32, shape=n)

@ti.kernel
def fill(A: ti.types.sparse_matrix_builder(), b: ti.types.ndarray(), interval: ti.i32):
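    # --- A minimal sketch of the truncated kernel body and the remaining solve
    # --- steps; the stencil and the use of `interval` below are illustrative
    # --- assumptions, not necessarily the original example's values.
    for i in range(n):
        A[i, i] += 2.0
        if i % interval == 0:
            b[i] += 1.0

fill(K, b, 3)

A = K.build()
solver = ti.linalg.SparseSolver(solver_type="LLT")
solver.analyze_pattern(A)
solver.factorize(A)
x = solver.solve(b)   # x holds the solution of A x = b
print(solver.info())  # True if the solving process succeeded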
