[Doc] [refactor] Improve Taichi kernels and functions definition #1576

Merged: 22 commits, Aug 11, 2020
Changes from 13 commits
21 changes: 15 additions & 6 deletions docs/hello.rst
@@ -46,13 +46,16 @@ Running the Taichi code below (``python3 fractal.py`` or ``ti example fractal``)

Let's dive into this simple Taichi program.


import taichi as ti
-------------------

Taichi is a domain-specific language (DSL) embedded in Python. We have done heavy engineering to make Taichi as easy
to use as a Python package, so that every Python programmer can write Taichi programs with minimal learning effort.
You can even use your favorite Python package manager, Python IDEs and other Python packages in conjunction with Taichi.


Portability
-----------

@@ -74,6 +77,7 @@ Taichi programs run on either CPUs or GPUs. Initialize Taichi according to your
ti.init(arch=ti.cpu)

.. note::

Supported backends on different platforms:

+----------+------+------+--------+-------+
@@ -101,28 +105,33 @@ Taichi programs run on either CPUs or GPUs. Initialize Taichi according to your

On other platforms, Taichi will make use of its on-demand memory allocator to adaptively allocate memory.

Tensors
-------

Taichi is a data-oriented programming language where dense or spatially-sparse tensors are first-class citizens.

See :ref:`sparse` for more details on sparse tensors.

In the code above, ``pixels = ti.var(dt=ti.f32, shape=(n * 2, n))`` allocates a 2D dense tensor named ``pixels`` of
size ``(640, 320)`` and element data type ``ti.f32`` (i.e. ``float`` in C).
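
The shape and element-type bookkeeping above can be pictured with a plain-Python sketch (no Taichi required; ``n = 320`` as in the fractal example, and a nested list stands in for the dense tensor):

.. code-block:: python

    # Plain-Python analogy of the tensor allocated by
    # pixels = ti.var(dt=ti.f32, shape=(n * 2, n))
    n = 320
    pixels = [[0.0 for _ in range(n)] for _ in range(n * 2)]  # 2D, zero-initialized

    print(len(pixels), len(pixels[0]))  # 640 320

This is only an analogy for the shape semantics; the real ``ti.var`` allocates a contiguous buffer managed by the Taichi runtime.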


Functions and kernels
---------------------

Computation resides in Taichi **kernels**, which are defined with the decorator ``@ti.kernel``.
Kernel arguments must be type-hinted (if any).
The language used in Taichi kernels and functions looks exactly like Python, yet the Taichi frontend compiler converts it
into a language that is **compiled, statically-typed, lexically-scoped, parallel and differentiable**.

Taichi **functions** are defined with the decorator ``@ti.func``.
They can be called by Taichi kernels and by other Taichi functions.

.. note::

**Taichi-scope vs. Python-scope**: everything decorated with ``@ti.kernel`` or ``@ti.func`` is in Taichi-scope and hence will be compiled by the Taichi compiler.

Everything else is in Python-scope; it is simply native Python code.

.. warning::

2 changes: 1 addition & 1 deletion docs/index.rst
@@ -38,10 +38,10 @@ The Taichi Programming Language
meta
layout
sparse
offset
differentiable_programming
odop
compilation
syntax_sugars


241 changes: 193 additions & 48 deletions docs/syntax.rst
@@ -1,34 +1,111 @@
Syntax
======

Taichi-scope vs Python-scope
----------------------------

Code decorated by ``@ti.kernel`` or ``@ti.func`` is in the **Taichi-scope**.

Such code is compiled and executed on CPU or GPU devices with high
parallel performance, at the cost of reduced flexibility.

.. note::

For people from CUDA, Taichi-scope = **device** side.


Code outside ``@ti.kernel`` or ``@ti.func`` is in the **Python-scope**.

Such code is not compiled by the Taichi compiler; it runs with lower performance
but enjoys Python's richer type system and greater flexibility.

.. note::

For people from CUDA, Python-scope = **host** side.


Kernels
-------

A Python function decorated by ``@ti.kernel`` is a **Taichi kernel**:

.. code-block:: python

    @ti.kernel
    def my_kernel():
        ...

    my_kernel()


Kernels should be called from **Python-scope**.

.. note::

For people from CUDA, Taichi kernels = ``__global__`` functions.


Arguments
*********

Kernels can have at most 8 parameters so that you can pass values from
Python-scope to Taichi-scope easily.

Kernel arguments must be type-hinted:

.. code-block:: python

    @ti.kernel
    def my_kernel(x: ti.i32, y: ti.f64):
        print(x + y)

    my_kernel(2, 3.3)  # prints: 5.3

.. note::

For now, we only support scalars as arguments. Specifying ``ti.Matrix`` or ``ti.Vector`` as an argument is not supported. For example:

.. code-block:: python

    @ti.kernel
    def bad_kernel(v: ti.Vector):
        ...

    @ti.kernel
    def good_kernel(vx: ti.f32, vy: ti.f32):
        v = ti.Vector([vx, vy])
        ...


Return value
************

A kernel may or may not have a **scalar** return value.
If it does, the type of the return value must be hinted:

.. code-block:: python

    @ti.kernel
    def my_kernel() -> ti.f32:
        return 233.33

    print(my_kernel())  # 233.33


The return value will be automatically cast into the hinted type, e.g.:

.. code-block:: python

    @ti.kernel
    def add_xy() -> ti.i32:  # int32
        return 233.33

    print(add_xy())  # 233, since the return type is ti.i32
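
The cast into ``ti.i32`` truncates the fractional part, like a C-style cast. A plain-Python illustration of the same behavior (``ctypes`` is used here to emulate 32-bit wrap-around, which is an assumption about out-of-range values, not something the Taichi docs specify):

.. code-block:: python

    import ctypes

    def cast_to_i32(x: float) -> int:
        # Truncate toward zero, then wrap to 32-bit two's complement,
        # mimicking a C-style (int32) cast.
        return ctypes.c_int32(int(x)).value

    print(cast_to_i32(233.33))  # 233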


.. note::

For now, we only support one scalar as return value. Returning ``ti.Matrix`` or ``ti.Vector`` is not supported. Python-style tuple return is not supported either. For example:
For now, a kernel can only have one scalar return value. Returning ``ti.Matrix`` or ``ti.Vector`` is not supported. Python-style tuple return is not supported either. For example:

.. code-block:: python

@@ -43,74 +120,118 @@ The return value will be automatically cast into the hinted type. e.g.,
return x, y # Error


Advanced arguments
******************

We also support **template arguments** (see :ref:`template_metaprogramming`) and **external array arguments** (see :ref:`external`) in Taichi kernels. Use ``ti.template()`` or ``ti.ext_arr()`` as their type hints respectively.

.. note::

When using differentiable programming, there are a few more constraints on kernel structures. See the **Kernel Simplicity Rule** in :ref:`differentiable`.

Also, please do not use kernel return values in differentiable programming, since the return value will not be tracked by automatic differentiation. Instead, store the result into a global variable (e.g. ``loss[None]``).


Functions
---------

A Python function decorated by ``@ti.func`` is a **Taichi function**:

.. code-block:: python

    @ti.func
    def my_func():
        ...

    @ti.kernel
    def my_kernel():
        ...
        my_func()  # call functions from Taichi-scope
        ...

    my_kernel()  # call kernels from Python-scope

.. warning::

Taichi functions should be called from **Taichi-scope**.

.. note::

For people from CUDA, Taichi functions = ``__device__`` functions.

.. note::

Taichi functions can be nested.

.. warning::

Currently, all functions are force-inlined. Therefore, no recursion is allowed.


Arguments and return values
***************************

Functions can have multiple arguments and return values.
Unlike kernels, arguments in functions don't need to be type-hinted:

.. code-block:: python

    @ti.func
    def my_add(x, y):
        return x + y


    @ti.kernel
    def my_kernel():
        ...
        ret = my_add(2, 3.3)
        print(ret)  # 5.3
        ...


Function arguments are passed by value; changes made inside the function scope
won't affect the value in the caller:

.. code-block:: python

    @ti.func
    def my_func(x):
        x = x + 1  # won't change the original value of x


    @ti.kernel
    def my_kernel():
        ...
        x = 233
        my_func(x)
        print(x)  # 233
        ...
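
The same rebinding behavior can be seen in plain Python, where assigning to a parameter name never affects the caller's variable (the ``return`` here is added only to make the local result observable):

.. code-block:: python

    def my_func(x):
        x = x + 1  # rebinds the local name only
        return x

    x = 233
    my_func(x)
    print(x)  # 233 -- unchanged in the caller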


Advanced arguments
******************

You may use ``ti.template()`` as a type hint to force arguments to be passed by
reference:

.. code-block:: python

    @ti.func
    def my_func(x: ti.template()):
        x = x + 1  # will change the original value of x


    @ti.kernel
    def my_kernel():
        ...
        x = 233
        my_func(x)
        print(x)  # 234
        ...
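
Plain Python has no by-reference scalars, but the effect of ``ti.template()`` here is loosely comparable to mutating a shared container; a rough analogy (the one-element list is purely illustrative):

.. code-block:: python

    def bump(box):
        # Mutating the shared list is visible to the caller,
        # loosely analogous to a by-reference argument.
        box[0] += 1

    x = [233]
    bump(x)
    print(x[0])  # 234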


.. note::

Unlike kernels, functions **do support vectors or matrices as arguments and return values**:

.. code-block:: python

@@ -126,11 +247,35 @@ Use ``@ti.func`` to decorate your Taichi functions. These functions are callable
p += d * t
...

.. warning::

Functions with multiple ``return`` statements are not supported for now. Use a **local** variable to store the results, so that you end up with only one ``return`` statement:

.. code-block:: python

    # Bad function - two return statements
    @ti.func
    def safe_sqrt(x):
        if x >= 0:
            return ti.sqrt(x)
        else:
            return 0.0

    # Good function - single return statement
    @ti.func
    def safe_sqrt(x):
        ret = 0.0
        if x >= 0:
            ret = ti.sqrt(x)
        else:
            ret = 0.0
        return ret
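
The single-return pattern itself is ordinary Python; stripped of the Taichi decorator, the "good" version behaves like:

.. code-block:: python

    import math

    def safe_sqrt(x):
        ret = 0.0          # single result variable
        if x >= 0:
            ret = math.sqrt(x)
        return ret         # exactly one return statement

    print(safe_sqrt(4.0), safe_sqrt(-1.0))  # 2.0 0.0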


Scalar arithmetics
------------------
Currently supported scalar functions:

.. function:: ti.sin(x)
.. function:: ti.cos(x)