[doc] Add user utilities doc #1295

Merged Jul 13, 2020 (18 commits)

206 changes: 183 additions & 23 deletions docs/utilities.rst
@@ -1,25 +1,185 @@
Developer utilities
===================


This section provides a detailed description of some commonly used utilities for Taichi developers.

Logging
-------

Taichi provides logging APIs. These functions should only be called at **compile-time** instead of run-time.

If you want to log at **run-time**, please simply use ``print()`` instead.

.. Note::

Taichi logging APIs only support standard output now.

.. function:: ti.set_logging_level(level)

:parameter level: (string) a valid logging level

- This function sets the logging level. Currently, the logging levels in Taichi, ordered from lowest to highest, are ``ti.TRACE``, ``ti.DEBUG``, ``ti.INFO``, ``ti.WARN`` and ``ti.ERROR``. The default logging level is ``ti.INFO``.

- The lower the logging level is, the more content will be printed.

- If we set the logging level to ``ti.TRACE``, all logs will be printed.
- If the logging level is ``ti.ERROR``, Taichi shows only those logs generated by ``ti.error()``.

.. note::

You can also override the default logging level by setting the ``TI_LOG_LEVEL`` environment variable. For example, ``TI_LOG_LEVEL=warn``.
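The environment-variable override can also be set from Python itself, as long as it happens before ``import taichi`` (a minimal sketch; only the ``TI_LOG_LEVEL`` name comes from the note above):

```python
import os

# Must run before `import taichi`; equivalent to starting Taichi with the
# WARN logging level instead of the default INFO.
os.environ["TI_LOG_LEVEL"] = "warn"
```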

.. function:: ti.info(info)

:parameter info: (string) logging info

Prints the input string to stdout **in Taichi-scope only**, when the logging level is lower than or equal to ``ti.INFO``. For example:

.. code-block:: python

    import taichi as ti
    ti.init(arch=ti.cpu)

    ti.set_logging_level(ti.INFO)
    mat = ti.var(dt=ti.f32, shape=(5, 5))


    @ti.func
    def calc(i: ti.int32, j: ti.int32):
        ti.info("set var in ti.func")
        mat[i, j] = i * j


    @ti.kernel
    def compute():
        calc(0, 0)
        calc(1, 1)
        calc(2, 2)
        ti.info("set var in ti.kernel")


    compute()
    compute()
    compute()


As stated above, ``ti.info`` prints only at **compile time**: even though ``compute()`` is run three times, each message is printed only once, during compilation. The ``calc`` message appears three times because ``ti.func`` is inlined at each of its three call sites. The output looks like:

::

[I 07/09/20 13:15:24.517] [main.py:calc@10] set var in ti.func
[I 07/09/20 13:15:24.518] [main.py:calc@10] set var in ti.func
[I 07/09/20 13:15:24.518] [main.py:calc@10] set var in ti.func
[I 07/09/20 13:15:24.518] [main.py:compute@19] set var in ti.kernel

The other logging functions below **all behave like** ``ti.info``: each prints its message when the logging level is lower than or equal to its corresponding level.
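The filtering rule can be sketched in plain Python (a conceptual model only, not Taichi's implementation; the names and numeric ranks are made up):

```python
# Conceptual model: each level gets a rank; a message is printed when its
# rank is at least the rank of the currently configured level.
LEVELS = {"trace": 0, "debug": 1, "info": 2, "warn": 3, "error": 4}

current_level = "info"  # as if ti.set_logging_level(ti.INFO) had been called

def should_print(message_level: str) -> bool:
    return LEVELS[message_level] >= LEVELS[current_level]

print(should_print("warn"))   # True: WARN is above INFO
print(should_print("debug"))  # False: DEBUG is below INFO
```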

.. function:: ti.warn(info)

:parameter info: (string) logging info

.. function:: ti.debug(info)

:parameter info: (string) logging info

.. function:: ti.trace(info)

:parameter info: (string) logging info

.. function:: ti.error(info)

:parameter info: (string) logging info

This function prints the input string regardless of the logging level and then **terminates the program**.

.. warning::

Note that ``ti.error`` raises a ``RuntimeError``, which will terminate your program unless it is caught.

Here is an example:

.. code-block:: python

    import taichi as ti

    ti.init()
    ti.set_logging_level(ti.INFO)

    try:
        ti.error("Fatal error. Exiting now...")
    except RuntimeError as err:
        print(err)


Profiler
--------

Taichi's profiler can help you analyze the run-time cost of your program. There are two profiling systems in Taichi: ``ScopedProfiler`` and ``KernelProfiler``.

ScopedProfiler
##############

1. ``ScopedProfiler`` measures time spent on the **host tasks** hierarchically.

2. This profiler is turned on automatically. To show its results, call ``ti.print_profile_info()``. For example:

.. code-block:: python

    import taichi as ti

    ti.init(arch=ti.cpu)
    var = ti.var(ti.f32, shape=1)


    @ti.kernel
    def compute():
        var[0] = 1.0
        print("set var[0] =", var[0])


    compute()
    ti.print_profile_info()


``ti.print_profile_info()`` prints profiling results in a hierarchical format.

.. Note::

``ScopedProfiler`` is a C++ class in the core of Taichi. It is not exposed to Python users.

KernelProfiler
##############

1. ``KernelProfiler`` records the costs of Taichi kernels on devices. To enable this profiler, set ``kernel_profiler=True`` in ``ti.init``.

2. Call ``ti.kernel_profiler_print()`` to show the kernel profiling result. For example:

.. code-block:: python
    :emphasize-lines: 3, 13

    import taichi as ti

    ti.init(ti.cpu, kernel_profiler=True)
    var = ti.var(ti.f32, shape=1)


    @ti.kernel
    def compute():
        var[0] = 1.0


    compute()
    ti.kernel_profiler_print()


The output would be:

::

[ 22.73%] jit_evaluator_0_kernel_0_serial min 0.001 ms avg 0.001 ms max 0.001 ms total 0.000 s [ 1x]
[ 0.00%] jit_evaluator_1_kernel_1_serial min 0.000 ms avg 0.000 ms max 0.000 ms total 0.000 s [ 1x]
[ 77.27%] compute_c4_0_kernel_2_serial min 0.004 ms avg 0.004 ms max 0.004 ms total 0.000 s [ 1x]
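Each row reports per-kernel statistics over all launches of that kernel. A rough model of how the columns relate (illustrative only; the timing data below is made up and this is not Taichi's code):

```python
# Per-kernel launch times in milliseconds (made-up data).
records = {"compute_c4_0_kernel_2_serial": [0.004]}

for name, times_ms in records.items():
    avg_ms = sum(times_ms) / len(times_ms)
    total_s = sum(times_ms) / 1000.0  # the "total" column is in seconds
    print(f"{name} min {min(times_ms):.3f} ms avg {avg_ms:.3f} ms "
          f"max {max(times_ms):.3f} ms total {total_s:.3f} s [{len(times_ms)}x]")
```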

.. _regress:

Benchmarking and regression tests
---------------------------------

* Run ``ti benchmark`` to run tests in benchmark mode. This will record the performance of ``ti test``, and save it in ``benchmarks/output``.

* Run ``ti regression`` to show the difference from the previous result in ``benchmarks/baseline``, so you can see whether performance improved or regressed after your commits. This is especially helpful when your work involves IR optimizations.

* Run ``ti baseline`` to save the benchmark result to ``benchmarks/baseline`` for future comparison. This may be executed on performance-related PRs before they are merged into master.

For example, this is part of the output by ``ti regression`` after enabling constant folding optimization pass:


.. note::

Currently ``ti benchmark`` only supports benchmarking the number of statements; time benchmarking is not included, since it depends on hardware performance and is therefore hard to compare when the baseline comes from another machine.
We plan to purchase a fixed-performance machine as a time benchmark server at some point.
Discussion at: https://github.com/taichi-dev/taichi/issue/948


The suggested workflow for the author of a performance-related PR to run the regression tests is:

* Run ``ti benchmark && ti baseline`` in ``master`` to save the current performance as a baseline.

* Run ``git checkout -b your-branch-name``.


* (If result BAD) Make further improvements until the result is satisfactory.

* (If result OK) Run ``ti baseline`` to save the stage 1 performance as the new baseline.

* Proceed to stages 2, 3, ..., applying the same workflow.

Code coverage
-------------

To ensure that our tests cover every situation, we need a **coverage report**.
That is, to detect what percentage of code lines is executed in tests.

- Generally, the higher the coverage percentage is, the stronger our tests are.
- When making a PR, we want to **ensure that it comes with corresponding tests**; otherwise, code coverage will decrease.
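The percentage itself is simply executed lines over total executable lines; a tiny sketch with made-up line counts:

```python
# Coverage percentage with made-up line counts.
executed_lines = 183
total_lines = 206
coverage = executed_lines / total_lines * 100
print(f"coverage: {coverage:.1f}%")  # coverage: 88.8%
```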
Serialization (legacy)
----------------------

The serialization module of Taichi allows you to serialize/deserialize objects into/from binary strings.

You can use ``TI_IO`` macros to explicitly define the fields necessary for serialization.

.. code-block:: cpp

    // The original example is collapsed in this diff view; the snippet below
    // is a sketch of how TI_IO-style macros are typically used, with made-up
    // struct and field names.
    struct Config {
        int version;
        std::string name;

        TI_IO_DEF(version, name);  // fields taking part in (de)serialization
    };
2 changes: 1 addition & 1 deletion python/taichi/__init__.py

@@ -1,7 +1,7 @@
 from taichi.main import main
 from taichi.core import ti_core
 from taichi.core import start_memory_monitoring, is_release, package_root
-from taichi.misc.util import vec, veci, set_gdb_trigger, set_logging_level, info, warn, error, debug, trace, INFO, WARN, ERROR, DEBUG, TRACE
+from taichi.misc.util import vec, veci, set_gdb_trigger, print_profile_info, set_logging_level, info, warn, error, debug, trace, INFO, WARN, ERROR, DEBUG, TRACE
 from taichi.core.util import require_version
 from taichi.tools import *
 from taichi.misc import *
4 changes: 4 additions & 0 deletions python/taichi/misc/util.py

@@ -184,3 +184,7 @@ def set_logging_level(level):

 def set_gdb_trigger(on=True):
     taichi.ti_core.set_core_trigger_gdb_when_crash(on)


+def print_profile_info():
+    taichi.ti_core.print_profile_info()