Merge branch 'master' into ggui-features
bobcao3 authored Feb 23, 2023
2 parents d745f72 + 6170861 commit 291d486
Showing 119 changed files with 2,891 additions and 1,433 deletions.
16 changes: 14 additions & 2 deletions .github/workflows/perf.yml
@@ -3,6 +3,7 @@ on:
push:
branches:
- master
workflow_dispatch:

jobs:
gpu_backends:
@@ -48,10 +49,21 @@ jobs:
run: |
. .github/workflows/scripts/common-utils.sh
MEMFREQ=($(sudo nvidia-smi --query-supported-clocks=mem --format=csv | head -n 2 | tail -n 1))
GRFREQ=($(sudo nvidia-smi --query-supported-clocks=gr --format=csv | head -n 35 | tail -n 1))
sudo nvidia-smi -pm 1
sudo nvidia-smi -lmc ${MEMFREQ[0]}
sudo nvidia-smi -lgc ${GRFREQ[0]}
sleep 0.5
ci-docker-run-gpu --name taichi-benchmark-run \
-e BENCHMARK_UPLOAD_TOKEN \
registry.taichigraphics.com/taichidev-ubuntu18.04:v0.3.4 \
/home/dev/taichi/.github/workflows/scripts/unix-perf-mon.sh
registry.taichigraphics.com/taichidev-ubuntu18.04:v0.3.4 \
/home/dev/taichi/.github/workflows/scripts/unix-perf-mon.sh
sudo nvidia-smi -rmc
sudo nvidia-smi -rgc
sudo nvidia-smi -pm 0
env:
PY: '3.8'
BENCHMARK_UPLOAD_TOKEN: ${{ secrets.BENCHMARK_UPLOAD_TOKEN }}
25 changes: 16 additions & 9 deletions .github/workflows/scripts/unix_test.sh
@@ -26,6 +26,22 @@ ti diagnose
ti changelog
echo "wanted archs: $TI_WANTED_ARCHS"


if [ -z "$TI_SKIP_CPP_TESTS" ]; then
echo "Running cpp tests on platform:" "${PLATFORM}"
# Temporary hack before CI Pipeline Overhaul
if [[ $PLATFORM == *"linux"* ]]; then
if nvidia-smi -L | grep "Tesla P4"; then
python3 tests/run_tests.py --cpp -vr2 -t6 -m "not sm70"
else
python3 tests/run_tests.py --cpp -vr2 -t6
fi
else
python3 tests/run_tests.py --cpp -vr2 -t6
fi
fi


if [ "$TI_RUN_RELEASE_TESTS" == "1" ]; then
python3 -m pip install PyYAML
git clone https://github.com/taichi-dev/taichi-release-tests
@@ -51,15 +67,6 @@ EOF
popd
fi

if [ -z "$TI_SKIP_CPP_TESTS" ]; then
echo "Running cpp tests on platform:" "${PLATFORM}"
python3 tests/run_tests.py --cpp
if [[ $PLATFORM == *"m1"* ]]; then
echo "Running cpp tests with statically linked C-API library"
python3 tests/run_tests.py --cpp --use_static_c_api
fi
fi

function run-it {
ARCH=$1
PARALLELISM=$2
8 changes: 7 additions & 1 deletion .github/workflows/scripts/win_test.ps1
@@ -28,7 +28,13 @@ Invoke pip install -r requirements_test.txt
Invoke pip install "paddlepaddle==2.3.0; python_version < '3.10'"

# Run C++ tests
Invoke python tests/run_tests.py --cpp
#
# Temporary hack before CI Pipeline Overhaul
if (nvidia-smi -L | Select-String "Tesla P4") {
Invoke python tests/run_tests.py --cpp -vr2 -t6 -m "not sm70"
} else {
Invoke python tests/run_tests.py --cpp -vr2 -t6
}

# Fail fast, give priority to the error-prone tests
Invoke python tests/run_tests.py -vr2 -t1 -k "paddle" -a cpu
27 changes: 27 additions & 0 deletions .github/workflows/testing.yml
@@ -125,6 +125,9 @@ jobs:
env:
PY: ${{ matrix.python }}
steps:
- name: Workaround checkout Needed single revision issue
run: git submodule foreach 'git rev-parse HEAD > /dev/null 2>&1 || rm -rf $PWD'

- uses: actions/checkout@v3
with:
fetch-depth: '0'
@@ -186,6 +189,9 @@

runs-on: ${{ matrix.tags }}
steps:
- name: Workaround checkout Needed single revision issue
run: git submodule foreach 'git rev-parse HEAD > /dev/null 2>&1 || rm -rf $PWD'

- uses: actions/checkout@v3
with:
submodules: 'recursive'
@@ -253,6 +259,9 @@ jobs:
timeout-minutes: ${{ github.event.schedule != '0 18 * * *' && 90 || 120 }}
runs-on: [self-hosted, amdgpu]
steps:
- name: Workaround checkout Needed single revision issue
run: git submodule foreach 'git rev-parse HEAD > /dev/null 2>&1 || rm -rf $PWD'

- uses: actions/checkout@v3
with:
submodules: 'recursive'
@@ -388,6 +397,9 @@ jobs:
shell: '/usr/bin/arch -arch arm64e /bin/bash --noprofile --norc -eo pipefail {0}'
runs-on: [self-hosted, m1]
steps:
- name: Workaround checkout Needed single revision issue
run: git submodule foreach 'git rev-parse HEAD > /dev/null 2>&1 || rm -rf $PWD'

- uses: actions/checkout@v3
with:
fetch-depth: '0'
@@ -453,6 +465,9 @@ jobs:
timeout-minutes: ${{ github.event.schedule != '0 18 * * *' && 90 || 120 }}
runs-on: [self-hosted, Linux, cuda, vulkan, cn]
steps:
- name: Workaround checkout Needed single revision issue
run: git submodule foreach 'git rev-parse HEAD > /dev/null 2>&1 || rm -rf $PWD'

- uses: actions/checkout@v3
with:
submodules: 'recursive'
@@ -511,6 +526,9 @@ jobs:
REDIS_HOST: 172.16.5.8
PY: '3.9'
steps:
- name: Workaround checkout Needed single revision issue
run: git submodule foreach 'git rev-parse HEAD > /dev/null 2>&1 || rm -rf $PWD'

- uses: actions/checkout@v3
name: Checkout taichi
with:
@@ -589,6 +607,9 @@ jobs:
REDIS_HOST: 172.16.5.8
PY: '3.9'
steps:
- name: Workaround checkout Needed single revision issue
run: git submodule foreach 'git rev-parse HEAD > /dev/null 2>&1 || rm -rf $PWD'

- uses: actions/checkout@v3
name: Checkout taichi
with:
@@ -666,6 +687,9 @@ jobs:
REDIS_HOST: 172.16.5.8
PY: '3.9'
steps:
- name: Workaround checkout Needed single revision issue
run: git submodule foreach 'git rev-parse HEAD > /dev/null 2>&1 || rm -rf $PWD'

- uses: actions/checkout@v3
name: Checkout taichi
with:
@@ -755,6 +779,9 @@ jobs:

runs-on: ${{ matrix.tags }}
steps:
- name: Workaround checkout Needed single revision issue
run: git submodule foreach 'git rev-parse HEAD > /dev/null 2>&1 || rm -rf $PWD'

- uses: actions/checkout@v3
with:
submodules: 'recursive'
2 changes: 1 addition & 1 deletion c_api/docs/taichi/taichi_core.h.md
@@ -116,7 +116,7 @@ You can load a Taichi AOT module from the filesystem.
TiAotModule aot_module = ti_load_aot_module(runtime, "/path/to/aot/module");
```

`/path/to/aot/module` should point to the directory that contains a `metadata.tcb`.
`/path/to/aot/module` should point to the directory that contains a `metadata.json`.

You can destroy an unused AOT module, but please ensure that there is no kernel or compute graph related to it pending to [`ti_flush`](#function-ti_flush).
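
The teardown order described above can be sketched as follows (a hedged, minimal example: creating the runtime and launching the module's kernels are elided, and `unload_module` is an illustrative helper rather than part of the C API):

```c
#include <taichi/taichi_core.h>

/* Assumes `runtime` was created earlier and that any kernels or compute
 * graphs obtained from `aot_module` have already been launched. */
void unload_module(TiRuntime runtime, TiAotModule aot_module) {
  ti_flush(runtime);                  /* submit pending work that still references the module */
  ti_wait(runtime);                   /* wait for that work to finish on the device */
  ti_destroy_aot_module(aot_module);  /* now the module can be destroyed safely */
}
```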

2 changes: 1 addition & 1 deletion c_api/include/taichi/taichi_core.h
@@ -142,7 +142,7 @@
// ```
//
// `/path/to/aot/module` should point to the directory that contains a
// `metadata.tcb`.
// `metadata.json`.
//
// You can destroy an unused AOT module, but please ensure that there is no
// kernel or compute graph related to it pending to
49 changes: 49 additions & 0 deletions c_api/tests/c_api_aot_test.cpp
@@ -122,6 +122,41 @@ void texture_aot_kernel_test(TiArch arch) {
}
}

static void shared_array_aot_test(TiArch arch) {
uint32_t kArrLen = 8192;

const auto folder_dir = getenv("TAICHI_AOT_FOLDER_PATH");

std::stringstream aot_mod_ss;
aot_mod_ss << folder_dir;

ti::Runtime runtime(arch);

ti::NdArray<float> v_array =
runtime.allocate_ndarray<float>({kArrLen}, {}, true);
ti::NdArray<float> d_array =
runtime.allocate_ndarray<float>({kArrLen}, {}, true);
ti::NdArray<float> a_array =
runtime.allocate_ndarray<float>({kArrLen}, {}, true);
ti::AotModule aot_mod = runtime.load_aot_module(aot_mod_ss.str().c_str());
ti::Kernel k_run = aot_mod.get_kernel("run");

k_run.push_arg(v_array);
k_run.push_arg(d_array);
k_run.push_arg(a_array);
k_run.launch();
runtime.wait();

// Check Results
float *data = reinterpret_cast<float *>(a_array.map());

for (int i = 0; i < kArrLen; ++i) {
EXPECT_EQ(data[i], kArrLen);
}

a_array.unmap();
}

TEST_F(CapiTest, AotTestCpuField) {
TiArch arch = TiArch::TI_ARCH_X64;
field_aot_test(arch);
@@ -166,3 +201,17 @@ TEST_F(CapiTest, GraphTestVulkanTextureKernel) {
texture_aot_kernel_test(arch);
}
}

TEST_F(CapiTest, AotTestCudaSharedArray) {
if (ti::is_arch_available(TI_ARCH_CUDA)) {
TiArch arch = TiArch::TI_ARCH_CUDA;
shared_array_aot_test(arch);
}
}

TEST_F(CapiTest, AotTestVulkanSharedArray) {
if (ti::is_arch_available(TI_ARCH_VULKAN)) {
TiArch arch = TiArch::TI_ARCH_VULKAN;
shared_array_aot_test(arch);
}
}
3 changes: 2 additions & 1 deletion cmake/TaichiCore.cmake
@@ -273,10 +273,12 @@ endif()
add_subdirectory(taichi/util)
add_subdirectory(taichi/common)
add_subdirectory(taichi/rhi/interop)
add_subdirectory(taichi/compilation_manager)

target_link_libraries(${CORE_LIBRARY_NAME} PRIVATE taichi_util)
target_link_libraries(${CORE_LIBRARY_NAME} PRIVATE taichi_common)
target_link_libraries(${CORE_LIBRARY_NAME} PRIVATE interop_rhi)
target_link_libraries(${CORE_LIBRARY_NAME} PRIVATE compilation_manager)

if (TI_WITH_CUDA AND TI_WITH_CUDA_TOOLKIT)
find_package(CUDAToolkit REQUIRED)
@@ -316,7 +318,6 @@ set(SPIRV-Headers_SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/external/SPIRV-Headers)
set(ENABLE_SPIRV_TOOLS_INSTALL OFF)
add_subdirectory(external/SPIRV-Tools)
add_subdirectory(taichi/codegen/spirv)
add_subdirectory(taichi/cache/gfx)
add_subdirectory(taichi/runtime/gfx)

if (TI_WITH_OPENGL OR TI_WITH_VULKAN OR TI_WITH_DX11 OR TI_WITH_METAL)
2 changes: 1 addition & 1 deletion docs/lang/articles/c-api/taichi_core.md
@@ -116,7 +116,7 @@ You can load a Taichi AOT module from the filesystem.
TiAotModule aot_module = ti_load_aot_module(runtime, "/path/to/aot/module");
```

`/path/to/aot/module` should point to the directory that contains a `metadata.tcb`.
`/path/to/aot/module` should point to the directory that contains a `metadata.json`.

You can destroy an unused AOT module, but please ensure that there is no kernel or compute graph related to it pending to [`ti_flush`](#function-ti_flush).

2 changes: 1 addition & 1 deletion docs/lang/articles/debug/debugging.md
@@ -127,7 +127,7 @@ Because threads are processed in random order, Taichi's automated parallelization

### Serialize an entire Taichi program

If you choose CPU as the backend, you can set `cpu_max_num_thread=1` when initializing Taichi to serialize the program. Then the program runs on a single thread and its behavior becomes deterministic. For example:
If you choose CPU as the backend, you can set `cpu_max_num_threads=1` when initializing Taichi to serialize the program. Then the program runs on a single thread and its behavior becomes deterministic. For example:

```python
ti.init(arch=ti.cpu, cpu_max_num_threads=1)
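
A minimal sketch of the effect (the kernel name, field, and loop bound are illustrative): with `cpu_max_num_threads=1` the top-level loop runs on a single thread, so its iterations execute in order and output such as `print` appears in the same order on every run.

```python
import taichi as ti

# Serialize the whole program: one CPU thread, deterministic behavior.
ti.init(arch=ti.cpu, cpu_max_num_threads=1)

x = ti.field(dtype=ti.i32, shape=8)

@ti.kernel
def fill_and_print():
    for i in range(8):  # executed by a single thread, so iterations run in order
        x[i] = i
        print('iteration', i)

fill_and_print()
```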
3 changes: 1 addition & 2 deletions docs/lang/articles/deployment/tutorial.md
@@ -81,8 +81,7 @@ Now that we're done with Kernel compilation, let's take a look at the generated
├── demo
│ ├── add_base_c78_0_k0001_vk_0_t00.spv
│ ├── init_c76_0_k0000_vk_0_t00.spv
│ ├── metadata.json
│ └── metadata.tcb
│ └── metadata.json
└── demo.py
```

13 changes: 10 additions & 3 deletions docs/lang/articles/get-started/hello_world.md
@@ -172,7 +172,7 @@ The field pixels is treated as an iterator, with `i` and `j` being integer indices

It is important to keep in mind that for loops nested within other constructs, such as `if/else` statements or other loops, are not automatically parallelized and are processed *sequentially*.

```python {3,7,14-15}
```python
@ti.kernel
def fill():
total = 0
@@ -209,7 +209,7 @@ def foo():

### Display the result

Finally we render the result on screen using Taichi's built-in [GUI System](../visualization/gui_system.md):
To render the result on screen, Taichi provides a built-in [GUI System](../visualization/gui_system.md). Use the `gui.set_image()` method to set the content of the window and `gui.show()` method to show the updated image.

```python skip-ci:Trivial
gui = ti.GUI("Julia Set", res=(n * 2, n))
@@ -223,7 +223,14 @@ while gui.running:
i += 1
```

To display the result on your screen, use the `gui.set_image()` method to set the content of the window, and then call the `gui.show()` method to show the updated image.
Taichi's GUI system uses the standard Cartesian coordinate system to define pixel coordinates. The origin of the coordinate system is located at the lower left corner of the screen. The `(0, 0)` element in `pixels` will be mapped to the lower left corner of the window, and the `(639, 319)` element will be mapped to the upper right corner of the window, as shown in the following image:

<center>

![](https://raw.githubusercontent.com/taichi-dev/public_files/master/taichi/doc/pixels.png)

</center>
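
A minimal sketch of this mapping (assuming the 640×320 window used above; the field and kernel names are illustrative): lighting up `pixels[0, 0]` marks the lower-left corner of the window, while `pixels[639, 319]` marks the upper-right corner.

```python
import taichi as ti

ti.init(arch=ti.cpu)

pixels = ti.field(dtype=float, shape=(640, 320))

@ti.kernel
def mark_corners():
    pixels[0, 0] = 1.0      # rendered at the lower-left corner of the window
    pixels[639, 319] = 1.0  # rendered at the upper-right corner of the window

mark_corners()

gui = ti.GUI("Coordinate check", res=(640, 320))
while gui.running:
    gui.set_image(pixels)
    gui.show()
```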


### Key takeaways

2 changes: 0 additions & 2 deletions docs/lang/articles/math/math_module.md
@@ -173,12 +173,10 @@ You can also compute the power, logarithm, and exponential of a complex number:


```python

@ti.kernel
def test():
x = tm.vec2(1, 1) # complex number 1 + 1j
y = tm.cpow(x, 2) # complex number (1 + 1j)**2 = 2j
z = tm.clog(x) # complex number (0.346574 + 0.785398j)
w = tm.cexp(x) # complex number (1.468694 + 2.287355j)

```
3 changes: 2 additions & 1 deletion python/taichi/__init__.py
@@ -10,7 +10,8 @@
# Provide a shortcut to types since they're commonly used.
from taichi.types.primitive_types import *

from taichi import ad, algorithms, experimental, graph, linalg, math, tools
from taichi import (ad, algorithms, experimental, graph, linalg, math, tools,
types)
from taichi.ui import GUI, hex_to_rgb, rgb_to_hex, ui

# Issue#2223: Do not reorder, or we're busted with partially initialized module
11 changes: 3 additions & 8 deletions python/taichi/examples/graph/texture_graph.py
@@ -51,19 +51,14 @@ def main():

_rw_tex = ti.graph.Arg(ti.graph.ArgKind.RWTEXTURE,
'rw_tex',
channel_format=ti.f32,
shape=(128, 128),
num_channels=1)
fmt=ti.Format.r32f,
ndim=2)
g_init_builder = ti.graph.GraphBuilder()
g_init_builder.dispatch(make_texture, _rw_tex)
g_init = g_init_builder.compile()

g_init.run({'rw_tex': texture})
_tex = ti.graph.Arg(ti.graph.ArgKind.TEXTURE,
'tex',
channel_format=ti.f32,
shape=(128, 128),
num_channels=1)
_tex = ti.graph.Arg(ti.graph.ArgKind.TEXTURE, 'tex', ndim=2)
g_builder = ti.graph.GraphBuilder()
g_builder.dispatch(paint, _t, _pixels_arr, _tex)
g = g_builder.compile()