Syntax

Kernels

Kernel arguments must be type-hinted. A kernel can take at most 8 parameters. For example:

@ti.kernel
def print_xy(x: ti.i32, y: ti.f32):
    print(x + y)

A kernel can have a scalar return value. If a kernel has a return value, it must be type-hinted, and the return value will be automatically cast into the hinted type. For example:

@ti.kernel
def add_xy(x: ti.f32, y: ti.f32) -> ti.i32:
    return x + y  # same as: ti.cast(x + y, ti.i32)

res = add_xy(2.3, 1.1)
print(res)  # 3, since return type is ti.i32

Note

For now, only a single scalar is supported as a return value. Returning a ti.Matrix or ti.Vector is not supported, and neither is a Python-style tuple return. For example:

@ti.kernel
def bad_kernel() -> ti.Matrix:
    return ti.Matrix([[1, 0], [0, 1]])  # Error

@ti.kernel
def bad_kernel() -> (ti.i32, ti.f32):
    x = 1
    y = 0.5
    return x, y  # Error

We also support template arguments (see Template metaprogramming) and external array arguments (see Interacting with external arrays) in Taichi kernels.

Warning

When using differentiable programming, there are a few more constraints on kernel structures. See the Kernel Simplicity Rule in Differentiable programming (WIP).

Also, please do not use kernel return values in differentiable programming, since the return value will not be tracked by automatic differentiation. Instead, store the result into a global variable (e.g. loss[None]).

Functions

Use @ti.func to decorate your Taichi functions. These functions are callable only in Taichi-scope; do not call them in Python-scope.

@ti.func
def laplacian(t, i, j):
    return inv_dx2 * (
        -4 * p[t, i, j] + p[t, i, j - 1] + p[t, i, j + 1] + p[t, i + 1, j] +
        p[t, i - 1, j])

@ti.kernel
def fdtd(t: ti.i32):
    for i in range(n_grid): # Parallelized
        for j in range(n_grid): # Serial loops inside each parallel thread
            laplacian_p = laplacian(t - 2, i, j)
            laplacian_q = laplacian(t - 1, i, j)
            p[t, i, j] = 2 * p[t - 1, i, j] + (
                c * c * dt * dt + c * alpha * dt) * laplacian_q - p[
                           t - 2, i, j] - c * alpha * dt * laplacian_p

Warning

Functions with multiple return statements are not supported for now. Use a local variable to store the results, so that you end up with only one return statement:

# Bad function - two return statements
@ti.func
def safe_sqrt(x):
    if x >= 0:
        return ti.sqrt(x)
    else:
        return 0.0

# Good function - single return statement
@ti.func
def safe_sqrt(x):
    rst = 0.0
    if x >= 0:
        rst = ti.sqrt(x)
    else:
        rst = 0.0
    return rst

Warning

Currently, all functions are force-inlined. Therefore, no recursion is allowed.

Note

Function arguments are passed by value.

Note

Unlike functions, kernels do not support vectors or matrices as arguments:

@ti.func
def sdf(u):  # functions support matrices and vectors as arguments. No type-hints needed.
    return u.norm() - 1

@ti.kernel
def render(d_x: ti.f32, d_y: ti.f32):  # kernels do not support vector/matrix arguments yet. We have to use a workaround.
    d = ti.Vector([d_x, d_y])
    p = ti.Vector([0.0, 0.0])
    t = sdf(p)
    p += d * t
    ...

Scalar arithmetics

Supported scalar functions:

ti.sin(x)
ti.cos(x)
ti.asin(x)
ti.acos(x)
ti.atan2(x, y)
ti.cast(x, data_type)
ti.sqrt(x)
ti.floor(x)
ti.ceil(x)
ti.inv(x)
ti.tan(x)
ti.tanh(x)
ti.exp(x)
ti.log(x)
ti.random(data_type)
abs(x)
int(x)
float(x)
max(x, y)
min(x, y)
pow(x, y)

Note

Python 3 distinguishes / (true division) and // (floor division). For example, 1.0 / 2.0 = 0.5, 1 / 2 = 0.5, 1 // 2 = 0, and 4.2 // 2 = 2.0. Taichi follows this design:

  • true division on integral types first casts the operands to the default floating-point type.
  • floor division on floating-point types first casts the operands to the default integer type.

To avoid such implicit casting, you can manually cast your operands to the desired types using ti.cast. See Default precisions for more details on default numerical types.
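Since Taichi-scope mirrors Python 3's division semantics, the distinction can be checked in plain Python:

```python
# Python 3 division semantics, which Taichi follows in Taichi-scope:
print(1.0 / 2.0)  # 0.5  -- true division
print(1 / 2)      # 0.5  -- true division casts the integer operands
print(1 // 2)     # 0    -- floor division
print(4.2 // 2)   # 2.0  -- floor division (a float in plain Python;
                  #         Taichi casts the operands to integers first)
```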

Note

When these scalar functions are applied to matrices and vectors, they are applied element-wise. For example:

B = ti.Matrix([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
C = ti.Matrix([[3.0, 4.0, 5.0], [6.0, 7.0, 8.0]])

A = ti.sin(B)
# is equivalent to
for i in ti.static(range(2)):
    for j in ti.static(range(3)):
        A[i, j] = ti.sin(B[i, j])

A = ti.pow(B, 2)
# is equivalent to
for i in ti.static(range(2)):
    for j in ti.static(range(3)):
        A[i, j] = ti.pow(B[i, j], 2)

A = ti.pow(B, C)
# is equivalent to
for i in ti.static(range(2)):
    for j in ti.static(range(3)):
        A[i, j] = ti.pow(B[i, j], C[i, j])

A += 2
# is equivalent to
for i in ti.static(range(2)):
    for j in ti.static(range(3)):
        A[i, j] += 2

A += B
# is equivalent to
for i in ti.static(range(2)):
    for j in ti.static(range(3)):
        A[i, j] += B[i, j]

Debugging

Debug your program with print() in Taichi-scope. For example:

@ti.kernel
def inside_taichi_scope():
    x = 233
    print('hello', x)
    #=> hello 233

    m = ti.Matrix([[2, 3, 4], [5, 6, 7]])
    print('m is', m)
    #=> m is [[2, 3, 4], [5, 6, 7]]

    v = ti.Vector([3, 4])
    print('v is', v)
    #=> v is [3, 4]

Note

For now, print is only supported on CPU, CUDA and OpenGL backends.

For the CUDA backend, the printed result will not show up until ti.sync() is called:

import taichi as ti
ti.init(arch=ti.cuda)

@ti.kernel
def kern():
    print('inside kernel')

print('before kernel')
kern()
print('after kernel')
ti.sync()
print('after sync')

This outputs:

before kernel
after kernel
inside kernel
after sync

Also note that host access and program termination will implicitly invoke ti.sync().