
use taichi to calculate jacobian for differentiable rendering #6196

Open
fangde opened this issue Sep 28, 2022 · 4 comments
Assignees
Labels
question Question on using Taichi

Comments

@fangde

fangde commented Sep 28, 2022

Jacobian calculation

I'm trying to use Taichi to implement differentiable ray tracing for various computer vision tasks.
We are using least squares to optimize the parameters, which requires calculating the Jacobian.

After a bit of research, I managed to get it working, except for a few issues; someone may have optimization suggestions.

Background

The idea is to shoot a ray through each pixel and calculate the intersection of the ray with the object.
The object can be parametrized, e.g. by its size and position.

Based on the measurement data (a photo), we can calculate the error between the rendered data and the measurement; by optimizing it against the object parameters, we may correctly estimate the parameters.

I have some test code: ray tracing a sphere, where the sphere's radius and position are the parameters to be optimized.
I run it once with some parameters, slightly modify them, render again, calculate the difference, and then use least-squares optimization to estimate the modified radius and position of the sphere.
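
In other words, the estimation is a nonlinear least-squares fit of the scene parameters $\theta$ to the photo:

$$
\hat{\theta} = \arg\min_{\theta} \sum_{i,j} \big( I_{\text{render}}(i,j;\theta) - I_{\text{photo}}(i,j) \big)^2
$$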


import numpy as np
import taichi as ti
from taichi.math import vec3

ti.init()

res = (640, 480)
focus = 400

# Per-pixel ray directions and the reference depth image.
ray = ti.Vector.field(n=3, dtype=ti.f32, shape=(res[0], res[1]))
img = ti.field(dtype=ti.f32, shape=(res[0], res[1]))

# Per-pixel residuals; duals are needed for forward-mode autodiff.
d_losses = ti.field(float, shape=(res[0], res[1]), needs_dual=True, needs_grad=True)

# Scene parameters: sphere radius and center (x, y, z).
scene_params = ti.field(dtype=ti.f32, shape=4, needs_dual=True, needs_grad=True)

@ti.kernel
def init_ray():
    # Build a normalized pinhole-camera ray for each pixel.
    for i, j in ray:
        v = vec3((i - res[0] / 2.0) / focus, (j - res[1] / 2.0) / focus, 1.0)
        ray[i, j] = v.normalized()

@ti.func
def ray_sphere(ray):
    # Ray-sphere intersection: returns the hit distance along the ray,
    # or 0.0 if the ray misses the sphere.
    r = scene_params[0]
    sc = vec3(scene_params[1], scene_params[2], scene_params[3])

    tc = sc.dot(ray) * ray  # projection of the sphere center onto the ray
    d = sc - tc             # perpendicular offset from the ray to the center

    dd = 0.0
    if r < d.norm():
        dd = 0.0
    else:
        dd = tc.norm() - ti.sqrt(r * r - d.norm_sqr())

    return dd

@ti.kernel
def trace_sphere():
    # Render the reference depth image with the current parameters.
    for i, j in ray:
        img[i, j] = ray_sphere(ray[i, j])


@ti.kernel
def trace_loss():
    # Per-pixel residual: re-rendered depth minus the reference depth.
    for i, j in ray:
        d = ray_sphere(ray[i, j])
        d_losses[i, j] = d - img[i, j]




init_ray()

# Render the "measurement" with the ground-truth parameters.
scene_params.from_numpy(np.array([10.0, 10.0, 10.0, 100.0], dtype=np.float32))
trace_sphere()

# Perturb every parameter by 0.1 and compute the per-pixel residuals.
scene_params.from_numpy(np.array([10.1, 10.1, 10.1, 100.1], dtype=np.float32))
trace_loss()

trace_loss finds the difference at each pixel.

To calculate the Jacobian, I use forward-mode autodiff:

seeds = [0.0] * 4
J = np.zeros((res[0], res[1], 4), dtype=np.float32)

# One forward pass per parameter: seeding parameter ip fills the
# duals of d_losses with the ip-th column of the Jacobian.
for ip in range(4):
    seeds[ip] = 1.0
    with ti.ad.FwdMode(loss=d_losses, param=scene_params, seed=seeds):
        trace_loss()
    J[:, :, ip] = d_losses.dual.to_numpy()
    seeds[ip] = 0.0

And finally, the Gauss-Newton step is calculated in a least-squares manner:

JL = J.reshape(-1, 4)
dl = d_losses.to_numpy().reshape(-1)

# Normal equations: least-squares solution for the parameter perturbation.
es = np.linalg.inv(JL.T @ JL) @ JL.T @ dl

If es is correct, every entry should be about 0.1.
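
For reference, the last line above is the closed-form solution of the linearized least-squares problem (standard Gauss-Newton reasoning, nothing specific to Taichi):

$$
\hat{\Delta p} = \arg\min_{\Delta p} \lVert J\,\Delta p - \Delta L \rVert^2 = (J^\top J)^{-1} J^\top \Delta L,
$$

where $J$ is the stacked per-pixel Jacobian (JL) and $\Delta L$ is the flattened residual vector (dl). Since every parameter was perturbed by 0.1, each entry of the estimate should come out close to 0.1, up to linearization error near the sphere boundary.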

@fangde fangde added the question Question on using Taichi label Sep 28, 2022
@taichi-gardener taichi-gardener moved this to Untriaged in Taichi Lang Sep 28, 2022
@fangde
Author

fangde commented Sep 28, 2022

It almost works,
except for a few issues:

  1. I need to run it 4 times for the 4 parameters. I wonder whether I can get the Jacobian in one go.

  2. Autodiff works, except at the sphere boundary, where there is a big discontinuity. There is a SIGGRAPH paper about it (https://people.csail.mit.edu/tzumao/diffrt/diffrt.pdf), but I don't know how to add that. For example, I want to calculate the gradient manually on the boundary, but use the autodiff gradient elsewhere.

@erizmr erizmr self-assigned this Sep 28, 2022
@erizmr
Contributor

erizmr commented Sep 29, 2022

Hi @fangde, thanks for the detailed description.

  1. Just wondering: do you only need a more convenient interface such as ti.ad.jac, or would you like to have the 4 computations executed in parallel?
  2. The Taichi autodiff system supports replacing an autograd kernel with a manually written computation routine. The decorator pair ti.ad.grad_replaced and ti.ad.grad_for is designed for this (see the sketch after this list). An example is shown in the docs: https://docs.taichi-lang.org/docs/differentiable_programming#extending-taichi-autodiff-system
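
A minimal sketch of that pattern, adapted from the linked doc page (the field names and the hand-written gradient here are illustrative only, and it uses reverse mode via ti.ad.Tape as the doc example does):

import numpy as np
import taichi as ti

ti.init()

n = 16
x = ti.field(ti.f32, shape=n, needs_grad=True)
total = ti.field(ti.f32, shape=(), needs_grad=True)
x.from_numpy(np.arange(n, dtype=np.float32))

@ti.kernel
def accumulate():
    for i in range(n):
        total[None] += x[i] * x[i]

@ti.kernel
def manual_grad():
    # Hand-coded derivative of total = sum(x_i^2): d(total)/d(x_i) = 2 * x_i.
    for i in range(n):
        x.grad[i] += 2.0 * x[i] * total.grad[None]

# Tell the autodiff system not to differentiate this Python-scope function ...
@ti.ad.grad_replaced
def forward():
    accumulate()

# ... and use this manually written routine as its gradient instead.
@ti.ad.grad_for(forward)
def backward():
    manual_grad()

with ti.ad.Tape(loss=total):
    forward()

print(x.grad.to_numpy())  # 2 * x, produced by the manual gradient

In your case, the manual routine would be the boundary-aware gradient, while the kernels away from the boundary keep using autodiff.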

@fangde
Author

fangde commented Sep 29, 2022

@erizmr
thank you for the reply.

  1. I would like to have something like ti.ad.jac. To my knowledge, each forward pass calculates the loss and one dual. This is a toy example; in a real application I may have a few hundred parameters for the scene, in which case calculating the Jacobian carries the overhead of re-rendering the scene a few hundred times. I think the main benefit of having ti.ad.jac would be performance: it would save the overhead of re-rendering and repeated kernel launches (a convenience wrapper is sketched after this list).
  2. As for manual calculation: for the small toy example I have done it with NumPy, but the reason I switched to autodiff is that I find the manual work unmaintainable. So what I am looking for is some mechanism to access intermediate results such as $f(x)$ and $\frac{df}{dx}$, so I can manually combine values on the boundary without calculating $f(x)$ and $\frac{df}{dx}$ completely by hand.
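
For what it's worth, a convenience wrapper along these lines can already be written on top of ti.ad.FwdMode (this is a hypothetical helper, not an existing ti.ad.jac, and it does not remove the per-parameter re-render overhead; it only packages the loop from the original snippet):

import numpy as np
import taichi as ti

def compute_jacobian(loss_field, param_field, kernel, n_params):
    # Returns a (W, H, n_params) array: one forward-mode pass per parameter.
    w, h = loss_field.shape
    J = np.zeros((w, h, n_params), dtype=np.float32)
    seeds = [0.0] * n_params
    for ip in range(n_params):
        seeds[ip] = 1.0
        with ti.ad.FwdMode(loss=loss_field, param=param_field, seed=seeds):
            kernel()
        J[:, :, ip] = loss_field.dual.to_numpy()
        seeds[ip] = 0.0
    return J

# e.g. J = compute_jacobian(d_losses, scene_params, trace_loss, 4)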

@erizmr
Contributor

erizmr commented Sep 30, 2022

Hi @fangde

  1. Just as you mentioned, forward-mode autodiff can only compute one column of the Jacobian matrix at a time (see the note after this list). Since Taichi does not support parallel execution between kernels, it has to run as many times as there are input parameters to get the full Jacobian matrix. One good thing is that the for i, j in ray loop inside the kernel is massively parallelized, i.e., the overhead of rendering once might not be very high.
  2. To my understanding, what you would like is to manually overwrite some part of the derivative chain. Given a function f(g(h(x))), the derivative computed by the chain rule is df/dx = df/dg * dg/dh * dh/dx, and you could replace dg/dh with a manual function. Am I understanding this correctly?
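
As a note on point 1 (standard forward-mode reasoning, not Taichi-specific): seeding the $k$-th parameter with a unit dual propagates exactly the $k$-th column of the Jacobian,

$$
J e_k = \frac{\partial L}{\partial p_k},
$$

so assembling the full $m \times n$ Jacobian needs $n$ forward passes, whereas reverse mode would need one pass per output row ($m$ passes). This is why forward mode fits this few-parameters, many-pixels setting.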

@erizmr erizmr moved this from Untriaged to Todo in Taichi Lang Sep 30, 2022
Projects
Status: Todo
Development

No branches or pull requests

2 participants