Add torch.compile layernorm
Summary:
While this will probably also be covered by microbenchmarks for torch
inductor, it seems like we should also have it in tritonbench, to cover
Triton-based performance changes here.

Reviewed By: sijiac

Differential Revision: D55902435

fbshipit-source-id: 2622f807086d87a0036c65aa21524242e646a963
bertmaher authored and facebook-github-bot committed Apr 9, 2024
1 parent d9a9600 commit 161f2cc
Showing 1 changed file with 8 additions and 0 deletions.
torchbenchmark/operators/layer_norm/__init__.py
@@ -23,6 +23,14 @@ def triton_layer_norm(self, *args):
     def torch_layer_norm(self, *args):
         return lambda: F.layer_norm(*args)
 
+    @register_benchmark()
+    def torch_compile_layer_norm(self, *args):
+        @torch.compile
+        def inner(*args):
+            return F.layer_norm(*args)
+
+        return lambda: inner(*args)
+
     def get_bwd_fn(self, fwd_fn: Callable) -> Callable:
         y = fwd_fn()
         dy = 0.1 * torch.randn_like(y)
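For context, here is a minimal standalone sketch of the pattern this commit benchmarks: wrapping F.layer_norm in torch.compile and sanity-checking it against eager execution. The shapes and the CUDA device below are illustrative assumptions, not part of the commit.

import torch
import torch.nn.functional as F

# Compile a thin wrapper around F.layer_norm; the first call triggers
# compilation, and later calls reuse the generated kernel.
@torch.compile
def compiled_layer_norm(x, normalized_shape, weight, bias):
    return F.layer_norm(x, normalized_shape, weight, bias)

# Illustrative inputs (assumption: a CUDA device is available;
# the same code runs on CPU by dropping device="cuda").
x = torch.randn(4096, 1024, device="cuda")
weight = torch.randn(1024, device="cuda")
bias = torch.randn(1024, device="cuda")

out = compiled_layer_norm(x, (1024,), weight, bias)
ref = F.layer_norm(x, (1024,), weight, bias)
torch.testing.assert_close(out, ref)  # compiled result matches eager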
