# Sequential Attention mechanisms

A repository of sequential attention mechanisms from various papers, including their implementations and studies of how they behave within various model architectures.

## Using and viewing

The implementations themselves are primarily written in PyTorch (v0.3 for now). They are built so that they can be easily imported into and used within any model that follows the documented input/output shapes.
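
For orientation, here is a minimal sketch (not taken from this repo) of what plugging an attention module into a model might look like. The class name `AdditiveAttention`, its constructor signature, and the `(batch, seq_len, hidden)` shape convention are illustrative assumptions rather than the repo's actual API.

```python
# Hypothetical sketch: an additive attention module with (batch, seq_len, hidden)
# inputs, dropped into a decoder step. Names and shapes are assumptions, not the
# repo's documented interface.
import torch
import torch.nn as nn


class AdditiveAttention(nn.Module):
    """Stand-in attention: scores each encoder step against a query vector."""

    def __init__(self, hidden_size):
        super().__init__()
        self.proj = nn.Linear(hidden_size * 2, hidden_size)
        self.score = nn.Linear(hidden_size, 1)

    def forward(self, query, keys):
        # query: (batch, hidden), keys: (batch, seq_len, hidden)
        expanded = query.unsqueeze(1).expand_as(keys)
        energy = self.score(torch.tanh(self.proj(torch.cat([keys, expanded], dim=-1))))
        weights = torch.softmax(energy, dim=1)       # (batch, seq_len, 1)
        context = (weights * keys).sum(dim=1)        # (batch, hidden)
        return context, weights.squeeze(-1)


# Usage: attend over encoder outputs given the current decoder state.
batch, seq_len, hidden = 4, 10, 32
encoder_outputs = torch.randn(batch, seq_len, hidden)
decoder_state = torch.randn(batch, hidden)

attn = AdditiveAttention(hidden)
context, weights = attn(decoder_state, encoder_outputs)
print(context.shape, weights.shape)  # torch.Size([4, 32]) torch.Size([4, 10])
```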

The tests and studies of the implementations are done in Jupyter notebooks, which can be viewed without PyTorch installed (but are not runnable without it).

## References