[skip tests] [skip docs] start joss paper draft
avik-pal committed Oct 2, 2023
1 parent 1ee7e66 commit f194ad1
Showing 3 changed files with 100 additions and 0 deletions.
23 changes: 23 additions & 0 deletions .github/workflows/JOSSPaper.yml
@@ -0,0 +1,23 @@
on: [push]

jobs:
  paper:
    runs-on: ubuntu-latest
    name: Paper Draft
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Build draft PDF
        uses: openjournals/openjournals-draft-action@master
        with:
          journal: joss
          # This should be the path to the paper within your repo.
          paper-path: joss/paper.md
      - name: Upload
        uses: actions/upload-artifact@v1
        with:
          name: paper
          # This is the output path where Pandoc will write the compiled
          # PDF. Note, this should be the same directory as the input
          # paper.md
          path: joss/paper.pdf
28 changes: 28 additions & 0 deletions joss/paper.bib
@@ -0,0 +1,28 @@
@inproceedings{10.1145/3458817.3476165,
  author = {Moses, William S. and Churavy, Valentin and Paehler, Ludger and H\"{u}ckelheim, Jan and Narayanan, Sri Hari Krishna and Schanen, Michel and Doerfert, Johannes},
  title = {Reverse-Mode Automatic Differentiation and Optimization of GPU Kernels via Enzyme},
  year = {2021},
  isbn = {9781450384421},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3458817.3476165},
  doi = {10.1145/3458817.3476165},
  booktitle = {Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis},
  articleno = {61},
  numpages = {16},
  keywords = {CUDA, LLVM, ROCm, HPC, AD, GPU, automatic differentiation},
  location = {St. Louis, Missouri},
  series = {SC '21}
}

@inproceedings{NEURIPS2020_9332c513,
  author = {Moses, William and Churavy, Valentin},
  booktitle = {Advances in Neural Information Processing Systems},
  editor = {H. Larochelle and M. Ranzato and R. Hadsell and M. F. Balcan and H. Lin},
  pages = {12472--12485},
  publisher = {Curran Associates, Inc.},
  title = {Instead of Rewriting Foreign Code for Machine Learning, Automatically Synthesize Fast Gradients},
  url = {https://proceedings.neurips.cc/paper/2020/file/9332c513ef44b682e9347822c2e457ac-Paper.pdf},
  volume = {33},
  year = {2020}
}
49 changes: 49 additions & 0 deletions joss/paper.md
@@ -0,0 +1,49 @@
---
title: 'Lux.jl: Bridging Scientific Computing & Deep Learning'
tags:
  - Julia
  - Deep Learning
  - Scientific Computing
  - Neural Ordinary Differential Equations
  - Deep Equilibrium Models
authors:
  - name: Avik Pal
    orcid: 0000-0002-3938-7375
    affiliation: "1"
affiliations:
  - name: Electrical Engineering and Computer Science, CSAIL, MIT
    index: 1
date: 2 October 2023
bibliography: paper.bib
---

# Summary

Combining machine learning and scientific computing has recently led to the development of
methods like Universal Differential Equations, Neural Differential Equations, and Deep
Equilibrium Models, which have been pushing the boundaries of the physical sciences. However,
every major deep learning framework requires numerical software to be rewritten to satisfy its
specific requirements. Lux.jl is a deep learning framework written in Julia with the correct
abstractions to provide seamless composability with scientific computing software. Lux uses
pure functions to provide a compiler- and automatic-differentiation-friendly interface without
compromising performance.
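
As a minimal sketch of this pure-function design (the layer sizes and data below are
illustrative, not taken from the paper): a Lux model holds no parameters of its own;
`Lux.setup` creates the parameters and state explicitly, and the forward pass is an ordinary
function of input, parameters, and state.

```julia
using Lux, Random

rng = Random.default_rng()

# The model is an immutable description of the computation; it stores no parameters.
model = Chain(Dense(2 => 16, tanh), Dense(16 => 1))

# Parameters and state live outside the model, as plain nested NamedTuples.
ps, st = Lux.setup(rng, model)

# The forward pass is a pure function: (x, ps, st) -> (y, new_st).
x = rand(rng, Float32, 2, 8)
y, st = model(x, ps, st)
```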

# Statement of Need

## Switching Automatic Differentiation Frameworks

## Support for CPU, NVIDIA GPUs and AMD GPUs

## Composition with Scientific Computing Software

## Ecosystem

# Limitations

Lux.jl is still in the early days of its development and has the following known limitations:

* Training small neural networks on CPUs is not yet optimized. For small networks,
  [SimpleChains.jl](https://github.com/PumasAI/SimpleChains.jl) is the fastest option!
* Nested automatic differentiation is currently not well supported; a sketch of the pattern in
  question appears below. We hope to fix this soon with a migration to the Enzyme automatic
  differentiation framework [@10.1145/3458817.3476165; @NEURIPS2020_9332c513].
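
To make "nested" concrete, the following sketch (hypothetical code, not taken from the Lux
documentation or test suite) takes an outer gradient through a function that itself calls the
AD engine:

```julia
using Zygote

# Inner gradient: d/dz sum(z .^ 2) = 2z.
inner(x) = only(Zygote.gradient(z -> sum(abs2, z), x))

# The outer gradient must differentiate *through* the inner gradient call;
# it is this nesting that is not yet handled robustly.
x0 = Float32[1.0, 2.0, 3.0]
outer = only(Zygote.gradient(x -> sum(inner(x)), x0))  # expected: [2.0, 2.0, 2.0]
```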

# References
