Add flake8 and spell check, whitelist dictionary, docstrings #155

Merged · 4 commits · Dec 29, 2020
Changes from 2 commits
5 changes: 2 additions & 3 deletions .github/CONTRIBUTING.md
@@ -1,7 +1,7 @@
# How to contribute

We welcome contributions from external contributors, and this document
describes how to merge code changes into this `opt_einsum`.

## Getting Started

@@ -34,7 +34,7 @@ describes how to merge code changes into this `opt_einsum`.
integration returns checkmarks,
and multiple core developers give "Approved" reviews.

- # Additional Resources
+ ## Additional Resources

* [General GitHub documentation](https://help.github.com/)
* [PR best practices](http://codeinthehole.com/writing/pull-requests-and-other-good-practices-for-teams-using-github/)
@@ -115,4 +115,3 @@ available at [http://contributor-covenant.org/version/1/4][version]

[homepage]: http://contributor-covenant.org
[version]: http://contributor-covenant.org/version/1/4/

20 changes: 10 additions & 10 deletions README.md
@@ -1,14 +1,15 @@

# Optimized Einsum

[![Build Status](https://travis-ci.org/dgasmith/opt_einsum.svg?branch=master)](https://travis-ci.org/dgasmith/opt_einsum)
[![codecov](https://codecov.io/gh/dgasmith/opt_einsum/branch/master/graph/badge.svg)](https://codecov.io/gh/dgasmith/opt_einsum)
[![Anaconda-Server Badge](https://anaconda.org/conda-forge/opt_einsum/badges/version.svg)](https://anaconda.org/conda-forge/opt_einsum)
[![PyPI](https://img.shields.io/pypi/v/opt_einsum.svg)](https://pypi.org/project/opt-einsum/#description)
[![PyPIStats](https://img.shields.io/pypi/dm/opt_einsum)](https://pypistats.org/packages/opt-einsum)
[![Documentation Status](https://readthedocs.org/projects/optimized-einsum/badge/?version=latest)](http://optimized-einsum.readthedocs.io/en/latest/?badge=latest)
- [![DOI](http://joss.theoj.org/papers/10.21105/joss.00753/status.svg)](https://doi.org/10.21105/joss.00753)
+ [![DOI](https://joss.theoj.org/papers/10.21105/joss.00753/status.svg)](https://doi.org/10.21105/joss.00753)


- Optimized Einsum: A tensor contraction order optimizer
- ======================================================
+ ## Optimized Einsum: A tensor contraction order optimizer

Optimized einsum can significantly reduce the overall execution time of einsum-like expressions (e.g.,
[`np.einsum`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html),
@@ -17,7 +18,9 @@ Optimized einsum can significantly reduce the overall execution time of einsum-l
[`tensorflow.einsum`](https://www.tensorflow.org/api_docs/python/tf/einsum),
)
by optimizing the expression's contraction order and dispatching many
- operations to canonical BLAS, cuBLAS, or other specialized routines. Optimized
+ operations to canonical BLAS, cuBLAS, or other specialized routines.
+
+ Optimized
einsum is agnostic to the backend and can handle NumPy, Dask, PyTorch,
Tensorflow, CuPy, Sparse, Theano, JAX, and Autograd arrays as well as potentially
any library which conforms to a standard API. See the
@@ -70,23 +73,20 @@ The following capabilities are enabled by `opt_einsum`:

Please see the [documentation](http://optimized-einsum.readthedocs.io/en/latest/?badge=latest) for more features!
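The speedup described above comes purely from the contraction order. A minimal sketch using NumPy alone, whose `optimize=` flag exposes the same path optimization that `opt_einsum.contract` provides as a drop-in replacement (operand shapes here are hypothetical):

```python
import numpy as np

# Three hypothetical operands forming a chained matrix product.
a = np.random.rand(8, 16)
b = np.random.rand(16, 32)
c = np.random.rand(32, 8)

# Naive evaluation vs. evaluation along an optimized pairwise order.
naive = np.einsum('ij,jk,kl->il', a, b, c)
fast = np.einsum('ij,jk,kl->il', a, b, c, optimize=True)

# The contraction order changes the cost, never the result.
assert np.allclose(naive, fast)
```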


## Installation

- `opt_einsum` can either be installed via `pip install opt_einsum` or from conda `conda install opt_einsum -c conda-forge`. See the installation [documenation](http://optimized-einsum.readthedocs.io/en/latest/install.html) for further methods.
+ `opt_einsum` can either be installed via `pip install opt_einsum` or from conda `conda install opt_einsum -c conda-forge`. See the installation [documentation](http://optimized-einsum.readthedocs.io/en/latest/install.html) for further methods.

## Citation

If this code has benefited your research, please support us by citing:

Daniel G. A. Smith and Johnnie Gray, opt_einsum - A Python package for optimizing contraction order for einsum-like expressions. *Journal of Open Source Software*, **2018**, 3(26), 753

- DOI: https://doi.org/10.21105/joss.00753
+ DOI: <https://doi.org/10.21105/joss.00753>

## Contributing

All contributions, bug reports, bug fixes, documentation improvements, enhancements, and ideas are welcome.

A detailed overview on how to contribute can be found in the [contributing guide](https://github.com/dgasmith/opt_einsum/blob/master/.github/CONTRIBUTING.md).


2 changes: 1 addition & 1 deletion opt_einsum/backends/dispatch.py
@@ -69,7 +69,7 @@ def get_func(func, backend='numpy', default=None):
return fn


- # mark libs with einsum, else try to use tensordot/tranpose as much as possible
+ # mark libs with einsum, else try to use tensordot/transpose as much as possible
_has_einsum = {}


6 changes: 3 additions & 3 deletions opt_einsum/blas.py
@@ -91,7 +91,7 @@ def can_blas(inputs, result, idx_removed, shapes=None):
if inputs[0] == inputs[1]:
return 'DOT'

- # DDOT doesnt make sense if you have to tranpose - prefer einsum
+ # DDOT does not make sense if you have to transpose - prefer einsum
elif sets[0] == sets[1]:
return 'DOT/EINSUM'

@@ -107,7 +107,7 @@ def can_blas(inputs, result, idx_removed, shapes=None):
elif input_left[-rs:] == input_right[-rs:]:
return 'GEMM'

- # GEMM tranpose left
+ # GEMM transpose left
elif input_left[:rs] == input_right[:rs]:
return 'GEMM'

@@ -216,7 +216,7 @@ def tensor_blas(view_left, input_left, view_right, input_right, index_result, id
elif input_left[-rs:] == input_right[-rs:]:
new_view = np.dot(view_left.reshape(dim_left, dim_removed), view_right.reshape(dim_right, dim_removed).T)

- # Tranpose left
+ # Transpose left
elif input_left[:rs] == input_right[:rs]:
new_view = np.dot(view_left.reshape(dim_removed, dim_left).T, view_right.reshape(dim_removed, dim_right))

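The reshape-and-`np.dot` branches in the hunk above map contractions onto GEMM calls. A standalone sketch of the "both operands end with the removed index" case, with hypothetical shapes:

```python
import numpy as np

# 'ik,jk->ij': the summed index k is the trailing axis of both operands,
# so the contraction reduces to a single GEMM against the right
# operand's transpose, as in the np.dot branch above.
left = np.random.rand(4, 6)   # indices 'ik'
right = np.random.rand(5, 6)  # indices 'jk'

gemm = np.dot(left, right.T)
assert np.allclose(gemm, np.einsum('ik,jk->ij', left, right))
```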
4 changes: 2 additions & 2 deletions opt_einsum/contract.py
@@ -245,7 +245,7 @@ def contract_path(*operands, **kwargs):
num_ops = len(input_list)

# Compute naive cost
- # This isnt quite right, need to look into exactly how einsum does this
+ # This is not quite right, need to look into exactly how einsum does this
# indices_in_input = input_subscripts.replace(',', '')

inner_product = (sum(len(x) for x in input_sets) - len(indices)) > 0
@@ -375,7 +375,7 @@ def _tensordot(x, y, axes, backend='numpy'):
# Rewrite einsum to handle different cases
def contract(*operands, **kwargs):
"""
contract(subscripts, *operands, out=None, dtype=None, order='K', casting='safe', use_blas=True, optimize=True, memory_limit=None, backend='numpy')

Evaluates the Einstein summation convention on the operands. A drop in
replacement for NumPy's einsum function that optimizes the order of contraction
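What "optimizes the order of contraction" means can be sketched with NumPy's `einsum_path`, which reports the pairwise order that the `optimize=` machinery would use (operand shapes here are hypothetical):

```python
import numpy as np

a = np.random.rand(10, 20)
b = np.random.rand(20, 30)
c = np.random.rand(30, 5)

# einsum_path returns the chosen contraction path plus a human-readable
# report including the estimated speedup over naive evaluation.
path, report = np.einsum_path('ij,jk,kl->il', a, b, c, optimize='greedy')

# The first element is always the literal tag 'einsum_path'; the rest
# are operand-index pairs contracted left to right as the list shrinks.
assert path[0] == 'einsum_path'
assert all(len(step) == 2 for step in path[1:])
```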
3 changes: 2 additions & 1 deletion opt_einsum/parser.py
@@ -121,7 +121,8 @@ def alpha_canonicalize(equation):

def find_output_str(subscripts):
"""
- Find the output string for the inputs ``subscripts`` under canonical einstein summation rules. That is, repeated indices are summed over by default.
+ Find the output string for the inputs ``subscripts`` under canonical einstein summation rules.
+ That is, repeated indices are summed over by default.

Examples
--------
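The canonical-output rule stated in the docstring can be illustrated with NumPy's implicit-mode `einsum` (an illustration of the convention, not this function's own test suite):

```python
import numpy as np

a = np.random.rand(3, 4)
b = np.random.rand(4, 5)

# In implicit mode, the repeated index j is summed over and the output
# is the alphabetically sorted surviving indices: 'ik'.
implicit = np.einsum('ij,jk', a, b)
explicit = np.einsum('ij,jk->ik', a, b)
assert np.allclose(implicit, explicit)
```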
2 changes: 1 addition & 1 deletion opt_einsum/path_random.py
@@ -276,7 +276,7 @@ def thermal_chooser(queue, remaining, nbranch=8, temperature=1, rel_temperature=
chosen, = random_choices(range(n), weights=energies)
cost, k1, k2, k12 = choices.pop(chosen)

- # put the other choise back in the heap
+ # put the other choice back in the heap
for other in choices:
heapq.heappush(queue, other)

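The pop-one, push-the-rest pattern in the hunk above can be sketched with `heapq` alone; the candidate tuples and the plain `random.choice` below are hypothetical stand-ins for `thermal_chooser`'s cost-weighted sampling:

```python
import heapq
import random

# Candidate contractions keyed by cost: (cost, label).
queue = [(3.0, 'c'), (1.0, 'a'), (2.0, 'b'), (4.0, 'd')]
heapq.heapify(queue)

# Pop the nbranch cheapest candidates off the heap...
nbranch = 2
choices = [heapq.heappop(queue) for _ in range(nbranch)]

# ...pick one of them (thermal_chooser samples by Boltzmann weight;
# uniform random.choice stands in here)...
chosen = random.choice(choices)
choices.remove(chosen)

# ...and put the other choices back in the heap.
for other in choices:
    heapq.heappush(queue, other)

# The heap invariant still holds for the remaining candidates.
assert queue[0][0] == min(cost for cost, _ in queue)
```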
4 changes: 2 additions & 2 deletions opt_einsum/paths.py
@@ -702,7 +702,7 @@ def _tree_to_sequence(c):
return []

c = [c] # list of remaining contractions (lower part of columns shown above)
- t = [] # list of elementary tensors (upper part of colums)
+ t = [] # list of elementary tensors (upper part of columns)
s = [] # resulting contraction sequence

while len(c) > 0:
@@ -970,7 +970,7 @@ def __call__(self, inputs, output, size_dict, memory_limit=None):
# nothing left to do after single axis reductions!
return _tree_to_sequence(simple_tree_tuple(inputs_done))

- # a list of all neccessary contraction expressions for each of the
+ # a list of all necessary contraction expressions for each of the
# disconnected subgraphs and their size
subgraph_contractions = inputs_done
subgraph_contractions_size = [1] * len(inputs_done)
162 changes: 162 additions & 0 deletions whitelist.txt
@@ -0,0 +1,162 @@
0rc1
10pt
11pt
12pt
16x16
32x32
a4paper
abap
autodoc
autogenerated
autograd
Backends
backendseq
bbfreeze
bdist
blas
borland
bw
C0301
caes
cba
cfg
cmd
cmdclass
cmin
conda
consts
crossref
cupy
cx
datestamp
DDOT
Deduplicate
dep
dereference
detailmenu
dirname
documentclass
dtype
einsum
execfile
favicon
fi
FILEVERSION
fmt
fn
func
GEMM
GEMV
gh
gitattributes
githubs
hadamard
Hadamard
hardlink
hashable
hashtable
howto
htaccess
htbp
https
ico
idx
iij
ij
ik
inds
ja
ja
jax
jieba
jk
jkk
js
letterpaper
lgtm
lhs
libs
lru
manni
mem
method1
method2
modindex
moduleauthor
monokai
nczeczulin
ndim
nl
no
NUM
numpy
numpytensordot
opensearch
outputless
pagerefs
papersize
paraiso
parentdir
pep440
perldoc
pointsize
prepended
prodcuts
PRODUCTVERSION
py2exe
Pygments
pylint
quickstart
recurse
refnames
rhs
ro
rrt
rst
runtime
s1
s2
sdist
sectionauthor
Ses
sig
sourcedist
sourcelink
sparsify
sphinxstrong
sphinxtitleref
subdependencies
subgraph
subgraphs
subst
sv
TDOT
tempdir
tensordot
Tensordot
test5
texinfo
Texinfo
theano
titleref
tmp
toctree
toplevel
trac
transpose
uncomparable
undoc
unparseable
v0
VCS
Vectordot
versioneer
versionfile
x3
xcode
xhtml
zh
zh
zipball
eq
0b100101