
Pull #5
Merged: 77 commits, Aug 30, 2018
Conversation

sergei-mironov

Thanks for contributing to TVM! Please refer to guideline https://docs.tvm.ai/contribute/ for useful information and tips. After the pull request is submitted, please request code reviews from others in the community.

masahi and others added 30 commits August 8, 2018 10:07
* Separate fusion and compilation

* fix description of graph_fuse.h

* fix lint

* fix @masahi 's comments, move fusion out of target

* fix graph passing and make fused_entries singular in graph attr

* fix typo

* fix some comments

* run test again

* remove rvalue for graphfuse and graphfindfusiablegroups
* [TOPI] add injective scheduler for HLS backends

* Introduced PrintBinaryExpr
* Use int for int8x4 due to performance overhead of char4

* Add a comment about using int

* Remove invalid test
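The int8x4 commits above replace char4 with int for performance. As an illustrative sketch (not code from this PR), the idea of carrying four signed 8-bit lanes inside a single 32-bit int can be shown with Python's struct module; the helper names pack_int8x4/unpack_int8x4 are hypothetical:

```python
import struct

def pack_int8x4(a, b, c, d):
    """Pack four signed 8-bit lanes into one 32-bit int (little-endian)."""
    return struct.unpack('<i', struct.pack('<4b', a, b, c, d))[0]

def unpack_int8x4(x):
    """Recover the four signed 8-bit lanes from a 32-bit int."""
    return list(struct.unpack('<4b', struct.pack('<i', x)))

packed = pack_int8x4(1, -2, 3, -4)
print(unpack_int8x4(packed))  # [1, -2, 3, -4]
```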
* [NNVM][TENSORFLOW] Optimized tensorflow testcases

* Replace Constants with Placeholder

* Review comment fix
kazum and others added 29 commits August 22, 2018 15:36
* [NNVM][TEST] Numerical gradient testing

* [NNVM][TEST] Make some tests a little faster

* Fix the failing test_top_level3

* Target exclusion for the check_function

* Try to ignore singularities

* grad_input_vars now can't contain shapes

* Don't pass unnecessary grad_input_vars to check_function

* Multiple outputs; fixes; testing of check_function

* Use numerical_grads_params to pass parameters to numgrad checker

* Fail when no action is requested explicitly

* Pass additional params to functions

* Silence the linter issue

* Simplified numgrad checking

* Improved docs for check_function

* Fixed the error message when no dtype is provided

* Several fixes

* Tests with shape/dtype inference for inputs

* Don't check dense's grads on cuda

* Raise an error if output dtypes haven't been inferred

* Moved shape/dtype inference into a separate function; use float32 as fallback

* Remove redundant dtype=float32

* Fix multiple outputs

* Use check_function in the rest of the test_top_level1
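The commits above add numerical gradient testing. A minimal sketch of the underlying technique, central-difference gradient checking, is below; the function names numerical_grad and check_grads are illustrative, not the PR's actual helpers:

```python
def numerical_grad(f, x, eps=1e-5):
    """Central-difference estimate of df/dx at each coordinate of x."""
    grad = []
    for i in range(len(x)):
        xp = list(x); xp[i] += eps
        xm = list(x); xm[i] -= eps
        grad.append((f(xp) - f(xm)) / (2 * eps))
    return grad

def check_grads(f, analytic_grad, x, atol=1e-4):
    """Compare an analytic gradient against the numerical estimate."""
    num = numerical_grad(f, x)
    ana = analytic_grad(x)
    return all(abs(n - a) <= atol for n, a in zip(num, ana))

# Example: f(x) = sum(x_i^2) has gradient 2*x.
f = lambda x: sum(v * v for v in x)
g = lambda x: [2 * v for v in x]
print(check_grads(f, g, [0.5, -1.0, 2.0]))  # True
```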
The old queue size is too small: it will stall the executor due to a race condition.
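As a toy illustration (not the executor's actual code) of why an undersized bounded queue stalls a producer: once the queue is full, a blocking put would hang the producing thread, and a non-blocking put fails outright.

```python
import queue

# A bounded queue with too little capacity: once full, further
# non-blocking puts raise queue.Full (a blocking put would stall
# the producer instead).
q = queue.Queue(maxsize=2)
q.put_nowait("task-1")
q.put_nowait("task-2")
try:
    q.put_nowait("task-3")
    stalled = False
except queue.Full:
    stalled = True
print(stalled)  # True
```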
* [CODEGEN][AOCL] Add math intrinsic rules

* introduce aocl_emu target for AOCL emulation

* rename aocl_emu with aocl_sw_emu

* update docs
* [TENSORFLOW] fix the conversion of sum and add testcase for it (#1654)

* delete checking type of axis and divide reduce test
* add docstring skip in hybrid script

* fix lint
* [TOPI] add nn schedulers for HLS backends

* fix pylint

* fix topi transform test
@sergei-mironov sergei-mironov merged commit a982599 into sergei-mironov:master Aug 30, 2018