Fix Travis on test #59
Merged

Conversation

No description provided.

tqchen added a commit to tqchen/tvm that referenced this pull request on May 26, 2018:
    apache#59) * [SYMBOL] Change list_input->list_input_names, add list_input_variables * fix

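The commit message above records an NNVM symbol API change: `list_input` was renamed to `list_input_names`, and a separate `list_input_variables` was added. A minimal sketch of the distinction in the Python frontend (the symbol construction is illustrative; only the two method names come from the commit message):

```python
# Sketch of the renamed NNVM symbol API. Only list_input_names /
# list_input_variables come from the commit; the rest is illustrative.
import nnvm.symbol as sym

x = sym.Variable("x")
y = sym.Variable("y")
z = sym.elemwise_add(x, y)

# Before this change, a single list_input call served both purposes;
# afterwards the two queries are separate:
print(z.list_input_names())      # input names as strings, e.g. ['x', 'y']
print(z.list_input_variables())  # the Variable symbols themselves
```
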
tqchen added a commit to tqchen/tvm that referenced this pull request on May 26, 2018

tqchen added a commit that referenced this pull request on May 29, 2018:
    #59) * [SYMBOL] Change list_input->list_input_names, add list_input_variables * fix

tqchen added a commit to tqchen/tvm that referenced this pull request on Jul 6, 2018:
    apache#59) * [SYMBOL] Change list_input->list_input_names, add list_input_variables * fix

tqchen added a commit to tqchen/tvm that referenced this pull request on Jul 6, 2018

sergei-mironov pushed a commit to sergei-mironov/tvm that referenced this pull request on Aug 8, 2018:
    apache#59) * [SYMBOL] Change list_input->list_input_names, add list_input_variables * fix

sergei-mironov pushed a commit to sergei-mironov/tvm that referenced this pull request on Aug 8, 2018

jroesch pushed a commit to jroesch/tvm that referenced this pull request on Aug 29, 2018

zxy844288792 pushed a commit to zxy844288792/tvm that referenced this pull request on Nov 26, 2019:
    * [TOPI][OP] Support Faster-RCNN Proposal OP on CPU (apache#4297): support Proposal operator on CPU; fix PyLint space and singleton-comparison issues
    * [QNN][Legalize] Specialize for platforms without any fast Int8 arithmetic units (apache#4307)
    * Fix error when memory_id is VTA_MEM_ID_OUT (apache#4330)
    * [CI][DOCKER] Add ONNX runtime dep (apache#4314): improve ci script
    * [QNN] Quantize - fixing the sequence of lowering (apache#4316)
    * [QNN] Use Int16 upcast in fallback Conv2D; fix test names (apache#4329)
    * [doc][fix] Fix sphinx parsing for pass infra tutorial (apache#4337)
    * Change ci image version (apache#4313)
    * [Codegen] Remove fp16 function override for cuda (apache#4331): add volatile override back
    * [CI] Set workspace to be per executor (apache#4336)
    * [Build][Windows] Fix Windows build by including cctype (apache#4319), plus CI retriggers
    * Enable hipModuleGetGlobal() (apache#4321)
    * [Relay][Pass] Add pass to remove unused functions in relay module (apache#4334): add tests; fix lint; fix visit order; add pass argument
    * Add support for quantized mul operator in tflite frontend (apache#4283); a test for qnn_mul has to be added when the qnn elemwise tests (apache#4282) get merged
    * Add topi.nn.fifo_buffer to TVM doc (apache#4343)
    * Solve custom model of prelu (apache#4326)
    * Deprecate NNVM warning msg (apache#4333)
    * [Contrib] Add MKL DNN option (apache#4323)
    * [Relay][Frontend][TF] Fix transpose when axes is not a param (apache#4327): use _infer_value_simulated when axes is not a const
    * [RUNTIME] Add device query for AMD GcnArch (apache#4341): kGcnArch query for cuda is a no-op
    * [Test][Relay][Pass] Add test case for lambda lift (apache#4317)
    * [Relay][Frontend][ONNX] Operator support: DepthToSpace, SpaceToDepth (apache#4271)
    * imp module is deprecated (apache#4275)
    * [VTA] Bug fix for padded load with large inputs (apache#4293): update TensorLoad.scala and test_vta_insn.py
    * Fix inconsistent tag name (apache#4134)
    * [CodeGen] Add build config option disable_assert to control whether to generate assert (apache#4340)
    * Bump up CUDA log version in tophub.py (apache#4347)
    * Add check to ensure input file was successfully opened in NNVM deploy code demo (apache#4315)
    * [COMMUNITY] Add DISCLAIMER, KEYS for ASF release (apache#4345): add file name spec
    * [Relay][VM][Interpreter] Enable first-class constructors in VM and interpreter via eta expansion (apache#4218): fix constructor pretty printing; make Module::HasDef name consistent with API; add VM constructor compilation via eta expansion
    * Update dmlc_tvm_commit_id.txt

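One item in that rollup, apache#4334, adds a Relay pass that removes unused functions from a module. A hedged usage sketch, assuming the Python binding `relay.transform.RemoveUnusedFunctions` that upstream TVM exposes, with an illustrative two-function module:

```python
import tvm
from tvm import relay

# A module with a reachable main and an unreachable helper function.
x = relay.var("x", shape=(1, 4))
y = relay.var("y", shape=(1, 4))

mod = tvm.IRModule()
mod["main"] = relay.Function([x], relay.nn.relu(x))
mod["dead"] = relay.Function([y], relay.tanh(y))

# The pass keeps only functions reachable from the entry points
# (by default the "main" function) and drops the rest.
mod = relay.transform.RemoveUnusedFunctions()(mod)
print(mod.get_global_vars())  # expected to list only @main
```
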
vinx13 pushed a commit to vinx13/tvm that referenced this pull request on Mar 9, 2022:
    * [TIR][Schedule] Fix reorder/buffer_flatten & finish CPU demo (apache#59)
    * [CPU DEMO] Update cpu gemm demo and fix bug (apache#58): introduce parallel and fix bugs for cpu demo; fix lint
    * [TIR][Schedule] Introduce reduction block and CPU demo (apache#53): split_reduction; fuse_reduction; cpu demo; reduction pattern detection and matching; lint and rebase fixes
    * [TIR][Schedule] Introduce cache_read and cache_write (apache#54): add comments; address review comments
    * [TIR] Schedule: introduce vectorize, unroll, loop validation (apache#47): cpu_demo; simplify and fix
    * [TIR][Schedule] Fix sref and scopes problem during replace and compute_at (apache#50)
    * [TIR][Refactor] Move function to ScheduleNode
    * [TIR] Schedule: introduce primitive compute_at (apache#36): add check to compute_at; address comments
    * [TIR] Schedule: introduce primitive reorder (apache#37): loop type detection; fix container.h; simplify and rebase
    * [TIR] Schedule: introduce BlockRealize and Block SRef reuse (apache#39): add loop var reuse; follow-up fixes and test updates
    * [TIR] Compare for module (apache#38)
    * [Hybrid] Module init, print, and print with meta; fix script decoration API; use IRModule; adjust API; fix symbol table; introduce meta_mutator and resolve import issue; fix lint and comments
    * [TIR] Introduce pass BufferFlatten (apache#32): split GatherRegion and BufferFlatten into two Visitor/Mutators; only consider stmt scope; fold BlockFlattener into BufferFlattener; add asserts; use and enhance the Equal pass in testcases
    * [Hybrid] Refactor using Doc, introduce annotation, enhance parser (apache#28): fix namespace issue; compare using Equal
    * [TE] Fix replace again and add primitives fuse and split (apache#27): add IRSubstitueInScope; enhance Equal api and fix split by nparts
    * [Hybrid] Introduce printer (apache#25): substitute Block with SeqStmt, change block() syntax; type declare intrin; meta; macro
    * [TE] Fix replace (apache#23): add more tests
    * [Hybrid] Python syntax parser (apache#20): improve comments and error reporting; refactor __internal_assert and intrin; separate ScopeEmitter from parser; refactor type check; allow registering external functions with argument type checking; fix a bug in te/ir.h; refactor intrin, scope_handler, special_stmt; clean code and improve testcases
    * [IR] Introduce dependency graph and write map
    * [TE] Refactor and clean codebase; refactor IR
    * [TE] Introduce schedule, dependency graph and support fuse and split (apache#17): enable create schedule; support get axes; support split; address comments
    * [IR] Introduce SeqStmt; add TeLower pass and enable to run Te IR (apache#15): add function data structure; unify terminology
    * TensorIR data structure init (apache#14): finish printer and enhanced ir_builder
    Co-authored-by: Bohan Hou <32121147+spectrometerHBH@users.noreply.github.com>

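The rollup above is where TensorIR schedule primitives such as split, reorder, vectorize, unroll, and parallel were introduced. A minimal sketch of a few of those primitives through the present-day `tvm.tir.Schedule` interface (the script and schedule below are illustrative, not taken from this PR):

```python
import tvm
from tvm.script import tir as T

# A trivial element-wise kernel written in TVMScript.
@T.prim_func
def add_one(A: T.Buffer((128,), "float32"), B: T.Buffer((128,), "float32")):
    for i in range(128):
        with T.block("B"):
            vi = T.axis.spatial(128, i)
            B[vi] = A[vi] + 1.0

sch = tvm.tir.Schedule(add_one)
block = sch.get_block("B")
(i,) = sch.get_loops(block)
i_outer, i_inner = sch.split(i, factors=[None, 8])  # 128 -> 16 x 8
sch.parallel(i_outer)    # run the outer loop across threads
sch.vectorize(i_inner)   # map the inner 8 iterations to vector lanes
print(sch.mod.script())  # inspect the transformed TensorIR
```
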
cyx-6 pushed a commit to cyx-6/tvm that referenced this pull request on Jul 3, 2022

junrushao added a commit to cyx-6/tvm that referenced this pull request on Jul 4, 2022

cyx-6 pushed a commit to cyx-6/tvm that referenced this pull request on Jul 13, 2022

Hzfengsy pushed a commit to Hzfengsy/tvm that referenced this pull request on Jul 30, 2022

yzh119 added a commit to yzh119/tvm that referenced this pull request on Nov 22, 2022

vinx13 pushed a commit to vinx13/tvm that referenced this pull request on Mar 27, 2023:
    Grab some of the fixes that have gone upstream in the last few days. This also included a rebase onto tvm/main, but the number of changes is relatively small.

krishnaraj36 pushed a commit to krishnaraj36/tvm_mainline that referenced this pull request on Aug 9, 2024:
    Test case for clip layer added

srkreddy1238 added a commit to CodeLinaro/tvm that referenced this pull request on Oct 9, 2024:
    Test case for clip layer added

LeiWang1999 added a commit to LeiWang1999/tvm that referenced this pull request on Nov 8, 2024:
    …pache#59) * Raise error if CUDA_DEVICE_ORDER=PCI_BUS_ID env is not applied in multi-gpu sys * quotes * protect bad gpu_id * fix logic * clean comment * Update target_detector.py
    Co-authored-by: Lei Wang <34334180+LeiWang1999@users.noreply.github.com>

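The first bullet in that message is an environment-variable guard for multi-GPU machines. A hedged sketch of the kind of check it describes (the function name and error text are assumptions, not taken from the actual target_detector.py):

```python
import os

def check_cuda_device_order(num_gpus: int) -> None:
    """Illustrative guard: on multi-GPU systems, CUDA device indices only
    match PCI bus order when CUDA_DEVICE_ORDER=PCI_BUS_ID is exported."""
    if num_gpus > 1 and os.environ.get("CUDA_DEVICE_ORDER") != "PCI_BUS_ID":
        raise RuntimeError(
            "Multi-GPU system detected: set CUDA_DEVICE_ORDER=PCI_BUS_ID "
            "so that device IDs are stable across tools."
        )
```
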
srkreddy1238 added a commit to CodeLinaro/tvm that referenced this pull request on Nov 11, 2024:
    Test case for clip layer added

srkreddy1238 added a commit to srkreddy1238/tvm that referenced this pull request on Nov 11, 2024:
    Test case for clip layer added

tqchen pushed a commit that referenced this pull request on Nov 12, 2024:
    clip test case updated (#59) Test case for clip layer added

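The November 2024 references all carry the same "Test case for clip layer added" message, merged upstream in the commit above. A minimal sketch of what such a test typically checks, validating a Relay clip against the NumPy reference (shape, bounds, and tolerance are assumptions; the upstream test likely targets a specific backend):

```python
import numpy as np
import tvm
from tvm import relay

def test_clip():
    # Build a one-op Relay function: clip(x) into [-1, 1].
    x = relay.var("x", shape=(1, 3, 8, 8), dtype="float32")
    fn = relay.Function([x], relay.clip(x, a_min=-1.0, a_max=1.0))
    mod = tvm.IRModule.from_expr(fn)

    data = np.random.uniform(-10.0, 10.0, size=(1, 3, 8, 8)).astype("float32")
    result = relay.create_executor("graph", mod=mod, target="llvm").evaluate()(data)

    # Compare against the NumPy reference implementation.
    np.testing.assert_allclose(result.numpy(), np.clip(data, -1.0, 1.0), rtol=1e-5)

test_clip()
```
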
srkreddy1238 added a commit to CodeLinaro/tvm that referenced this pull request on Nov 14, 2024:
    Test case for clip layer added