[DOCS] Point docs to the ASF site. #5178

Merged on Mar 30, 2020 (2 commits).
2 changes: 1 addition & 1 deletion .github/PULL_REQUEST_TEMPLATE.md
@@ -1 +1 @@
- Thanks for contributing to TVM! Please refer to guideline https://docs.tvm.ai/contribute/ for useful information and tips. After the pull request is submitted, please request code reviews from [Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers) by @ them in the pull request thread.
+ Thanks for contributing to TVM! Please refer to guideline https://tvm.apache.org/docs/contribute/ for useful information and tips. After the pull request is submitted, please request code reviews from [Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers) by @ them in the pull request thread.
2 changes: 1 addition & 1 deletion CONTRIBUTORS.md
@@ -20,7 +20,7 @@ TVM Contributors
TVM adopts the Apache way and is governed by merit. We believe that it is important to create an inclusive community where everyone can use,
contribute to, and influence the direction of the project. We actively invite contributors who have earned the merit to be part of the development community.

- See the [community structure document](http://docs.tvm.ai/contribute/community.html) for the explanation of community structure and contribution guidelines.
+ See the [community structure document](https://tvm.apache.org/docs/contribute/community.html) for the explanation of community structure and contribution guidelines.

## Mentors

78 changes: 39 additions & 39 deletions NEWS.md
@@ -31,7 +31,7 @@ to that issue so it can get added.
### Relay in Production
Relay is a functional, differentiable programming language designed to be an expressive intermediate representation for machine learning systems. Relay supports algebraic data types, closures, control flow, and recursion, allowing it to directly represent more complex models than computation-graph-based IRs (e.g., NNVM) can. In TVM v0.6, Relay is stable and ready for production.

- * Algebraic Data Types (ADT) support (#2442, #2575). ADT provides an expressive, efficient, and safe way to realize recursive computation (e.g., RNN). Refer to https://docs.tvm.ai/langref/relay_adt.html for more information.
+ * Algebraic Data Types (ADT) support (#2442, #2575). ADT provides an expressive, efficient, and safe way to realize recursive computation (e.g., RNN). Refer to https://tvm.apache.org/docs/langref/relay_adt.html for more information.
* Pass manager for Relay (#2546, #3226, #3234, #3191)
* Most major frontends are supported in Relay, including ONNX, Keras, TensorFlow, Caffe2, CoreML, NNVMv1, and MXNet (#2246).
* Explicitly manifest memory and tensor allocations in Relay. (#3560)
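For the curious reader, a rough sketch of what the ADT bullet above enables is shown below. This is not part of the diff, and the Prelude API shifted between TVM releases, so treat `relay.Module`, `p.cons`, `p.nil`, and `p.map` as v0.6-era assumptions.

```python
# Hedged sketch of Relay ADTs via the Prelude (TVM ~v0.6 API assumed).
from tvm import relay
from tvm.relay.prelude import Prelude

mod = relay.Module()   # renamed tvm.IRModule in later releases
p = Prelude(mod)       # registers the List/Option/Tree ADTs plus map, foldl, ...

# Build the list Cons(1, Cons(2, Nil)) from the registered constructors.
lst = p.cons(relay.const(1), p.cons(relay.const(2), p.nil()))

# Map (+1) over the list; the recursion lives in the ADT machinery.
x = relay.var("x", shape=(), dtype="int32")
mapped = p.map(relay.Function([x], x + relay.const(1, "int32")), lst)
```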
@@ -75,7 +75,7 @@
Low-bit inference is getting more and more popular as it benefits both performance and storage usage. TVM now supports two types of quantization: 1. automatic quantization, which takes a floating-point model, performs per-layer calibration, and generates a low-bit model; 2. importing pre-quantized models from TensorFlow and MXNet, where a new dialect, QNN, handles further lowering to normal operators. A sketch of the automatic flow follows the list below.

* Automatic Quantization
  - Low-bit automatic quantization supported (#2116). The workflow includes annotation, calibration and transformation.
- Refactor quantization codebase and fix model accuracy. (#3543)
- KL-divergence-based per-layer calibration. (#3538)
- Add option to select which convolution layers are quantized. (#3173)
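A compact sketch of that annotate/calibrate/transform flow is below. It is an illustration only: the `qconfig` options are assumed from the v0.6-era `relay.quantize` API, and `mod`, `params`, and `calib_samples` are placeholders.

```python
# Hedged sketch of automatic quantization: annotate -> calibrate -> realize.
from tvm import relay

# mod/params would come from any frontend importer, e.g.:
#   mod, params = relay.frontend.from_mxnet(block, {"data": (1, 3, 224, 224)})
# calib_samples: iterable of calibration batches (placeholder name).
with relay.quantize.qconfig(calibrate_mode="kl_divergence",  # per-layer KL (#3538)
                            skip_conv_layers=[0]):           # keep first conv in float (#3173)
    qmod = relay.quantize.quantize(mod, params=params, dataset=calib_samples)
```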
@@ -164,14 +164,14 @@
* Vision operator `roi_align` (#2618).
* `where` operator for MXNet (#2647).
* Deformable conv2d (#2908)
* Faster-RCNN Proposal OP (#2725)
* ROI Pool operator (#2811)
* Gluoncv SSD support on CPU (#2353)
* shape, reverse, and sign op (#2749, #2800, #2775)
* tile and repeat op (#2720)
* logical operators (#2743, #2453)
* stack op (#2729)
* NCHWc upsampling (#2806)
* clip and wrap mode support in take (#2858)
* AlterLayout support for `intel_graphics` conv2d, depthwise conv2d (#2729, #2806)
* Add foldr1 operator (#2928)
@@ -215,7 +215,7 @@

### Frontend and User Interface
* Frontend darknet (#2773)
* Support tf.gather (#2935)
* Support tf.where (#2936)
* Adding ADD operator to tflite frontend for compiling the MobileNetV2 (#2919)
* Support SpaceToBatchND/BatchToSpaceND in Tensorflow frontend (#2943)
@@ -281,7 +281,7 @@

### Runtime and Backend Support
* Make external library extend TVM's NDArray more easily (#2613).
* Improvements for NNPACK integration, including CI tests and Winograd (#2846, #2868, #2856, #2721)
* Improvements for OpenCL runtime (#2741, #2737)
* GraphRuntime: Enable sharing parameters of a model among multiple threads (#3384)
* Android runtime argsort support (#3472)
@@ -343,7 +343,7 @@
* Higher-order reverse-mode automatic differentiation that works with control flow (#2496)
* Integer arithmetic analyzers, including modular set analysis, const integer bound analysis, and a rewrite simplifier (#2904, #2851, #2768, #2722, #2668, #2860); see the sketch after this list
* Improve operator fusion for TupleGetItem in relay (#2914, #2929)
* Compute FLOP of autotvm template for int8 models (#2776)
* Common subexpression elimination pass in Relay (#2639)
* Improve quantization in Relay (#2723)
* Refactor `build_func` in measure module of autotvm to better support cross compiler (#2927)
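A small sketch of those analyzers in use (the Python bindings are assumed; `te.var` is `tvm.var` on v0.6):

```python
# Hedged sketch of the integer arithmetic analyzers.
import tvm
from tvm import te

ana = tvm.arith.Analyzer()
x = te.var("x")
ana.update(x, tvm.arith.ConstIntBound(0, 10))  # declare 0 <= x <= 10
print(ana.const_int_bound(x + 1))              # const-int-bound: [1, 11]
m = ana.modular_set(x * 4)                     # modular set: x*4 == 0 (mod 4)
print(m.coeff, m.base)                         # -> 4 0
print(ana.rewrite_simplify(x + x - x))         # rewrite simplifier -> x
```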
@@ -437,12 +437,12 @@
* Relay now supports saving and loading parameter dictionaries (#2620); see the sketch after this list
* Add `max_num_threads` to Hybrid Script, which allows users to get the max number of threads for GPU targets (#2672)
* Improvements for tensorflow frontend (#2830, #2757, #2586), including decompiling tf control flow (#2830)
* Improvements for mxnet frontend (#2844, #2777, #2772, #2706, #2704, #2709, #2739)
* Improvements for keras frontend (#2842, #2854)
* Improvements for DarkNet frontend (#2673)
* Improvements for ONNX frontend (#2843, #2840)
* Better profile result dump in Chrome Tracing format (#2922, #2863)
* Unified error handling in NNVM and Relay frontends (#2828)
* Improve NNVM to Relay conversion (#2734)
* Remove `input_0d_mismatch` special handling for TF Frontend (#3087)
* Bumped ONNX version from 1.1.0 to 1.4.1 (#3286)
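The parameter-dictionary round trip from the first bullet in this list can be sketched as follows (a minimal illustration; the array contents are made up):

```python
# Hedged sketch: round-trip a Relay parameter dictionary.
import numpy as np
import tvm
from tvm import relay

params = {"w": tvm.nd.array(np.zeros((4, 4), dtype="float32"))}
blob = relay.save_param_dict(params)   # serialize to a byte blob
loaded = relay.load_param_dict(blob)   # back to {name: tvm.nd.NDArray}
np.testing.assert_allclose(loaded["w"].asnumpy(), params["w"].asnumpy())
```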
@@ -509,7 +509,7 @@
* Documentation on operators (#2761)
* Add gradient operator tutorial docs (#2751)
* Add compiler pass tutorial docs (#2746)
* Add Android Tutorial (#2977)
* Developer documentation for InferBound pass (#3126)
* Add missing targets to `target_name` documentation (#3128)
* Various documentation improvements (#3133)
@@ -540,10 +540,10 @@

### Build and Test
* Increase the robustness of CI tests (#2841, #2798, #2793, #2788, #2781, #2727, #2710, #2711, #2923)
* Improve conda build (#2742)
* Add caffe2 nnvm frontend to CI (#3018)
* Use bridge network and expose port on macOS when launching docker image (#3086)
* Run DarkNet tests (#2673)
* Add file type check (#3116)
* Always run cpptest during build to ensure library correctness (#3147)
* Handle more file types in ASF header (#3235)
@@ -641,41 +641,41 @@
* [Tensor Expression] Fix missing reduction init predicates. (#2495)
* [Relay] Fix missing argument for NCHWc in Relay. (#2627)
* [TOPI] Fix `Nms_ir` data race. (#2600)
* Fix `compute_inline` with multiple outputs (#2934)
* [TEXPR][PASS] Fix thread all reduce to avoid write-after-read hazard (#2937)
* [FRONTEND][TENSORFLOW] bug fix for tensorflow official slim models. (#2864)
* [FRONTEND][ONNX] Some bug fixes and Shape operator fixed for relay. (#2850)
* Turn on `USE_SORT` by default (#2916)
* [DOCKER] Upgrade ci-cpu to latest v0.50 (#2901)
* [TESTS] Import script robustness (set -u) (#2896)
* [Relay] Fix name of bias in testing.mlp (#2892)
* [TESTS] Improve script robustness (#2893)
* Add dense schedules to `__init__` for cpu (#2855)
* [Apps] [howto_deploy] fix cxx-flags order and build directory (#2888)
* [Relay] Add TVM_DLL for ANF/GNF conversion (#2883)
* [Relay] Fix Relay ARM CPU depthwise spatial pack schedule alter op layout issue. (#2861)
* Fix setting up hints for getaddrinfo (#2872)
* Add missing sgx includes (#2878)
* Fix error reporting for missing axis (#2835)
* Fix an OrderedDict initialization bug. (#2862)
* Fix Xcode 10 metal compile error (#2836)
* tvmrpc: Fix includes (#2825)
* Fix `init_proj.py`: Team ID expected (#2824)
* [DOCKER] Fix git clone failure. (#2816)
* Upgrade java style-check due to CVE-2019-9658 (#2817)
* [Relay][Quantization] Fix duplicated simulated quantization (#2803)
* [Bugfix] Repeat and tile bug fixed, relay tests added (#2804)
* Fix caffe2 relay frontend (#2733)
* Fix a bug in nnvm to relay converter. (#2756)
* Ensure loop count is a constant before trying to unroll. (#2797)
* xcode.py: Decode bytes before output (#2833)
* [WIN] Fix a bug in `find_llvm` when specifying llvm-config (#2758)
* [DLPACK] fix flaky ctypes support (#2759)
* [Bugfix][Relay][Frontend] Fix bug in mxnet converter for `slice_like` (#2744)
* [DOCS] Fix tutorial (#2724)
* [TOPI][Relay] Fix default `out_dtype` for `conv2d_NCHWc` and Relay (#2702)
* [Relay] fix checkwellform (#2705)
* Fix prelu so it works on 2-D input, and add one test (#2875)
* [CODEGEN][OPENCL] Fix compile error about ternary expression. (#2821)
* Fix Placeholder issue (#2834)
* Fix makedirs() condition in contrib (#2942)
4 changes: 2 additions & 2 deletions README.md
@@ -17,7 +17,7 @@

<img src=https://raw.githubusercontent.com/apache/incubator-tvm-site/master/images/logo/tvm-logo-small.png width=128/> Open Deep Learning Compiler Stack
==============================================
- [Documentation](https://docs.tvm.ai) |
+ [Documentation](https://tvm.apache.org/docs) |
[Contributors](CONTRIBUTORS.md) |
[Community](https://tvm.apache.org/community) |
[Release Notes](NEWS.md)
@@ -36,7 +36,7 @@ License
Contribute to TVM
-----------------
TVM adopts the Apache committer model; we aim to create an open-source project that is maintained and owned by the community.
- Check out the [Contributor Guide](https://docs.tvm.ai/contribute/)
+ Check out the [Contributor Guide](https://tvm.apache.org/docs/contribute/)

Acknowledgement
---------------
2 changes: 1 addition & 1 deletion apps/android_deploy/README.md
@@ -122,7 +122,7 @@

### Place compiled model on Android application assets folder

- Follow the instructions [here](http://docs.tvm.ai/deploy/android.html) to get a compiled model for the Android target.
+ Follow the instructions [here](https://tvm.apache.org/docs/deploy/android.html) to get a compiled model for the Android target.

Copy the compiled model files (deploy_lib.so, deploy_graph.json and deploy_param.params) to apps/android_deploy/app/src/main/assets/ and modify the TVM flavor settings in [MainActivity.java](https://github.com/apache/incubator-tvm/blob/master/apps/android_deploy/app/src/main/java/org/apache/tvm/android/demo/MainActivity.java#L81)

14 changes: 7 additions & 7 deletions apps/benchmark/README.md
@@ -35,13 +35,13 @@ In general, the performance should also be good.

It is recommended that you run the tuning yourself if you have a customized network or device; a task-extraction sketch follows the links below.
Please follow the tutorial for
- [NVIDIA GPU](https://docs.tvm.ai/tutorials/autotvm/tune_conv2d_cuda.html),
- [ARM CPU](https://docs.tvm.ai/tutorials/autotvm/tune_relay_arm.html),
- [Mobile GPU](https://docs.tvm.ai/tutorials/autotvm/tune_relay_mobile_gpu.html).
+ [NVIDIA GPU](https://tvm.apache.org/docs/tutorials/autotvm/tune_conv2d_cuda.html),
+ [ARM CPU](https://tvm.apache.org/docs/tutorials/autotvm/tune_relay_arm.html),
+ [Mobile GPU](https://tvm.apache.org/docs/tutorials/autotvm/tune_relay_mobile_gpu.html).
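Whichever tutorial applies, tuning typically begins by extracting tasks from the model. A hedged sketch (v0.6-era `autotvm` API; `mod`, `params`, and `target` are assumed to come from a frontend import):

```python
# Hedged sketch: extract autotvm tuning tasks from a Relay model.
from tvm import autotvm, relay

tasks = autotvm.task.extract_from_program(
    mod["main"], target=target, params=params,
    ops=(relay.op.nn.conv2d,))  # restrict tuning to conv2d workloads
for t in tasks:
    print(t)  # one tunable template per workload
```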

### NVIDIA GPU

- Build TVM with LLVM and CUDA enabled. [Help](https://docs.tvm.ai/install/from_source.html)
+ Build TVM with LLVM and CUDA enabled. [Help](https://tvm.apache.org/docs/install/from_source.html)

```bash
python3 gpu_imagenet_bench.py --model 1080ti
```

@@ -58,7 +58,7 @@ You need to use it for reproducing benchmark results.

**Note**: We use llvm-4.0 in our tuning environment. A mismatch of the LLVM version between tuning and deployment can affect performance, so you must use the same version for reproduction.

- 0. Build TVM with LLVM enabled. [Help](https://docs.tvm.ai/install/from_source.html)
+ 0. Build TVM with LLVM enabled. [Help](https://tvm.apache.org/docs/install/from_source.html)

1. Start an RPC Tracker on the host machine
```bash
python3 -m tvm.exec.rpc_tracker
```

@@ -67,7 +67,7 @@

2. Register devices to the tracker
* For Linux device
- * Build tvm runtime on your device [Help](https://docs.tvm.ai/tutorials/frontend/deploy_model_on_rasp.html#build-tvm-runtime-on-device)
+ * Build tvm runtime on your device [Help](https://tvm.apache.org/docs/tutorials/frontend/deploy_model_on_rasp.html#build-tvm-runtime-on-device)
* Register your device to tracker by
```bash
python3 -m tvm.exec.rpc_server --tracker=[HOST_IP]:9190 --key=[DEVICE_KEY]
```
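To double-check the registration, one option (an assumption, not shown in this README excerpt) is to query the tracker from Python:

```python
# Hedged sketch: confirm devices are registered with the RPC tracker.
from tvm import rpc

tracker = rpc.connect_tracker("0.0.0.0", 9190)  # host/port from the commands above
print(tracker.text_summary())                   # device keys and queue status
```

The same summary is also available from the shell via `python3 -m tvm.exec.query_rpc_tracker --port 9190`.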

### AMD GPU

- Build TVM with LLVM and ROCm enabled. [Help](https://docs.tvm.ai/install/from_source.html)
+ Build TVM with LLVM and ROCm enabled. [Help](https://tvm.apache.org/docs/install/from_source.html)
```bash
python3 gpu_imagenet_bench.py --model gfx900 --target rocm
```
2 changes: 1 addition & 1 deletion apps/howto_deploy/README.md
@@ -26,4 +26,4 @@ Type the following command to run the sample code in the current folder (you need to build it first).
```bash
./run_example.sh
```

- Check out [How to Deploy TVM Modules](http://docs.tvm.ai/deploy/cpp_deploy.html) for more information.
+ Check out [How to Deploy TVM Modules](https://tvm.apache.org/docs/deploy/cpp_deploy.html) for more information.
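For context, the library that `run_example.sh` exercises is produced by a small Python build script. A hedged sketch is below; the `addone` kernel and the output path are assumptions based on the example's naming.

```python
# Hedged sketch: build and export the kind of library the C++ example loads.
import tvm
from tvm import te  # on v0.6 these constructors live directly under tvm

n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.compute(A.shape, lambda i: A[i] + 1.0, name="B")
s = te.create_schedule(B.op)
lib = tvm.build(s, [A, B], "llvm", name="addone")
lib.export_library("lib/test_addone_dll.so")  # path assumed by the sample
```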
2 changes: 1 addition & 1 deletion docs/README.txt
@@ -2,7 +2,7 @@
TVM Documentations
==================
This folder contains the source of TVM documents

- - A hosted version of doc is at http://docs.tvm.ai
+ - A hosted version of doc is at https://tvm.apache.org/docs
- pip install sphinx>=1.5.5 sphinx-gallery sphinx_rtd_theme matplotlib Image recommonmark "Pillow<7"
- Build tvm first in the root folder.
- To build locally, you need to enable USE_CUDA, USE_OPENCL, LLVM_CONFIG in config.mk and then type "make html" in this folder.
8 changes: 4 additions & 4 deletions docs/vta/install.md
@@ -26,8 +26,8 @@ We present three installation guides, each extending on the previous one:

## VTA Simulator Installation

- You need [TVM installed](https://docs.tvm.ai/install/index.html) on your machine.
- For a quick and easy start, use the pre-built [TVM Docker image](https://docs.tvm.ai/install/docker.html).
+ You need [TVM installed](https://tvm.apache.org/docs/install/index.html) on your machine.
+ For a quick and easy start, check out the [Docker Guide](https://tvm.apache.org/docs/install/docker.html).

You'll need to set the following paths to use VTA:
```bash
# (exports elided in this diff view)
```

@@ -60,7 +60,7 @@ python <tvm root>/vta/tests/python/integration/test_benchmark_topi_conv2d.py

> Note: You'll notice that for every convolution layer, the throughput gets reported in GOPS. These numbers are the computational throughput that the simulator achieves by evaluating the convolutions in software.

- You are invited to try out our [VTA programming tutorials](https://docs.tvm.ai/vta/tutorials/index.html).
+ You are invited to try out our [VTA programming tutorials](https://tvm.apache.org/docs/vta/tutorials/index.html).


### Advanced Configuration (optional)
@@ -193,7 +193,7 @@ python <tvm root>/vta/tests/python/integration/test_benchmark_topi_conv2d.py

The performance metrics measured on the Pynq board will be reported for each convolutional layer.

- You can also try out our [VTA programming tutorials](https://docs.tvm.ai/vta/tutorials/index.html).
+ You can also try out our [VTA programming tutorials](https://tvm.apache.org/docs/vta/tutorials/index.html).

## VTA Custom Test Setup for Intel FPGA

4 changes: 2 additions & 2 deletions jvm/README.md
@@ -30,7 +30,7 @@

- JDK 1.6+. Oracle JDK and OpenJDK are well tested.
- Maven 3 for build.
- - LLVM (TVM4J needs LLVM support. Please refer to [build-the-shared-library](https://docs.tvm.ai/install/from_source.html#build-the-shared-library) for how to enable LLVM support.)
+ - LLVM (TVM4J needs LLVM support. Please refer to [build-the-shared-library](https://tvm.apache.org/docs/install/from_source.html#build-the-shared-library) for how to enable LLVM support.)

### Modules

@@ -45,7 +45,7 @@ TVM4J contains three modules:

### Build

- First please refer to the [Installation Guide](http://docs.tvm.ai/install/) and build the runtime shared library from the C++ code (libtvm\_runtime.so for Linux and libtvm\_runtime.dylib for OSX).
+ First please refer to the [Installation Guide](https://tvm.apache.org/docs/install/) and build the runtime shared library from the C++ code (libtvm\_runtime.so for Linux and libtvm\_runtime.dylib for OSX).

Then you can compile tvm4j by

2 changes: 1 addition & 1 deletion rust/frontend/README.md
@@ -109,7 +109,7 @@ and the model correctly predicts the input image as **tiger cat**.

## Installations

- Please follow the TVM [installation guide](https://docs.tvm.ai/install/index.html), `export TVM_HOME=/path/to/tvm` and add `libtvm_runtime` to your `LD_LIBRARY_PATH`.
+ Please follow the TVM [installation guide](https://tvm.apache.org/docs/install/index.html), `export TVM_HOME=/path/to/tvm` and add `libtvm_runtime` to your `LD_LIBRARY_PATH`.

*Note:* To run the end-to-end examples and tests, `tvm` and `topi` need to be on your `PYTHONPATH`; this happens automatically if TVM was installed via an Anaconda environment.

2 changes: 1 addition & 1 deletion rust/frontend/examples/resnet/README.md
@@ -22,7 +22,7 @@ This end-to-end example shows how to:
* use the provided Rust frontend API to test for an input image

To run the example with pretrained resnet weights, first `tvm` and `mxnet` must be installed for the python build. To install mxnet for cpu, run `pip install mxnet`,
- and to install `tvm` with `llvm` follow the [TVM installation guide](https://docs.tvm.ai/install/index.html).
+ and to install `tvm` with `llvm` follow the [TVM installation guide](https://tvm.apache.org/docs/install/index.html).

* **Build the example**: `cargo build`
