diff --git a/.github/md-link-config.json b/.github/md-link-config.json
index e68593a030..34d874e98e 100644
--- a/.github/md-link-config.json
+++ b/.github/md-link-config.json
@@ -3,6 +3,12 @@
     {
       "pattern": "^https://developer.nvidia.com//"
     },
+    {
+      "pattern": "^https://developer.android.com/"
+    },
+    {
+      "pattern": "^https://developer.qualcomm.com/"
+    },
     {
       "pattern": "^http://localhost"
     }
diff --git a/docs/en/01-how-to-build/jetsons.md b/docs/en/01-how-to-build/jetsons.md
index 9786d04c97..185e7380ab 100644
--- a/docs/en/01-how-to-build/jetsons.md
+++ b/docs/en/01-how-to-build/jetsons.md
@@ -289,7 +289,7 @@
 pip install -r requirements/build.txt
 pip install -v -e .  # or "python setup.py develop"
 ```
 
-1. Follow [this document](../02-how-to-run/convert_model.md) on how to convert model files.
+2. Follow [this document](docs/en/02-how-to-run/convert_model.md) on how to convert model files.
 
 For this example, we have used [retinanet_r18_fpn_1x_coco.py](https://github.com/open-mmlab/mmdetection/blob/3.x/configs/retinanet/retinanet_r18_fpn_1x_coco.py) as the model config, and [this file](https://download.openmmlab.com/mmdetection/v2.0/retinanet/retinanet_r18_fpn_1x_coco/retinanet_r18_fpn_1x_coco_20220407_171055-614fd399.pth) as the corresponding checkpoint file. Also for deploy config, we have used [detection_tensorrt_dynamic-320x320-1344x1344.py](https://github.com/open-mmlab/mmdeploy/blob/master/configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py)
diff --git a/docs/en/04-supported-codebases/mmedit.md b/docs/en/04-supported-codebases/mmedit.md
index 7abd08a2e7..c72a1d201e 100644
--- a/docs/en/04-supported-codebases/mmedit.md
+++ b/docs/en/04-supported-codebases/mmedit.md
@@ -188,8 +188,8 @@ Besides python API, mmdeploy SDK also provides other FFI (Foreign Function Inter
 | [SRCNN](https://github.com/open-mmlab/mmediting/tree/1.x/configs/srcnn) | super-resolution | Y | Y | Y | Y | Y |
 | [ESRGAN](https://github.com/open-mmlab/mmediting/tree/1.x/configs/esrgan) | super-resolution | Y | Y | Y | Y | Y |
 | [ESRGAN-PSNR](https://github.com/open-mmlab/mmediting/tree/1.x/configs/esrgan) | super-resolution | Y | Y | Y | Y | Y |
-| [SRGAN](https://github.com/open-mmlab/mmediting/tree/1.x/configs/srresnet_srgan) | super-resolution | Y | Y | Y | Y | Y |
-| [SRResNet](https://github.com/open-mmlab/mmediting/tree/1.x/configs/srresnet_srgan) | super-resolution | Y | Y | Y | Y | Y |
+| [SRGAN](https://github.com/open-mmlab/mmediting/tree/1.x/configs/srgan_resnet) | super-resolution | Y | Y | Y | Y | Y |
+| [SRResNet](https://github.com/open-mmlab/mmediting/tree/1.x/configs/srgan_resnet) | super-resolution | Y | Y | Y | Y | Y |
 | [Real-ESRGAN](https://github.com/open-mmlab/mmediting/tree/1.x/configs/real_esrgan) | super-resolution | Y | Y | Y | Y | Y |
 | [EDSR](https://github.com/open-mmlab/mmediting/tree/1.x/configs/edsr) | super-resolution | Y | Y | Y | N | Y |
 | [RDN](https://github.com/open-mmlab/mmediting/tree/1.x/configs/rdn) | super-resolution | Y | Y | Y | Y | Y |
diff --git a/docs/en/05-supported-backends/onnxruntime.md b/docs/en/05-supported-backends/onnxruntime.md
index 10b1714bd2..51a134bd04 100644
--- a/docs/en/05-supported-backends/onnxruntime.md
+++ b/docs/en/05-supported-backends/onnxruntime.md
@@ -63,4 +63,4 @@ Take custom operator `roi_align` for example.
 ## References
 
 - [How to export Pytorch model with custom op to ONNX and run it in ONNX Runtime](https://github.com/onnx/tutorials/blob/master/PyTorchCustomOperator/README.md)
-- [How to add a custom operator/kernel in ONNX Runtime](https://github.com/microsoft/onnxruntime/blob/master/docs/AddingCustomOp.md)
+- [How to add a custom operator/kernel in ONNX Runtime](https://onnxruntime.ai/docs/reference/operators/add-custom-op.html)
diff --git a/docs/zh_cn/05-supported-backends/onnxruntime.md b/docs/zh_cn/05-supported-backends/onnxruntime.md
index 2d109dcfe6..4b3a25e4d0 100644
--- a/docs/zh_cn/05-supported-backends/onnxruntime.md
+++ b/docs/zh_cn/05-supported-backends/onnxruntime.md
@@ -63,4 +63,4 @@ Take custom operator `roi_align` for example.
 ## References
 
 - [How to export Pytorch model with custom op to ONNX and run it in ONNX Runtime](https://github.com/onnx/tutorials/blob/master/PyTorchCustomOperator/README.md)
-- [How to add a custom operator/kernel in ONNX Runtime](https://github.com/microsoft/onnxruntime/blob/master/docs/AddingCustomOp.md)
+- [How to add a custom operator/kernel in ONNX Runtime](https://onnxruntime.ai/docs/reference/operators/add-custom-op.html)
diff --git a/docs/zh_cn/05-supported-backends/openvino.md b/docs/zh_cn/05-supported-backends/openvino.md
index 9eccc3cc44..dc4188fcb0 100644
--- a/docs/zh_cn/05-supported-backends/openvino.md
+++ b/docs/zh_cn/05-supported-backends/openvino.md
@@ -17,7 +17,7 @@
 pip install openvino-dev
 ```
 
 3. Install MMDeploy following the [instructions](../01-how-to-build/build_from_source.md).
 
-To work with models from [MMDetection](https://github.com/open-mmlab/mmdetection/blob/master/docs/get_started.md), you may need to install it additionally.
+To work with models from [MMDetection](https://github.com/open-mmlab/mmdetection/blob/3.x/docs/en/get_started.md), you may need to install it additionally.
 
 ## Usage