
Replace other links from 0.2.1 to 0.3.0
Change-Id: Ia4a6c4f81e69694c10dbbb5d7f2b61a802221918
zachgk committed Feb 22, 2020
1 parent 5b71ee4 commit 42c011c
Showing 20 changed files with 51 additions and 50 deletions.
1 change: 1 addition & 0 deletions README.md
@@ -79,6 +79,7 @@ The following pseudocode demonstrates running training:
- [JavaDoc API Reference](https://javadoc.djl.ai/)

## Release Notes
+* [0.3.0](https://github.com/awslabs/djl/releases/tag/v0.3.0) ([Code](https://github.com/awslabs/djl/tree/v0.3.0))
* [0.2.1](https://github.com/awslabs/djl/releases/tag/v0.2.1) ([Code](https://github.com/awslabs/djl/tree/v0.2.1))
* [0.2.0 Initial release](https://github.com/awslabs/djl/releases/tag/v0.2.0) ([Code](https://github.com/awslabs/djl/tree/v0.2.0))

4 changes: 2 additions & 2 deletions api/README.md
@@ -15,7 +15,7 @@ This module contains the core API of the Deep Java Library (DJL) project. It inc

## Documentation

-The latest javadocs can be found on the [djl.ai website](https://javadoc.djl.ai/api/0.2.1/index.html).
+The latest javadocs can be found on the [djl.ai website](https://javadoc.djl.ai/api/0.3.0/index.html).

You can also build the latest javadocs locally using the following command:

@@ -31,6 +31,6 @@ You can pull the DJL API from the central Maven repository by including the foll
<dependency>
    <groupId>ai.djl</groupId>
    <artifactId>api</artifactId>
-   <version>0.2.1</version>
+   <version>0.3.0</version>
</dependency>
```
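Assuming the bumped `api` artifact resolves, plus an engine (for example `mxnet-engine`, which this hunk does not cover) on the classpath, a minimal sketch to smoke-test the dependency could look like this; the class name `VersionSmokeTest` is hypothetical:

```java
import ai.djl.ndarray.NDArray;
import ai.djl.ndarray.NDManager;
import ai.djl.ndarray.types.Shape;

public class VersionSmokeTest {
    public static void main(String[] args) {
        // NDManager owns the native memory backing NDArrays;
        // try-with-resources releases it deterministically.
        try (NDManager manager = NDManager.newBaseManager()) {
            NDArray ones = manager.ones(new Shape(2, 3));
            System.out.println(ones); // prints shape, dtype, and values
        }
    }
}
```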
4 changes: 2 additions & 2 deletions basicdataset/README.md
@@ -17,7 +17,7 @@ This module contains the following datasets:

## Documentation

-The latest javadocs can be found on the [djl.ai website](https://javadoc.djl.ai/basicdataset/0.2.1/index.html).
+The latest javadocs can be found on the [djl.ai website](https://javadoc.djl.ai/basicdataset/0.3.0/index.html).

You can also build the latest javadocs locally using the following command:

@@ -34,6 +34,6 @@ You can pull the module from the central Maven repository by including the follo
<dependency>
    <groupId>ai.djl</groupId>
    <artifactId>basicdataset</artifactId>
-   <version>0.2.1</version>
+   <version>0.3.0</version>
</dependency>
```
2 changes: 1 addition & 1 deletion docs/development/development_guideline.md
@@ -105,7 +105,7 @@ You can create your own NDArray renderer as follows:

Please make sure to:
- Check the "On-demand" option, which causes IntelliJ to only render the NDArray when you click on the variable.
-- Change the "Use following expression" field to something like [toDebugString(100, 10, 10, 20)](https://javadoc.djl.ai/mxnet-engine/0.2.1/ai/djl/mxnet/engine/MxNDArray.html#toDebugString-int-int-int-int-)
+- Change the "Use following expression" field to something like [toDebugString(100, 10, 10, 20)](https://javadoc.djl.ai/mxnet-engine/0.3.0/ai/djl/mxnet/engine/MxNDArray.html#toDebugString-int-int-int-int-)
if you want to adjust the range of NDArray's debug output.

## Common Problems
4 changes: 2 additions & 2 deletions docs/development/how_to_use_dataset.md
@@ -111,8 +111,8 @@ api group: 'org.apache.commons', name: 'commons-csv', version: '1.7'
### Step 2: Implementation
In order to extend the dataset, the following dependencies are required:
```
api "ai.djl:api:0.2.1"
api "ai.djl:basicdataset:0.2.1"
api "ai.djl:api:0.3.0"
api "ai.djl:basicdataset:0.3.0"
```
There are three parts we need to implement for CSVDataset.

2 changes: 1 addition & 1 deletion examples/README.md
@@ -19,7 +19,7 @@ The following examples are included:
## Prerequisites

* You need to have Java Development Kit version 8 or later installed on your system. For more information, see [Setup](../docs/development/setup.md).
-* You should be familiar with the API documentation in the DJL [Javadoc](https://javadoc.djl.ai/api/0.2.1/index.html).
+* You should be familiar with the API documentation in the DJL [Javadoc](https://javadoc.djl.ai/api/0.3.0/index.html).


# Getting started: 30 seconds to run an example
2 changes: 1 addition & 1 deletion examples/docs/multithread_inference.md
@@ -6,7 +6,7 @@ It can help to increase the throughput of your inference on multi-core CPUs and
## Overview

The DJL `Predictor` is not thread-safe.
-You must create a new [Predictor](https://javadoc.djl.ai/api/0.2.1/index.html?ai/djl/inference/Predictor.html) for each thread.
+You must create a new [Predictor](https://javadoc.djl.ai/api/0.3.0/index.html?ai/djl/inference/Predictor.html) for each thread.

For a reference implementation, see [Multi-threaded Benchmark](../src/main/java/ai/djl/examples/inference/benchmark/MultithreadedBenchmark.java).
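The pattern this page prescribes — share one loaded `Model`, give each worker its own `Predictor` — could be sketched as follows; the `String`-to-`String` types, the four-thread pool, and the class name are illustrative placeholders:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import ai.djl.Model;
import ai.djl.inference.Predictor;
import ai.djl.translate.Translator;

public final class ParallelInference {

    /** Submits one prediction task per input, one Predictor per task. */
    public static void predictInParallel(
            Model model, Translator<String, String> translator, List<String> inputs) {
        ExecutorService executor = Executors.newFixedThreadPool(4);
        for (String input : inputs) {
            executor.submit(() -> {
                // A fresh Predictor per task: the Predictor itself is not
                // thread-safe, but the loaded Model can be shared.
                try (Predictor<String, String> predictor =
                        model.newPredictor(translator)) {
                    return predictor.predict(input);
                }
            });
        }
        executor.shutdown();
    }
}
```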

4 changes: 2 additions & 2 deletions examples/docs/train_cifar10_resnet.md
@@ -62,9 +62,9 @@ It's part of the optimization algorithm that controls how fast to move towards r
During the training process, you should usually reduce the learning rate periodically to prevent the model from plateauing.
You will also need different learning rate strategies based on whether you are using a pre-trained model or training from scratch.
DJL provides several built-in `LearningRateTracker`s to suit your needs. For more information, see the
-[documentation](https://javadoc.djl.ai/api/0.2.1/index.html?ai/djl/training/optimizer/learningrate/LearningRateTracker.html).
+[documentation](https://javadoc.djl.ai/api/0.3.0/index.html?ai/djl/training/optimizer/learningrate/LearningRateTracker.html).

-Here, you use a [`MultiFactorTracker`](https://javadoc.djl.ai/api/0.2.1/index.html?ai/djl/training/optimizer/learningrate/MultiFactorTracker.html),
+Here, you use a [`MultiFactorTracker`](https://javadoc.djl.ai/api/0.3.0/index.html?ai/djl/training/optimizer/learningrate/MultiFactorTracker.html),
which allows you to reduce the learning rate after a specified number of periods.
We use a base learning rate of `0.001` and multiply it by `sqrt(0.1)` at each of the specified epochs.
For a pre-trained model, you reduce the learning rate at the 2nd, 5th, and 8th epochs because it takes less time to train and converge.
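The tracker described here could be configured roughly as in the sketch below. The package matches the javadoc link above, but the builder method names (`multiFactorTracker`, `setSteps`, `optBaseLearningRate`, `optFactor`) are recalled from the 0.3.x API and should be treated as assumptions:

```java
import ai.djl.training.optimizer.learningrate.LearningRateTracker;

public class TrackerExample {
    public static void main(String[] args) {
        int batchesPerEpoch = 100; // illustrative value
        // Sketch: drop the learning rate by sqrt(0.1) at epochs 2, 5, and 8,
        // expressed as iteration steps; builder names are assumptions.
        LearningRateTracker tracker =
                LearningRateTracker.multiFactorTracker()
                        .optBaseLearningRate(0.001f)
                        .setSteps(new int[] {
                            2 * batchesPerEpoch, 5 * batchesPerEpoch, 8 * batchesPerEpoch})
                        .optFactor((float) Math.sqrt(0.1))
                        .build();
        System.out.println(tracker);
    }
}
```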
4 changes: 2 additions & 2 deletions fasttext/fasttext-engine/README.md
@@ -18,7 +18,7 @@ This is a shallow wrapper around [JFastText](https://github.com/vinhkhuc/JFastTe

## Documentation

-The latest javadocs can be found on the [djl.ai website](https://javadoc.djl.ai/fasttext-engine/0.2.1/index.html).
+The latest javadocs can be found on the [djl.ai website](https://javadoc.djl.ai/fasttext-engine/0.3.0/index.html).

You can also build the latest javadocs locally using the following command:

@@ -35,7 +35,7 @@ You can pull the fastText engine from the central Maven repository by including
<dependency>
    <groupId>ai.djl.fasttext</groupId>
    <artifactId>fasttext-engine</artifactId>
-   <version>0.2.1</version>
+   <version>0.3.0</version>
    <scope>runtime</scope>
</dependency>
```
4 changes: 2 additions & 2 deletions jupyter/BERTQA.ipynb
@@ -302,7 +302,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Armed with the needed knowledge, you can write an implementation of the `Translator` interface. `BertTranslator` uses the code snippets explained previously to implement the `processInput`method. For more information, see [`NDManager`](https://javadoc.djl.ai/api/0.2.1/ai/djl/ndarray/NDManager.html).\n",
"Armed with the needed knowledge, you can write an implementation of the `Translator` interface. `BertTranslator` uses the code snippets explained previously to implement the `processInput`method. For more information, see [`NDManager`](https://javadoc.djl.ai/api/0.3.0/ai/djl/ndarray/NDManager.html).\n",
"\n",
"```\n",
"manager.create(Number[] data, Shape)\n",
@@ -400,7 +400,7 @@
"Congrats! You have created your first Translator! We have pre-filled the `processOutput()` that will process the `NDList` and return it in a desired format. `processInput()` and `processOutput()` offer the flexibility to get the predictions from the model in any format you desire. \n",
"\n",
"\n",
"With the Translator implemented, you need to bring up the predictor that uses your `Translator` to start making predictions. You can find the usage for `Predictor` in the [Predictor Javadoc](https://javadoc.djl.ai/api/0.2.1/ai/djl/inference/Predictor.html). Create a translator and use the `question` and `resourceDocument` provided previously."
"With the Translator implemented, you need to bring up the predictor that uses your `Translator` to start making predictions. You can find the usage for `Predictor` in the [Predictor Javadoc](https://javadoc.djl.ai/api/0.3.0/ai/djl/inference/Predictor.html). Create a translator and use the `question` and `resourceDocument` provided previously."
]
},
{
8 changes: 4 additions & 4 deletions jupyter/tutorial/create_your_first_network.ipynb
@@ -87,16 +87,16 @@
"\n",
"### NDArray\n",
"\n",
"The core data type used for working with Deep Learning is the [NDArray](https://javadoc.djl.ai/api/0.2.1/index.html?ai/djl/ndarray/NDArray.html). An NDArray represents a multidimensional, fixed-size homogeneous array. It has very similar behavior to the Numpy python package with the addition of efficient computing. We also have a helper class, the [NDList](https://javadoc.djl.ai/api/0.2.1/index.html?ai/djl/ndarray/NDList.html) which is a list of NDArrays which can have different sizes and data types.\n",
"The core data type used for working with Deep Learning is the [NDArray](https://javadoc.djl.ai/api/0.3.0/index.html?ai/djl/ndarray/NDArray.html). An NDArray represents a multidimensional, fixed-size homogeneous array. It has very similar behavior to the Numpy python package with the addition of efficient computing. We also have a helper class, the [NDList](https://javadoc.djl.ai/api/0.3.0/index.html?ai/djl/ndarray/NDList.html) which is a list of NDArrays which can have different sizes and data types.\n",
"\n",
"### Block API\n",
"\n",
"In DJL, [Blocks](https://javadoc.djl.ai/api/0.2.1/index.html?ai/djl/nn/Block.html) serve a purpose similar to functions that convert an input `NDList` to an output `NDList`. They can represent single operations, parts of a neural network, and even the whole neural network. What makes blocks special is that they contain a number of parameters that are used in their function and are trained during deep learning. As these parameters are trained, the function represented by the blocks get more and more accurate.\n",
"In DJL, [Blocks](https://javadoc.djl.ai/api/0.3.0/index.html?ai/djl/nn/Block.html) serve a purpose similar to functions that convert an input `NDList` to an output `NDList`. They can represent single operations, parts of a neural network, and even the whole neural network. What makes blocks special is that they contain a number of parameters that are used in their function and are trained during deep learning. As these parameters are trained, the function represented by the blocks get more and more accurate.\n",
"\n",
"When building these block functions, the easiest way is to use composition. Similar to how functions are built by calling other functions, blocks can be built by combining other blocks. We refer to the containing block as the parent and the sub-blocks as the children.\n",
"\n",
"\n",
"We provide several helpers to make it easy to build common block composition structures. For the MLP we will use the [SequentialBlock](https://javadoc.djl.ai/api/0.2.1/index.html?ai/djl/nn/SequentialBlock.html), a container block whose children form a chain of blocks where each child block feeds its output to the next child block in a sequence.\n",
"We provide several helpers to make it easy to build common block composition structures. For the MLP we will use the [SequentialBlock](https://javadoc.djl.ai/api/0.3.0/index.html?ai/djl/nn/SequentialBlock.html), a container block whose children form a chain of blocks where each child block feeds its output to the next child block in a sequence.\n",
" "
]
},
@@ -115,7 +115,7 @@
"source": [
"## Step 4: Add blocks to SequentialBlock\n",
"\n",
"An MLP is organized into several layers. Each layer is composed of a [Linear Block](https://javadoc.djl.ai/api/0.2.1/index.html?ai/djl/nn/core/Linear.html) and a non-linear activation function. If we just had two linear blocks in a row, it would be the same as a combined linear block ($f(x) = W_2(W_1x) = (W_2W_1)x = W_{combined}x$). An activation is used to intersperse between the linear blocks to allow them to represent non-linear functions. We will use the popular [ReLU](https://javadoc.djl.ai/api/0.2.1/ai/djl/nn/Activation.html#reluBlock--) as our activation function.\n",
"An MLP is organized into several layers. Each layer is composed of a [Linear Block](https://javadoc.djl.ai/api/0.3.0/index.html?ai/djl/nn/core/Linear.html) and a non-linear activation function. If we just had two linear blocks in a row, it would be the same as a combined linear block ($f(x) = W_2(W_1x) = (W_2W_1)x = W_{combined}x$). An activation is used to intersperse between the linear blocks to allow them to represent non-linear functions. We will use the popular [ReLU](https://javadoc.djl.ai/api/0.3.0/ai/djl/nn/Activation.html#reluBlock--) as our activation function.\n",
"\n",
"The first layer and last layers have fixed sizes depending on your desired input and output size. However, you are free to choose the number and sizes of the middle layers in the network. We will create a smaller MLP with two middle layers that gradually decrease the size. Typically, you would experiment with different values to see what works the best on your data set."
]
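The MLP this cell describes might be assembled as in the sketch below. The layer sizes are illustrative, `Activation.reluBlock()` is confirmed by the javadoc link above, and `Linear.Builder#setOutChannels` is the assumed 0.3.0-era method name (later versions renamed it):

```java
import ai.djl.nn.Activation;
import ai.djl.nn.SequentialBlock;
import ai.djl.nn.core.Linear;

public class MlpExample {
    public static void main(String[] args) {
        // Chain linear layers with ReLU activations between them, so the
        // composite block can represent non-linear functions.
        SequentialBlock block = new SequentialBlock();
        block.add(new Linear.Builder().setOutChannels(128).build());
        block.add(Activation.reluBlock()); // non-linearity between linear layers
        block.add(new Linear.Builder().setOutChannels(64).build());
        block.add(Activation.reluBlock());
        block.add(new Linear.Builder().setOutChannels(10).build()); // output layer
        System.out.println(block);
    }
}
```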
4 changes: 2 additions & 2 deletions jupyter/tutorial/image_classification_with_your_model.ipynb
@@ -112,7 +112,7 @@
"source": [
"## Step 3: Create a `Translator`\n",
"\n",
"The [`Translator`](https://javadoc.djl.ai/api/0.2.1/index.html?ai/djl/translate/Translator.html) is used to encapsulate the pre-processing and post-processing functionality of your application. The input to the processInput and processOutput should be single data items, not batches."
"The [`Translator`](https://javadoc.djl.ai/api/0.3.0/index.html?ai/djl/translate/Translator.html) is used to encapsulate the pre-processing and post-processing functionality of your application. The input to the processInput and processOutput should be single data items, not batches."
]
},
{
@@ -146,7 +146,7 @@
"source": [
"## Step 4: Create Predictor\n",
"\n",
"Using the translator, we will create a new [`Predictor`](https://javadoc.djl.ai/api/0.2.1/index.html?ai/djl/inference/Predictor.html). The predictor is the main class to orchestrate the inference process. During inference, a trained model is used to predict values, often for production use cases. The predictor is not thread-safe so you should create multiple predictors on the same model with one predictor in each thread."
"Using the translator, we will create a new [`Predictor`](https://javadoc.djl.ai/api/0.3.0/index.html?ai/djl/inference/Predictor.html). The predictor is the main class to orchestrate the inference process. During inference, a trained model is used to predict values, often for production use cases. The predictor is not thread-safe so you should create multiple predictors on the same model with one predictor in each thread."
]
},
{
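A hedged skeleton of the single-item `Translator` that Step 3 of this tutorial describes; the `float[]`-to-`Classifications` types and label names are placeholders, and depending on the DJL version you may also need to supply a `Batchifier`:

```java
import java.util.Arrays;
import java.util.List;

import ai.djl.modality.Classifications;
import ai.djl.ndarray.NDList;
import ai.djl.translate.Translator;
import ai.djl.translate.TranslatorContext;

public class MyTranslator implements Translator<float[], Classifications> {

    @Override
    public NDList processInput(TranslatorContext ctx, float[] input) {
        // Pre-processing: wrap one raw input (not a batch) in an NDArray
        return new NDList(ctx.getNDManager().create(input));
    }

    @Override
    public Classifications processOutput(TranslatorContext ctx, NDList list) {
        // Post-processing: map output probabilities to placeholder class names
        List<String> classNames = Arrays.asList("0", "1", "2");
        return new Classifications(classNames, list.singletonOrThrow());
    }
}
```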
