Commit

Merge pull request #18 from vsl9/master
Minor docs updates
okuchaiev authored Sep 18, 2019
2 parents 909e975 + 72e3f11 commit 651a206
Showing 4 changed files with 7 additions and 11 deletions.
4 changes: 2 additions & 2 deletions README.rst
@@ -38,7 +38,7 @@ See `this video <https://nvidia.github.io/NeMo/>`_ for a quick walk-through.
**Requirements**

1) Python 3.6 or 3.7
-2) Pytorch 1.2 with GPU support
+2) PyTorch 1.2 with GPU support
3) NVIDIA APEX. Install from here: https://github.com/NVIDIA/apex
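As an illustrative sketch of the requirement list above (the helper names are mine, not part of NeMo, and the `apex` import assumes APEX installs an importable `apex` package):

```python
import sys

def python_version_ok(version_info):
    """NeMo at this release supports Python 3.6 and 3.7 only."""
    return tuple(version_info[:2]) in {(3, 6), (3, 7)}

def missing_prereqs():
    """Return a list of human-readable problems with the current environment."""
    problems = []
    if not python_version_ok(sys.version_info):
        problems.append("Python 3.6 or 3.7 required")
    try:
        import torch
        if not torch.cuda.is_available():
            problems.append("PyTorch found, but without GPU support")
    except ImportError:
        problems.append("PyTorch 1.2 is not installed")
    try:
        import apex  # noqa: F401  (assumption: APEX exposes an 'apex' package)
    except ImportError:
        problems.append("NVIDIA APEX is not installed: https://github.com/NVIDIA/apex")
    return problems
```

An empty list from `missing_prereqs()` would mean the environment matches the three requirements above.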


@@ -81,7 +81,7 @@ instead of

.. code-block:: bash
-# Install the ASR collection from collections/nemo_asr
+# Install the ASR collection from collections/nemo_asr
apt-get install libsndfile1
cd collections/nemo_asr
pip install .
8 changes: 2 additions & 6 deletions docs/sources/source/index.rst
@@ -22,7 +22,7 @@ A "Neural Module" is a block of code that computes a set of outputs from a set o

Neural Modules’ inputs and outputs have Neural Type for semantic checking.

-An application built with NeMo application is a Directed Acyclic Graph(DAG) of connected modules enabling researchers to define and build new speech and nlp networks easily through API Compatible modules.
+An application built with NeMo application is a Directed Acyclic Graph (DAG) of connected modules enabling researchers to define and build new speech and nlp networks easily through API Compatible modules.


**Introduction**
@@ -49,14 +49,10 @@ See this video for a walk-through.
**Requirements**

1) Python 3.6 or 3.7
-2) Pytorch 1.2 with GPU support
+2) PyTorch 1.2 with GPU support
3) NVIDIA APEX: https://github.com/NVIDIA/apex


-**Documentation**
-TBD


**Getting started**

If desired, you can start with `NGC PyTorch container <https://ngc.nvidia.com/catalog/containers/nvidia:pytorch>`_ which already includes
2 changes: 1 addition & 1 deletion examples/asr/ASR-Jasper-Tutorial.ipynb
@@ -354,7 +354,7 @@
"metadata": {},
"source": [
"## Mixed Precision Training\n",
-"Mixed precision and distributed training in NeMo is based on <a href=\"https://github.com/NVIDIA/apex\">NVIDIA’s APEX library</a>. This is installed with NVIDIA's NGC Pytorch container with an example of updating in the example Dockerfile.\n",
+"Mixed precision and distributed training in NeMo is based on <a href=\"https://github.com/NVIDIA/apex\">NVIDIA’s APEX library</a>. This is installed with NVIDIA's NGC PyTorch container with an example of updating in the example Dockerfile.\n",
"\n",
"> **Note** - _Because mixed precision requires Tensor Cores it\n",
"> only works on NVIDIA Volta and Turing based GPUs._\n",
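The Tensor Core caveat in the note above maps to CUDA compute capability: Volta is 7.0 and Turing is 7.5, while Pascal and earlier are below 7.0. A minimal sketch of that check (the function name is illustrative, not a NeMo or APEX API):

```python
def has_tensor_cores(major, minor=0):
    """Tensor Cores first appeared with Volta (compute capability 7.0);
    Turing (7.5) and later generations also have them, Pascal (6.x)
    and earlier do not."""
    return (major, minor) >= (7, 0)
```

In a PyTorch environment the two arguments could be supplied from `torch.cuda.get_device_capability()`.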
4 changes: 2 additions & 2 deletions examples/start_here/README.md
@@ -4,7 +4,7 @@ Just learns simple function `y=sin(x)`.
Simply run from `examples/start_here` folder.

# ChatBot Example
-This is an adaptation of [Pytorch Chatbot tutorial](https://pytorch.org/tutorials/beginner/chatbot_tutorial.html)
+This is an adaptation of [PyTorch Chatbot tutorial](https://pytorch.org/tutorials/beginner/chatbot_tutorial.html)
Simply run from `examples/start_here` folder.

During training it will print **SOURCE**, **PREDICTED RESPONSE** and **TARGET**.
@@ -43,4 +43,4 @@ outputs, hidden = decoder(targets=tgt,
max_target_len=max_tgt_length)
...
```
-Simply run from `examples/start_here` folder.
+Simply run from `examples/start_here` folder.
