diff --git a/examples/contextnet/README.md b/examples/contextnet/README.md
index f788a67dbb..3619707a5d 100644
--- a/examples/contextnet/README.md
+++ b/examples/contextnet/README.md
@@ -17,3 +17,31 @@ Training, see `python examples/contextnet/train_*.py --help`
Testing, see `python examples/contextnet/test_*.py --help`
TFLite Conversion, see `python examples/contextnet/tflite_*.py --help`
+
+## RNN Transducer Subwords - Results on LibriSpeech
+
+**Summary**
+
+- Number of subwords: 1008
+- Maximum length of a subword: 10
+- Subwords corpus: all training sets
+- Number of parameters: 12,075,320
+- Number of epochs: 86
+- Trained on: 8 Google Colab TPU cores
+- Training time: 86 hours of training total (each epoch took about 1 hour), spread intermittently over ~8.4 days because Colab only allows ~12 hours/day (~12 epochs/day); equivalent to ~3.6 days of continuous training
+
+**Pretrained model and config**: available on [Google Drive](https://drive.google.com/drive/folders/1fzOkwKaOcMUMD9BAjcLLmSG2Tfpeabbq?usp=sharing)
+
+**Epoch Transducer Loss**
+
+<img src="./figs/1008_subword_contextnet_loss.svg" alt="epoch_transducer_loss" width="800px" />
+
+**Epoch Learning Rate**
+
+<img src="./figs/1008_epoch_learning_rate.svg" alt="epoch_learning_rate" width="800px" />
+
+**Error Rates**
+
+| **Test-clean** | Test batch size | Epoch | WER (%) | CER (%) |
+| :------------: | :-------------: | :---: | :-----: | :-----: |
+|    _Greedy_    |        1        |  86   |  10.36  |  5.84   |
\ No newline at end of file
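The WER and CER figures in the table above are edit-distance-based error rates. As a reminder of how such numbers are derived, here is a minimal, generic Python sketch (not TensorFlowASR's internal metric code) that computes WER over words and CER over characters via Levenshtein distance:

```python
# Illustrative WER/CER computation via Levenshtein distance.
# Generic sketch; assumes simple whitespace tokenization for words.

def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (O(len(hyp)) memory)."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            # deletion, insertion, substitution/match
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (r != h))
    return dp[-1]

def wer(ref: str, hyp: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref_words, hyp_words = ref.split(), hyp.split()
    return edit_distance(ref_words, hyp_words) / len(ref_words)

def cer(ref: str, hyp: str) -> float:
    """Character error rate: char-level edit distance / reference length."""
    return edit_distance(list(ref), list(hyp)) / len(ref)

print(wer("the cat sat", "the cat sad"))  # 1 substitution over 3 words
```

Multiplying these ratios by 100 gives the percentages reported in the table.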
diff --git a/examples/contextnet/figs/1008_epoch_learning_rate.svg b/examples/contextnet/figs/1008_epoch_learning_rate.svg
new file mode 100644
index 0000000000..f1c16b7273
--- /dev/null
+++ b/examples/contextnet/figs/1008_epoch_learning_rate.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/examples/contextnet/figs/1008_subword_contextnet_loss.svg b/examples/contextnet/figs/1008_subword_contextnet_loss.svg
new file mode 100644
index 0000000000..f1c5ca2799
--- /dev/null
+++ b/examples/contextnet/figs/1008_subword_contextnet_loss.svg
@@ -0,0 +1 @@
+
\ No newline at end of file