warp-ctc requires Torch, and running Torch's ./install.sh requires sudo:

    ./clean.sh
    sudo TORCH_NVCC_FLAGS="-D__CUDA_NO_HALF_OPERATORS__" ./install.sh

Answer yes at the end (this lets the installer update your shell startup file), then start a new shell so that th is on $PATH.

Clone warp-ctc and create a build directory:

    rm -rf warp-ctc
    git clone https://github.com/baidu-research/warp-ctc.git
    cd warp-ctc
    mkdir build
    cd build

If you have a non-standard CUDA install, export CUDA_BIN_PATH=/path_to_cuda so that CMake detects CUDA. To ensure Torch is detected, make sure th is in $PATH.

Run cmake and build:

    cmake ../
    make
    sudo make install

Edit .bashrc and add the environment setup the install needs (e.g. putting /usr/local/lib, where libwarpctc.so is installed by default, on LD_LIBRARY_PATH).

Install mxnet, but change make/config.mk in the incubator-mxnet source tree (cloned as mxnet below) by uncommenting:

    WARPCTC_PATH = $(HOME)/warp-ctc
    MXNET_PLUGINS += plugin/warpctc/warpctc.mk

Then build and install the Python package:

    cd ~/
    git clone --recursive https://github.com/apache/incubator-mxnet.git mxnet
    cd mxnet
    cd make
    vi config.mk    # do the uncommenting mentioned above
    cd ..
    make -j USE_BLAS=openblas USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=1 USE_NCCL=1 USE_NCCL_PATH=/usr/local/nccl
    cd python
    sudo pip install -e .

Follow the install instructions in speech_recognition/README.md for the LibriSpeech installation and the creation of the json files, then:

    mkdir checkpoints
    mkdir logs

Edit deepspeech.cfg to run on one GPU and to correct the training json file name:

    diff deepspeech.cfg deepspeech_orig.cfg
    22,23c22
    < #context = gpu0,gpu1,gpu2
    < context = gpu0
    ---
    > context = gpu0,gpu1,gpu2
    45c44
    < train_json = ./train_corpus.json
    ---
    > train_json = ./train_corpus_all.json

    cd example/speech_recognition

Edit train.py to replace the tensorboard SummaryWriter with TensorFlow's FileWriter:

    diff train_orig.py train.py
    26c26
    < from tensorboard import SummaryWriter
    ---
    > #from tensorboard import SummaryWriter
    > import tensorflow as tf
    136c136,137
    < summary_writer = SummaryWriter(tblog_dir)
    ---
    > #summary_writer = SummaryWriter(tblog_dir)
    > summary_writer = tf.summary.FileWriter(tblog_dir)

Restrict the run to one GPU:

    export CUDA_VISIBLE_DEVICES=0

TensorFlow is needed for tf.summary.FileWriter:

    export PYTHONPATH=~/tf_r19_92_714_py:$PYTHONPATH

Run training:

    python main.py --configfile deepspeech.cfg > deepspeech_v100_train_1.log 2>&1

The application spends about 15 minutes processing statistics on the input files through a large number of Python jobs executing on the CPUs; they do go away eventually.
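Before starting a long run, a quick sanity check can confirm that the CUDA build works and that the WarpCTC plugin was compiled in. The sketch below is not part of the original steps; it assumes the plugin registers its operator under the name WarpCTC (the name the mxnet warpctc plugin uses) and that gpu0 is visible:

    # check_build.py -- hypothetical sanity check, not part of the mxnet repo
    import mxnet as mx

    # A tiny allocation on gpu0 exercises the CUDA build
    # (matches context = gpu0 in deepspeech.cfg).
    a = mx.nd.ones((2, 3), ctx=mx.gpu(0))
    print("gpu0 OK, sum =", a.asnumpy().sum())

    # The warpctc plugin registers a WarpCTC symbol at import time; if this
    # prints False, recheck WARPCTC_PATH and MXNET_PLUGINS in make/config.mk
    # and rebuild.
    print("WarpCTC op available:", hasattr(mx.sym, "WarpCTC"))

If either check fails, fix the build before launching main.py; the 15-minute statistics pass makes failed starts expensive.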
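For reference, the tf.summary.FileWriter substituted into train.py uses the TensorFlow 1.x summary API, which writes event files TensorBoard can read. A minimal sketch of that API, with tblog_dir as a stand-in for the directory train.py takes from its config:

    import tensorflow as tf

    tblog_dir = "./tblog"  # stand-in; train.py gets the real path from its config
    summary_writer = tf.summary.FileWriter(tblog_dir)

    # Log one scalar value; TensorBoard then plots it from the event file.
    summary = tf.Summary(value=[tf.Summary.Value(tag="loss", simple_value=1.23)])
    summary_writer.add_summary(summary, global_step=0)
    summary_writer.flush()

Note that the diff only swaps the writer object; if train.py calls tensorboard-specific methods such as add_scalar elsewhere, those would need the equivalent add_summary form shown here.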