reformat some code #2809

Merged
merged 15 commits on Nov 1, 2018
1 change: 0 additions & 1 deletion egs/swbd/s5c/local/chain/tuning/run_tdnn_lstm_1a.sh
@@ -33,7 +33,6 @@ chunk_width=150
 chunk_left_context=40
 chunk_right_context=0
 xent_regularize=0.025
-self_repair_scale=0.00001
 label_delay=5
 # decode options
 extra_left_context=50
1 change: 0 additions & 1 deletion egs/swbd/s5c/local/chain/tuning/run_tdnn_lstm_1b.sh
@@ -29,7 +29,6 @@ chunk_width=150
 chunk_left_context=40
 chunk_right_context=0
 xent_regularize=0.025
-self_repair_scale=0.00001
 label_delay=5
 # decode options
 extra_left_context=50
1 change: 0 additions & 1 deletion egs/swbd/s5c/local/chain/tuning/run_tdnn_lstm_1c.sh
@@ -36,7 +36,6 @@ chunk_width=150
 chunk_left_context=40
 chunk_right_context=0
 xent_regularize=0.025
-self_repair_scale=0.00001
 label_delay=5
 # decode options
 extra_left_context=50
1 change: 0 additions & 1 deletion egs/swbd/s5c/local/chain/tuning/run_tdnn_lstm_1d.sh
@@ -48,7 +48,6 @@ decode_iter=final
 
 # training options
 xent_regularize=0.025
-self_repair_scale=0.00001
 label_delay=5
 
 chunk_left_context=40
1 change: 0 additions & 1 deletion egs/swbd/s5c/local/chain/tuning/run_tdnn_lstm_1e.sh
@@ -41,7 +41,6 @@ decode_nj=50
 
 # training options
 xent_regularize=0.01
-self_repair_scale=0.00001
 label_delay=5
 
 chunk_left_context=40
1 change: 0 additions & 1 deletion egs/swbd/s5c/local/chain/tuning/run_tdnn_lstm_1f.sh
@@ -60,7 +60,6 @@ decode_iter=final
 
 # training options
 xent_regularize=0.01
-self_repair_scale=0.00001
 label_delay=5
 
 chunk_left_context=40
1 change: 0 additions & 1 deletion egs/swbd/s5c/local/chain/tuning/run_tdnn_lstm_1g.sh
@@ -42,7 +42,6 @@ decode_iter=final
 
 # training options
 xent_regularize=0.01
-self_repair_scale=0.00001
 label_delay=5
 
 chunk_left_context=40
1 change: 0 additions & 1 deletion egs/swbd/s5c/local/chain/tuning/run_tdnn_lstm_1h.sh
@@ -39,7 +39,6 @@ decode_iter=final
 
 # training options
 xent_regularize=0.01
-self_repair_scale=0.00001
 label_delay=5
 
 chunk_left_context=40
1 change: 0 additions & 1 deletion egs/swbd/s5c/local/chain/tuning/run_tdnn_lstm_1i.sh
@@ -60,7 +60,6 @@ decode_iter=final
 
 # training options
 xent_regularize=0.01
-self_repair_scale=0.00001
 label_delay=5
 
 chunk_left_context=40
1 change: 0 additions & 1 deletion egs/swbd/s5c/local/chain/tuning/run_tdnn_lstm_1j.sh
@@ -25,7 +25,6 @@ decode_nj=50
 
 # training options
 xent_regularize=0.01
-self_repair_scale=0.00001
 label_delay=5
 
 chunk_left_context=40
1 change: 0 additions & 1 deletion egs/swbd/s5c/local/chain/tuning/run_tdnn_lstm_1k.sh
@@ -35,7 +35,6 @@ decode_nj=50
 
 # training options
 xent_regularize=0.01
-self_repair_scale=0.00001
 label_delay=5
 
 chunk_left_context=40
1 change: 0 additions & 1 deletion egs/swbd/s5c/local/chain/tuning/run_tdnn_lstm_1l.sh
@@ -34,7 +34,6 @@ chunk_width=150
 chunk_left_context=40
 chunk_right_context=0
 xent_regularize=0.025
-self_repair_scale=0.00001
 label_delay=5
 dropout_schedule='0,0@0.20,0.3@0.50,0'
 # decode options
1 change: 0 additions & 1 deletion egs/swbd/s5c/local/chain/tuning/run_tdnn_lstm_1m.sh
@@ -42,7 +42,6 @@ frames_per_chunk_primary=$(echo $frames_per_chunk | cut -d, -f1)
 chunk_left_context=40
 chunk_right_context=0
 xent_regularize=0.025
-self_repair_scale=0.00001
 label_delay=5
 # decode options
 extra_left_context=50
1 change: 0 additions & 1 deletion egs/swbd/s5c/local/chain/tuning/run_tdnn_lstm_1n.sh
@@ -43,7 +43,6 @@ frames_per_chunk_primary=$(echo $frames_per_chunk | cut -d, -f1)
 chunk_left_context=40
 chunk_right_context=0
 xent_regularize=0.025
-self_repair_scale=0.00001
 label_delay=5
 # decode options
 extra_left_context=50
3 changes: 0 additions & 3 deletions egs/wsj/s5/utils/format_lm_sri.sh
@@ -48,8 +48,6 @@ else
 out_dir=$3
 fi
 
-mkdir -p $out_dir
-
 for f in $lm $lang_dir/words.txt; do
 if [ ! -f $f ]; then
 echo "$0: expected input file $f to exist."
@@ -73,7 +71,6 @@ trap 'rm -rf "$tmpdir"' EXIT
 mkdir -p $out_dir
 cp -r $lang_dir/* $out_dir || exit 1;
 
-lm_base=$(basename $lm '.gz')
 awk '{print $1}' $out_dir/words.txt > $tmpdir/voc || exit 1;
 
 # Change the LM vocabulary to be the intersection of the current LM vocabulary
2 changes: 1 addition & 1 deletion src/cudamatrix/cu-value.h
@@ -22,7 +22,7 @@
 #ifndef KALDI_CUDAMATRIX_CU_VALUE_H_
 #define KALDI_CUDAMATRIX_CU_VALUE_H_
 
-#include <cudamatrix/cu-device.h>
+#include "cudamatrix/cu-device.h"
 
 namespace kaldi {
 
2 changes: 1 addition & 1 deletion src/doc/hmm.dox
@@ -98,7 +98,7 @@ numbered state of a
 "prototype HMM" has two variables "forward_pdf_class" and "self_loop_pdf_class".
 The "self_loop_pdf_class" is a kind of pdf-class that is associated
 with self-loop transition. It is by default identical to "forward_pdf_class",
-but it can be used to define less-convectional HMM topologies
+but it can be used to define less-conventional HMM topologies
 where the pdfs on the self-loop and forward transitions are different.
 The decision to allow the pdf-class on just the self-loop to be different,
 while not embracing a fully "arc-based" representation where the pdfs on
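As an aside to the hmm.dox passage above: a state with distinct forward and self-loop pdf-classes can be written in Kaldi's prototype-HMM topology text format roughly as follows. This is an illustrative sketch, not part of this diff; the tag names (`<ForwardPdfClass>`, `<SelfLoopPdfClass>`) and the exact layout should be checked against `hmm/hmm-topology.h`, which documents the authoritative syntax.

```
<Topology>
<TopologyEntry>
<ForPhones> 1 2 3 </ForPhones>
<State> 0 <ForwardPdfClass> 0 <SelfLoopPdfClass> 1
  <Transition> 0 0.5
  <Transition> 1 0.5
</State>
<State> 1 </State>
</TopologyEntry>
</Topology>
```

Here state 0 uses pdf-class 0 on its forward transition but pdf-class 1 on its self-loop; state 1 is the final, non-emitting state. When both pdf-classes are equal, this reduces to the conventional single `<PdfClass>` form.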
2 changes: 1 addition & 1 deletion src/nnet3/decodable-online-looped.cc
@@ -17,7 +17,7 @@
 // See the Apache 2 License for the specific language governing permissions and
 // limitations under the License.
 
-#include <nnet3/decodable-online-looped.h>
+#include "nnet3/decodable-online-looped.h"
 #include "nnet3/nnet-utils.h"
 
 namespace kaldi {
2 changes: 1 addition & 1 deletion src/tree/tree-renderer.cc
@@ -17,7 +17,7 @@
 // See the Apache 2 License for the specific language governing permissions and
 // limitations under the License.
 
-#include <tree/tree-renderer.h>
+#include "tree/tree-renderer.h"
 
 namespace kaldi {
 const int32 TreeRenderer::kEdgeWidth = 1;