
Commit 70c1b1c
Merge pull request BVLC#1293 from sguada/new_lr_policies
Fixed paths for the Multistep, Poly and Sigmoid learning rate decay policies
sguada committed Oct 16, 2014
2 parents 3aa2a6d + c76b08a commit 70c1b1c
Showing 2 changed files with 8 additions and 8 deletions.

13 changes: 5 additions & 8 deletions examples/mnist/lenet_multistep_solver.prototxt
@@ -1,7 +1,5 @@
-# The training protocol buffer definition
-train_net: "lenet_train.prototxt"
-# The testing protocol buffer definition
-test_net: "lenet_test.prototxt"
+# The train/test net protocol buffer definition
+net: "examples/mnist/lenet_train_test.prototxt"
 # test_iter specifies how many forward passes the test should carry out.
 # In the case of MNIST, we have test batch size 100 and 100 test iterations,
 # covering the full 10,000 testing images.
@@ -27,7 +25,6 @@ display: 100
 max_iter: 10000
 # snapshot intermediate results
 snapshot: 5000
-snapshot_prefix: "lenet"
-# solver mode: 0 for CPU and 1 for GPU
-solver_mode: 1
-device_id: 1
+snapshot_prefix: "examples/mnist/lenet_multistep"
+# solver mode: CPU or GPU
+solver_mode: GPU
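
For context, the multistep policy exercised by this solver drops the learning rate each time the iteration count passes a `stepvalue` threshold. A minimal sketch of the relevant solver fields, with illustrative values rather than the exact ones from this commit:

```
# Sketch of a multistep schedule (illustrative values): the rate
# starts at base_lr and is multiplied by gamma each time the
# iteration count crosses one of the stepvalue marks.
base_lr: 0.01
lr_policy: "multistep"
gamma: 0.9
stepvalue: 5000
stepvalue: 7000
stepvalue: 8000
```

With Caffe built, such a solver would typically be run as `./build/tools/caffe train --solver=examples/mnist/lenet_multistep_solver.prototxt`, assuming the standard tool layout.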
3 changes: 3 additions & 0 deletions examples/mnist/readme.md
@@ -283,3 +283,6 @@ You just did! All the training was carried out on the GPU. In fact, if you would
 and you will be using CPU for training. Isn't that easy?
 
 MNIST is a small dataset, so training with GPU does not really introduce too much benefit due to communication overheads. On larger datasets with more complex models, such as ImageNet, the computation speed difference will be more significant.
+
+### How to reduce the learning rate at fixed steps?
+Look at lenet_multistep_solver.prototxt
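
For reference, the three policies named in the PR compute the effective rate roughly as follows (a sketch based on Caffe's `SGDSolver::GetLearningRate`; verify the exact forms against your version of `solver.cpp`):

```
# Effective rate per policy (sketch; iter is the current iteration):
#   multistep: base_lr * gamma ^ (number of stepvalue thresholds passed)
#   poly:      base_lr * (1 - iter / max_iter) ^ power
#   sigmoid:   base_lr * (1 / (1 + exp(-gamma * (iter - stepsize))))
```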
