
about running #17

Open
FDZero opened this issue Aug 19, 2019 · 1 comment
Comments

FDZero commented Aug 19, 2019

Hi, I have a new problem.
I ran this demo on Python 2.7 with Caffe (built from source in CPU-only mode). When I run "python main.py", it raises the following error:

root@linaro-alip:/usr/local/deeppark# python deep-parking/main.py
INFO 2019-08-19 07:54:48: Setting up mAlexNet trained on CNRPark-EXT_sunny, validated on CNRPark-EXT_overcast, CNRPark-EXT_rainy, tested on CNRPark-EXT_overcast_CNRPark-EXT_rainy ...
INFO 2019-08-19 07:54:52: Training on CNRPark-EXT_sunny while validating on CNRPark-EXT_overcast, CNRPark-EXT_rainy ...
I0819 07:54:53.452330 15857 net.cpp:200] pool3 needs backward computation.
I0819 07:54:53.452353 15857 net.cpp:200] relu3 needs backward computation.
I0819 07:54:53.452368 15857 net.cpp:200] conv3 needs backward computation.
I0819 07:54:53.452384 15857 net.cpp:200] pool2 needs backward computation.
I0819 07:54:53.452399 15857 net.cpp:200] relu2 needs backward computation.
I0819 07:54:53.452421 15857 net.cpp:200] conv2 needs backward computation.
I0819 07:54:53.452436 15857 net.cpp:200] pool1 needs backward computation.
I0819 07:54:53.452451 15857 net.cpp:200] relu1 needs backward computation.
I0819 07:54:53.452464 15857 net.cpp:200] conv1 needs backward computation.
I0819 07:54:53.452486 15857 net.cpp:202] label_data_1_split does not need backward computation.
I0819 07:54:53.452502 15857 net.cpp:202] data does not need backward computation.
I0819 07:54:53.452514 15857 net.cpp:244] This network produces output accuracy
I0819 07:54:53.452529 15857 net.cpp:244] This network produces output loss
I0819 07:54:53.452570 15857 net.cpp:257] Network initialization done.
I0819 07:54:53.452848 15857 solver.cpp:57] Solver scaffolding done.
I0819 07:54:53.452966 15857 caffe.cpp:239] Starting Optimization
I0819 07:54:53.452992 15857 solver.cpp:289] Solving mAlexNet
I0819 07:54:53.453006 15857 solver.cpp:290] Learning Rate Policy: step
I0819 07:54:53.453198 15857 solver.cpp:347] Iteration 0, Testing net (#0)
I0819 07:54:53.453311 15857 blocking_queue.cpp:49] Waiting for data
DEBUG 2019-08-19 07:55:06: Summarizing mAlexNet-tr_CNRPark-EXT_sunny-vl_CNRPark-EXT_overcast_CNRPark-EXT_rainy-ts_CNRPark-EXT_overcast_CNRPark-EXT_rainy ...
Traceback (most recent call last):
File "deep-parking/main.py", line 62, in <module>
pyffe.summarize(exps).to_csv('results.csv')
File "/usr/local/deeppark/deep-parking/pyffe/experiment.py", line 84, in summarize
r = e.summarize()
File "/usr/local/deeppark/deep-parking/pyffe/experiment.py", line 42, in decorator
return function(*args, **kwargs)
File "/usr/local/deeppark/deep-parking/pyffe/experiment.py", line 548, in summarize
last_iter = log_data['train']['iteration'][-1]
IndexError: list index out of range

Anyway, I want to know: if this demo runs successfully, how are the results shown? Will it create a .csv file or pop up a window?

fabiocarrara (Owner) commented Aug 19, 2019

For each experiment, a directory should be created containing the training outputs: all the Caffe prototxt files used to train the network and to evaluate it on the test sets, snapshots of the trained network, databases of predictions, a CSV file containing the results, etc.

Those are the only outputs of this code (which is meant to reproduce the paper's results).
No demo-like things are implemented here; any visualization of the results on the camera frames is up to you (contributions are welcome).

Regarding the error you get: your run has probably failed at the beginning of the training phase. I'm guessing the caffe process failed to find the image data and exited prematurely, which would explain why the script dies when trying to retrieve the training log data. Please double-check that your dataset paths are set correctly (see also #3).
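One quick way to rule out the missing-image-data cause is to verify, before launching training, that every path listed in your Caffe image-list files actually exists on disk. A minimal sketch (the list-file name, the root directory, and the "path label" line format are assumptions here; adapt them to however your splits are configured):

```python
import os
import sys


def check_caffe_list(list_file, root=""):
    """Check that every image referenced in a Caffe image-list file exists.

    Assumes each non-empty line has the form '<relative/path.jpg> <label>'.
    Prints each missing file and returns the count of missing entries.
    """
    missing = 0
    with open(list_file) as f:
        for lineno, line in enumerate(f, 1):
            line = line.strip()
            if not line:
                continue
            # Drop the trailing numeric label to recover the image path.
            path = line.rsplit(" ", 1)[0]
            full = os.path.join(root, path)
            if not os.path.isfile(full):
                missing += 1
                print("line %d: missing %s" % (lineno, full))
    return missing


if __name__ == "__main__" and len(sys.argv) > 1:
    # Hypothetical invocation: python check_lists.py <list_file> [image_root]
    list_file = sys.argv[1]
    root = sys.argv[2] if len(sys.argv) > 2 else ""
    print("%d missing files" % check_caffe_list(list_file, root))
```

If this reports missing files, fix the dataset root (or the list files) first; Caffe's data layer will otherwise stall or exit before the first iteration, leaving the training log empty, which is exactly what triggers the `IndexError` in `summarize`.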
