
Running only the saved models #8

Open · NarenBabuR opened this issue Oct 8, 2018 · 8 comments

@NarenBabuR

Can you please tell the changes to be made in main.py to run only the trained model

@fabiocarrara
Owner

fabiocarrara commented Oct 8, 2018

You can use the forward.py script provided in pyffe.

python pyffe/forward.py

usage: forward.py [-h] [-mf MEAN_FILE] [-mp MEAN_PIXEL] [--nogpu]
                  [-rf ROOT_FOLDER]
                  deploy_file caffemodel image_list output_file
[...]
positional arguments:
  deploy_file           Path to the deploy file
  caffemodel            Path to a .caffemodel
  image_list            Path to an image list
  output_file           Name of output file

Ignore the mean_file and mean_pixel arguments (they are not used in the deep-parking experiments).
You just need to provide:

  • the deploy.prototxt file
  • the trained model (caffemodel)
  • a text file containing the paths of the images to analyze, one per line (image_list)
  • a name for the output file (output_file); the output is a NumPy file (.npy)

Example:

python pyffe/forward.py path/to/deploy.prototxt path/to/snapshot_iter_xxx.caffemodel images.txt predictions.npy

where an example of images.txt is:

/path/to/image1.png
/path/to/image2.png
...
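
To inspect the result, here is a minimal sketch (not part of the original answer) for loading the .npy output. It assumes predictions.npy holds one row of class scores per image, in the same order as images.txt, with the "busy" class in the second column; verify this against your deploy.prototxt and training labels.

import numpy as np

# Load the scores written by pyffe/forward.py (assumed shape: n_images x n_classes).
preds = np.load("predictions.npy")

# Re-read the image list so each row can be matched back to its image path.
with open("images.txt") as f:
    paths = [line.strip() for line in f if line.strip()]

for path, scores in zip(paths, preds):
    # Assumption: index 1 is the "busy" class; swap if your label order differs.
    label = "busy" if np.argmax(scores) == 1 else "free"
    print(f"{path}: {label}  scores={scores}")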

@NarenBabuR
Author

NarenBabuR commented Oct 8, 2018

Thank you very much for the DETAILED reply.

  1. Can you just give an example for the above?

  2. Mainly, I need to work with a video file as input (like your YouTube video sample).
    Can you please tell me how to proceed with this?

Since I'm new to deep learning, I don't know much about it.
Thanks in advance.

@fabiocarrara
Owner

I updated the first answer with an example.
About videos, our model only works on pre-extracted image patches.
The visualization you see on YouTube uses our model and is implemented in Java + OpenCV. Unfortunately, we were not responsible for that part, and we do not have any code to share.
However, I think you can easily reimplement it with newer versions of OpenCV (>= 3.3), which added support for Caffe models in the DNN module.

Some guides for Python:
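
As an illustration of that approach, here is a minimal sketch (not from the original comment) that classifies one pre-extracted parking-space patch with OpenCV's DNN module. The input size, mean pixel values, and class order are assumptions; check them against the deploy.prototxt of the trained model.

import cv2
import numpy as np

# Load the trained Caffe model (requires OpenCV >= 3.3).
net = cv2.dnn.readNetFromCaffe("path/to/deploy.prototxt",
                               "path/to/snapshot_iter_xxx.caffemodel")

# Read one pre-extracted parking-space patch.
patch = cv2.imread("path/to/patch.png")

# Assumed 224x224 input and BGR mean pixel; adjust to match deploy.prototxt.
blob = cv2.dnn.blobFromImage(patch, scalefactor=1.0, size=(224, 224),
                             mean=(104, 117, 123))
net.setInput(blob)
scores = net.forward().flatten()

# Assumption: index 1 is the "busy" class.
print("busy" if np.argmax(scores) == 1 else "free", scores)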

@NarenBabuR
Author

NarenBabuR commented Oct 8, 2018 via email

@ahadafzal

I ran pyffe/forward.py:

python3 pyffe/forward.py ~/Downloads/CNRPark+EXT_Trained_Models_mAlexNet/mAlexNet-on-UFPR05/deploy.prototxt ~/Downloads/CNRPark+EXT_Trained_Models_mAlexNet/mAlexNet-on-UFPR05/snapshot_iter_16170.caffemodel images.txt prediction.npy

Here is the content of images.txt:

[attachment not shown]

Here is the output of prediction.npy:

[attachment not shown]

The output doesn't look right; the predictions are not what I expected.
Any help on this, @fabiocarrara, please?

@nikola310

Did you solve your problem, @ahadafzal? I'm having the same issue.

@ahadafzal

@nikola310 Nope, I didn't end up using this; I opted for a VGG16 model instead and recently published a paper in an IEEE venue (Scopus-indexed). 🙂

@nikola310

@ahadafzal I see. I'll have to check it out then 😃

In case anyone stumbles upon this problem: since I was testing on the same datasets used during training, my solution was to use the matching patch images for each model. So if you're trying to run a model trained on CNRPark, you have to use the CNRPark patch images.
