Hi @dutran, thank you for your great work.
I finetuned the r2plus1d model on my own dataset using train_net.py and got a best test accuracy of 0.72 with the corresponding model r2plus1d_3.mdl. However, when I run test_net.py on the same test dataset with r2plus1d_3.mdl, the test accuracy is low, about 0.2. I also tried extracting features with extract_features.py and then computing the test accuracy with dense_prediction_aggregation.py; that accuracy is even lower, at most 0.12.
This confuses me. Why are the test accuracies so different?
I know the value of decode_type may influence the test accuracy, but are there any other factors that could affect it? Could you give me some advice? Thank you.
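For reference, the dense-prediction step I describe can be sketched roughly like this: average the clip-level softmax scores belonging to each video and take the argmax as the video-level prediction. The array shapes and the averaging rule here are my assumptions for illustration, not the exact I/O of dense_prediction_aggregation.py:

```python
import numpy as np

# Toy clip-level softmax scores: 4 clips, 3 classes.
# Rows 0-1 come from video 0, rows 2-3 from video 1.
clip_scores = np.array([[0.7, 0.2, 0.1],
                        [0.6, 0.3, 0.1],
                        [0.1, 0.8, 0.1],
                        [0.2, 0.7, 0.1]])
video_ids = np.array([0, 0, 1, 1])
ground_truth = {0: 0, 1: 1}  # video_id -> true label

correct = 0
for vid in np.unique(video_ids):
    # Average the scores of all clips from this video, then predict.
    avg = clip_scores[video_ids == vid].mean(axis=0)
    correct += int(avg.argmax() == ground_truth[vid])

accuracy = correct / len(ground_truth)
print(accuracy)  # 1.0 on this toy data
```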
It is hard to tell why it gives you different performance since you did not provide enough data/info here. But it is worth checking whether you used the same hyper-parameter settings for train_net vs. test_net and feature_extraction.
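One quick way to do this check is to write down the flags passed to each script and diff them mechanically. The flag names and values below are purely illustrative (a sampling_rate mismatch is a common culprit between training and evaluation); fill them in from your actual command lines:

```python
# Illustrative flag dicts; copy the real values from your train_net.py,
# test_net.py, and extract_features.py command lines.
train_args = {"clip_length_rgb": 16, "sampling_rate": 1,
              "scale_w": 171, "scale_h": 128,
              "crop_size": 112, "num_labels": 10}
test_args = {"clip_length_rgb": 16, "sampling_rate": 2,  # mismatch!
             "scale_w": 171, "scale_h": 128,
             "crop_size": 112, "num_labels": 10}

def diff_args(a, b):
    """Return the flags whose values differ between two settings."""
    return {k: (a.get(k), b.get(k))
            for k in set(a) | set(b) if a.get(k) != b.get(k)}

print(diff_args(train_args, test_args))  # {'sampling_rate': (1, 2)}
```

Any non-empty diff on input-related flags (clip length, sampling rate, scaling, crop size) is enough to tank test accuracy even with a correctly trained model.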
Thank you for your response.
Following the guideline, the .csv input file for train_net and test_net has the columns "org_video, label", and the .csv input file for feature_extraction has the columns "org_video, label, start_frm, video_id". From these I create the lmdb files. The hyper-parameter settings for train_net, test_net, and feature_extraction are shown in the picture.
I checked the hyper-parameters, but I could not find anything wrong. Thank you.
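For concreteness, the two list formats described above can be written like this. The file names and the example video paths are made up for illustration; only the column layouts come from the thread:

```python
import csv

# List for train_net.py / test_net.py: one video path and its label per row.
with open("train_test_list.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["org_video", "label"])
    w.writerows([("videos/clip_0001.mp4", 0),
                 ("videos/clip_0002.mp4", 3)])

# List for extract_features.py: adds a start frame and a video id per row,
# so clip-level features can later be grouped back to their source video.
with open("feature_list.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["org_video", "label", "start_frm", "video_id"])
    w.writerows([("videos/clip_0001.mp4", 0, 0, 0),
                 ("videos/clip_0002.mp4", 3, 0, 1)])
```

If the video_id column is inconsistent between the feature list and the aggregation step, clips get averaged into the wrong videos, which would also produce very low aggregated accuracy.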