explanation of the val.py #8968
Assuming that your two datasets contain the same classes and you have a --data dataset.yaml file for each of them: if you want to "test" one model on the other dataset, you have to use detect.py or Torch Hub inference and loop through the images of the opposite dataset. I hope this helps
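To make the "loop through the opposite dataset's images" idea concrete, here is a minimal sketch. In practice `load_model` would be something like `torch.hub.load('ultralytics/yolov5', 'custom', path='dataset1_best.pt')` (the weights path is hypothetical); it is stubbed here so the loop structure itself is runnable anywhere.

```python
from pathlib import Path

# Stub standing in for torch.hub.load('ultralytics/yolov5', 'custom', ...):
# the returned callable takes a batch of image paths and returns one result
# dict per image, mimicking the shape of real inference output.
def load_model(weights):
    return lambda batch: [{"image": p, "detections": []} for p in batch]

def infer_on_dataset(weights, image_dir, batch_size=16):
    """Run a trained model over every .jpg in the other dataset's image dir."""
    model = load_model(weights)
    paths = sorted(str(p) for p in Path(image_dir).glob("*.jpg"))
    results = []
    for i in range(0, len(paths), batch_size):        # simple manual batching
        results.extend(model(paths[i:i + batch_size]))
    return results
```

With the real Torch Hub model swapped in, each result would carry the boxes and classes predicted on the opposite dataset's images.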
Thanks a lot! I have some other questions, please. The second one is: do I need to reset the parameters to match my custom-trained model? I used a batch size of 16 and 4 workers, but the defaults in your val.py are 32 and 8 respectively.
The last question: can I use val.py with two different datasets that have a different number of classes?
You mention them as two datasets, which leads me to believe that they are placed in different locations, and that would be the main difference between dataset1.yaml and dataset2.yaml.
These are flags for the smart inference mode; their behavior is described in lines 366 to 391:

```python
if opt.task in ('train', 'val', 'test'):  # run normally
```
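For readers without the file open, the surrounding dispatch can be paraphrased roughly as follows. This is a simplified sketch, not the actual val.py code: the real branches call `run(...)` with different settings, while here each branch just returns a label so the control flow can be inspected.

```python
# Rough paraphrase of the task dispatch in yolov5's val.py main().
def dispatch(task):
    if task in ("train", "val", "test"):  # run normally on the chosen split
        return "run normally"
    elif task == "speed":                 # speed benchmarks
        return "speed benchmark"
    elif task == "study":                 # speed vs mAP study
        return "study"
    raise ValueError(f"unsupported task: {task}")
```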
No, you can use any batch size and worker count that fits your machine, to make it run as fast as possible.
Nice that you found the answer on your own. I think if you want to validate on a dataset that does not match your .pt file, you have to rewrite val.py to ignore the classes that haven't been trained on. I hope this helps
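A minimal sketch of the "ignore untrained classes" idea, assuming detections come as `(x1, y1, x2, y2, conf, cls)` tuples as in YOLOv5's flat output format; the helper name is hypothetical and this is not part of val.py itself.

```python
# Hypothetical helper: drop detections (or labels) whose class id the model
# was never trained on, so metrics are computed only over shared classes.
# Each detection is assumed to be (x1, y1, x2, y2, conf, cls).
def filter_known_classes(detections, trained_classes):
    keep = set(trained_classes)
    return [d for d in detections if int(d[5]) in keep]
```

Applying this to both predictions and ground-truth labels before metric computation would let a model trained on fewer classes be scored fairly on the larger dataset.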
You are my hero, thanks a lot! Do I need to reset the parameters to match my custom-trained model?
val.py essentially uses the same dataloader as train.py, I believe, so you can use your values. But as far as I know, val.py is not as demanding, so you might be able to perform validation faster by increasing batch size and workers compared to your training. At some point you will reach your memory/CPU limit, which will probably cause a crash. Please don't forget to mark the issue as solved if your question has been answered 🙂
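To illustrate how the defaults discussed above get overridden, here is a minimal argparse sketch. The flag names mirror val.py's `--batch-size` and `--workers` options with their defaults of 32 and 8; everything else is simplified for illustration.

```python
import argparse

# Minimal sketch of val.py-style options: defaults of 32 and 8 as discussed
# above, overridable with whatever values fit your machine.
def make_parser():
    p = argparse.ArgumentParser()
    p.add_argument("--batch-size", type=int, default=32)
    p.add_argument("--workers", type=int, default=8)
    return p

# Passing explicit values, e.g. the 16 / 4 used during training:
opt = make_parser().parse_args(["--batch-size", "16", "--workers", "4"])
```

Omitting the flags entirely simply falls back to the defaults, which is usually fine for validation.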
But I still did not get what I should do here. What are the commands supposed to be?
Isn't the code from my first answer enough? If you want to, you can set batch size and workers, but if it can run on the defaults then that will make validation faster.
@MartinPedersenpp I noticed a misunderstanding in the previous response. To evaluate two different trained models, each with its own YAML file, using val.py, you can use the following commands:

For dataset 1:

```shell
python path/to/val.py --weights dataset1bestorlast.pt --data dataset1.yaml
```

For dataset 2:

```shell
python path/to/val.py --weights dataset2bestorlast.pt --data dataset2.yaml
```

You can keep the batch size and workers as they are in val.py, or adjust them according to your specific machine's capabilities; the default values should work efficiently for validation. I hope this clears up any confusion. Let me know if you need further assistance.
Search before asking
Question
Hi there!
Thanks a lot for your work.
I have a question, please: I could not understand the usage of val.py.
In my case, I have two different datasets, and both are trained with YOLOv5. Now, is it possible to use val.py to validate the model trained on each dataset against the other one? For example, I have dataset1 and want to test or validate its model on dataset2. Is that possible?
Could you please answer with an example of the commands?
Thanks a lot
Additional
No response