Questions about whether to use additional datasets #5
Hi Olaf, thank you for your interest! Yes, I used GPUs for training. It seems that something went wrong during training; maybe the scale parameter of the loss function somehow blew up. I have sometimes observed this as well. Does this error occur every time you train? If you want, you can try replacing the loss with the AdaProj loss, which is numerically much more stable: https://github.com/wilkinghoff/AdaProj Best,
First of all, thank you for your advice; I will run the code from the link you sent on my server. I have encountered the above problem about 8 times now. Secondly, may I ask two questions? In the ./eval_data directory, I put the additional dataset and the evaluation dataset together according to machine type, and then put the development dataset in the ./dev_data directory. I want to confirm whether this is correct? Thank you
Hi Kevin,
Hi Olaf, yes, I think so. You should have a structure like this: Best,
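The directory layout that was originally shown in this reply did not survive the scrape. Based on the ./dev_data and ./eval_data paths discussed above, it was presumably similar to the following sketch; the placeholder names and the train/test split are assumptions, not taken from the repository:

```
dev_data/
└── <machine_type>/
    ├── train/      (development training data)
    └── test/       (development test data)
eval_data/
└── <machine_type>/
    ├── train/      (additional training data)
    └── test/       (evaluation data)
```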
Hi Kevin, additionally, I will make sure to cite your work wherever I use your code, acknowledging your contribution. Thank you again for your support! Best regards,
Hi Olaf, that's good! I sometimes noticed this error when storing and re-loading the extracted features to the hard disk within the same run. You can either just re-run the script after the features are already stored, or only keep them in RAM, to avoid the issue. Best,
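The two workarounds can be sketched as a simple cache guard. The function and file names here are hypothetical illustrations, not taken from the repository:

```python
import os
import numpy as np

def get_features(extract_fn, cache_path="features.npy", use_disk_cache=True):
    """Return features, either freshly extracted or loaded from an earlier run.

    With use_disk_cache=False the features stay in RAM only, which skips the
    store/re-load step within a single run entirely (the second workaround).
    """
    if use_disk_cache and os.path.exists(cache_path):
        # Re-run after the features were already stored (the first workaround):
        # the features are simply loaded instead of re-extracted.
        return np.load(cache_path)
    features = extract_fn()
    if use_disk_cache:
        np.save(cache_path, features)
    return features

# Example with a dummy extractor standing in for the real feature extraction.
feats = get_features(lambda: np.zeros((4, 128)), use_disk_cache=False)
print(feats.shape)  # (4, 128)
```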
Hi Olaf, sorry, but I cannot provide you with any other code, as this is the version I have been running on my machine. Best,
Thank you~
Hi Olaf,
Hi Kevin, Best, Olaf
Hi Olaf, the results for the evaluation set are completely random, and the reason is that the audio recordings and label files do not correspond to each other because they are not sorted consistently. This depends on your operating system. You can try sorting the file lists before loading them (both the files themselves and the corresponding labels). Best,
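A minimal sketch of such a fix, with hypothetical directory names and file extensions: sort both lists with the same key, so that index i of the audio list matches index i of the label list on every operating system.

```python
import glob
import os

def load_file_lists(audio_dir, label_dir):
    """Return audio and label paths sorted consistently by basename.

    os.listdir / glob return entries in an OS-dependent order, so an
    explicit sort is needed to keep recordings and labels aligned by index.
    """
    audio_files = sorted(glob.glob(os.path.join(audio_dir, "*.wav")),
                         key=os.path.basename)
    label_files = sorted(glob.glob(os.path.join(label_dir, "*.csv")),
                         key=os.path.basename)
    return audio_files, label_files
```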
Hi Kevin,
Hi Olaf,
Hi Olaf,
Hi Olaf,
as these lines take the mean over 10 independent trials and thus give more reliable results than just taking the results of a single run. Best,
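The averaging described here can be illustrated with a small sketch; the metric values below are made up, not results from the repository:

```python
import numpy as np

# Hypothetical scores (in %) from 10 independent training runs.
trial_scores = np.array([61.2, 63.5, 59.8, 62.1, 60.7,
                         64.0, 58.9, 61.8, 62.6, 60.4])

# Reporting the mean (and spread) over all trials smooths out run-to-run
# variance, which is more reliable than quoting any single run.
print(f"mean: {trial_scores.mean():.2f} +/- {trial_scores.std():.2f}")
```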
Hi Kevin, thanks for your help. Thanks again, and I wish you all the best in the future. Olaf
Hi Kevin,
I am very interested in your great work at DCASE2023, so I have some questions. Did you use a GPU for training in your TensorFlow code? I got an error during the training process. Have you encountered this before? How did you handle it? Thank you. Here is the error message and error code.
Thank you!
Olaf