Different result on training #1
Hi Fanhan,

I will have a look when I have time, as I am currently travelling. But from your results it seems that CE is even better than any of the other models, which is not reasonable to me. It would be helpful if you could provide some details, such as your definition of CE and the preprocessing you used, so that I can replicate your results and understand them precisely.
Best regards,
Daniel
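Since part of the disagreement hinges on what the CE baseline actually computes, here is a minimal NumPy sketch of standard categorical cross-entropy as discussed above. This is an illustrative definition, not the repository's Keras implementation; the function name and clipping epsilon are assumptions.

```python
import numpy as np

def categorical_cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean categorical cross-entropy over a batch.
    y_true: one-hot labels, shape (n, c); y_pred: softmax probabilities, shape (n, c).
    Probabilities are clipped to avoid log(0)."""
    y_pred = np.clip(y_pred, eps, 1.0)
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=1))

# Example: two samples, three classes.
y_true = np.array([[1, 0, 0], [0, 1, 0]], dtype=float)
y_pred = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])
loss = categorical_cross_entropy(y_true, y_pred)
```

With noisy labels this loss is applied to the (possibly corrupted) labels as-is, which is exactly why its strong reported accuracy here is surprising.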
I have the same issue: I can't reproduce the results in the paper using the D2L method, even though I didn't change the code. @xingjunm, could you check the source code?
Hi pokaxpoka, thank you for your report. I do find a reproducibility issue when testing with a fresh run on new devices, and I will try to fix it. Can you provide the details of your results? Did it fail to converge on CIFAR-10 with 40%/60% noise rates?
Thanks @xingjunm. When I tried your method, it failed to converge on CIFAR-10 (40%/60% noise rates) and on CIFAR-100. Thanks for your update.
Any update? Has anyone been able to replicate the results?
I have uploaded an old version of the code at old_version/d2l_old.zip. Can someone test whether this version works?
Thank you all for your interest, and sorry that it took me so long to find time to fix the issue. Good luck to you all with your papers.
@jiangfanhan, I also observed that the accuracy of CCE is much higher than reported in the paper when an early-stopping criterion is used with respect to (noisy) validation data. I got CCE accuracy = 0.81.
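Early stopping on a noisy validation set effectively keeps the weights from just before the network starts memorizing label noise, which can explain a much stronger CCE baseline than training to completion. A minimal sketch of that selection logic (the function name and patience value are illustrative, not from the repository):

```python
def early_stopping_epoch(val_acc_history, patience=5):
    """Index of the epoch whose weights early stopping would keep:
    the best validation accuracy seen so far, with training halted
    once `patience` epochs pass without improvement."""
    best_epoch, best_acc = 0, float("-inf")
    for epoch, acc in enumerate(val_acc_history):
        if acc > best_acc:
            best_epoch, best_acc = epoch, acc
        elif epoch - best_epoch >= patience:
            break  # stop training; restore the best weights
    return best_epoch

# With noisy labels, validation accuracy typically peaks early and
# then decays as memorization sets in, so the kept epoch precedes
# the final one.
picked = early_stopping_epoch(
    [0.50, 0.60, 0.70, 0.65, 0.64, 0.63, 0.62, 0.61], patience=5)
```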
Dear Xingjun,

I have tried your method to train the 12-layer CNN on CIFAR-10 with a 20% noise rate, and I also observe the decrease and subsequent increase of the LID score. But in my experiment the test accuracy is 88.34% using just cross-entropy as the loss function (73.12% in your paper), and for a 40% noise rate the test accuracy is 84.88% (65.07% in your paper). These results are even better than those obtained with your D2L method. The only difference in the training process may lie in the preprocessing of the training images (I use the preprocessing described in the loss-correction method, CVPR'17). I wonder why the difference is so large, and whether you could try the preprocessing method from CVPR'17 and train the network again.

Thanks!
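For readers trying to reproduce the LID behaviour mentioned above: the D2L paper estimates Local Intrinsic Dimensionality with the maximum-likelihood estimator over k-nearest-neighbour distances within a batch of representations, LID(x) = -1 / mean_i log(r_i(x) / r_k(x)). A minimal NumPy sketch (the function name, batch layout, and k are illustrative; the repository's own code may differ in details):

```python
import numpy as np

def lid_mle(batch, k=20):
    """Per-point maximum-likelihood LID estimate from the k nearest
    neighbours within `batch` (shape (n, features), with n > k and
    distinct points)."""
    # Pairwise Euclidean distances, shape (n, n).
    d = np.linalg.norm(batch[:, None, :] - batch[None, :, :], axis=2)
    # Drop the zero self-distance; keep the k smallest neighbour distances.
    r = np.sort(d, axis=1)[:, 1:k + 1]
    # LID = -1 / mean(log(r_i / r_k)); the mean is negative, so LID > 0.
    return -1.0 / np.mean(np.log(r / r[:, -1:]), axis=1)

# Points on a line should have LID near 1 away from the boundary.
batch = np.arange(100, dtype=float).reshape(-1, 1)
lid = lid_mle(batch, k=20)
```

The characteristic dip-then-rise of the batch-averaged LID during training is what D2L uses to detect the onset of noise memorization.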