
Confusing on using the margin parameter m in AdaCos loss #13

Open
rose-jinyang opened this issue Apr 21, 2020 · 5 comments
@rose-jinyang

Hello, how are you?
Thanks for contributing this project.
But I am confused about the margin parameter m in the AdaCos loss: you do not use m in the AdaCos layer.
How should I understand this?
Thanks

@rose-jinyang
Author

Hi,
I also found that your AdaCos layer implementation differs from the one described here:
https://cpp-learning.com/adacos/#AdaCos
[image attachment]
How should I understand the difference?
Please let me know as soon as possible.
Thanks

@rose-jinyang rose-jinyang changed the title Confusing on using margin parameter m in AdaCos loss Confusing on using the margin parameter m in AdaCos loss Apr 21, 2020
@rose-jinyang
Author

Hi,
Could you implement a Keras version of AdaCos?
I found that DaisukeIshibe/Keras-Adacos has some issues with dynamic tuning of the scale parameter.
I'm especially interested in a multi-GPU implementation.
Thanks

@rose-jinyang
Author

rose-jinyang commented Apr 24, 2020

Hi,
Did you train the AdaCos model on multiple GPUs?
I ran into an issue when training a Keras version of AdaCos ported from your torch version.
I suspect the cause may be how multiple GPUs update the shared scale parameter.
What do you think?
Thanks

@ReverseSystem001

> But I am confused about the margin parameter m in the AdaCos loss: you do not use m in the AdaCos layer. How should I understand this?

Section 3.3 of the paper says: "we will focus on automatically tuning the scale parameter s in the remainder of this paper." It seems the authors do not use the margin m.

@BRO-HAMMER

BRO-HAMMER commented Aug 28, 2020

Edit: I found the explanation:
"We therefore propose to automatically tune the scale parameter s and eliminate the margin parameter m from our loss function, which makes our proposed AdaCos loss different from state-of-the-art softmax loss variants with angular margin."
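To make that concrete, here is a minimal PyTorch sketch of the idea in the quote: there is no margin term m at all; only the scale s is re-estimated on each forward pass, following the formulas in Section 3.3 of the paper (initial s = √2·log(C−1), then s = log(B_avg) / cos(min(π/4, θ_med))). This is an illustrative sketch, not the repository's actual implementation; names like `AdaCos`, `W`, and `s` are my own.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaCos(nn.Module):
    """Illustrative AdaCos sketch: the margin m is eliminated and only
    the scale s is tuned automatically (paper, Section 3.3)."""

    def __init__(self, num_features, num_classes):
        super().__init__()
        self.W = nn.Parameter(torch.randn(num_classes, num_features) * 0.01)
        self.num_classes = num_classes
        # Fixed initial scale from the paper: sqrt(2) * log(C - 1)
        self.s = math.sqrt(2) * math.log(num_classes - 1)

    def forward(self, x, labels):
        # Cosine similarities between L2-normalized features and weights
        logits = F.linear(F.normalize(x), F.normalize(self.W))
        theta = torch.acos(logits.clamp(-1 + 1e-7, 1 - 1e-7))
        with torch.no_grad():
            one_hot = F.one_hot(labels, self.num_classes).bool()
            # B_avg: batch average of sum_j exp(s * cos theta_j)
            # over the non-target classes j != y_i
            B_avg = torch.where(one_hot,
                                torch.zeros_like(logits),
                                torch.exp(self.s * logits)).sum(1).mean()
            # Median angle to the target class over the batch
            theta_med = theta[one_hot].median()
            # Dynamic scale update: s = log(B_avg) / cos(min(pi/4, theta_med))
            self.s = (torch.log(B_avg) /
                      torch.cos(torch.clamp(theta_med,
                                            max=math.pi / 4))).item()
        # Note: no margin is added to the target logit anywhere
        return self.s * logits
```

The output `self.s * logits` is then fed to a plain cross-entropy loss; because s is recomputed per batch, there is no margin hyperparameter left to tune.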
