nn4.v2 Training Progress #55
Reduces the number of parameters from 7472144 to 6959088.
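As a rough sanity check on counts like these: the parameter count of a single 2-D convolution is in_channels × out_channels × k², plus one bias per output channel. A minimal sketch (plain Python; the helper name and the 3→64 example layer are hypothetical, not taken from nn4):

```python
def conv_params(c_in, c_out, k, bias=True):
    """Parameter count of a single 2-D convolution layer."""
    return c_in * c_out * k * k + (c_out if bias else 0)

# Hypothetical example: a 7x7 conv from 3 to 64 channels
print(conv_params(3, 64, 7))   # 9472

# Size of the reduction quoted above for nn4.v2
print(7472144 - 6959088)       # 513056 parameters removed
```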
Hi all, I started training a new model on Dec 20 and it's still improving now (Dec 28). It looks a lot different than the loss plot for nn4.v1. I think the variance at the beginning of the in-progress experiment shows … Also, I think the LRN next to the pooling in the early layers makes the … Interested to hear anybody else's interpretations. -Brandon
nn4.v2 released! Details in this mailing list post:
@bamos @melgor.
And I'd like to know the reason for doing so. Was it to reduce memory, or something else? I can't find any text that mentions the reason. By the way, my … Say, with the same epoch_size setting, say 250, … Got any ideas about potential reasons?
Hi @myme5261314
I didn't include these because section 3.3 of the FaceNet paper says:
Unfortunately the phrasing is vague and doesn't say which layers they removed the 5x5 convolutions from. I haven't tried training with the 5x5 convolutions added here. -Brandon
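For intuition about what dropping a 5x5 branch saves, here's a sketch (plain Python; the channel counts are borrowed from GoogLeNet's inception(3a) for illustration and are not necessarily nn4's) comparing an Inception module's parameter count with and without the 5x5 path:

```python
def conv_params(c_in, c_out, k):
    """Parameters of a k x k convolution, including bias."""
    return c_in * c_out * k * k + c_out

def inception_params(c_in, n1, r3, n3, r5, n5, pp, use_5x5=True):
    """Parameter count of a GoogLeNet-style Inception module.

    Branches: 1x1, [1x1 reduce -> 3x3], optionally [1x1 reduce -> 5x5],
    and a 1x1 pool projection.
    """
    total = conv_params(c_in, n1, 1)
    total += conv_params(c_in, r3, 1) + conv_params(r3, n3, 3)
    if use_5x5:
        total += conv_params(c_in, r5, 1) + conv_params(r5, n5, 5)
    total += conv_params(c_in, pp, 1)
    return total

# Hypothetical inception(3a)-like sizes: 192 in, branches 64 / 96->128 / 16->32 / 32
with_5x5 = inception_params(192, 64, 96, 128, 16, 32, 32)
without_5x5 = inception_params(192, 64, 96, 128, 16, 32, 32, use_5x5=False)
print(with_5x5, without_5x5, with_5x5 - without_5x5)  # 163696 147776 15920
```

Removing the 5x5 path from a handful of modules at the larger feature-map sizes can plausibly add up to a reduction on the order of the 513k parameters quoted earlier in the thread.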
Hi,
About slow convergence:
What about the optimizer? OpenFace uses AdaDelta. Are you using the same optimizer?
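For reference, AdaDelta maintains running averages of squared gradients and squared updates, so it needs no hand-tuned learning rate. A minimal sketch of the update rule on a toy 1-D quadratic (pure Python; this is an illustration of the algorithm, not the OpenFace training code):

```python
def adadelta_step(grad, eg2, edx2, rho=0.95, eps=1e-6):
    """One AdaDelta update: returns (delta_x, new_eg2, new_edx2)."""
    eg2 = rho * eg2 + (1 - rho) * grad ** 2            # running avg of squared grads
    dx = -(((edx2 + eps) ** 0.5) / ((eg2 + eps) ** 0.5)) * grad
    edx2 = rho * edx2 + (1 - rho) * dx ** 2            # running avg of squared updates
    return dx, eg2, edx2

# Toy problem: minimize f(x) = x^2 starting from x = 5
x, eg2, edx2 = 5.0, 0.0, 0.0
for _ in range(500):
    dx, eg2, edx2 = adadelta_step(2 * x, eg2, edx2)    # gradient of x^2 is 2x
    x += dx
print(x)  # moves toward 0, with no learning rate to tune
```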