
Not always true. #5

Open

xingp-ng opened this issue Apr 2, 2024 · 2 comments

Comments


xingp-ng commented Apr 2, 2024

We believe we have reproduced the expected results, but we still have a few questions:

  1. Only about one in eight generated results is suitable, which makes the method difficult to use in practice.

  2. The generated results may not be aligned with the content image.

  3. Is there any technique to alleviate these problems?

yardenfren1996 (Owner) commented

Regarding your questions: since this is an optimization process, training different LoRAs on the same image may yield different results due to different initializations. Sometimes the optimization struggles to 'learn' the given concept perfectly.
I recommend training with different seeds or adjusting other training parameters.
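
For illustration, a seed sweep could look roughly like the sketch below. The script name and flags are assumptions modeled on the diffusers DreamBooth-LoRA training scripts, so adjust them to match this repo's actual training interface:

```python
# Hypothetical seed sweep over a B-LoRA training script.
# Script name and flags are assumptions (modeled on the diffusers
# DreamBooth-LoRA scripts); adapt them to the repo's real interface.
import subprocess

for seed in (0, 1, 2, 3, 42):
    subprocess.run(
        [
            "accelerate", "launch", "train_dreambooth_b-lora_sdxl.py",
            "--instance_data_dir", "path/to/content_image_dir",
            "--output_dir", f"b_lora_seed_{seed}",
            "--seed", str(seed),
        ],
        check=True,  # stop the sweep if a run fails
    )
```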
Since this is a personalization technique, the resulting image may not align perfectly with the content image. I suggest integrating our approach with content-preservation techniques like ControlNet, although I haven't personally tested this.
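
As a rough, untested sketch of that combination using diffusers: load an SDXL ControlNet pipeline and apply the trained B-LoRA weights on top. The ControlNet checkpoint, LoRA path, edge-map path, and prompt below are all placeholders:

```python
# Untested sketch: SDXL + ControlNet (canny) with a trained B-LoRA
# loaded on top for style. Model IDs and paths are placeholders.
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Load the B-LoRA weights produced by one of the training runs.
pipe.load_lora_weights("b_lora_seed_42")

# An edge map extracted from the content image (placeholder path);
# ControlNet conditions generation on it to preserve the layout.
canny_image = load_image("path/to/canny_edge_map.png")

image = pipe("A [v] style image", image=canny_image).images[0]
image.save("stylized.png")
```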


liusida commented Jun 16, 2024

Ha-ha, interesting workaround. Five B-LoRAs with different seeds are about the same size as one full LoRA model xD
