Lora Tags and Comfy UI, and breaking random tags #130
If I'm tracking you correctly here, it looks like the issue is that the lora is pulled out of the prompt before the random tag is processed, i.e. you want it to only sometimes use the lora, but instead it's always being used?
This should be handled with the above commit (and an upstream rework). Let me know if there are any cases that still need to be handled better.
I will update and test this later tonight, but in my experience the ComfyUI prompt does in fact handle the lora tag anywhere in the prompt, at least with SDXL and the CLIPTextEncodeSDXL node. I had been testing loading a large chain of loras at a weight of 0 and then using the tag to trigger them when certain concepts needed some additional assistance. The part about it being passed as a lora loader node in Comfy I am not following; I don't see anything to add to the node network that would pull those loras from the prompt. That would be a fine solution, but I am missing something.
Comfy applies loras via the LoraLoader node. If you're seeing anything that indicates otherwise in testing, my first recommendation is to pick a different lora, one that is more likely to produce an extremely clear visual distinction (SDXL knows what a dwarf is), e.g. something trained on a specific character that the base model can't replicate well. Swarm's usage of the lora list input is within the workflow generator (i.e. it is incompatible with custom workflows at the moment), wherein it simply emits LoraLoader nodes for every lora you have selected.
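For reference, here is a minimal sketch of how one of those LoraLoader nodes looks in a ComfyUI API-format workflow, written as a Python dict; the node ids and lora filename are placeholders, not anything Swarm actually emits verbatim:

```python
# Hypothetical fragment of a ComfyUI API-format workflow. Node "4" is
# assumed to be the checkpoint loader; "my_lora.safetensors" is a
# placeholder filename.
workflow_fragment = {
    "10": {
        "class_type": "LoraLoader",
        "inputs": {
            "model": ["4", 0],   # MODEL output of the checkpoint loader
            "clip": ["4", 1],    # CLIP output of the checkpoint loader
            "lora_name": "my_lora.safetensors",
            "strength_model": 1.0,
            "strength_clip": 1.0,
        },
    },
}
```

Downstream nodes (the sampler's model input, the text encoders' clip input) would then reference `["10", 0]` and `["10", 1]` instead of the checkpoint loader directly.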
I had a misunderstanding of how that tag worked inside of Comfy; it does add the lora with the given weight to the model even if the lora isn't in the graph. Removing the lora prompt from the input on both sides, I do get the same output with the same prompt and seed. If I add it back in via the LoraLoader node or the prompt box in ComfyUI, the lora is applied; if I add it back in from the Generate tab, it doesn't matter which loras are referenced, there is no change to the output of the workflow. The issue is that when I add it back in via the prompt in Generate, I am not seeing that lora take any effect; I can add extreme ones that would change the entire style of the output, and nothing happens. Here is the workflow, if that is of any help.
Yeah, I replicated this. Although the base model understands what a dwarf is, it will often miss when asked for a female dwarf, and that lora solved for that. I think that in the workflow, the lora tag may have just been enough random tokens to keep some of the seeds I was on from producing a male dwarf, or a dwarf that looked more like an elf, and it was just a poor sample set. It would be nice to be able to dynamically load the loras from the prompt, but after digging into the ComfyUIAPIAbstractBackend, it looks like you are going to need to create a new custom node that can chain a bunch of loras from T2IParamTypes.Loras/T2IParamTypes.LoraWeights and inject them in a similar way to the prompt. And it would get more complicated if you wanted to inject loras at prompt time into multiple different nodes in the workflow. I wrote some code to modify the prompt behavior and inject them, to generate a bunch of outputs all at once 😄

```cs
// Rebuild <lora:name:weight> tags from the structured lora params
// and append them to the prompt text.
var loras = user_input.Get(T2IParamTypes.Loras);
var loraWeights = user_input.Get(T2IParamTypes.LoraWeights);
string loraString = "";
if (loras.Count > 0)
{
    StringBuilder rebuiltLoras = new StringBuilder();
    for (int i = 0; i < loras.Count; i++)
    {
        rebuiltLoras.Append($"<lora:{loras[i]}:{loraWeights[i]}>");
    }
    loraString = rebuiltLoras.ToString();
}
var prompt = $"{user_input.Get(T2IParamTypes.Prompt)}{loraString}";
```

So that was a fun little experiment to drive home how things actually work and to get to know the code base a little better. The work here is genuinely great and I can't wait to see how this evolves as it moves forward.
Actually! After thinking about it, I can solve this by just giving a cheat node for custom workflows to use:
It's basically a self-contained Python multi-lora-loader designed with inputs matched to a format Swarm can send them in, so if you have it present and load your workflow in Swarm, it will automatically detect & use it. (If you want multiple usages, you can also have primitives with […].) Thus, any Swarm prompt that adds loras to the standard params will automatically send the loras to this node, and it'll work as expected.
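For illustration, here is a rough sketch of what such a self-contained multi-lora-loader node can look like; the class name, the "name:weight per line" input format, and the parsing are assumptions for this sketch, not Swarm's actual node:

```python
# Sketch of a ComfyUI custom node that chains several loras from one string
# input, using ComfyUI's standard custom-node API and helpers.
import folder_paths
import comfy.sd
import comfy.utils

class MultiLoraLoaderSketch:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            "model": ("MODEL",),
            "clip": ("CLIP",),
            # Assumed input format: one "lora_name:weight" pair per line.
            "loras": ("STRING", {"multiline": True, "default": ""}),
        }}

    RETURN_TYPES = ("MODEL", "CLIP")
    FUNCTION = "load"
    CATEGORY = "loaders"

    def load(self, model, clip, loras):
        for line in loras.splitlines():
            if not line.strip():
                continue
            name, _, weight = line.rpartition(":")
            path = folder_paths.get_full_path("loras", name)
            lora = comfy.utils.load_torch_file(path, safe_load=True)
            # Apply the lora to both the model and the clip at equal strength.
            model, clip = comfy.sd.load_lora_for_models(
                model, clip, lora, float(weight), float(weight))
        return (model, clip)

NODE_CLASS_MAPPINGS = {"MultiLoraLoaderSketch": MultiLoraLoaderSketch}
```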
Man, you're fast. I started digging into how Comfy was coded and looking at writing a module to effectively do that. After digging through your code and seeing how the tags are being parsed, I was going to add to the tags: I was trying to use a third parameter for the name of the target node, something like `<lora:name:weight:targetnode>`, and use the code you already had to inject the data there.
I did find something with the random tags and the current tag processing, and with a bit of rest and an IDE set up, I have more details to start from this time :)
If you place a lora tag inside of the random tag, it gets stripped out.
Or, in this case, it breaks the tag processing entirely.
Stepping through the code: when it calls out to this method
https://github.com/FreneticLLC/FreneticUtilities/blob/master/FreneticUtilities/FreneticToolkit/StringConversionHelper.cs#L372
from here
https://github.com/Stability-AI/StableSwarmUI/blob/master/src/Text2Image/T2IParamInput.cs#L241C39-L241C39
it strips the lora tag entirely out of the random tag and replaces it with an empty string. The lora then shows up in the metadata as a model that was loaded, but the lora text is not passed to the ComfyUI prompt, so it doesn't actually trigger the use of the lora.
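To make the ordering problem concrete, here is a small standalone illustration (plain Python with regexes, not Swarm's actual C# implementation):

```python
# If lora tags are extracted *before* <random:...> is expanded, a lora placed
# inside a random block is recorded unconditionally and its text is lost.
import random
import re

prompt = "a dwarf warrior <random:<lora:femaledwarf:0.8>,plain armor>"

# Buggy order: extract and strip lora tags first...
loras = re.findall(r"<lora:([^<>]+)>", prompt)
stripped = re.sub(r"<lora:[^<>]+>", "", prompt)

# ...then expand the random tag; the lora option is now an empty string.
expanded = re.sub(r"<random:([^<>]*)>",
                  lambda m: random.choice(m.group(1).split(",")),
                  stripped)

print(loras)     # ['femaledwarf:0.8'] -- shows up in metadata every time
print(expanded)  # the lora text never reaches the ComfyUI prompt
# Expanding <random:...> *before* extracting loras avoids this.
```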
(Screenshots omitted: metadata as shown on the Generate page, and as received by Comfy.)