-
Same issue here.
-
Same issue.
-
As I'm not seeing a solution either here or in the GitHub issue, allow me to provide more information, as I'm investigating the same issue.

Step 1: During boot-up of the container, I spotted this error message. (See Attach #1)
Step 2: Suspecting, based on BTR's statement above, that it was failing to download the .ckpt, I searched around and found the Hugging Face download. I downloaded the file there and placed it in the directory.
Step 3: With the issue persisting, I additionally performed a file-size check, and the file had made the copy trip from my machine to the Unraid server intact. (See Attach #2)
Step 4: I looked back at Midnight Commander on the host OS and realized that the file in models/ldm is a symlink. Upon looking at the cached file in user_data, I found that it was 0 bytes.
Step 5: Deleted the 0-byte file the symlink points to, moved the sd-v1-4.ckpt file over to the user_data cached models directory, renamed it, and rebooted the container. Lo and behold, it's not erroring out anymore.

Conclusion: I'm a little confused, as the other posts in the Discord server and even the model manager show Hugging Face, but Attach #1 clearly shows a googleapi link. I'm not sure what the intention is, but it doesn't seem to be working by default. Otherwise, the container on Unraid does not include any instructions regarding this, and I'm not seeing them in the Docker install instructions.
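In shell terms, Steps 4 and 5 boil down to roughly the sketch below. The paths are assumptions from my own mapping (a models/ldm symlink into a user_data cache); adjust them to wherever your container maps those directories:

```bash
# Rough sketch of Steps 4-5. Paths are assumptions -- adjust to your mapping.
LINK="/sd/models/ldm/stable-diffusion-v1/model.ckpt"  # assumed symlink location
NEW_CKPT="/tmp/sd-v1-4.ckpt"                          # assumed manual download

TARGET="$(readlink -f "$LINK")"   # resolve the symlink into the user_data cache
ls -l "$TARGET"                   # Step 4: mine showed 0 bytes here

if [ ! -s "$TARGET" ]; then       # -s is true only for a non-empty file
    rm -f "$TARGET"               # Step 5: delete the 0-byte cached file
    mv "$NEW_CKPT" "$TARGET"      # move and rename the manual download
fi
```

After rebooting the container, the symlink resolved to the real checkpoint and the error was gone.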
-
Same issue for me. I couldn't find anything under Issues, so I have raised one from this discussion.
-
I have the same issue here on Unraid OS:

File "/sd/scripts/sd_utils.py", line 928, in load_sd_model
checking RealESRGAN_x4plus.pth...
checking RealESRGAN_x4plus_anime_6B.pth...
checking project.yaml...
checking model.ckpt...
checking waifu-diffusion.ckpt...
checking trinart.ckpt...
checking model__base_caption.pth...
checking pytorch_model.bin...
checking config.json...
checking merges.txt...
checking preprocessor_config.json...
checking special_tokens_map.json...
checking tokenizer.json...
checking tokenizer_config.json...
checking vocab.json...
Already up to date.

You can now view your Streamlit app in your browser.

Validating model files...
checking RealESRGAN_x4plus.pth...
checking RealESRGAN_x4plus_anime_6B.pth...
checking project.yaml...
checking model.ckpt...
checking waifu-diffusion.ckpt...

It looks like there are a few issues going on in these logs. First, there seems to be a problem loading "webui_streamlit.py", which is causing an error in the script runner.
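If the 0-byte-symlink finding from the earlier comment applies here too, the "checking ..." pass can report files as present even though they are empty. A quick hedged check, with directory names assumed from the comments above rather than taken from the container docs:

```bash
# Sketch: list model files that exist but are suspiciously small (< 1 MB).
# The directories are assumptions -- adjust to your container's mappings.
find /sd/models /sd/user_data -type f \
     \( -name '*.ckpt' -o -name '*.pth' -o -name '*.bin' \) \
     -size -1M -exec ls -lh {} \;
```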
-
Same issue for me.
-
I got it working with the wget command issued from the host, from someone listed above.
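For reference, this is roughly the shape of that host-side download. The Hugging Face URL and the destination path are assumptions pieced together from the comments above (and the model is gated, so the direct link may require an authenticated session):

```bash
# Sketch: fetch sd-v1-4.ckpt from the host and place it where the container's
# symlink expects it. URL and destination path are assumptions -- verify both.
wget -O /mnt/user/appdata/sygil-webui/user_data/model.ckpt \
     https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt
```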
-
Hey, I'm using https://hub.docker.com/r/hlky/sd-webui to build the Docker instance. It builds fine and the webui comes up with no errors in the logs, but when attempting to generate text-to-image it just spits out this error:
EOFError: Ran out of input
Traceback:
File "/opt/conda/lib/python3.8/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 562, in _run_script
exec(code, module.__dict__)
File "/sd/scripts/webui_streamlit.py", line 174, in <module>
layout()
File "/sd/scripts/webui_streamlit.py", line 138, in layout
layout()
File "/sd/scripts/txt2img.py", line 320, in layout
load_models(False, st.session_state["use_GFPGAN"], st.session_state["use_RealESRGAN"], st.session_state["RealESRGAN_model"], server_state["CustomModel_available"],
File "/sd/scripts/sd_utils.py", line 303, in load_models
config, device, model, modelCS, modelFS = load_sd_model(custom_model)
File "/sd/scripts/sd_utils.py", line 928, in load_sd_model
model = load_model_from_config(config, ckpt_path)
File "/sd/scripts/sd_utils.py", line 332, in load_model_from_config
pl_sd = torch.load(ckpt, map_location="cpu")
File "/opt/conda/lib/python3.8/site-packages/torch/serialization.py", line 713, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/opt/conda/lib/python3.8/site-packages/torch/serialization.py", line 920, in _legacy_load
magic_number = pickle_module.load(f, **pickle_load_args)
Tried to recreate it and it just did the same thing; not sure where I'm failing. The documentation is... lacking. FWIW, it's Unraid OS.
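That EOFError comes from torch.load at the very start of deserialization (reading the pickle magic number, per the traceback above), which is the symptom of a checkpoint file that exists but is empty or truncated; it matches the 0-byte-symlink finding earlier in the thread. A hedged pre-launch sanity check, with the path assumed from the traceback:

```bash
# Sketch: sanity-check the checkpoint before launching the UI.
# Path is an assumption based on the traceback -- adjust to your mapping.
CKPT="/sd/models/ldm/stable-diffusion-v1/model.ckpt"

ls -lh "$(readlink -f "$CKPT")"  # a healthy sd-v1-4.ckpt is roughly 4 GB;
                                 # 0 bytes reproduces "EOFError: Ran out of input"
file "$(readlink -f "$CKPT")"    # should report binary data, not empty or HTML
```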
running nvidia-smi in docker command shows its seeing the 1050ti just fine
nvidia-smi
Sat Nov 19 02:40:02 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.65.01    Driver Version: 515.65.01    CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:84:00.0 Off |                  N/A |
| 25%   60C    P8    N/A /  75W |      2MiB /  4096MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
Any ideas?