Microsoft Windows [Version 10.0.19045.5131]
(c) Microsoft Corporation. All rights reserved.

┌[ 6:13:35] [D:\!Down]
└─ #e:
┌[ 6:13:36] [E:\]
└─ #cd sddev
┌[ 6:13:38] [E:\sddev]
└─ #astart.bat
Using VENV: E:\sddev\venv
06:13:40-942887 INFO Starting SD.Next
06:13:40-946422 INFO Logger: file="E:\sddev\sdnext.log" level=DEBUG size=65 mode=create
06:13:40-948421 INFO Python: version=3.10.11 platform=Windows bin="E:\sddev\venv\Scripts\python.exe" venv="E:\sddev\venv"
06:13:41-142520 INFO Version: app=sd.next updated=2024-11-28 hash=75dd6219 branch=dev url=https://github.com/vladmandic/automatic/tree/dev ui=dev
06:13:41-898274 INFO Repository latest available 6846f4e5d3e650d2a8dc8460906901086a350d27 2024-11-22T18:01:15Z
06:13:41-905787 INFO Platform: arch=AMD64 cpu=Intel64 Family 6 Model 151 Stepping 5, GenuineIntel system=Windows release=Windows-10-10.0.19045-SP0 python=3.10.11 docker=False
06:13:41-906788 DEBUG Packages: venv=venv site=['venv', 'venv\\lib\\site-packages']
06:13:41-909101 INFO Args: ['--autolaunch', '--debug', '--uv']
06:13:41-909101 DEBUG Setting environment tuning
06:13:41-910101 DEBUG Torch allocator: "garbage_collection_threshold:0.40,max_split_size_mb:512,backend:cudaMallocAsync"
06:13:41-935794 DEBUG Torch overrides: cuda=False rocm=False ipex=False directml=False openvino=False zluda=False
06:13:41-942884 INFO CUDA: nVidia toolkit detected
06:13:42-118862 INFO Install: verifying requirements
06:13:42-127995 INFO Verifying packages
06:13:42-168561 DEBUG Timestamp repository update time: Fri Nov 29 05:42:25 2024
06:13:42-169559 INFO Startup: standard
06:13:42-170558 INFO Verifying submodules
06:13:44-038023 DEBUG Git submodule: extensions-builtin/sd-extension-chainner / main
06:13:44-115609 DEBUG Git submodule: extensions-builtin/sd-extension-system-info / main
06:13:44-180237 DEBUG Git submodule: extensions-builtin/sd-webui-agent-scheduler / main
06:13:44-251138 DEBUG Git submodule: extensions-builtin/sdnext-modernui / dev
06:13:44-321126 DEBUG Git submodule: extensions-builtin/stable-diffusion-webui-rembg / master
06:13:44-388454 DEBUG Git submodule: modules/k-diffusion / master
06:13:44-450828 DEBUG Git submodule: wiki / master
06:13:44-484936 DEBUG Register paths
06:13:44-530556 DEBUG Installed packages: 198
06:13:44-531540 DEBUG Extensions all: ['Lora', 'sd-extension-chainner', 'sd-extension-system-info', 'sd-webui-agent-scheduler', 'sdnext-modernui', 'stable-diffusion-webui-rembg']
06:13:44-659180 DEBUG Extension installer: E:\sddev\extensions-builtin\sd-webui-agent-scheduler\install.py
06:13:48-874781 DEBUG Extension installer: E:\sddev\extensions-builtin\stable-diffusion-webui-rembg\install.py
06:13:56-153373 DEBUG Extensions all: []
06:13:56-154886 INFO Extensions enabled: ['Lora', 'sd-extension-chainner', 'sd-extension-system-info', 'sd-webui-agent-scheduler', 'sdnext-modernui', 'stable-diffusion-webui-rembg']
06:13:56-155395 INFO Install: verifying requirements
06:13:56-156393 DEBUG Setup complete without errors: 1732909436
06:13:56-160914 DEBUG Extension preload: {'extensions-builtin': 0.0, 'extensions': 0.0}
06:13:56-161911 INFO Command line args: ['--autolaunch', '--debug', '--uv'] autolaunch=True uv=True debug=True
06:13:56-163435 DEBUG Env flags: ['SD_MODELSDIR=E:\\SM\\Data\\Models']
06:13:56-164420 DEBUG Starting module:
06:14:16-327336 INFO Torch: torch==2.5.1+cu124 torchvision==0.20.1+cu124
06:14:16-329359 INFO Packages: diffusers==0.32.0.dev0 transformers==4.46.2 accelerate==1.1.1 gradio==3.43.2
06:14:17-559210 DEBUG Huggingface cache: folder="C:\Users\paul_\.cache\huggingface\hub"
06:14:17-641610 INFO Device detect: memory=16.0 optimization=none
06:14:17-643611 DEBUG Read: file="config.json" json=50 bytes=2176 time=0.000
06:14:18-990133 DEBUG Setting validation: "extra_networks"="Lora" default="['All']" choices=['All']
06:14:18-990133 DEBUG Setting validation: "extra_networks"="Embedding" default="['All']" choices=['All']
06:14:18-991593 DEBUG Setting validation: "extra_networks"="Model" default="['All']" choices=['All']
06:14:18-993592 DEBUG Setting validation: "extra_networks"="History" default="['All']" choices=['All']
06:14:18-995093 INFO Engine: backend=Backend.DIFFUSERS compute=cuda device=cuda attention="Scaled-Dot-Product" mode=no_grad
06:14:19-004670 DEBUG Read: file="html\reference.json" json=59 bytes=31585 time=0.009
06:14:19-229582 INFO Torch parameters: backend=cuda device=cuda config=Auto dtype=torch.bfloat16 vae=torch.bfloat16 unet=torch.bfloat16 context=no_grad nohalf=False nohalfvae=False upscast=False deterministic=False test-fp16=True test-bf16=True optimization="Scaled-Dot-Product"
06:14:19-514442 DEBUG ONNX: version=1.20.1 provider=CUDAExecutionProvider, available=['AzureExecutionProvider', 'CPUExecutionProvider']
06:14:19-646632 INFO Device: device=NVIDIA GeForce RTX 4060 Ti n=1 arch=sm_90 capability=(8, 9) cuda=12.4 cudnn=90100 driver=566.14
06:14:19-768887 DEBUG Importing LDM
06:14:19-781673 DEBUG Entering start sequence
06:14:19-782198 INFO Using models path: E:\SM\Data\Models
06:14:19-787447 DEBUG Initializing
06:14:19-839354 INFO Available VAEs: path="E:\SM\Data\Models\VAE" items=8
06:14:19-840876 INFO Available UNets: path="E:\SM\Data\Models\UNET" items=1
06:14:19-842383 INFO Available TEs: path="E:\SM\Data\Models\Text-encoder" items=0
06:14:19-843496 INFO Disabled extensions: ['sdnext-modernui']
06:14:19-852827 DEBUG Read: file="cache.json" json=2 bytes=772 time=0.005
06:14:19-870299 DEBUG Read: file="metadata.json" json=481 bytes=1362346 time=0.016
06:14:19-925965 DEBUG Scanning diffusers cache: folder="E:\SM\Data\Models\Diffusers" items=22 time=0.04
06:14:19-927994 INFO Available Models: path="E:\SM\Data\Models\Stable-diffusion" items=99 time=0.08
06:14:20-058058 INFO Available Yolo: path="E:\SM\Data\Models\yolo" items=8 downloaded=6
06:14:20-059589 INFO Load extensions
06:14:20-596256 INFO Available LoRAs: path="E:\SM\Data\Models\Lora" items=411 folders=2 time=0.18
06:14:21-405165 INFO Extension: script='extensions-builtin\sd-webui-agent-scheduler\scripts\task_scheduler.py' Using sqlite file: extensions-builtin\sd-webui-agent-scheduler\task_scheduler.sqlite3
06:14:21-411165 DEBUG Extensions init time: 1.35 pulid_ext.py=0.24 Lora=0.40 sd-extension-chainner=0.22 sd-webui-agent-scheduler=0.39
06:14:21-443581 DEBUG Read: file="html/upscalers.json" json=4 bytes=2672 time=0.008
06:14:21-453741 DEBUG Read: file="extensions-builtin\sd-extension-chainner\models.json" json=24 bytes=2719 time=0.008
06:14:21-454956 DEBUG chaiNNer models: path="E:\SM\Data\Models\chaiNNer" defined=24 discovered=0 downloaded=0
06:14:21-458090 DEBUG Upscaler type=ESRGAN folder="E:\SM\Data\Models\ESRGAN" model="4xUltrasharp_4xUltrasharpV10" path="E:\SM\Data\Models\ESRGAN\4xUltrasharp_4xUltrasharpV10.pth"
06:14:21-459157 DEBUG Upscaler type=ESRGAN folder="E:\SM\Data\Models\ESRGAN" model="4x_foolhardy_Remacri" path="E:\SM\Data\Models\ESRGAN\4x_foolhardy_Remacri.pth"
06:14:21-460180 DEBUG Upscaler type=ESRGAN folder="E:\SM\Data\Models\ESRGAN" model="4x_NMKD-Siax_200k" path="E:\SM\Data\Models\ESRGAN\4x_NMKD-Siax_200k.pth"
06:14:21-461781 DEBUG Upscaler type=ESRGAN folder="E:\SM\Data\Models\ESRGAN" model="4x_Valar_v1" path="E:\SM\Data\Models\ESRGAN\4x_Valar_v1.pth"
06:14:21-462859 DEBUG Upscaler type=ESRGAN folder="E:\SM\Data\Models\ESRGAN" model="ESRGAN_4x" path="E:\SM\Data\Models\ESRGAN\ESRGAN_4x.pth"
06:14:21-466041 DEBUG Upscaler type=SCUNet folder="E:\SM\Data\Models\SCUNet" model="ScuNET" path="E:\SM\Data\Models\SCUNet\ScuNET.pth"
06:14:21-467550 DEBUG Upscaler type=SwinIR folder="E:\SM\Data\Models\SwinIR" model="SwinIR_4x" path="E:\SM\Data\Models\SwinIR\SwinIR_4x.pth"
06:14:21-468793 INFO Available Upscalers: items=60 downloaded=14 user=7 time=0.06 types=['None', 'Lanczos', 'Nearest', 'ChaiNNer', 'AuraSR', 'ESRGAN', 'LDSR', 'RealESRGAN', 'SCUNet', 'SD', 'SwinIR']
06:14:22-041184 INFO Available Styles: folder="E:\SM\Data\Models\styles" items=309 time=0.57
06:14:22-048502 INFO UI start
06:14:22-049502 DEBUG UI themes available: type=Standard themes=13
06:14:22-050509 INFO UI theme: type=Standard name="black-teal"
06:14:22-057058 DEBUG UI theme: css="E:\sddev\javascript\black-teal.css" base="sdnext.css" user="None"
06:14:22-060625 DEBUG UI initialize: txt2img
06:14:23-398765 DEBUG Networks: page='lora' items=411 subfolders=12 tab=txt2img folders=['E:\\SM\\Data\\Models\\Lora', 'E:\\SM\\Data\\Models\\LyCORIS'] list=0.84 thumb=0.05 desc=0.17 info=5.09 workers=8 sort=Date [Newest]
06:14:23-405071 DEBUG Networks: page='embedding' items=111 subfolders=3 tab=txt2img folders=['E:\\SM\\Data\\Models\\embeddings'] list=1.31 thumb=0.01 desc=0.03 info=1.04 workers=8 sort=Date [Newest]
06:14:23-413122 DEBUG Networks: page='model' items=157 subfolders=9 tab=txt2img folders=['E:\\SM\\Data\\Models\\Stable-diffusion', 'E:\\SM\\Data\\Models\\Diffusers', 'models\\Reference'] list=0.28 thumb=0.05 desc=0.08 info=0.70 workers=8 sort=Date [Newest]
06:14:23-417129 DEBUG Networks: page='history' items=0 subfolders=0 tab=txt2img folders=[] list=0.00 thumb=0.00 desc=0.00 info=0.00 workers=8 sort=Date [Newest]
06:14:23-698966 DEBUG UI initialize: img2img
06:14:23-847736 DEBUG UI initialize: control models=E:\SM\Data\Models\control
06:14:24-401377 DEBUG Read: file="ui-config.json" json=0 bytes=2 time=0.001
06:14:24-550407 DEBUG UI themes available: type=Standard themes=13
06:14:25-327093 DEBUG Extension list: processed=397 installed=6 enabled=5 disabled=1 visible=397 hidden=0
06:14:25-503983 DEBUG Root paths: ['E:\\sddev']
06:14:25-599100 INFO Local URL: http://127.0.0.1:7860/
06:14:25-600128 DEBUG Gradio functions: registered=1864
06:14:25-603099 DEBUG API middleware: [, ]
06:14:25-606257 DEBUG API initialize
06:14:25-972396 INFO [AgentScheduler] Task queue is empty
06:14:25-974216 INFO [AgentScheduler] Registering APIs
06:14:26-076516 DEBUG Scripts setup: ['IP Adapters:0.034', 'XYZ Grid:0.036', 'Ctrl-X: Controlling Structure and Appearance:0.007', 'Face: Multiple ID Transfers:0.017', 'K-Diffusion Samplers:0.153', 'LUT Color grading:0.007', 'PuLID: ID Customization:0.008', 'Style Aligned Image Generation:0.007', 'Video: AnimateDiff:0.009', 'Video: CogVideoX:0.01', 'Video: SVD:0.005', 'Video: VGen Image-to-Video:0.008']
06:14:26-077516 DEBUG Model metadata: file="metadata.json" no changes
06:14:26-080519 DEBUG Model requested: fn=run:
06:14:26-081522 INFO Load model: select="sd15\cyberrealistic_v40 [481d75ae9d]"
06:14:26-085041 INFO Autodetect model: detect="Stable Diffusion" class=StableDiffusionPipeline file="E:\SM\Data\Models\Stable-diffusion\sd15\cyberrealistic_v40.safetensors" size=2034MB
06:14:26-098027 DEBUG Autodetect: modules={'cond_stage_model': {'transformer': {}}, 'first_stage_model': {'decoder': {}, 'encoder': {}, 'post_quant_conv': {}, 'quant_conv': {}}, 'model': {'diffusion_model': {}}, 'model_ema': {}} list=['alphas_cumprod', 'alphas_cumprod_prev', 'betas', 'cond_stage_model.transformer', 'first_stage_model.decoder', 'first_stage_model.encoder', 'first_stage_model.post_quant_conv', 'first_stage_model.quant_conv', 'log_one_minus_alphas_cumprod', 'model.diffusion_model', 'model_ema.decay', 'model_ema.num_updates', 'posterior_log_variance_clipped', 'posterior_mean_coef1', 'posterior_mean_coef2', 'posterior_variance', 'sqrt_alphas_cumprod', 'sqrt_one_minus_alphas_cumprod', 'sqrt_recip_alphas_cumprod', 'sqrt_recipm1_alphas_cumprod'] time=0.01
06:14:26-101086 INFO Autodetect vae: detect="Stable Diffusion" class=StableDiffusionPipeline file="E:\SM\Data\Models\Stable-diffusion\sd15\cyberrealistic_v40.safetensors" size=2034MB
06:14:26-104594 DEBUG Autodetect: modules={'cond_stage_model': {'transformer': {}}, 'first_stage_model': {'decoder': {}, 'encoder': {}, 'post_quant_conv': {}, 'quant_conv': {}}, 'model': {'diffusion_model': {}}, 'model_ema': {}} list=['alphas_cumprod', 'alphas_cumprod_prev', 'betas', 'cond_stage_model.transformer', 'first_stage_model.decoder', 'first_stage_model.encoder', 'first_stage_model.post_quant_conv', 'first_stage_model.quant_conv', 'log_one_minus_alphas_cumprod', 'model.diffusion_model', 'model_ema.decay', 'model_ema.num_updates', 'posterior_log_variance_clipped', 'posterior_mean_coef1', 'posterior_mean_coef2', 'posterior_variance', 'sqrt_alphas_cumprod', 'sqrt_one_minus_alphas_cumprod', 'sqrt_recip_alphas_cumprod', 'sqrt_recipm1_alphas_cumprod'] time=0.00
06:14:26-106574 INFO Load module: type=VAE model="E:\SM\Data\Models\VAE\vae-ft-mse-840000-ema-pruned.safetensors" source=settings config={'low_cpu_mem_usage': False, 'torch_dtype': torch.bfloat16, 'use_safetensors': True, 'config': 'configs/sd15\\vae'}
Diffusers 1.27s/it ████████ 100% 6/6 00:07 00:00
Loading pipeline components...
06:14:35-423146 DEBUG Setting model: pipeline=StableDiffusionPipeline config={'low_cpu_mem_usage': True, 'torch_dtype': torch.bfloat16, 'load_connected_pipeline': True, 'extract_ema': False, 'config': 'configs/sd15', 'use_safetensors': True, 'cache_dir': 'C:\\Users\\paul_\\.cache\\huggingface\\hub'}
06:14:38-891543 INFO Load network: type=embeddings loaded=105 skipped=6 time=3.46
06:14:38-893078 DEBUG Setting model: component=VAE name="vae-ft-mse-840000-ema-pruned.safetensors"
06:14:38-894059 DEBUG Setting model: component=VAE slicing=True
06:14:38-895072 DEBUG Setting model: attention="Scaled-Dot-Product"
06:14:38-903240 DEBUG Setting model: offload=model limit=0.0
06:14:39-141493 DEBUG GC: utilization={'gpu': 7, 'ram': 11, 'threshold': 40} gc={'collected': 8652, 'saved': 0.0} before={'gpu': 1.19, 'ram': 3.44} after={'gpu': 1.19, 'ram': 3.44, 'retries': 0, 'oom': 0} device=cuda fn=reload_model_weights:load_diffuser time=0.22
06:14:39-147011 INFO Load model: time=12.83 vae=1.37 load=7.98 embeddings=3.46 native=512 memory={'ram': {'used': 3.44, 'total': 31.83}, 'gpu': {'used': 1.19, 'total': 16.0}, 'retries': 0, 'oom': 0}
06:14:39-149548 DEBUG Script callback init time: system-info.py:app_started=0.27 task_scheduler.py:app_started=0.12
06:14:39-150594 INFO Startup time: 42.98 torch=16.49 onnx=0.11 gradio=3.19 diffusers=0.16 libraries=3.66 extensions=1.35 models=0.08 detailer=0.13 upscalers=0.06 networks=0.58 ui-networks=1.49 ui-txt2img=0.26 ui-img2img=0.11 ui-control=0.17 ui-extras=0.23 ui-settings=0.27 ui-extensions=0.69 ui-defaults=0.11 launch=0.15 api=0.08 app-started=0.39 checkpoint=13.07
06:14:39-152571 DEBUG Save: file="config.json" json=50 bytes=2106 time=0.003
06:14:39-156607 INFO Launching browser
06:14:44-932937 DEBUG UI themes available: type=Standard themes=13
06:14:45-043867 INFO Browser session: user=None client=127.0.0.1 agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/127.0.0.0 Safari/537.36
06:14:45-590137 INFO MOTD: N/A
06:14:45-890520 INFO UI: ready time=4.933
06:14:45-919529 DEBUG UI: connected
06:14:53-457131 INFO Settings: changed=1 ['show_progress_every_n_steps']
06:14:53-460366 DEBUG Save: file="config.json" json=49 bytes=2070 time=0.003
06:15:04-109824 INFO Applying hypertile: unet=256
06:15:04-170704 INFO Base: class=StableDiffusionPipeline
06:15:04-173288 DEBUG Sampler: sampler=default class=PNDMScheduler: {'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'trained_betas': None, 'skip_prk_steps': True, 'set_alpha_to_one': False, 'prediction_type': 'epsilon', 'timestep_spacing': 'leading', 'steps_offset': 1, 'clip_sample': False}
06:15:04-871693 DEBUG Torch generator: device=cuda seeds=[2293567292]
06:15:04-873201 DEBUG Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE batch=1/1x1 set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'num_inference_steps': 20, 'eta': 1.0, 'guidance_rescale': 0.7, 'output_type': 'latent', 'width': 512, 'height': 512, 'parser': 'native'}
Progress 1.89s/it ████████▌ 25% 5/20 00:14 00:28 Base
06:15:19-844166 DEBUG VAE load: type=approximate model="E:\SM\Data\Models\VAE-approx\model.pt"
Progress 1.06it/s █████████████████████████████████ 100% 20/20 00:18 00:00 Base
06:15:33-891541 DEBUG VAE decode: vae name="vae-ft-mse-840000-ema-pruned" dtype=torch.bfloat16 device=cpu upcast=False slicing=True tiling=False latents shape=torch.Size([1, 4, 64, 64]) dtype=torch.bfloat16 device=cuda:0 time=6.147
06:15:34-001225 INFO Save: image="outputs\text\2024-11-30\00000-cyberrealistic_v40-cat.png" type=PNG width=512 height=512 size=456400
06:15:34-007376 INFO Processed: images=1 its=0.67 time=29.87 timers={'init': 0.03, 'prepare': 0.04, 'encode': 0.59, 'move': 0.8, 'preview': 7.16, 'pipeline': 19.39, 'decode': 9.54, 'post': 0.1} memory={'ram': {'used': 4.12, 'total': 31.83}, 'gpu': {'used': 1.55, 'total': 16.0}, 'retries': 0, 'oom': 0}
06:15:38-214822 INFO Applying hypertile: unet=256
06:15:38-224313 INFO Base: class=StableDiffusionPipeline
06:15:38-225821 DEBUG Sampler: sampler=default class=PNDMScheduler: {'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'trained_betas': None, 'skip_prk_steps': True, 'set_alpha_to_one': False, 'prediction_type': 'epsilon', 'timestep_spacing': 'leading', 'steps_offset': 1, 'clip_sample': False}
06:15:38-508118 DEBUG Torch generator: device=cuda seeds=[781831862]
06:15:38-509559 DEBUG Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE batch=1/1x1 set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'num_inference_steps': 20, 'eta': 1.0, 'guidance_rescale': 0.7, 'output_type': 'latent', 'width': 512, 'height': 512, 'parser': 'native'}
Progress 10.51it/s █████████████████████████████████ 100% 20/20 00:01 00:00 Base
06:15:41-135781 DEBUG VAE decode: vae name="vae-ft-mse-840000-ema-pruned" dtype=torch.bfloat16 device=cpu upcast=False slicing=True tiling=False latents shape=torch.Size([1, 4, 64, 64]) dtype=torch.bfloat16 device=cuda:0 time=0.056
06:15:41-347718 INFO Save: image="outputs\text\2024-11-30\00001-cyberrealistic_v40-cat.png" type=PNG width=512 height=512 size=414705
06:15:41-358307 INFO Processed: images=1 its=6.38 time=3.13 timers={'encode': 0.28, 'move': 0.29, 'preview': 0.19, 'pipeline': 2.56, 'decode': 0.16, 'post': 0.12} memory={'ram': {'used': 4.04, 'total': 31.83}, 'gpu': {'used': 2.21, 'total': 16.0}, 'retries': 0, 'oom': 0}
06:15:45-792586 INFO Applying hypertile: unet=256
06:15:45-801863 INFO Base: class=StableDiffusionPipeline
06:15:45-803869 DEBUG Sampler: sampler=default class=PNDMScheduler: {'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'trained_betas': None, 'skip_prk_steps': True, 'set_alpha_to_one': False, 'prediction_type': 'epsilon', 'timestep_spacing': 'leading', 'steps_offset': 1, 'clip_sample': False}
06:15:46-052180 DEBUG Torch generator: device=cuda seeds=[19522078]
06:15:46-053217 DEBUG Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE batch=1/1x1 set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'num_inference_steps': 20, 'eta': 1.0, 'guidance_rescale': 0.7, 'output_type': 'latent', 'width': 512, 'height': 512, 'parser': 'native'}
Progress 10.51it/s █████████████████████████████████ 100% 20/20 00:01 00:00 Base
06:15:48-701474 DEBUG VAE decode: vae name="vae-ft-mse-840000-ema-pruned" dtype=torch.bfloat16 device=cpu upcast=False slicing=True tiling=False latents shape=torch.Size([1, 4, 64, 64]) dtype=torch.bfloat16 device=cuda:0 time=0.052
06:15:48-916710 INFO Save: image="outputs\text\2024-11-30\00002-cyberrealistic_v40-cat.png" type=PNG width=512 height=512 size=397924
06:15:48-922637 INFO Processed: images=1 its=6.40 time=3.12 timers={'encode': 0.25, 'move': 0.26, 'preview': 0.13, 'pipeline': 2.58, 'decode': 0.16, 'post': 0.12} memory={'ram': {'used': 4.07, 'total': 31.83}, 'gpu': {'used': 2.21, 'total': 16.0}, 'retries': 0, 'oom': 0}
06:15:57-328408 DEBUG Detailer settings: models=['yolov8n-face'] classes= strength=0.5 conf=0.6 max=2 iou=0.5 size=0-1 padding=20
06:15:59-796838 DEBUG Server: alive=True requests=319 memory=3.93/31.83 status='idle' task='' timestamp=None id='' job=0 jobs=0 total=1 step=0 steps=0 queued=0 uptime=102 elapsed=93.72 eta=None progress=0
06:16:02-430648 INFO Applying hypertile: unet=256
06:16:02-440157 INFO Base: class=StableDiffusionPipeline
06:16:02-442169 DEBUG Sampler: sampler=default class=PNDMScheduler: {'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'trained_betas': None, 'skip_prk_steps': True, 'set_alpha_to_one': False, 'prediction_type': 'epsilon', 'timestep_spacing': 'leading', 'steps_offset': 1, 'clip_sample': False}
06:16:02-701149 DEBUG Torch generator: device=cuda seeds=[3330001618]
06:16:02-702178 DEBUG Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE batch=1/1x1 set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'num_inference_steps': 20, 'eta': 1.0, 'guidance_rescale': 0.7, 'output_type': 'latent', 'width': 512, 'height': 512, 'parser': 'native'}
Progress 4.14it/s █████████████████████████████████ 100% 20/20 00:04 00:00 Base
06:16:08-285973 DEBUG VAE decode: vae name="vae-ft-mse-840000-ema-pruned" dtype=torch.bfloat16 device=cpu upcast=False slicing=True tiling=False latents shape=torch.Size([1, 4, 64, 64]) dtype=torch.bfloat16 device=cuda:0 time=0.054
06:16:08-492260 INFO Save: image="outputs\text\2024-11-30\00003-cyberrealistic_v40-woman red dress-before-detailer.png" type=PNG width=512 height=512 size=411466
06:16:08-746956 INFO Load: type=Detailer name="yolov8n-face" model="E:\SM\Data\Models\yolo\yolov8n-face.pt" ultralytics=8.3.36 classes=['face']
06:16:52-907909 INFO Detailer: model="yolov8n-face" items=[{'label': 'face', 'score': 0.85, 'size': '40x54'}, {'label': 'face', 'score': 0.64, 'size': '46x79'}] args={'conf': 0.6, 'iou': 0.5} denoise=0.5 blur=10 width=512 height=512 padding=20
06:16:52-909291 DEBUG Detailer: prompt="woman, red dress" negative=""
06:16:52-912278 DEBUG Pipeline class change: original=StableDiffusionPipeline target=StableDiffusionInpaintPipeline device=cuda:0 fn=process_images_inner:init
06:16:52-937065 DEBUG Mask: size=512x512 masked=12216px area=0.05 auto=None blur=0.078 erode=0.010 dilate=0.156 type=Grayscale time=0.02
06:16:52-943086 DEBUG Image resize: input= width=512 height=512 mode="Crop" upscaler="None" context="None" type=image result= time=0.00 fn=process_images_inner:init
06:16:52-951583 DEBUG Image resize: input= width=512 height=512 mode="Fill" upscaler="None" context="None" type=image result= time=0.00 fn=process_images_inner:init
06:16:52-960600 INFO Base: class=StableDiffusionInpaintPipeline
06:16:52-962601 DEBUG Sampler: sampler=default class=PNDMScheduler: {'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'trained_betas': None, 'skip_prk_steps': True, 'set_alpha_to_one': False, 'prediction_type': 'epsilon', 'timestep_spacing': 'leading', 'steps_offset': 1, 'clip_sample': False}
06:16:53-072735 DEBUG Torch generator: device=cuda seeds=[3129582872]
06:16:53-075046 DEBUG Diffuser pipeline: StableDiffusionInpaintPipeline task=DiffusersTaskType.INPAINTING batch=1/1x1 set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 7.0, 'num_inference_steps': 41, 'eta': 1.0, 'output_type': 'latent', 'image': [], 'mask_image': , 'strength': 0.5, 'height': 512, 'width': 512, 'parser': 'native'}
Progress 10.06it/s █████████████████████████████████ 100% 20/20 00:01 00:00 Base
06:16:59-990610 DEBUG VAE decode: vae name="vae-ft-mse-840000-ema-pruned" dtype=torch.bfloat16 device=cuda:0 upcast=False slicing=True tiling=False latents shape=torch.Size([1, 4, 64, 64]) dtype=torch.bfloat16 device=cuda:0 time=0.656
06:17:00-225897 INFO Save: image="outputs\save\2024-11-30\00000-cyberrealistic_v40-woman red dress-before-detailer.png" type=PNG width=512 height=512 size=291292
06:17:00-232409 DEBUG Image resize: input= width=154 height=154 mode="Crop" upscaler="None" context="None" type=image result= time=0.00 fn=process_images_inner:apply_overlay
06:17:00-237709 DEBUG Pipeline class change: original=StableDiffusionInpaintPipeline target=StableDiffusionPipeline device=cuda:0 fn=restore:process_images_inner
06:17:00-242708 INFO Processed: images=1 its=2.73 time=7.33 timers={'init': 44.58, 'encode': 0.35, 'move': 0.39, 'preview': 3.67, 'pipeline': 11.76, 'decode': 0.93, 'post': 0.15} memory={'ram': {'used': 4.23, 'total': 31.83}, 'gpu': {'used': 2.47, 'total': 16.0}, 'retries': 0, 'oom': 0}
06:17:00-246499 DEBUG Pipeline class change: original=StableDiffusionPipeline target=StableDiffusionInpaintPipeline device=cuda:0 fn=process_images_inner:init
06:17:00-253987 DEBUG Mask: size=512x512 masked=14252px area=0.05 auto=None blur=0.078 erode=0.010 dilate=0.156 type=Grayscale time=0.01
06:17:00-258665 DEBUG Image resize: input= width=512 height=512 mode="Crop" upscaler="None" context="None" type=image result= time=0.00 fn=process_images_inner:init
06:17:00-266653 DEBUG Image resize: input= width=512 height=512 mode="Fill" upscaler="None" context="None" type=image result= time=0.00 fn=process_images_inner:init
06:17:00-275251 INFO Base: class=StableDiffusionInpaintPipeline
06:17:00-276760 DEBUG Sampler: sampler=default class=PNDMScheduler: {'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'trained_betas': None, 'skip_prk_steps': True, 'set_alpha_to_one': False, 'prediction_type': 'epsilon', 'timestep_spacing': 'leading', 'steps_offset': 1, 'clip_sample': False}
06:17:00-372944 DEBUG Torch generator: device=cuda seeds=[3129582872]
06:17:00-374254 DEBUG Diffuser pipeline: StableDiffusionInpaintPipeline task=DiffusersTaskType.INPAINTING batch=1/1x1 set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 7.0, 'num_inference_steps': 41, 'eta': 1.0, 'output_type': 'latent', 'image': [], 'mask_image': , 'strength': 0.5, 'height': 512, 'width': 512, 'parser': 'native'}
Progress 10.20it/s █████████████████████████████████ 100% 20/20 00:01 00:00 Base
06:17:03-156323 DEBUG VAE decode: vae name="vae-ft-mse-840000-ema-pruned" dtype=torch.bfloat16 device=cuda:0 upcast=False slicing=True tiling=False latents shape=torch.Size([1, 4, 64, 64]) dtype=torch.bfloat16 device=cuda:0 time=0.651
06:17:03-397336 INFO Save: image="outputs\save\2024-11-30\00001-cyberrealistic_v40-woman red dress-before-detailer.png" type=PNG width=512 height=512 size=264206
06:17:03-404841 DEBUG Image resize: input= width=160 height=160 mode="Crop" upscaler="None" context="None" type=image result= time=0.00 fn=process_images_inner:apply_overlay
06:17:03-409855 DEBUG Pipeline class change: original=StableDiffusionInpaintPipeline target=StableDiffusionPipeline device=cuda:0 fn=restore:process_images_inner
06:17:03-415917 INFO Processed: images=1 its=6.31 time=3.17 timers={'init': 44.61, 'encode': 0.44, 'move': 0.5, 'preview': 3.7, 'pipeline': 13.87, 'decode': 1.69, 'post': 0.3} memory={'ram': {'used': 4.28, 'total': 31.83}, 'gpu': {'used': 2.22, 'total': 16.0}, 'retries': 0, 'oom': 0}
06:17:03-418431 DEBUG Detailer processed: models=['yolov8n-face']
06:17:03-515351 INFO Save: image="outputs\text\2024-11-30\00004-cyberrealistic_v40-woman red dress.png" type=PNG width=512 height=512 size=411750
06:17:03-522462 INFO Processed: images=1 its=0.33 time=61.08 timers={'init': 44.61, 'encode': 0.44, 'move': 0.5, 'preview': 3.7, 'pipeline': 13.87, 'decode': 1.69, 'post': 0.41} memory={'ram': {'used': 4.27, 'total': 31.83}, 'gpu': {'used': 2.22, 'total': 16.0}, 'retries': 0, 'oom': 0}
06:17:07-091976 INFO Applying hypertile: unet=256
06:17:07-102005 INFO Base: class=StableDiffusionPipeline
06:17:07-104005 DEBUG Sampler: sampler=default class=PNDMScheduler: {'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'trained_betas': None, 'skip_prk_steps': True, 'set_alpha_to_one': False, 'prediction_type': 'epsilon', 'timestep_spacing': 'leading', 'steps_offset': 1, 'clip_sample': False}
06:17:07-204769 DEBUG Torch generator: device=cuda seeds=[1659991211]
06:17:07-207121 DEBUG Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE batch=1/1x1 set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'num_inference_steps': 20, 'eta': 1.0, 'guidance_rescale': 0.7, 'output_type': 'latent', 'width': 512, 'height': 512, 'parser': 'native'}
Progress 10.00it/s █████████████████████████████████ 100% 20/20 00:02 00:00 Base
06:17:10-014563 DEBUG VAE decode: vae name="vae-ft-mse-840000-ema-pruned" dtype=torch.bfloat16 device=cuda:0 upcast=False slicing=True tiling=False latents shape=torch.Size([1, 4, 64, 64]) dtype=torch.bfloat16 device=cuda:0 time=0.789
06:17:10-249047 INFO Save: image="outputs\text\2024-11-30\00005-cyberrealistic_v40-woman red dress-before-detailer.png" type=PNG width=512 height=512 size=360035
06:17:10-286302 INFO Detailer: model="yolov8n-face" items=[{'label': 'face', 'score': 0.82, 'size': '59x82'}] args={'conf': 0.6, 'iou': 0.5} denoise=0.5 blur=10 width=512 height=512 padding=20
06:17:10-288288 DEBUG Detailer: prompt="woman, red dress" negative=""
06:17:10-291153 DEBUG Pipeline class change: original=StableDiffusionPipeline target=StableDiffusionInpaintPipeline device=cuda:0 fn=process_images_inner:init
06:17:10-299190 DEBUG Mask: size=512x512 masked=19732px area=0.08 auto=None blur=0.078 erode=0.010 dilate=0.156 type=Grayscale time=0.01
06:17:10-303725 DEBUG Image resize: input= width=512 height=512 mode="Crop" upscaler="None" context="None" type=image result= time=0.00 fn=process_images_inner:init
06:17:10-313970 DEBUG Image resize: input= width=512 height=512 mode="Fill" upscaler="None" context="None" type=image result= time=0.00 fn=process_images_inner:init
06:17:10-322003 INFO Base: class=StableDiffusionInpaintPipeline
06:17:10-324004 DEBUG Sampler: sampler=default class=PNDMScheduler: {'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'trained_betas': None, 'skip_prk_steps': True, 'set_alpha_to_one': False, 'prediction_type': 'epsilon', 'timestep_spacing': 'leading', 'steps_offset': 1, 'clip_sample': False}
06:17:10-413348 DEBUG Torch generator: device=cuda seeds=[852073816]
06:17:10-414364 DEBUG Diffuser pipeline: StableDiffusionInpaintPipeline task=DiffusersTaskType.INPAINTING batch=1/1x1 set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 7.0, 'num_inference_steps': 41, 'eta': 1.0, 'output_type': 'latent', 'image': [], 'mask_image': , 'strength': 0.5, 'height': 512, 'width': 512, 'parser': 'native'}
Progress 10.04it/s █████████████████████████████████ 100% 20/20 00:01 00:00 Base
06:17:13-264668 DEBUG VAE decode: vae name="vae-ft-mse-840000-ema-pruned" dtype=torch.bfloat16 device=cuda:0 upcast=False slicing=True tiling=False latents shape=torch.Size([1, 4, 64, 64]) dtype=torch.bfloat16 device=cuda:0 time=0.652
06:17:13-505116 INFO Save: image="outputs\save\2024-11-30\00002-cyberrealistic_v40-woman red dress-before-detailer.png" type=PNG width=512 height=512 size=272477
06:17:13-511678 DEBUG Image resize: input= width=196 height=196 mode="Crop" upscaler="None" context="None" type=image result= time=0.00 fn=process_images_inner:apply_overlay
06:17:13-517709 DEBUG Pipeline class change: original=StableDiffusionInpaintPipeline target=StableDiffusionPipeline device=cuda:0 fn=restore:process_images_inner
06:17:13-522762 INFO Processed: images=1 its=6.19 time=3.23 timers={'init': 0.21, 'encode': 0.19, 'move': 0.21, 'preview': 0.16, 'pipeline': 4.18, 'decode': 1.66, 'post': 0.15} memory={'ram': {'used': 4.31, 'total': 31.83}, 'gpu': {'used': 2.19, 'total': 16.0}, 'retries': 0, 'oom': 0}
06:17:13-526272 DEBUG Detailer processed: models=['yolov8n-face']
06:17:13-655177 INFO Save: image="outputs\text\2024-11-30\00006-cyberrealistic_v40-woman red dress.png" type=PNG width=512 height=512 size=359326
06:17:13-662120 INFO Processed: images=1 its=3.05 time=6.56 timers={'init': 0.21, 'encode': 0.19, 'move': 0.21, 'preview': 0.16, 'pipeline': 4.18, 'decode': 1.66, 'post': 0.29} memory={'ram': {'used': 4.3, 'total': 31.83}, 'gpu': {'used': 2.19, 'total': 16.0}, 'retries': 0, 'oom': 0}
06:17:19-517924 INFO Applying hypertile: unet=256
06:17:19-527495 INFO Base: class=StableDiffusionPipeline
06:17:19-529618 DEBUG Sampler: sampler=default class=PNDMScheduler: {'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'trained_betas': None, 'skip_prk_steps': True, 'set_alpha_to_one': False, 'prediction_type': 'epsilon', 'timestep_spacing': 'leading', 'steps_offset': 1, 'clip_sample': False}
06:17:19-639608 DEBUG Torch generator: device=cuda seeds=[2952724226]
06:17:19-640608 DEBUG Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE batch=1/1x1 set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'num_inference_steps': 20, 'eta': 1.0, 'guidance_rescale': 0.7, 'output_type': 'latent', 'width': 512, 'height': 512, 'parser': 'native'}
Progress 9.96it/s █████████████████████████████████ 100% 20/20 00:02 00:00 Base
06:17:22-444286 DEBUG VAE decode: vae name="vae-ft-mse-840000-ema-pruned" dtype=torch.bfloat16 device=cuda:0 upcast=False slicing=True tiling=False latents shape=torch.Size([1, 4, 64, 64]) dtype=torch.bfloat16 device=cuda:0 time=0.781
06:17:22-632883 INFO Save: image="outputs\text\2024-11-30\00007-cyberrealistic_v40-woman red dress-before-detailer.png" type=PNG width=512 height=512 size=452934
06:17:22-668421 INFO Detailer: model="yolov8n-face" items=[{'label': 'face', 'score': 0.82, 'size': '42x58'}] args={'conf': 0.6, 'iou': 0.5} denoise=0.5 blur=10 width=512 height=512 padding=20
06:17:22-670419 DEBUG Detailer: prompt="woman, red dress" negative=""
06:17:22-672420 DEBUG Pipeline class change: original=StableDiffusionPipeline target=StableDiffusionInpaintPipeline device=cuda:0 fn=process_images_inner:init
06:17:22-680637 DEBUG Mask: size=512x512 masked=14296px area=0.05 auto=None blur=0.078 erode=0.010 dilate=0.156 type=Grayscale time=0.01
06:17:22-687680 DEBUG Image resize: input= width=512 height=512 mode="Crop" upscaler="None" context="None" type=image result= time=0.00 fn=process_images_inner:init
06:17:22-696534 DEBUG Image resize: input= width=512 height=512 mode="Fill" upscaler="None" context="None" type=image result= time=0.00 fn=process_images_inner:init
06:17:22-706589 INFO Base: class=StableDiffusionInpaintPipeline
06:17:22-707592 DEBUG Sampler: sampler=default class=PNDMScheduler: {'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'trained_betas': None, 'skip_prk_steps': True, 'set_alpha_to_one': False, 'prediction_type': 'epsilon', 'timestep_spacing': 'leading', 'steps_offset': 1, 'clip_sample': False}
06:17:22-796899 DEBUG Torch generator: device=cuda seeds=[2205164343]
06:17:22-797406 DEBUG Diffuser pipeline: StableDiffusionInpaintPipeline task=DiffusersTaskType.INPAINTING batch=1/1x1 set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 7.0, 'num_inference_steps': 41, 'eta': 1.0, 'output_type': 'latent', 'image': [], 'mask_image': , 'strength': 0.5, 'height': 512, 'width': 512, 'parser': 'native'}
Progress 9.93it/s █████████████████████████████████ 100% 20/20 00:02 00:00 Base
06:17:25-740578 DEBUG VAE decode: vae name="vae-ft-mse-840000-ema-pruned" dtype=torch.bfloat16 device=cuda:0 upcast=False slicing=True tiling=False latents shape=torch.Size([1, 4, 64, 64]) dtype=torch.bfloat16 device=cuda:0 time=0.749 06:17:25-973519 INFO Save: image="outputs\save\2024-11-30\00003-cyberrealistic_v40-woman red dress-before-detailer.png" type=PNG width=512 height=512 size=290700 06:17:25-981064 DEBUG Image resize: input= width=172 height=172 mode="Crop" upscaler="None" context="None" type=image result= time=0.00 fn=process_images_inner:apply_overlay 06:17:25-987098 DEBUG Pipeline class change: original=StableDiffusionInpaintPipeline target=StableDiffusionPipeline device=cuda:0 fn=restore:process_images_inner 06:17:25-993112 INFO Processed: images=1 its=6.03 time=3.32 timers={'init': 0.16, 'encode': 0.19, 'move': 0.22, 'preview': 0.18, 'pipeline': 4.19, 'decode': 1.74, 'post': 0.15} memory={'ram': {'used': 4.31, 'total': 31.83}, 'gpu': {'used': 2.44, 'total': 16.0}, 'retries': 0, 'oom': 0} 06:17:25-995625 DEBUG Detailer processed: models=['yolov8n-face'] 06:17:26-075332 INFO Save: image="outputs\text\2024-11-30\00008-cyberrealistic_v40-woman red dress.png" type=PNG width=512 height=512 size=452758 06:17:26-082118 INFO Processed: images=1 its=3.05 time=6.56 timers={'init': 0.16, 'encode': 0.19, 'move': 0.22, 'preview': 0.18, 'pipeline': 4.19, 'decode': 1.74, 'post': 0.24} memory={'ram': {'used': 4.31, 'total': 31.83}, 'gpu': {'used': 2.44, 'total': 16.0}, 'retries': 0, 'oom': 0} 06:17:29-470980 INFO Applying hypertile: unet=256 06:17:29-480536 INFO Base: class=StableDiffusionPipeline 06:17:29-482477 DEBUG Sampler: sampler=default class=PNDMScheduler: {'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'trained_betas': None, 'skip_prk_steps': True, 'set_alpha_to_one': False, 'prediction_type': 'epsilon', 'timestep_spacing': 'leading', 'steps_offset': 1, 'clip_sample': False} 06:17:29-592539 DEBUG 
Torch generator: device=cuda seeds=[1229452647] 06:17:29-594542 DEBUG Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE batch=1/1x1 set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'num_inference_steps': 20, 'eta': 1.0, 'guidance_rescale': 0.7, 'output_type': 'latent', 'width': 512, 'height': 512, 'parser': 'native'} Progress 9.99it/s █████████████████████████████████ 100% 20/20 00:02 00:00 Base 06:17:32-394991 DEBUG VAE decode: vae name="vae-ft-mse-840000-ema-pruned" dtype=torch.bfloat16 device=cuda:0 upcast=False slicing=True tiling=False latents shape=torch.Size([1, 4, 64, 64]) dtype=torch.bfloat16 device=cuda:0 time=0.781 06:17:32-587351 INFO Save: image="outputs\text\2024-11-30\00009-cyberrealistic_v40-woman blue dress-before-detailer.png" type=PNG width=512 height=512 size=436085 06:17:32-629739 INFO Detailer: model="yolov8n-face" items=[{'label': 'face', 'score': 0.81, 'size': '51x77'}] args={'conf': 0.6, 'iou': 0.5} denoise=0.5 blur=10 width=512 height=512 padding=20 06:17:32-632725 DEBUG Detailer: prompt="woman, blue dress" negative="" 06:17:32-634725 DEBUG Pipeline class change: original=StableDiffusionPipeline target=StableDiffusionInpaintPipeline device=cuda:0 fn=process_images_inner:init 06:17:32-641977 DEBUG Mask: size=512x512 masked=17859px area=0.07 auto=None blur=0.078 erode=0.010 dilate=0.156 type=Grayscale time=0.01 06:17:32-647269 DEBUG Image resize: input= width=512 height=512 mode="Crop" upscaler="None" context="None" type=image result= time=0.00 fn=process_images_inner:init 06:17:32-655272 DEBUG Image resize: input= width=512 height=512 mode="Fill" upscaler="None" context="None" type=image result= time=0.00 fn=process_images_inner:init 06:17:32-664594 INFO Base: class=StableDiffusionInpaintPipeline 06:17:32-665570 DEBUG Sampler: sampler=default class=PNDMScheduler: {'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 
'beta_schedule': 'scaled_linear', 'trained_betas': None, 'skip_prk_steps': True, 'set_alpha_to_one': False, 'prediction_type': 'epsilon', 'timestep_spacing': 'leading', 'steps_offset': 1, 'clip_sample': False} 06:17:32-751507 DEBUG Torch generator: device=cuda seeds=[331418180] 06:17:32-753782 DEBUG Diffuser pipeline: StableDiffusionInpaintPipeline task=DiffusersTaskType.INPAINTING batch=1/1x1 set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 7.0, 'num_inference_steps': 41, 'eta': 1.0, 'output_type': 'latent', 'image': [], 'mask_image': , 'strength': 0.5, 'height': 512, 'width': 512, 'parser': 'native'} Progress 10.01it/s █████████████████████████████████ 100% 20/20 00:01 00:00 Base 06:17:35-640088 DEBUG VAE decode: vae name="vae-ft-mse-840000-ema-pruned" dtype=torch.bfloat16 device=cuda:0 upcast=False slicing=True tiling=False latents shape=torch.Size([1, 4, 64, 64]) dtype=torch.bfloat16 device=cuda:0 time=0.706 06:17:35-878356 INFO Save: image="outputs\save\2024-11-30\00004-cyberrealistic_v40-woman blue dress-before-detailer.png" type=PNG width=512 height=512 size=304363 06:17:35-885593 DEBUG Image resize: input= width=173 height=173 mode="Crop" upscaler="None" context="None" type=image result= time=0.00 fn=process_images_inner:apply_overlay 06:17:35-891104 DEBUG Pipeline class change: original=StableDiffusionInpaintPipeline target=StableDiffusionPipeline device=cuda:0 fn=restore:process_images_inner 06:17:35-897367 INFO Processed: images=1 its=6.14 time=3.26 timers={'init': 0.17, 'encode': 0.19, 'move': 0.22, 'preview': 0.19, 'pipeline': 4.17, 'decode': 1.7, 'post': 0.15} memory={'ram': {'used': 4.31, 'total': 31.83}, 'gpu': {'used': 2.44, 'total': 16.0}, 'retries': 0, 'oom': 0} 06:17:35-900872 DEBUG Detailer processed: models=['yolov8n-face'] 06:17:35-985724 INFO Save: image="outputs\text\2024-11-30\00010-cyberrealistic_v40-woman blue dress.png" type=PNG width=512 height=512 size=435648 
06:17:35-992259 INFO Processed: images=1 its=3.07 time=6.51 timers={'init': 0.17, 'encode': 0.19, 'move': 0.22, 'preview': 0.19, 'pipeline': 4.17, 'decode': 1.7, 'post': 0.25} memory={'ram': {'used': 4.31, 'total': 31.83}, 'gpu': {'used': 2.44, 'total': 16.0}, 'retries': 0, 'oom': 0} 06:17:48-446622 INFO Applying hypertile: unet=256 06:17:48-482764 INFO Base: class=StableDiffusionPipeline 06:17:48-485341 DEBUG Sampler: sampler=default class=PNDMScheduler: {'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'trained_betas': None, 'skip_prk_steps': True, 'set_alpha_to_one': False, 'prediction_type': 'epsilon', 'timestep_spacing': 'leading', 'steps_offset': 1, 'clip_sample': False} 06:17:48-605882 DEBUG Torch generator: device=cuda seeds=[213168755] 06:17:48-606864 DEBUG Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE batch=1/1x1 set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'num_inference_steps': 20, 'eta': 1.0, 'guidance_rescale': 0.7, 'output_type': 'latent', 'width': 512, 'height': 512, 'parser': 'native'} Progress 4.08it/s █████████████████████████████████ 100% 20/20 00:04 00:00 Base 06:17:54-294469 DEBUG VAE decode: vae name="vae-ft-mse-840000-ema-pruned" dtype=torch.bfloat16 device=cuda:0 upcast=False slicing=True tiling=False latents shape=torch.Size([1, 4, 64, 64]) dtype=torch.bfloat16 device=cuda:0 time=0.768 06:17:54-503841 INFO Save: image="outputs\text\2024-11-30\00011-cyberrealistic_v40-woman blue dress-before-detailer.png" type=PNG width=512 height=512 size=400872 06:17:54-542981 INFO Detailer: model="yolov8n-face" items=[{'label': 'face', 'score': 0.79, 'size': '91x108'}] args={'conf': 0.6, 'iou': 0.5} denoise=0.5 blur=10 width=512 height=512 padding=20 06:17:54-544487 DEBUG Detailer: prompt="woman, blue dress" negative="" 06:17:54-547794 DEBUG Pipeline class change: 
original=StableDiffusionPipeline target=StableDiffusionInpaintPipeline device=cuda:0 fn=process_images_inner:init 06:17:54-555906 DEBUG Mask: size=512x512 masked=29014px area=0.11 auto=None blur=0.078 erode=0.010 dilate=0.156 type=Grayscale time=0.01 06:17:54-561051 DEBUG Image resize: input= width=512 height=512 mode="Crop" upscaler="None" context="None" type=image result= time=0.00 fn=process_images_inner:init 06:17:54-569219 DEBUG Image resize: input= width=512 height=512 mode="Fill" upscaler="None" context="None" type=image result= time=0.00 fn=process_images_inner:init 06:17:54-578460 INFO Base: class=StableDiffusionInpaintPipeline 06:17:54-580995 DEBUG Sampler: sampler=default class=PNDMScheduler: {'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'trained_betas': None, 'skip_prk_steps': True, 'set_alpha_to_one': False, 'prediction_type': 'epsilon', 'timestep_spacing': 'leading', 'steps_offset': 1, 'clip_sample': False} 06:17:54-672348 DEBUG Torch generator: device=cuda seeds=[705637365] 06:17:54-673333 DEBUG Diffuser pipeline: StableDiffusionInpaintPipeline task=DiffusersTaskType.INPAINTING batch=1/1x1 set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 7.0, 'num_inference_steps': 41, 'eta': 1.0, 'output_type': 'latent', 'image': [], 'mask_image': , 'strength': 0.5, 'height': 512, 'width': 512, 'parser': 'native'} Progress 9.94it/s █████████████████████████████████ 100% 20/20 00:02 00:00 Base 06:17:57-601638 DEBUG VAE decode: vae name="vae-ft-mse-840000-ema-pruned" dtype=torch.bfloat16 device=cuda:0 upcast=False slicing=True tiling=False latents shape=torch.Size([1, 4, 64, 64]) dtype=torch.bfloat16 device=cuda:0 time=0.736 06:17:57-840395 INFO Save: image="outputs\save\2024-11-30\00005-cyberrealistic_v40-woman blue dress-before-detailer.png" type=PNG width=512 height=512 size=296141 06:17:57-850066 DEBUG Image resize: input= width=222 
height=222 mode="Crop" upscaler="None" context="None" type=image result= time=0.00 fn=process_images_inner:apply_overlay 06:17:57-855622 DEBUG Pipeline class change: original=StableDiffusionInpaintPipeline target=StableDiffusionPipeline device=cuda:0 fn=restore:process_images_inner 06:17:57-863662 INFO Processed: images=1 its=6.04 time=3.31 timers={'init': 0.21, 'encode': 0.21, 'move': 0.23, 'preview': 3.59, 'pipeline': 7.08, 'decode': 1.72, 'post': 0.16} memory={'ram': {'used': 4.31, 'total': 31.83}, 'gpu': {'used': 2.19, 'total': 16.0}, 'retries': 0, 'oom': 0} 06:17:57-866682 DEBUG Detailer processed: models=['yolov8n-face'] 06:17:57-969010 INFO Save: image="outputs\text\2024-11-30\00012-cyberrealistic_v40-woman blue dress.png" type=PNG width=512 height=512 size=400424 06:17:57-975807 INFO Processed: images=1 its=2.11 time=9.50 timers={'init': 0.21, 'encode': 0.21, 'move': 0.23, 'preview': 3.59, 'pipeline': 7.08, 'decode': 1.72, 'post': 0.27} memory={'ram': {'used': 4.3, 'total': 31.83}, 'gpu': {'used': 2.19, 'total': 16.0}, 'retries': 0, 'oom': 0} 06:17:59-697733 DEBUG Server: alive=True requests=578 memory=4.3/31.83 status='idle' task='' timestamp=None id='' job=0 jobs=0 total=1 step=0 steps=0 queued=0 uptime=222 elapsed=213.62 eta=None progress=0 06:18:05-444617 INFO Applying hypertile: unet=256 06:18:05-454186 INFO Base: class=StableDiffusionPipeline 06:18:05-455693 DEBUG Sampler: sampler=default class=PNDMScheduler: {'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'trained_betas': None, 'skip_prk_steps': True, 'set_alpha_to_one': False, 'prediction_type': 'epsilon', 'timestep_spacing': 'leading', 'steps_offset': 1, 'clip_sample': False} 06:18:05-562936 DEBUG Torch generator: device=cuda seeds=[3370501255] 06:18:05-564480 DEBUG Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE batch=1/1x1 set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': 
torch.Size([1, 77, 768]), 'guidance_scale': 6, 'num_inference_steps': 20, 'eta': 1.0, 'guidance_rescale': 0.7, 'output_type': 'latent', 'width': 512, 'height': 512, 'parser': 'native'} Progress 9.98it/s █████████████████████████████████ 100% 20/20 00:02 00:00 Base 06:18:08-367537 DEBUG VAE decode: vae name="vae-ft-mse-840000-ema-pruned" dtype=torch.bfloat16 device=cuda:0 upcast=False slicing=True tiling=False latents shape=torch.Size([1, 4, 64, 64]) dtype=torch.bfloat16 device=cuda:0 time=0.78 06:18:08-602756 INFO Save: image="outputs\text\2024-11-30\00013-cyberrealistic_v40-woman blue dress-before-detailer.png" type=PNG width=512 height=512 size=352362 06:18:08-641441 INFO Detailer: model="yolov8n-face" items=[{'label': 'face', 'score': 0.82, 'size': '82x110'}] args={'conf': 0.6, 'iou': 0.5} denoise=0.5 blur=10 width=512 height=512 padding=20 06:18:08-643645 DEBUG Detailer: prompt="woman, blue dress" negative="" 06:18:08-645668 DEBUG Pipeline class change: original=StableDiffusionPipeline target=StableDiffusionInpaintPipeline device=cuda:0 fn=process_images_inner:init 06:18:08-653874 DEBUG Mask: size=512x512 masked=27606px area=0.11 auto=None blur=0.078 erode=0.010 dilate=0.156 type=Grayscale time=0.01 06:18:08-658856 DEBUG Image resize: input= width=512 height=512 mode="Crop" upscaler="None" context="None" type=image result= time=0.00 fn=process_images_inner:init 06:18:08-668417 DEBUG Image resize: input= width=512 height=512 mode="Fill" upscaler="None" context="None" type=image result= time=0.00 fn=process_images_inner:init 06:18:08-676488 INFO Base: class=StableDiffusionInpaintPipeline 06:18:08-678021 DEBUG Sampler: sampler=default class=PNDMScheduler: {'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'trained_betas': None, 'skip_prk_steps': True, 'set_alpha_to_one': False, 'prediction_type': 'epsilon', 'timestep_spacing': 'leading', 'steps_offset': 1, 'clip_sample': False} 06:18:08-768549 DEBUG Torch 
generator: device=cuda seeds=[100776195] 06:18:08-769566 DEBUG Diffuser pipeline: StableDiffusionInpaintPipeline task=DiffusersTaskType.INPAINTING batch=1/1x1 set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 7.0, 'num_inference_steps': 41, 'eta': 1.0, 'output_type': 'latent', 'image': [], 'mask_image': , 'strength': 0.5, 'height': 512, 'width': 512, 'parser': 'native'} Progress 9.97it/s █████████████████████████████████ 100% 20/20 00:02 00:00 Base 06:18:11-690471 DEBUG VAE decode: vae name="vae-ft-mse-840000-ema-pruned" dtype=torch.bfloat16 device=cuda:0 upcast=False slicing=True tiling=False latents shape=torch.Size([1, 4, 64, 64]) dtype=torch.bfloat16 device=cuda:0 time=0.73 06:18:11-928402 INFO Save: image="outputs\save\2024-11-30\00006-cyberrealistic_v40-woman blue dress-before-detailer.png" type=PNG width=512 height=512 size=286557 06:18:11-934968 DEBUG Image resize: input= width=203 height=203 mode="Crop" upscaler="None" context="None" type=image result= time=0.00 fn=process_images_inner:apply_overlay 06:18:11-941768 DEBUG Pipeline class change: original=StableDiffusionInpaintPipeline target=StableDiffusionPipeline device=cuda:0 fn=restore:process_images_inner 06:18:11-946810 INFO Processed: images=1 its=6.06 time=3.30 timers={'init': 0.21, 'encode': 0.19, 'move': 0.22, 'preview': 0.15, 'pipeline': 4.18, 'decode': 1.73, 'post': 0.15} memory={'ram': {'used': 4.3, 'total': 31.83}, 'gpu': {'used': 2.19, 'total': 16.0}, 'retries': 0, 'oom': 0} 06:18:11-949333 DEBUG Detailer processed: models=['yolov8n-face'] 06:18:12-081258 INFO Save: image="outputs\text\2024-11-30\00014-cyberrealistic_v40-woman blue dress.png" type=PNG width=512 height=512 size=350073 06:18:12-087744 INFO Processed: images=1 its=3.01 time=6.63 timers={'init': 0.21, 'encode': 0.19, 'move': 0.22, 'preview': 0.15, 'pipeline': 4.18, 'decode': 1.73, 'post': 0.29} memory={'ram': {'used': 4.3, 'total': 31.83}, 'gpu': {'used': 2.19, 
'total': 16.0}, 'retries': 0, 'oom': 0} 06:18:21-989305 INFO Applying hypertile: unet=256 06:18:22-013819 INFO Base: class=StableDiffusionPipeline 06:18:22-017334 DEBUG Sampler: sampler=default class=PNDMScheduler: {'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'trained_betas': None, 'skip_prk_steps': True, 'set_alpha_to_one': False, 'prediction_type': 'epsilon', 'timestep_spacing': 'leading', 'steps_offset': 1, 'clip_sample': False} 06:18:22-142782 DEBUG Torch generator: device=cuda seeds=[3060763861] 06:18:22-144331 DEBUG Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE batch=1/1x1 set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'num_inference_steps': 20, 'eta': 1.0, 'guidance_rescale': 0.7, 'output_type': 'latent', 'width': 512, 'height': 512, 'parser': 'native'} Progress 9.99it/s █████████████████████████████████ 100% 20/20 00:02 00:00 Base 06:18:24-961680 DEBUG VAE decode: vae name="vae-ft-mse-840000-ema-pruned" dtype=torch.bfloat16 device=cuda:0 upcast=False slicing=True tiling=False latents shape=torch.Size([1, 4, 64, 64]) dtype=torch.bfloat16 device=cuda:0 time=0.799 06:18:25-183570 INFO Save: image="outputs\text\2024-11-30\00015-cyberrealistic_v40-woman blue dress-before-detailer.png" type=PNG width=512 height=512 size=387602 06:18:25-222208 INFO Detailer: model="yolov8n-face" items=[{'label': 'face', 'score': 0.85, 'size': '60x82'}] args={'conf': 0.6, 'iou': 0.5} denoise=0.5 blur=10 width=512 height=512 padding=20 06:18:25-224488 DEBUG Detailer: prompt="woman, blue dress" negative="" 06:18:25-227516 DEBUG Pipeline class change: original=StableDiffusionPipeline target=StableDiffusionInpaintPipeline device=cuda:0 fn=process_images_inner:init 06:18:25-235677 DEBUG Mask: size=512x512 masked=19828px area=0.08 auto=None blur=0.078 erode=0.010 dilate=0.156 type=Grayscale time=0.01 06:18:25-240319 DEBUG 
Image resize: input= width=512 height=512 mode="Crop" upscaler="None" context="None" type=image result= time=0.00 fn=process_images_inner:init 06:18:25-249666 DEBUG Image resize: input= width=512 height=512 mode="Fill" upscaler="None" context="None" type=image result= time=0.00 fn=process_images_inner:init 06:18:25-258933 INFO Base: class=StableDiffusionInpaintPipeline 06:18:25-260934 DEBUG Sampler: sampler=default class=PNDMScheduler: {'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'trained_betas': None, 'skip_prk_steps': True, 'set_alpha_to_one': False, 'prediction_type': 'epsilon', 'timestep_spacing': 'leading', 'steps_offset': 1, 'clip_sample': False} 06:18:25-351296 DEBUG Torch generator: device=cuda seeds=[256312024] 06:18:25-352296 DEBUG Diffuser pipeline: StableDiffusionInpaintPipeline task=DiffusersTaskType.INPAINTING batch=1/1x1 set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 7.0, 'num_inference_steps': 41, 'eta': 1.0, 'output_type': 'latent', 'image': [], 'mask_image': , 'strength': 0.5, 'height': 512, 'width': 512, 'parser': 'native'} Progress 9.88it/s █████████████████████████████████ 100% 20/20 00:02 00:00 Base 06:18:28-313680 DEBUG VAE decode: vae name="vae-ft-mse-840000-ema-pruned" dtype=torch.bfloat16 device=cuda:0 upcast=False slicing=True tiling=False latents shape=torch.Size([1, 4, 64, 64]) dtype=torch.bfloat16 device=cuda:0 time=0.746 06:18:28-553705 INFO Save: image="outputs\save\2024-11-30\00007-cyberrealistic_v40-woman blue dress-before-detailer.png" type=PNG width=512 height=512 size=268879 06:18:28-560767 DEBUG Image resize: input= width=175 height=175 mode="Crop" upscaler="None" context="None" type=image result= time=0.00 fn=process_images_inner:apply_overlay 06:18:28-565867 DEBUG Pipeline class change: original=StableDiffusionInpaintPipeline target=StableDiffusionPipeline device=cuda:0 
fn=restore:process_images_inner 06:18:28-571404 INFO Processed: images=1 its=5.98 time=3.34 timers={'init': 0.22, 'encode': 0.21, 'move': 0.23, 'preview': 0.22, 'pipeline': 4.2, 'decode': 1.76, 'post': 0.16} memory={'ram': {'used': 4.31, 'total': 31.83}, 'gpu': {'used': 2.44, 'total': 16.0}, 'retries': 0, 'oom': 0} 06:18:28-574850 DEBUG Detailer processed: models=['yolov8n-face'] 06:18:28-691902 INFO Save: image="outputs\text\2024-11-30\00016-cyberrealistic_v40-woman blue dress.png" type=PNG width=512 height=512 size=387407 06:18:28-699659 INFO Processed: images=1 its=2.99 time=6.70 timers={'init': 0.22, 'encode': 0.21, 'move': 0.23, 'preview': 0.22, 'pipeline': 4.2, 'decode': 1.76, 'post': 0.28} memory={'ram': {'used': 4.3, 'total': 31.83}, 'gpu': {'used': 2.44, 'total': 16.0}, 'retries': 0, 'oom': 0} 06:18:31-960741 INFO Applying hypertile: unet=256 06:18:31-969741 INFO Base: class=StableDiffusionPipeline 06:18:31-971741 DEBUG Sampler: sampler=default class=PNDMScheduler: {'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'trained_betas': None, 'skip_prk_steps': True, 'set_alpha_to_one': False, 'prediction_type': 'epsilon', 'timestep_spacing': 'leading', 'steps_offset': 1, 'clip_sample': False} 06:18:32-077758 DEBUG Torch generator: device=cuda seeds=[221041483] 06:18:32-078778 DEBUG Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE batch=1/1x1 set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'num_inference_steps': 20, 'eta': 1.0, 'guidance_rescale': 0.7, 'output_type': 'latent', 'width': 512, 'height': 768, 'parser': 'native'} Progress 1.06it/s █████████████████████████████████ 100% 20/20 00:18 00:00 Base 06:19:01-004328 DEBUG VAE decode: vae name="vae-ft-mse-840000-ema-pruned" dtype=torch.bfloat16 device=cuda:0 upcast=False slicing=True tiling=False latents shape=torch.Size([1, 4, 96, 64]) 
dtype=torch.bfloat16 device=cuda:0 time=7.056 06:19:01-179857 INFO Save: image="outputs\text\2024-11-30\00017-cyberrealistic_v40-woman blue dress-before-detailer.png" type=PNG width=512 height=768 size=599241 06:19:44-991807 INFO Detailer: model="yolov8n-face" items=[{'label': 'face', 'score': 0.81, 'size': '96x117'}] args={'conf': 0.6, 'iou': 0.5} denoise=0.5 blur=10 width=512 height=512 padding=20 06:19:44-994321 DEBUG Detailer: prompt="woman, blue dress" negative="" 06:19:44-996664 DEBUG Pipeline class change: original=StableDiffusionPipeline target=StableDiffusionInpaintPipeline device=cuda:0 fn=process_images_inner:init 06:19:45-007603 DEBUG Mask: size=512x768 masked=31238px area=0.08 auto=None blur=0.078 erode=0.010 dilate=0.156 type=Grayscale time=0.01 06:19:45-014129 DEBUG Image resize: input= width=512 height=512 mode="Crop" upscaler="None" context="None" type=image result= time=0.00 fn=process_images_inner:init 06:19:45-024678 DEBUG Image resize: input= width=512 height=512 mode="Fill" upscaler="None" context="None" type=image result= time=0.00 fn=process_images_inner:init 06:19:45-033703 INFO Base: class=StableDiffusionInpaintPipeline 06:19:45-035705 DEBUG Sampler: sampler=default class=PNDMScheduler: {'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'trained_betas': None, 'skip_prk_steps': True, 'set_alpha_to_one': False, 'prediction_type': 'epsilon', 'timestep_spacing': 'leading', 'steps_offset': 1, 'clip_sample': False} 06:19:45-126663 DEBUG Torch generator: device=cuda seeds=[351637854] 06:19:45-129802 DEBUG Image resize: input= width=512 height=512 mode="Fixed" upscaler="None" context="None" type=image result= time=0.00 fn=task_specific_kwargs:resize_init_images 06:19:45-131377 DEBUG Diffuser pipeline: StableDiffusionInpaintPipeline task=DiffusersTaskType.INPAINTING batch=1/1x1 set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 
7.0, 'num_inference_steps': 41, 'eta': 1.0, 'output_type': 'latent', 'image': [], 'mask_image': , 'strength': 0.5, 'height': 512, 'width': 512, 'parser': 'native'} Progress ?it/s 0% 0/20 00:00 ? Base
06:19:45-768190 ERROR Hypertile error: width=512 height=512 Error while processing rearrange-reduction pattern "b (nh h nw w) c -> (b nh nw) (h w) c". Input tensor shape: torch.Size([2, 4096, 320]). Additional info: {'h': 26, 'w': 26, 'nh': 3, 'nw': 2}. Shape mismatch, 4096 != 4056
Progress 4.06it/s █████████████████████████████████ 100% 20/20 00:04 00:00 Base 06:19:50-930663 DEBUG VAE decode: vae name="vae-ft-mse-840000-ema-pruned" dtype=torch.bfloat16 device=cuda:0 upcast=False slicing=True tiling=False latents shape=torch.Size([1, 4, 64, 64]) dtype=torch.bfloat16 device=cuda:0 time=0.694 06:19:51-173220 INFO Save: image="outputs\save\2024-11-30\00008-cyberrealistic_v40-woman blue dress-before-detailer.png" type=PNG width=512 height=512 size=306114 06:19:51-179810 DEBUG Image resize: input= width=210 height=210 mode="Crop" upscaler="None" context="None" type=image result= time=0.00 fn=process_images_inner:apply_overlay 06:19:51-185854 DEBUG Pipeline class change: original=StableDiffusionInpaintPipeline target=StableDiffusionPipeline device=cuda:0 fn=restore:process_images_inner 06:19:51-191376 INFO Processed: images=1 its=3.23 time=6.19 timers={'init': 44.03, 'encode': 0.19, 'move': 0.22, 'preview': 10.07, 'pipeline': 23.98, 'decode': 10.83, 'post': 0.16} memory={'ram': {'used': 4.44, 'total': 31.83}, 'gpu': {'used': 2.19, 'total': 16.0}, 'retries': 0, 'oom': 0} 06:19:51-195349 DEBUG Detailer processed: models=['yolov8n-face'] 06:19:51-360188 INFO Save: image="outputs\text\2024-11-30\00018-cyberrealistic_v40-woman blue dress.png" type=PNG width=512 height=768 size=598988 06:19:51-366204 INFO Processed: images=1 its=0.25 time=79.40 timers={'init': 44.03, 'encode': 0.19, 'move': 0.22, 'preview': 10.07, 'pipeline': 23.98, 'decode': 10.83, 'post': 0.33} memory={'ram': 
{'used': 4.43, 'total': 31.83}, 'gpu': {'used': 2.19, 'total': 16.0}, 'retries': 0, 'oom': 0} 06:19:55-352183 INFO Applying hypertile: unet=256 06:19:55-362233 INFO Base: class=StableDiffusionPipeline 06:19:55-364234 DEBUG Sampler: sampler=default class=PNDMScheduler: {'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'trained_betas': None, 'skip_prk_steps': True, 'set_alpha_to_one': False, 'prediction_type': 'epsilon', 'timestep_spacing': 'leading', 'steps_offset': 1, 'clip_sample': False} 06:19:55-476029 DEBUG Torch generator: device=cuda seeds=[341987991] 06:19:55-477028 DEBUG Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE batch=1/1x1 set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'num_inference_steps': 20, 'eta': 1.0, 'guidance_rescale': 0.7, 'output_type': 'latent', 'width': 512, 'height': 768, 'parser': 'native'} Progress 3.62it/s █████████████████████████████████ 100% 20/20 00:05 00:00 Base 06:20:01-580104 DEBUG VAE decode: vae name="vae-ft-mse-840000-ema-pruned" dtype=torch.bfloat16 device=cuda:0 upcast=False slicing=True tiling=False latents shape=torch.Size([1, 4, 96, 64]) dtype=torch.bfloat16 device=cuda:0 time=0.565 06:20:01-887164 INFO Save: image="outputs\text\2024-11-30\00019-cyberrealistic_v40-woman blue dress-before-detailer.png" type=PNG width=512 height=768 size=647684 06:20:01-926181 INFO Detailer: model="yolov8n-face" items=[{'label': 'face', 'score': 0.79, 'size': '97x130'}] args={'conf': 0.6, 'iou': 0.5} denoise=0.5 blur=10 width=512 height=512 padding=20 06:20:01-928319 DEBUG Detailer: prompt="woman, blue dress" negative="" 06:20:01-931384 DEBUG Pipeline class change: original=StableDiffusionPipeline target=StableDiffusionInpaintPipeline device=cuda:0 fn=process_images_inner:init 06:20:01-943528 DEBUG Mask: size=512x768 masked=33868px area=0.09 auto=None blur=0.078 erode=0.010 
dilate=0.156 type=Grayscale time=0.01 06:20:01-949571 DEBUG Image resize: input= width=512 height=512 mode="Crop" upscaler="None" context="None" type=image result= time=0.00 fn=process_images_inner:init 06:20:01-962716 DEBUG Image resize: input= width=512 height=512 mode="Fill" upscaler="None" context="None" type=image result= time=0.00 fn=process_images_inner:init 06:20:01-971241 INFO Base: class=StableDiffusionInpaintPipeline 06:20:01-972765 DEBUG Sampler: sampler=default class=PNDMScheduler: {'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'trained_betas': None, 'skip_prk_steps': True, 'set_alpha_to_one': False, 'prediction_type': 'epsilon', 'timestep_spacing': 'leading', 'steps_offset': 1, 'clip_sample': False} 06:20:02-068710 DEBUG Torch generator: device=cuda seeds=[419980317] 06:20:02-071844 DEBUG Image resize: input= width=512 height=512 mode="Fixed" upscaler="None" context="None" type=image result= time=0.00 fn=task_specific_kwargs:resize_init_images 06:20:02-073844 DEBUG Diffuser pipeline: StableDiffusionInpaintPipeline task=DiffusersTaskType.INPAINTING batch=1/1x1 set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 7.0, 'num_inference_steps': 41, 'eta': 1.0, 'output_type': 'latent', 'image': [], 'mask_image': , 'strength': 0.5, 'height': 512, 'width': 512, 'parser': 'native'} Progress ?it/s 0% 0/20 00:00 ? Base
06:20:02-723808 ERROR Hypertile error: width=512 height=512 Error while processing rearrange-reduction pattern "b (nh h nw w) c -> (b nh nw) (h w) c". Input tensor shape: torch.Size([2, 4096, 320]). Additional info: {'h': 26, 'w': 26, 'nh': 3, 'nw': 2}. 
Shape mismatch, 4096 != 4056 Progress 9.78it/s █████████████████████████████████ 100% 20/20 00:02 00:00 Base 06:20:05-028706 DEBUG VAE decode: vae name="vae-ft-mse-840000-ema-pruned" dtype=torch.bfloat16 device=cuda:0 upcast=False slicing=True tiling=False latents shape=torch.Size([1, 4, 64, 64]) dtype=torch.bfloat16 device=cuda:0 time=0.716 06:20:05-262736 INFO Save: image="outputs\save\2024-11-30\00009-cyberrealistic_v40-woman blue dress-before-detailer.png" type=PNG width=512 height=512 size=342217 06:20:05-269881 DEBUG Image resize: input= width=244 height=244 mode="Crop" upscaler="None" context="None" type=image result= time=0.00 fn=process_images_inner:apply_overlay 06:20:05-276493 DEBUG Pipeline class change: original=StableDiffusionInpaintPipeline target=StableDiffusionPipeline device=cuda:0 fn=restore:process_images_inner 06:20:05-281593 INFO Processed: images=1 its=5.98 time=3.35 timers={'init': 0.24, 'encode': 0.2, 'move': 0.23, 'preview': 3.83, 'pipeline': 7.74, 'decode': 1.56, 'post': 0.15} memory={'ram': {'used': 4.37, 'total': 31.83}, 'gpu': {'used': 2.22, 'total': 16.0}, 'retries': 0, 'oom': 0} 06:20:05-285127 DEBUG Detailer processed: models=['yolov8n-face'] 06:20:05-423249 INFO Save: image="outputs\text\2024-11-30\00020-cyberrealistic_v40-woman blue dress.png" type=PNG width=512 height=768 size=646560 06:20:05-429758 INFO Processed: images=1 its=1.99 time=10.07 timers={'init': 0.24, 'encode': 0.2, 'move': 0.23, 'preview': 3.83, 'pipeline': 7.74, 'decode': 1.56, 'post': 0.3} memory={'ram': {'used': 4.36, 'total': 31.83}, 'gpu': {'used': 2.22, 'total': 16.0}, 'retries': 0, 'oom': 0} 06:20:55-267092 INFO Applying hypertile: unet=256 06:20:55-285736 INFO Base: class=StableDiffusionPipeline 06:20:55-297251 DEBUG Sampler: sampler=default class=PNDMScheduler: {'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'trained_betas': None, 'skip_prk_steps': True, 'set_alpha_to_one': False, 'prediction_type': 
'epsilon', 'timestep_spacing': 'leading', 'steps_offset': 1, 'clip_sample': False}
06:20:55-416406 DEBUG Torch generator: device=cuda seeds=[1375781155]
06:20:55-418408 DEBUG Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE batch=1/1x1 set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'num_inference_steps': 20, 'eta': 1.0, 'guidance_rescale': 0.7, 'output_type': 'latent', 'width': 512, 'height': 768, 'parser': 'native'}
Progress 3.65it/s █████████████████████████████████ 100% 20/20 00:05 00:00 Base
06:21:01-507740 DEBUG VAE decode: vae name="vae-ft-mse-840000-ema-pruned" dtype=torch.bfloat16 device=cuda:0 upcast=False slicing=True tiling=False latents shape=torch.Size([1, 4, 96, 64]) dtype=torch.bfloat16 device=cuda:0 time=0.591
06:21:01-808487 INFO Save: image="outputs\text\2024-11-30\00021-cyberrealistic_v40-woman blue dress-before-detailer.png" type=PNG width=512 height=768 size=657313
06:21:01-848339 INFO Detailer: model="yolov8n-face" items=[{'label': 'face', 'score': 0.85, 'size': '59x81'}] args={'conf': 0.6, 'iou': 0.5} denoise=0.5 blur=10 width=512 height=512 padding=20
06:21:01-850344 DEBUG Detailer: prompt="woman, blue dress" negative=""
06:21:01-852368 DEBUG Pipeline class change: original=StableDiffusionPipeline target=StableDiffusionInpaintPipeline device=cuda:0 fn=process_images_inner:init
06:21:01-864057 DEBUG Mask: size=512x768 masked=19599px area=0.05 auto=None blur=0.078 erode=0.010 dilate=0.156 type=Grayscale time=0.01
06:21:01-870184 DEBUG Image resize: input= width=512 height=512 mode="Crop" upscaler="None" context="None" type=image result= time=0.00 fn=process_images_inner:init
06:21:01-880824 DEBUG Image resize: input= width=512 height=512 mode="Fill" upscaler="None" context="None" type=image result= time=0.00 fn=process_images_inner:init
06:21:01-889441 INFO Base: class=StableDiffusionInpaintPipeline
06:21:01-890440 DEBUG Sampler:
sampler=default class=PNDMScheduler: {'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'trained_betas': None, 'skip_prk_steps': True, 'set_alpha_to_one': False, 'prediction_type': 'epsilon', 'timestep_spacing': 'leading', 'steps_offset': 1, 'clip_sample': False}
06:21:01-974872 DEBUG Torch generator: device=cuda seeds=[1555654942]
06:21:01-977389 DEBUG Image resize: input= width=512 height=512 mode="Fixed" upscaler="None" context="None" type=image result= time=0.00 fn=task_specific_kwargs:resize_init_images
06:21:01-980986 DEBUG Diffuser pipeline: StableDiffusionInpaintPipeline task=DiffusersTaskType.INPAINTING batch=1/1x1 set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 7.0, 'num_inference_steps': 41, 'eta': 1.0, 'output_type': 'latent', 'image': [], 'mask_image': , 'strength': 0.5, 'height': 512, 'width': 512, 'parser': 'native'}
Progress ?it/s 0% 0/20 00:00 ? Base
06:21:02-626948 ERROR Hypertile error: width=512 height=512 Error while processing rearrange-reduction pattern "b (nh h nw w) c -> (b nh nw) (h w) c". Input tensor shape: torch.Size([2, 4096, 320]). Additional info: {'h': 26, 'w': 26, 'nh': 3, 'nw': 2}.
Shape mismatch, 4096 != 4056
Progress 4.07it/s █████████████████████████████████ 100% 20/20 00:04 00:00 Base
06:21:07-804880 DEBUG VAE decode: vae name="vae-ft-mse-840000-ema-pruned" dtype=torch.bfloat16 device=cuda:0 upcast=False slicing=True tiling=False latents shape=torch.Size([1, 4, 64, 64]) dtype=torch.bfloat16 device=cuda:0 time=0.711
06:21:08-043918 INFO Save: image="outputs\save\2024-11-30\00010-cyberrealistic_v40-woman blue dress-before-detailer.png" type=PNG width=512 height=512 size=295411
06:21:08-050265 DEBUG Image resize: input= width=195 height=195 mode="Crop" upscaler="None" context="None" type=image result= time=0.00 fn=process_images_inner:apply_overlay
06:21:08-057803 DEBUG Pipeline class change: original=StableDiffusionInpaintPipeline target=StableDiffusionPipeline device=cuda:0 fn=restore:process_images_inner
06:21:08-064389 INFO Processed: images=1 its=3.22 time=6.21 timers={'init': 0.23, 'prepare': 0.02, 'encode': 0.2, 'move': 0.23, 'callback': 0.2, 'preview': 7.21, 'pipeline': 10.58, 'decode': 1.58, 'post': 0.16} memory={'ram': {'used': 4.32, 'total': 31.83}, 'gpu': {'used': 2.44, 'total': 16.0}, 'retries': 0, 'oom': 0}
06:21:08-068357 DEBUG Detailer processed: models=['yolov8n-face']
06:21:08-197985 INFO Save: image="outputs\text\2024-11-30\00022-cyberrealistic_v40-woman blue dress.png" type=PNG width=512 height=768 size=656953
06:21:08-203985 INFO Processed: images=1 its=1.55 time=12.93 timers={'init': 0.23, 'prepare': 0.02, 'encode': 0.2, 'move': 0.23, 'callback': 0.2, 'preview': 7.21, 'pipeline': 10.58, 'decode': 1.58, 'post': 0.3} memory={'ram': {'used': 4.32, 'total': 31.83}, 'gpu': {'used': 2.44, 'total': 16.0}, 'retries': 0, 'oom': 0}
06:21:59-527390 DEBUG Server: alive=True requests=1001 memory=4.31/31.83 status='idle' task='' timestamp=None id='' job=0 jobs=0 total=1 step=0 steps=0 queued=0 uptime=462 elapsed=453.45 eta=None progress=0
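The repeated Hypertile error above is plain arithmetic: the rearrange pattern `"b (nh h nw w) c -> (b nh nw) (h w) c"` demands that `nh*h*nw*w` equal the token count, but the 64x64 latent yields 4096 tokens while the chosen tile sizes (h=26, w=26, nh=3, nw=2) imply only 4056. A minimal sketch of the mismatch, assuming a 64x64 latent as in the `latents shape` log line; `largest_tile` is a hypothetical helper, not SD.Next's actual tile-selection code:

```python
# The rearrange pattern "b (nh h nw w) c -> (b nh nw) (h w) c" can only
# split the token axis if nh*h*nw*w equals the token count exactly.

def rearrange_token_count(nh, h, nw, w):
    """Token count implied by the rearrange pattern's axis sizes."""
    return nh * h * nw * w

tokens = 64 * 64  # 4096 tokens in the 64x64 latent
implied = rearrange_token_count(nh=3, h=26, nw=2, w=26)
print(tokens, implied)  # 4096 4056 -> einops raises "Shape mismatch"

# A divisor-based tile choice avoids the mismatch: pick the largest tile
# size that evenly divides the latent dimension (hypothetical helper).
def largest_tile(dim, max_tile):
    for t in range(min(max_tile, dim), 0, -1):
        if dim % t == 0:
            return t
    return dim

h = largest_tile(64, 26)   # 16: largest divisor of 64 not exceeding 26
w = largest_tile(64, 26)
nh, nw = 64 // h, 64 // w  # 4 x 4 tiles
assert rearrange_token_count(nh, h, nw, w) == tokens
```

With divisor-aligned tiles the pattern factors cleanly, which is why the error only appears at some resolutions: a tile size that happens to divide the latent width and height passes, while one that does not (as with 26 against 64 here) fails and Hypertile falls back to untiled attention for that step.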