merge dev, replace resume and lora+ with upstream (untested)

Squashed commit of the following:

commit 56bb81c
Author: Kohya S <ykumeykume@gmail.com>
Date:   Wed Jun 12 21:39:35 2024 +0900

    add grad_hook after restoring state, closes kohya-ss#1344

commit 22413a5
Merge: 3259928 18d7597
Author: Kohya S <52813779+kohya-ss@users.noreply.github.com>
Date:   Tue Jun 11 19:52:03 2024 +0900

    Merge pull request kohya-ss#1359 from kohya-ss/train_resume_step

    Train resume step

commit 18d7597
Author: Kohya S <ykumeykume@gmail.com>
Date:   Tue Jun 11 19:51:30 2024 +0900

    update README

commit 4a44188
Merge: 4dbcef4 3259928
Author: Kohya S <ykumeykume@gmail.com>
Date:   Tue Jun 11 19:27:37 2024 +0900

    Merge branch 'dev' into train_resume_step

commit 3259928
Merge: 1a104dc 5bfe5e4
Author: Kohya S <ykumeykume@gmail.com>
Date:   Sun Jun 9 19:26:42 2024 +0900

    Merge branch 'dev' of https://github.com/kohya-ss/sd-scripts into dev

commit 1a104dc
Author: Kohya S <ykumeykume@gmail.com>
Date:   Sun Jun 9 19:26:36 2024 +0900

    make forward/backward paths the same, ref kohya-ss#1363

commit 58fb648
Author: Kohya S <ykumeykume@gmail.com>
Date:   Sun Jun 9 19:26:09 2024 +0900

    set static graph flag when using DDP, ref kohya-ss#1363
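
A minimal sketch of the static-graph setting this commit refers to (plain PyTorch, not the script's actual wiring; running it requires an initialized process group):

```python
# Illustrative only. static_graph=True tells DDP that the autograd graph
# and the set of used parameters do not change between iterations; this
# avoids unused-parameter errors when a module runs more than once per
# step and enables some communication optimizations.
import torch
from torch.nn.parallel import DistributedDataParallel as DDP

def wrap_model_for_ddp(model: torch.nn.Module, device_id: int) -> DDP:
    return DDP(model, device_ids=[device_id], static_graph=True)
```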

commit 5bfe5e4
Merge: e5bab69 4ecbac1
Author: Kohya S <52813779+kohya-ss@users.noreply.github.com>
Date:   Thu Jun 6 21:23:24 2024 +0900

    Merge pull request kohya-ss#1361 from shirayu/update/github_actions/crate-ci/typos-1.21.0

    Bump crate-ci/typos from 1.19.0 to 1.21.0, fix typos, and update _typos.toml (Close kohya-ss#1307)

commit 4ecbac1
Author: Yuta Hayashibe <yuta@hayashibe.jp>
Date:   Wed Jun 5 16:31:44 2024 +0900

    Bump crate-ci/typos from 1.19.0 to 1.21.0, fix typos, and update _typos.toml (Close kohya-ss#1307)

commit 4dbcef4
Author: Kohya S <ykumeykume@gmail.com>
Date:   Tue Jun 4 21:26:55 2024 +0900

    update for corner cases

commit 321e24d
Merge: e5bab69 3eb27ce
Author: Kohya S <52813779+kohya-ss@users.noreply.github.com>
Date:   Tue Jun 4 19:30:11 2024 +0900

    Merge pull request kohya-ss#1353 from KohakuBlueleaf/train_resume_step

    Resume correct step for "resume from state" feature.
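
A hedged sketch of the resume-step idea using Hugging Face accelerate (stand-in model and data; the actual scripts recover their own step counter from the saved state):

```python
# Illustrative resume: restore state, then skip the batches that were
# already consumed in the interrupted epoch so training continues at the
# correct step instead of replaying data from the start of the epoch.
import torch
from accelerate import Accelerator

accelerator = Accelerator()
model = torch.nn.Linear(4, 4)                      # stand-in model
optimizer = torch.optim.AdamW(model.parameters())  # stand-in optimizer
dataset = torch.utils.data.TensorDataset(torch.randn(64, 4))
dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

# accelerator.load_state("save_state_dir")  # restores weights/optimizer/RNG
steps_done = 3                               # illustrative: read from the state
initial_step = steps_done % len(dataloader)

resumed = accelerator.skip_first_batches(dataloader, initial_step)
for (batch,) in resumed:
    pass  # normal training step goes here
```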

commit e5bab69
Author: Kohya S <ykumeykume@gmail.com>
Date:   Sun Jun 2 21:11:40 2024 +0900

    fix alpha mask without disk cache, closes kohya-ss#1351, ref kohya-ss#1339

commit 3eb27ce
Author: Kohaku-Blueleaf <59680068+KohakuBlueleaf@users.noreply.github.com>
Date:   Fri May 31 12:24:15 2024 +0800

    Skip the final step

commit b2363f1
Author: Kohaku-Blueleaf <59680068+KohakuBlueleaf@users.noreply.github.com>
Date:   Fri May 31 12:20:20 2024 +0800

    Final implementation

commit 0d96e10
Merge: ffce3b5 fc85496
Author: Kohya S <52813779+kohya-ss@users.noreply.github.com>
Date:   Mon May 27 21:41:16 2024 +0900

    Merge pull request kohya-ss#1339 from kohya-ss/alpha-masked-loss

    Alpha masked loss

commit fc85496
Author: Kohya S <ykumeykume@gmail.com>
Date:   Mon May 27 21:25:06 2024 +0900

    update docs for masked loss

commit 2870be9
Merge: 71ad3c0 ffce3b5
Author: Kohya S <ykumeykume@gmail.com>
Date:   Mon May 27 21:08:43 2024 +0900

    Merge branch 'dev' into alpha-masked-loss

commit 71ad3c0
Author: Kohya S <52813779+kohya-ss@users.noreply.github.com>
Date:   Mon May 27 21:07:57 2024 +0900

    Update masked_loss_README-ja.md

    add sample images

commit ffce3b5
Merge: fb12b6d d50c1b3
Author: Kohya S <52813779+kohya-ss@users.noreply.github.com>
Date:   Mon May 27 21:00:46 2024 +0900

    Merge pull request kohya-ss#1349 from rockerBOO/patch-4

    Update issue link

commit a4c3155
Author: Kohya S <ykumeykume@gmail.com>
Date:   Mon May 27 20:59:40 2024 +0900

    add doc for masked loss

commit 58cadf4
Merge: e8cfd4b fb12b6d
Author: Kohya S <ykumeykume@gmail.com>
Date:   Mon May 27 20:02:32 2024 +0900

    Merge branch 'dev' into alpha-masked-loss

commit d50c1b3
Author: Dave Lage <rockerboo@gmail.com>
Date:   Mon May 27 01:11:01 2024 -0400

    Update issue link

commit e8cfd4b
Author: Kohya S <ykumeykume@gmail.com>
Date:   Sun May 26 22:01:37 2024 +0900

    fix cond mask and alpha mask to work together

commit fb12b6d
Merge: febc5c5 00513b9
Author: Kohya S <52813779+kohya-ss@users.noreply.github.com>
Date:   Sun May 26 19:45:03 2024 +0900

    Merge pull request kohya-ss#1347 from rockerBOO/lora-plus-log-info

    Add LoRA+ LR Ratio info message to logger

commit 00513b9
Author: rockerBOO <rockerboo@gmail.com>
Date:   Thu May 23 22:27:12 2024 -0400

    Add LoRA+ LR Ratio info message to logger

commit da6fea3
Author: Kohya S <ykumeykume@gmail.com>
Date:   Sun May 19 21:26:18 2024 +0900

    simplify and update alpha mask to work with various cases

commit f2dd43e
Author: Kohya S <ykumeykume@gmail.com>
Date:   Sun May 19 19:23:59 2024 +0900

    revert kwargs to explicit declaration

commit db67529
Author: u-haru <40634644+u-haru@users.noreply.github.com>
Date:   Sun May 19 19:07:25 2024 +0900

    Add an option to use the image's alpha channel as a mask for the loss (kohya-ss#1223)

    * Add alpha_mask parameter and apply masked loss

    * Fix type hint in trim_and_resize_if_required function

    * Refactor code to use keyword arguments in train_util.py

    * Fix alpha mask flipping logic

    * Fix alpha mask initialization

    * Fix alpha_mask transformation

    * Cache alpha_mask

    * Update alpha_masks to be on CPU

    * Set flipped_alpha_masks to Null if option disabled

    * Check if alpha_mask is None

    * Set alpha_mask to None if option disabled

    * Add description of alpha_mask option to docs

commit febc5c5
Author: Kohya S <ykumeykume@gmail.com>
Date:   Sun May 19 19:03:43 2024 +0900

    update README

commit 4c79812
Author: Kohya S <ykumeykume@gmail.com>
Date:   Sun May 19 19:00:32 2024 +0900

    update README

commit 38e4c60
Merge: e4d9e3c fc37437
Author: Kohya S <52813779+kohya-ss@users.noreply.github.com>
Date:   Sun May 19 18:55:50 2024 +0900

    Merge pull request kohya-ss#1277 from Cauldrath/negative_learning

    Allow negative learning rate

commit e4d9e3c
Author: Kohya S <ykumeykume@gmail.com>
Date:   Sun May 19 17:46:07 2024 +0900

    remove dependency on omegaconf, ref kohya-ss#1284

commit de0e0b9
Merge: c68baae 5cb145d
Author: Kohya S <52813779+kohya-ss@users.noreply.github.com>
Date:   Sun May 19 17:39:15 2024 +0900

    Merge pull request kohya-ss#1284 from sdbds/fix_traincontrolnet

    Fix train controlnet

commit c68baae
Author: Kohya S <ykumeykume@gmail.com>
Date:   Sun May 19 17:21:04 2024 +0900

    add `--log_config` option to enable/disable outputting the training config

commit 47187f7
Merge: e3ddd1f b886d0a
Author: Kohya S <52813779+kohya-ss@users.noreply.github.com>
Date:   Sun May 19 16:31:33 2024 +0900

    Merge pull request kohya-ss#1285 from ccharest93/main

    Hyperparameter tracking

commit e3ddd1f
Author: Kohya S <ykumeykume@gmail.com>
Date:   Sun May 19 16:26:10 2024 +0900

    update README and format code

commit 0640f01
Merge: 2f19175 793aeb9
Author: Kohya S <52813779+kohya-ss@users.noreply.github.com>
Date:   Sun May 19 16:23:01 2024 +0900

    Merge pull request kohya-ss#1322 from aria1th/patch-1

    Accelerate: fix get_trainable_params in controlnet-llite training

commit 2f19175
Author: Kohya S <ykumeykume@gmail.com>
Date:   Sun May 19 15:38:37 2024 +0900

    update README

commit 146edce
Author: Kohya S <ykumeykume@gmail.com>
Date:   Sat May 18 11:05:04 2024 +0900

    support Diffusers' based SDXL LoRA key for inference

commit 153764a
Author: Kohya S <ykumeykume@gmail.com>
Date:   Wed May 15 20:21:49 2024 +0900

    add prompt option '--f' for filename

commit 589c2aa
Author: Kohya S <ykumeykume@gmail.com>
Date:   Mon May 13 21:20:37 2024 +0900

    update README

commit 16677da
Author: Kohya S <ykumeykume@gmail.com>
Date:   Sun May 12 22:15:07 2024 +0900

    fix create_network_from_weights not working

commit a384bf2
Merge: 1c296f7 8db0cad
Author: Kohya S <52813779+kohya-ss@users.noreply.github.com>
Date:   Sun May 12 21:36:56 2024 +0900

    Merge pull request kohya-ss#1313 from rockerBOO/patch-3

    Add caption_separator to output for subset

commit 1c296f7
Merge: e96a521 dbb7bb2
Author: Kohya S <52813779+kohya-ss@users.noreply.github.com>
Date:   Sun May 12 21:33:12 2024 +0900

    Merge pull request kohya-ss#1312 from rockerBOO/patch-2

    Fix caption_separator missing in subset schema

commit e96a521
Merge: 39b82f2 fdbb03c
Author: Kohya S <52813779+kohya-ss@users.noreply.github.com>
Date:   Sun May 12 21:14:50 2024 +0900

    Merge pull request kohya-ss#1291 from frodo821/patch-1

    removed unnecessary `torch` import on line 115

commit 39b82f2
Author: Kohya S <ykumeykume@gmail.com>
Date:   Sun May 12 20:58:45 2024 +0900

    update readme

commit 3701507
Author: Kohya S <ykumeykume@gmail.com>
Date:   Sun May 12 20:56:56 2024 +0900

    raise original error if an error occurs while checking latents

commit 7802093
Merge: 9ddb4d7 040e26f
Author: Kohya S <52813779+kohya-ss@users.noreply.github.com>
Date:   Sun May 12 20:46:25 2024 +0900

    Merge pull request kohya-ss#1278 from Cauldrath/catch_latent_error_file

    Display name of error latent file

commit 9ddb4d7
Author: Kohya S <ykumeykume@gmail.com>
Date:   Sun May 12 17:55:08 2024 +0900

    update readme and help message etc.

commit 8d1b1ac
Merge: 02298e3 64916a3
Author: Kohya S <52813779+kohya-ss@users.noreply.github.com>
Date:   Sun May 12 17:43:44 2024 +0900

    Merge pull request kohya-ss#1266 from Zovjsra/feature/disable-mmap

    Add "--disable_mmap_load_safetensors" parameter

commit 02298e3
Merge: 1ffc0b3 4419041
Author: Kohya S <52813779+kohya-ss@users.noreply.github.com>
Date:   Sun May 12 17:04:58 2024 +0900

    Merge pull request kohya-ss#1331 from kohya-ss/lora-plus

    Lora plus

commit 4419041
Author: Kohya S <ykumeykume@gmail.com>
Date:   Sun May 12 17:01:20 2024 +0900

    update docs etc.

commit 3c8193f
Author: Kohya S <ykumeykume@gmail.com>
Date:   Sun May 12 17:00:51 2024 +0900

    revert lora+ for lora_fa

commit c6a4370
Merge: e01e148 1ffc0b3
Author: Kohya S <ykumeykume@gmail.com>
Date:   Sun May 12 16:18:57 2024 +0900

    Merge branch 'dev' into lora-plus

commit 1ffc0b3
Author: Kohya S <ykumeykume@gmail.com>
Date:   Sun May 12 16:18:43 2024 +0900

    fix typo

commit e01e148
Merge: e9f3a62 7983d3d
Author: Kohya S <ykumeykume@gmail.com>
Date:   Sun May 12 16:17:52 2024 +0900

    Merge branch 'dev' into lora-plus

commit e9f3a62
Merge: 3fd8cdc c1ba0b4
Author: Kohya S <ykumeykume@gmail.com>
Date:   Sun May 12 16:17:27 2024 +0900

    Merge branch 'dev' into lora-plus

commit 7983d3d
Merge: c1ba0b4 bee8cee
Author: Kohya S <52813779+kohya-ss@users.noreply.github.com>
Date:   Sun May 12 15:09:39 2024 +0900

    Merge pull request kohya-ss#1319 from kohya-ss/fused-backward-pass

    Fused backward pass

commit bee8cee
Author: Kohya S <ykumeykume@gmail.com>
Date:   Sun May 12 15:08:52 2024 +0900

    update README for fused optimizer

commit f3d2cf2
Author: Kohya S <ykumeykume@gmail.com>
Date:   Sun May 12 15:03:02 2024 +0900

    update README for fused optimizer

commit 6dbc23c
Merge: 607e041 c1ba0b4
Author: Kohya S <ykumeykume@gmail.com>
Date:   Sun May 12 14:21:56 2024 +0900

    Merge branch 'dev' into fused-backward-pass

commit c1ba0b4
Author: Kohya S <ykumeykume@gmail.com>
Date:   Sun May 12 14:21:10 2024 +0900

    update readme

commit 607e041
Author: Kohya S <ykumeykume@gmail.com>
Date:   Sun May 12 14:16:41 2024 +0900

    chore: Refactor optimizer group

commit 793aeb9
Author: AngelBottomless <aria1th@naver.com>
Date:   Tue May 7 18:21:31 2024 +0900

    fix get_trainable_params in controlnet-llite training

commit b56d5f7
Author: Kohya S <ykumeykume@gmail.com>
Date:   Mon May 6 21:35:39 2024 +0900

    add experimental option to fuse params to optimizer groups

commit 017b82e
Author: Kohya S <ykumeykume@gmail.com>
Date:   Mon May 6 15:05:42 2024 +0900

    update help message for fused_backward_pass

commit 2a359e0
Merge: 0540c33 4f203ce
Author: Kohya S <52813779+kohya-ss@users.noreply.github.com>
Date:   Mon May 6 15:01:56 2024 +0900

    Merge pull request kohya-ss#1259 from 2kpr/fused_backward_pass

    Adafactor fused backward pass and optimizer step, lowers SDXL (@ 1024 resolution) VRAM usage to BF16(10GB)/FP32(16.4GB)
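
The idea behind the fused backward pass, as a minimal sketch (assumes PyTorch >= 2.1 for `register_post_accumulate_grad_hook`; the PR's actual implementation is Adafactor-specific and differs in detail): apply and free each parameter's gradient as soon as it is accumulated, so full-model gradients never coexist in memory.

```python
# Illustrative fused backward pass, not the PR's code. Caveat: this
# pattern bypasses gradient clipping and multi-step gradient
# accumulation unless those are handled inside the hook.
import torch

def fuse_optimizer_into_backward(model: torch.nn.Module, optimizer_cls, **opt_kwargs):
    # One tiny optimizer per parameter; each steps inside backward().
    optimizers = []
    for param in model.parameters():
        if not param.requires_grad:
            continue
        opt = optimizer_cls([param], **opt_kwargs)
        optimizers.append(opt)

        def step_and_free(p: torch.Tensor, opt=opt):
            opt.step()       # apply this parameter's update immediately
            opt.zero_grad()  # free the gradient right away (set to None)

        param.register_post_accumulate_grad_hook(step_and_free)
    return optimizers  # keep references so optimizer state stays alive

model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.Linear(8, 1))
opts = fuse_optimizer_into_backward(model, torch.optim.AdamW, lr=1e-4)
loss = model(torch.randn(4, 8)).mean()
loss.backward()  # parameter updates happen here, one hook at a time
```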

commit 3fd8cdc
Author: Kohya S <ykumeykume@gmail.com>
Date:   Mon May 6 14:03:19 2024 +0900

    fix dylora loraplus

commit 7fe8150
Author: Kohya S <ykumeykume@gmail.com>
Date:   Mon May 6 11:09:32 2024 +0900

    update loraplus on dylora/lora_fa

commit 52e64c6
Author: Kohya S <ykumeykume@gmail.com>
Date:   Sat May 4 18:43:52 2024 +0900

    add debug log

commit 58c2d85
Author: Kohya S <ykumeykume@gmail.com>
Date:   Fri May 3 22:18:20 2024 +0900

    support block dim/lr for sdxl

commit 8db0cad
Author: Dave Lage <rockerboo@gmail.com>
Date:   Thu May 2 18:08:28 2024 -0400

    Add caption_separator to output for subset

commit dbb7bb2
Author: Dave Lage <rockerboo@gmail.com>
Date:   Thu May 2 17:39:35 2024 -0400

    Fix caption_separator missing in subset schema

commit 969f82a
Author: Kohya S <ykumeykume@gmail.com>
Date:   Mon Apr 29 20:04:25 2024 +0900

    move loraplus args from args to network_args, simplify log lr desc

commit 834445a
Merge: 0540c33 68467bd
Author: Kohya S <52813779+kohya-ss@users.noreply.github.com>
Date:   Mon Apr 29 18:05:12 2024 +0900

    Merge pull request kohya-ss#1233 from rockerBOO/lora-plus

    Add LoRA+ support
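
For context, the core of LoRA+ is two learning rates: the LoRA "up" (B) matrices train at base_lr x ratio while everything else stays at base_lr. A hedged sketch of the grouping (naming follows the common `lora_down`/`lora_up` convention; the repository's implementation differs):

```python
# Illustrative LoRA+ param grouping, not the repository's exact code.
import torch

def make_loraplus_param_groups(network: torch.nn.Module, base_lr: float, ratio: float):
    up_params, other_params = [], []
    for name, param in network.named_parameters():
        if not param.requires_grad:
            continue
        (up_params if "lora_up" in name else other_params).append(param)
    return [
        {"params": other_params, "lr": base_lr},       # lora_down, alphas, ...
        {"params": up_params, "lr": base_lr * ratio},  # lora_up trains faster
    ]

# Usage (ratio=16 is the default suggested in the LoRA+ paper):
# optimizer = torch.optim.AdamW(make_loraplus_param_groups(net, 1e-4, 16.0))
```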

commit fdbb03c
Author: frodo821 <sakaic2003@gmail.com>
Date:   Tue Apr 23 14:29:05 2024 +0900

    removed unnecessary `torch` import on line 115

    as per kohya-ss#1290

commit 040e26f
Author: Cauldrath <bnjmnhanes@gmail.com>
Date:   Sun Apr 21 13:46:31 2024 -0400

    Regenerate failed file
    If a latent file fails to load, print out the path and the error, then return False so the file is regenerated
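
A sketch of the pattern this commit and kohya-ss#1278 describe (names illustrative; the scripts cache latents as `.npz` files): catch the load error, name the offending file, and signal the caller to regenerate it:

```python
# Illustrative only: validate a cached latent file, naming it on failure.
import numpy as np

def latent_cache_is_valid(npz_path: str) -> bool:
    try:
        np.load(npz_path)  # raises if the file is missing or corrupt
        return True
    except Exception as e:
        print(f"failed to load latent cache: {npz_path}, error: {e}")
        return False       # caller re-encodes the image and rewrites the cache
```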

commit 5cb145d
Author: 青龍聖者@bdsqlsz <qinglongshengzhe@gmail.com>
Date:   Sat Apr 20 21:56:24 2024 +0800

    Update train_util.py

commit b886d0a
Author: Maatra <ccharest93@hotmail.com>
Date:   Sat Apr 20 14:36:47 2024 +0100

    Cleaned typing to be in line with accelerate hyperparameter type restrictions

commit 4477116
Author: 青龍聖者@bdsqlsz <qinglongshengzhe@gmail.com>
Date:   Sat Apr 20 21:26:09 2024 +0800

    fix train controlnet

commit 2c9db5d
Author: Maatra <ccharest93@hotmail.com>
Date:   Sat Apr 20 14:11:43 2024 +0100

    passing filtered hyperparameters to accelerate
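
A hedged sketch of that filtering (illustrative; in this merge it lands as `train_util.get_sanitized_config_or_none`, visible in the `fine_tune.py` diff below): tracker backends only accept plain scalar config values, so anything else is stringified or dropped before `init_trackers`:

```python
# Illustrative sanitizer for hyperparameter tracking.
def sanitize_config(args) -> dict:
    ok_types = (int, float, str, bool)
    cfg = {}
    for k, v in vars(args).items():
        if v is None or isinstance(v, ok_types):
            cfg[k] = v
        else:
            cfg[k] = str(v)  # e.g. lists and paths become strings
    return cfg
```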

commit fc37437
Author: Cauldrath <bnjmnhanes@gmail.com>
Date:   Thu Apr 18 23:29:01 2024 -0400

    Allow negative learning rate
    This can be used to train away from a group of images you don't want.
    As this moves the model away from a point instead of towards it, the change in the model is unbounded,
    so don't set it too far below zero; -4e-7 seemed to work well.
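
As a toy illustration of why (not the trainer's code): the gradient-descent update is `theta <- theta - lr * grad`, so a negative `lr` reverses every update and the model climbs the loss with no minimum to converge to, which is why the change is unbounded:

```python
# Toy example: a negative learning rate ascends the loss.
import torch

theta = torch.tensor([1.0], requires_grad=True)
loss = (theta - 2.0) ** 2    # toy objective with its minimum at 2.0
loss.backward()

lr = -4e-7                   # the value suggested above
with torch.no_grad():
    theta -= lr * theta.grad # moves theta *away* from 2.0
```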

commit feefcf2
Author: Cauldrath <bnjmnhanes@gmail.com>
Date:   Thu Apr 18 23:15:36 2024 -0400

    Display name of error latent file
    When trying to load stored latents, if an error occurs, this change will tell you which file failed to load.
    Without it, you are only told that something failed, not which file.

commit 64916a3
Author: Zovjsra <4703michael@gmail.com>
Date:   Tue Apr 16 16:40:08 2024 +0800

    add disable_mmap to args

commit 4f203ce
Author: 2kpr <96332338+2kpr@users.noreply.github.com>
Date:   Sun Apr 14 09:56:58 2024 -0500

    Fused backward pass

commit 68467bd
Author: rockerBOO <rockerboo@gmail.com>
Date:   Thu Apr 11 17:33:19 2024 -0400

    Prevent unset or invalid LR from making a param_group

commit 75833e8
Author: rockerBOO <rockerboo@gmail.com>
Date:   Mon Apr 8 19:23:02 2024 -0400

    Fix default LR, Add overall LoRA+ ratio, Add log

    `--loraplus_ratio` added for both TE and UNet
    Add log for lora+

commit 1933ab4
Author: rockerBOO <rockerboo@gmail.com>
Date:   Wed Apr 3 12:46:34 2024 -0400

    Fix default_lr being applied

commit c769160
Author: rockerBOO <rockerboo@gmail.com>
Date:   Mon Apr 1 15:43:04 2024 -0400

    Add LoRA-FA for LoRA+

commit f99fe28
Author: rockerBOO <rockerboo@gmail.com>
Date:   Mon Apr 1 15:38:26 2024 -0400

    Add LoRA+ support
feffy380 committed Jun 21, 2024
1 parent d53b530 commit 0d9a3a4
Showing 30 changed files with 1,412 additions and 341 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/typos.yml
@@ -18,4 +18,4 @@ jobs:
- uses: actions/checkout@v4

- name: typos-action
uses: crate-ci/typos@v1.19.0
uses: crate-ci/typos@v1.21.0
128 changes: 128 additions & 0 deletions README.md

Large diffs are not rendered by default.

2 changes: 2 additions & 0 deletions _typos.toml
@@ -2,6 +2,7 @@
# Instruction: https://github.com/marketplace/actions/typos-action#getting-started

[default.extend-identifiers]
ddPn08="ddPn08"

[default.extend-words]
NIN="NIN"
@@ -27,6 +28,7 @@ rik="rik"
koo="koo"
yos="yos"
wn="wn"
hime="hime"


[files]
57 changes: 57 additions & 0 deletions docs/masked_loss_README-ja.md
@@ -0,0 +1,57 @@
## マスクロスについて

マスクロスは、入力画像のマスクで指定された部分だけ損失計算することで、画像の一部分だけを学習することができる機能です。
たとえばキャラクタを学習したい場合、キャラクタ部分だけをマスクして学習することで、背景を無視して学習することができます。

マスクロスのマスクには、二種類の指定方法があります。

- マスク画像を用いる方法
- 透明度(アルファチャネル)を使用する方法

なお、サンプルは [ずんずんPJイラスト/3Dデータ](https://zunko.jp/con_illust.html) の「AI画像モデル用学習データ」を使用しています。

### マスク画像を用いる方法

学習画像それぞれに対応するマスク画像を用意する方法です。学習画像と同じファイル名のマスク画像を用意し、それを学習画像と別のディレクトリに保存します。

- 学習画像
![image](https://github.com/kohya-ss/sd-scripts/assets/52813779/607c5116-5f62-47de-8b66-9c4a597f0441)
- マスク画像
![image](https://github.com/kohya-ss/sd-scripts/assets/52813779/53e9b0f8-a4bf-49ed-882d-4026f84e8450)

```toml
[[datasets.subsets]]
image_dir = "/path/to/a_zundamon"
caption_extension = ".txt"
conditioning_data_dir = "/path/to/a_zundamon_mask"
num_repeats = 8
```

マスク画像は、学習画像と同じサイズで、学習する部分を白、無視する部分を黒で描画します。グレースケールにも対応しています(127 ならロス重みが 0.5 になります)。なお、正確にはマスク画像の R チャネルが用いられます。

DreamBooth 方式の dataset で、`conditioning_data_dir` で指定したディレクトリにマスク画像を保存してください。ControlNet のデータセットと同じですので、詳細は [ControlNet-LLLite](train_lllite_README-ja.md#データセットの準備) を参照してください。

### 透明度(アルファチャネル)を使用する方法

学習画像の透明度(アルファチャネル)がマスクとして使用されます。透明度が 0 の部分は無視され、255 の部分は学習されます。半透明の場合は、その透明度に応じてロス重みが変化します(127 ならおおむね 0.5)。

![image](https://github.com/kohya-ss/sd-scripts/assets/52813779/0baa129b-446a-4aac-b98c-7208efb0e75e)

※それぞれの画像は透過PNG

学習時のスクリプトのオプションに `--alpha_mask` を指定するか、dataset の設定ファイルの subset で、`alpha_mask` を指定してください。たとえば、以下のようになります。

```toml
[[datasets.subsets]]
image_dir = "/path/to/image/dir"
caption_extension = ".txt"
num_repeats = 8
alpha_mask = true
```

## 学習時の注意事項

- 現時点では DreamBooth 方式の dataset のみ対応しています。
- マスクは latents のサイズ、つまり 1/8 に縮小されてから適用されます。そのため、細かい部分(たとえばアホ毛やイヤリングなど)はうまく学習できない可能性があります。マスクをわずかに拡張するなどの工夫が必要かもしれません。
- マスクロスを用いる場合、学習対象外の部分をキャプションに含める必要はないかもしれません。(要検証)
- `alpha_mask` の場合、マスクの有無を切り替えると latents キャッシュが自動的に再生成されます。
56 changes: 56 additions & 0 deletions docs/masked_loss_README.md
@@ -0,0 +1,56 @@
## Masked Loss

Masked loss is a feature that allows you to train only part of an image by calculating the loss only for the part specified by the mask of the input image. For example, if you want to train a character, you can train only the character part by masking it, ignoring the background.

There are two ways to specify the mask for masked loss.

- Using a mask image
- Using transparency (alpha channel) of the image

The sample uses the "AI image model training data" from [ZunZunPJ Illustration/3D Data](https://zunko.jp/con_illust.html).

### Using a mask image

This is a method of preparing a mask image corresponding to each training image. Prepare a mask image with the same file name as the training image and save it in a different directory from the training image.

- Training image
![image](https://github.com/kohya-ss/sd-scripts/assets/52813779/607c5116-5f62-47de-8b66-9c4a597f0441)
- Mask image
![image](https://github.com/kohya-ss/sd-scripts/assets/52813779/53e9b0f8-a4bf-49ed-882d-4026f84e8450)

```toml
[[datasets.subsets]]
image_dir = "/path/to/a_zundamon"
caption_extension = ".txt"
conditioning_data_dir = "/path/to/a_zundamon_mask"
num_repeats = 8
```

The mask image is the same size as the training image, with the part to be trained drawn in white and the part to be ignored in black. It also supports grayscale (127 gives a loss weight of 0.5). To be precise, the R channel of the mask image is used.

Use a DreamBooth-style dataset, and save the mask images in the directory specified by `conditioning_data_dir`. This is the same as the ControlNet dataset, so please refer to [ControlNet-LLLite](train_lllite_README.md#Preparing-the-dataset) for details.

### Using transparency (alpha channel) of the image

The transparency (alpha channel) of the training image is used as a mask. The part with transparency 0 is ignored, the part with transparency 255 is trained. For semi-transparent parts, the loss weight changes according to the transparency (127 gives a weight of about 0.5).

![image](https://github.com/kohya-ss/sd-scripts/assets/52813779/0baa129b-446a-4aac-b98c-7208efb0e75e)

※Each image is a transparent PNG

Specify `--alpha_mask` in the training script options or specify `alpha_mask` in the subset of the dataset configuration file. For example, it will look like this.

```toml
[[datasets.subsets]]
image_dir = "/path/to/image/dir"
caption_extension = ".txt"
num_repeats = 8
alpha_mask = true
```

## Notes on training

- At the moment, only the dataset in the DreamBooth method is supported.
- The mask is applied after being reduced to 1/8 scale, the size of the latents. Therefore, fine details (such as ahoge or earrings) may not be learned well; slightly dilating the mask may be necessary.
- If using masked loss, the caption may not need to mention the parts that are not being trained. (To be verified)
- In the case of `alpha_mask`, the latents cache is automatically regenerated when the enable/disable state of the mask is switched.
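
A minimal sketch of the weighting described above (not the scripts' exact code; the renormalization at the end is one reasonable choice): the mask is downscaled to latent resolution and multiplies the per-element loss, so gray values scale the weight proportionally:

```python
# Illustrative masked-loss weighting.
import torch
import torch.nn.functional as F

def apply_masked_loss(loss: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    # loss: per-element loss, (B, C, h, w) at latent resolution
    # mask: (B, 1, H, W), values in [0, 1] (alpha or mask-image R channel / 255)
    mask = F.interpolate(mask, size=loss.shape[2:], mode="area")
    loss = loss * mask  # 1.0 trains, 0.0 ignores, gray in between
    return loss.mean() / mask.mean().clamp(min=1e-6)  # renormalize by kept area
```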
13 changes: 9 additions & 4 deletions docs/train_network_README-ja.md
@@ -102,6 +102,8 @@ accelerate launch --num_cpu_threads_per_process 1 train_network.py
* Text Encoderに関連するLoRAモジュールに、通常の学習率(--learning_rateオプションで指定)とは異なる学習率を使う時に指定します。Text Encoderのほうを若干低めの学習率(5e-5など)にしたほうが良い、という話もあるようです。
* `--network_args`
* 複数の引数を指定できます。後述します。
* `--alpha_mask`
* 画像のアルファ値をマスクとして使用します。透過画像を学習する際に使用します。[PR #1223](https://github.com/kohya-ss/sd-scripts/pull/1223)

`--network_train_unet_only` と `--network_train_text_encoder_only` の両方とも未指定時(デフォルト)はText EncoderとU-Netの両方のLoRAモジュールを有効にします。

@@ -181,16 +183,16 @@

詳細は[PR #355](https://github.com/kohya-ss/sd-scripts/pull/355) をご覧ください。

SDXLは現在サポートしていません。

フルモデルの25個のブロックの重みを指定できます。最初のブロックに該当するLoRAは存在しませんが、階層別LoRA適用等との互換性のために25個としています。またconv2d3x3に拡張しない場合も一部のブロックにはLoRAが存在しませんが、記述を統一するため常に25個の値を指定してください。

SDXL では down/up 9 個、middle 3 個の値を指定してください。

`--network_args` で以下の引数を指定してください。

- `down_lr_weight` : U-Netのdown blocksの学習率の重みを指定します。以下が指定可能です。
- ブロックごとの重み : `"down_lr_weight=0,0,0,0,0,0,1,1,1,1,1,1"` のように12個の数値を指定します
- ブロックごとの重み : `"down_lr_weight=0,0,0,0,0,0,1,1,1,1,1,1"` のように12個(SDXL では 9 個)の数値を指定します
- プリセットからの指定 : `"down_lr_weight=sine"` のように指定します(サインカーブで重みを指定します)。sine, cosine, linear, reverse_linear, zeros が指定可能です。また `"down_lr_weight=cosine+.25"` のように `+数値` を追加すると、指定した数値を加算します(0.25~1.25になります)。
- `mid_lr_weight` : U-Netのmid blockの学習率の重みを指定します。`"down_lr_weight=0.5"` のように数値を一つだけ指定します。
- `mid_lr_weight` : U-Netのmid blockの学習率の重みを指定します。`"down_lr_weight=0.5"` のように数値を一つだけ指定します(SDXL の場合は 3 個)
- `up_lr_weight` : U-Netのup blocksの学習率の重みを指定します。down_lr_weightと同様です。
- 指定を省略した部分は1.0として扱われます。また重みを0にするとそのブロックのLoRAモジュールは作成されません。
- `block_lr_zero_threshold` : 重みがこの値以下の場合、LoRAモジュールを作成しません。デフォルトは0です。
@@ -215,6 +217,9 @@ network_args = [ "block_lr_zero_threshold=0.1", "down_lr_weight=sine+.5", "mid_l

フルモデルの25個のブロックのdim (rank)を指定できます。階層別学習率と同様に一部のブロックにはLoRAが存在しない場合がありますが、常に25個の値を指定してください。

SDXL では 23 個の値を指定してください。一部のブロックには LoRA が存在しませんが、`sdxl_train.py` の [階層別学習率](./train_SDXL-en.md) との互換性のためです。
対応は、`0: time/label embed, 1-9: input blocks 0-8, 10-12: mid blocks 0-2, 13-21: output blocks 0-8, 22: out` です。

`--network_args` で以下の引数を指定してください。

- `block_dims` : 各ブロックのdim (rank)を指定します。`"block_dims=2,2,2,2,4,4,4,4,6,6,6,6,8,6,6,6,6,4,4,4,4,2,2,2,2"` のように25個の数値を指定します。
2 changes: 2 additions & 0 deletions docs/train_network_README-zh.md
@@ -101,6 +101,8 @@ LoRA的模型将会被保存在通过`--output_dir`选项指定的文件夹中
* 当在Text Encoder相关的LoRA模块中使用与常规学习率(由`--learning_rate`选项指定)不同的学习率时,应指定此选项。可能最好将Text Encoder的学习率稍微降低(例如5e-5)。
* `--network_args`
* 可以指定多个参数。将在下面详细说明。
* `--alpha_mask`
* 使用图像的 Alpha 值作为遮罩。这在学习透明图像时使用。[PR #1223](https://github.com/kohya-ss/sd-scripts/pull/1223)

当未指定`--network_train_unet_only`和`--network_train_text_encoder_only`时(默认情况),将启用Text Encoder和U-Net的两个LoRA模块。

20 changes: 15 additions & 5 deletions fine_tune.py
@@ -310,7 +310,11 @@ def fn_recursive_set_mem_eff(module: torch.nn.Module):
init_kwargs["wandb"] = {"name": args.wandb_run_name}
if args.log_tracker_config is not None:
init_kwargs = toml.load(args.log_tracker_config)
accelerator.init_trackers("finetuning" if args.log_tracker_name is None else args.log_tracker_name, init_kwargs=init_kwargs)
accelerator.init_trackers(
"finetuning" if args.log_tracker_name is None else args.log_tracker_name,
config=train_util.get_sanitized_config_or_none(args),
init_kwargs=init_kwargs,
)

# For --sample_at_first
train_util.sample_images(accelerator, args, 0, global_step, accelerator.device, vae, tokenizer, text_encoder, unet)
@@ -354,7 +358,9 @@ def fn_recursive_set_mem_eff(module: torch.nn.Module):

# Sample noise, sample a random timestep for each image, and add noise to the latents,
# with noise offset and/or multires noise if specified
noise, noisy_latents, timesteps, huber_c = train_util.get_noise_noisy_latents_and_timesteps(args, noise_scheduler, latents)
noise, noisy_latents, timesteps, huber_c = train_util.get_noise_noisy_latents_and_timesteps(
args, noise_scheduler, latents
)

# Predict the noise residual
with accelerator.autocast():
@@ -368,7 +374,9 @@ def fn_recursive_set_mem_eff(module: torch.nn.Module):

if args.min_snr_gamma or args.scale_v_pred_loss_like_noise_pred or args.debiased_estimation_loss:
# do not mean over batch dimension for snr weight or scale v-pred loss
loss = train_util.conditional_loss(noise_pred.float(), target.float(), reduction="none", loss_type=args.loss_type, huber_c=huber_c)
loss = train_util.conditional_loss(
noise_pred.float(), target.float(), reduction="none", loss_type=args.loss_type, huber_c=huber_c
)
loss = loss.mean([1, 2, 3])

if args.min_snr_gamma:
@@ -380,7 +388,9 @@ def fn_recursive_set_mem_eff(module: torch.nn.Module):

loss = loss.mean() # mean over batch dimension
else:
loss = train_util.conditional_loss(noise_pred.float(), target.float(), reduction="mean", loss_type=args.loss_type, huber_c=huber_c)
loss = train_util.conditional_loss(
noise_pred.float(), target.float(), reduction="mean", loss_type=args.loss_type, huber_c=huber_c
)

accelerator.backward(loss)
if accelerator.sync_gradients and args.max_grad_norm != 0.0:
@@ -471,7 +481,7 @@ def fn_recursive_set_mem_eff(module: torch.nn.Module):

accelerator.end_training()

if is_main_process and (args.save_state or args.save_state_on_train_end):
if is_main_process and (args.save_state or args.save_state_on_train_end):
train_util.save_state_on_train_end(args, accelerator)

del accelerator # この後メモリを使うのでこれは消す
33 changes: 27 additions & 6 deletions finetune/prepare_buckets_latents.py
@@ -11,15 +11,18 @@

import torch
from library.device_utils import init_ipex, get_preferred_device

init_ipex()

from torchvision import transforms

import library.model_util as model_util
import library.train_util as train_util
from library.utils import setup_logging

setup_logging()
import logging

logger = logging.getLogger(__name__)

DEVICE = get_preferred_device()
@@ -89,7 +92,9 @@ def main(args):

# bucketのサイズを計算する
max_reso = tuple([int(t) for t in args.max_resolution.split(",")])
assert len(max_reso) == 2, f"illegal resolution (not 'width,height') / 画像サイズに誤りがあります。'幅,高さ'で指定してください: {args.max_resolution}"
assert (
len(max_reso) == 2
), f"illegal resolution (not 'width,height') / 画像サイズに誤りがあります。'幅,高さ'で指定してください: {args.max_resolution}"

bucket_manager = train_util.BucketManager(
args.bucket_no_upscale, max_reso, args.min_bucket_reso, args.max_bucket_reso, args.bucket_reso_steps
@@ -107,7 +112,7 @@
def process_batch(is_last):
for bucket in bucket_manager.buckets:
if (is_last and len(bucket) > 0) or len(bucket) >= args.batch_size:
train_util.cache_batch_latents(vae, True, bucket, args.flip_aug, False)
train_util.cache_batch_latents(vae, True, bucket, args.flip_aug, args.alpha_mask, False)
bucket.clear()

# 読み込みの高速化のためにDataLoaderを使うオプション
@@ -208,7 +213,9 @@ def setup_parser() -> argparse.ArgumentParser:
parser.add_argument("in_json", type=str, help="metadata file to input / 読み込むメタデータファイル")
parser.add_argument("out_json", type=str, help="metadata file to output / メタデータファイル書き出し先")
parser.add_argument("model_name_or_path", type=str, help="model name or path to encode latents / latentを取得するためのモデル")
parser.add_argument("--v2", action="store_true", help="not used (for backward compatibility) / 使用されません(互換性のため残してあります)")
parser.add_argument(
"--v2", action="store_true", help="not used (for backward compatibility) / 使用されません(互換性のため残してあります)"
)
parser.add_argument("--batch_size", type=int, default=1, help="batch size in inference / 推論時のバッチサイズ")
parser.add_argument(
"--max_data_loader_n_workers",
@@ -231,18 +238,32 @@
help="steps of resolution for buckets, divisible by 8 is recommended / bucketの解像度の単位、8で割り切れる値を推奨します",
)
parser.add_argument(
"--bucket_no_upscale", action="store_true", help="make bucket for each image without upscaling / 画像を拡大せずbucketを作成します"
"--bucket_no_upscale",
action="store_true",
help="make bucket for each image without upscaling / 画像を拡大せずbucketを作成します",
)
parser.add_argument(
"--mixed_precision", type=str, default="no", choices=["no", "fp16", "bf16"], help="use mixed precision / 混合精度を使う場合、その精度"
"--mixed_precision",
type=str,
default="no",
choices=["no", "fp16", "bf16"],
help="use mixed precision / 混合精度を使う場合、その精度",
)
parser.add_argument(
"--full_path",
action="store_true",
help="use full path as image-key in metadata (supports multiple directories) / メタデータで画像キーをフルパスにする(複数の学習画像ディレクトリに対応)",
)
parser.add_argument(
"--flip_aug", action="store_true", help="flip augmentation, save latents for flipped images / 左右反転した画像もlatentを取得、保存する"
"--flip_aug",
action="store_true",
help="flip augmentation, save latents for flipped images / 左右反転した画像もlatentを取得、保存する",
)
parser.add_argument(
"--alpha_mask",
type=str,
default="",
help="save alpha mask for images for loss calculation / 損失計算用に画像のアルファマスクを保存する",
)
parser.add_argument(
"--skip_existing",
1 change: 0 additions & 1 deletion finetune/tag_images_by_wd14_tagger.py
@@ -112,7 +112,6 @@ def main(args):

# モデルを読み込む
if args.onnx:
import torch
import onnx
import onnxruntime as ort
