
Commit 699af41

Update readme
1 parent 2a81b6e commit 699af41

9 files changed: +91 −71 lines

doc/NeuRIPS.md

+49-15
@@ -2,44 +2,78 @@
## Step One: Launch a Lambda Cloud instance

The hands-on practice requires a Lambda Cloud account and a `1xA100` GPU instance, which costs `$1.10 USD/h`.

If you do not wish to sign up, remember to drop by our photo booth. Our staff will take photos and train a model for you; leave us your email address, and we will send the results within 12 hours.

If you wish to sign up, go to [https://cloud.lambdalabs.com/](https://cloud.lambdalabs.com/) and follow the signup steps.

Once signed in with your Lambda Cloud account, click the `Launch Instance` button.

<img src="./images/lambda_cloud_dashboard.jpg" alt="drawing" style="width:480px;"/>

Lambda Cloud will ask for payment information when you launch your first instance. Just follow the instructions, and be aware that Lambda Cloud will place a __temporary__ `$10 USD` pre-authorization to verify your card, which will disappear within seven days. Once payment information is provided, you can launch an instance. For this workshop:

* Choose a `1xA100` instance (40GB SXM4 and 40GB PCIe are both fine).
* Any region will work.
* You do not need to attach a filesystem.
* Follow the guide to add or generate an SSH key -- this step cannot be skipped. However, this workshop will not use the key, because all practice can be accomplished in the Cloud IDE (so no SSH session is needed).

It takes about two minutes for the instance to start running (shown by the green tick in the picture below).

<img src="./images/lambda_cloud_dashboard_instance_ready.jpg" alt="drawing" style="width:480px;"/>

Click the Cloud IDE `Launch` button (the purple button on the right end) to access the Jupyter Hub. If you see a message saying "Your Jupyter Notebook is down," the Jupyter Hub is not ready yet; give it another minute or so. Once ready, it will look like this:

<img src="./images/lambda_cloud_jupyter_hub.jpg" alt="drawing" style="width:480px;"/>

## Step Two: Download Notebooks

Create a terminal by clicking the `Terminal` icon, and run the following command in the terminal to download a few notebooks to your home directory:

```
wget https://raw.githubusercontent.com/LambdaLabsML/dreambooth/neurips/setup.ipynb && \
wget https://raw.githubusercontent.com/LambdaLabsML/dreambooth/neurips/train.ipynb && \
wget https://raw.githubusercontent.com/LambdaLabsML/dreambooth/neurips/test_param.ipynb && \
wget https://raw.githubusercontent.com/LambdaLabsML/dreambooth/neurips/test_prompt.ipynb
```
Click the refresh button in the `File Browser` (on the left side of the IDE); you should then see `setup.ipynb`, `train.ipynb`, `test_param.ipynb`, and `test_prompt.ipynb`.

<img src="./images/lambda_cloud_dashboard_download_ipynb.jpg" alt="drawing" style="width:480px;"/>

Now you are ready to kick off the DreamBooth practice!
## Step Three: Run Notebooks

### Run `setup.ipynb`

This notebook will clone the DreamBooth repo and install several Python packages needed for this practice. It will also create several folders in the home directory:

* `/home/ubuntu/data`: stores the training photos you upload.
* `/home/ubuntu/model`: where the trained model will be saved.
* `/home/ubuntu/output`: where the sampled images will be saved.
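As a rough sketch (not part of the notebook itself), the folder setup boils down to a few `os.makedirs` calls; the `ensure_dirs` helper name and the `base` parameter are illustrative:

```python
import os

def ensure_dirs(base="/home/ubuntu"):
    """Create the data/model/output folders used by the workshop notebooks.

    `base` is parameterized here for illustration; setup.ipynb works in the
    home directory of the `ubuntu` user.
    """
    paths = {name: os.path.join(base, name) for name in ("data", "model", "output")}
    for path in paths.values():
        os.makedirs(path, exist_ok=True)  # no error if the folder already exists
    return paths
```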
The last step in this notebook will ask for an access token for downloading the Stable Diffusion model from Hugging Face. You need to:

* Create a [Hugging Face](https://huggingface.co/) account if you don't have one.
* Create your access token from "Settings - Access Tokens - New Token," and paste the token into the login field at the end of the notebook (see image below).

<img src="./images/hf_token.jpg" alt="drawing" style="width:480px;"/>

* Accept the [license of the Stable Diffusion v1-4 Model Card](https://huggingface.co/CompVis/stable-diffusion-v1-4) if you agree. (Otherwise you cannot use the model.)

<img src="./images/hf_model_card.jpg" alt="drawing" style="width:480px;"/>
### Upload Images

Upload your training photos to `/home/ubuntu/data`. We recommend preparing ~20 photos: ten close-ups of your face with various poses and facial expressions, five photos from the chest up, and a few full-body shots.
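If you want to double-check your upload before training, a small sketch like the following (a hypothetical helper, not part of the notebooks) counts the image files in the data folder:

```python
import os

def count_photos(data_dir="/home/ubuntu/data", exts=(".jpg", ".jpeg", ".png")):
    """Count the image files sitting in the training-photo folder."""
    return sum(1 for f in os.listdir(data_dir) if f.lower().endswith(exts))
```

With ~20 photos uploaded, `count_photos()` should return a number around 20.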
### Run `train.ipynb`

This notebook trains a DreamBooth model using the images inside `/home/ubuntu/data`.

Once trained, it will also run a few inferences and display the prompts and sampled images at the end of the notebook.
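The display step at the end of the notebook amounts to globbing the output folder for sampled images. A minimal sketch (the `list_samples` helper is illustrative; the default path matches the folder created by `setup.ipynb`):

```python
import glob
import os

def list_samples(pred_dir="/home/ubuntu/output"):
    """Return the sampled .png images written by the inference step, in sorted order."""
    return sorted(glob.glob(os.path.join(pred_dir, "*.png")))
```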
### Run `test_prompt.ipynb` and `test_param.ipynb`

You can use these notebooks to play with the model you just trained.

* `test_prompt.ipynb`: a notebook for prompt engineering. You will use a fixed latent input to run controlled experiments on how prompt engineering affects the model output.

* `test_param.ipynb`: a notebook for trying different inference parameters. Again, you will use a fixed latent input to run controlled experiments on how these parameters affect the model output.
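To give a feel for how these notebooks keep experiments controlled: prompts are built from templates with `__token__`, `__class__`, and `__feature__` placeholders, and the same pre-generated latent tensor is passed to every pipeline call, so only the prompt (or parameter) under study varies between runs. A minimal sketch of the templating, with illustrative values for the token and class strings:

```python
# Illustrative values; the notebooks set these from your own training run.
token_name, class_str, feature_str = "sks", "person", ", detailed face"

def fill_placeholders(template):
    """Substitute the special token, class string, and optional feature string."""
    return (template.replace("__token__", token_name)
                    .replace("__class__", class_str)
                    .replace("__feature__", feature_str))

prompt = fill_placeholders("photo portrait of __token__ __class__ as firefighter__feature__")
print(prompt)  # -> photo portrait of sks person as firefighter, detailed face
```

Because the latents are fixed, any difference between two sampled images is attributable to the prompt or parameter you changed.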

doc/images/lambda_cloud_dashboard.jpg

102 KB
99.3 KB

test_dreambooth.py

+5-7
```diff
@@ -77,13 +77,11 @@ def main():
     tests = {
         "1": ["photo, colorful cinematic portrait of " + token_class_str + ", armor, cyberpunk,background made of brain cells, back light, organic, art by greg rutkowski, ultrarealistic, leica 30mm", args.num_pred_steps, args.guide, "rutkowski"],
         "2": ["pencil sketch portrait of " + token_class_str + " inpired by greg rutkowski, digital art by artgem", args.num_pred_steps, args.guide, "rutkowskiartgem"],
-        "3": ["photo, colorful cinematic portrait of " + token_class_str + ", organic armor, cyberpunk, background brain cells mesh, art by greg rutkowski", args.num_pred_steps, args.guide, "rutkowskibraincells"],
-        "4": ["photo,colorful cinematic portrait of " + token_class_str + ", " + token_class_str + " with long hair, color lights, on stage, ultrarealistic", args.num_pred_steps, args.guide, "longhair"],
-        "5": ["photo, colorful cinematic portrait of " + token_class_str + " with organic armor, cyberpunk background, " + token_class_str + ", greg rutkowski", args.num_pred_steps, args.guide, "cyberpunkrutkowski"],
-        "6": ["photo portrait of " + token_class_str + " astronaut, astronaut, helmet in alien world abstract oil painting, greg rutkowski, detailed face", args.num_pred_steps, args.guide, "astronautrutkowski"],
-        "7": ["photo portrait of " + token_class_str + " as firefighter, helmet, ultrarealistic, leica 30mm", args.num_pred_steps, args.guide, "firefighter"],
-        "8": ["photo portrait of " + token_class_str + " as steampunk warrior, neon organic vines, digital painting", args.num_pred_steps, args.guide, "steampunk"],
-        "9": ["impressionist portrait painting of " + token_class_str + " by Daniel F Gerhartz, (( " + token_class_str + " with painted in an impressionist style)), nature, trees", args.num_pred_steps, args.guide, "danielgerhartz"],
+        "3": ["photo,colorful cinematic portrait of " + token_class_str + ", " + token_class_str + " with long hair, color lights, on stage, ultrarealistic", args.num_pred_steps, args.guide, "longhair"],
+        "4": ["photo portrait of " + token_class_str + " astronaut, astronaut, helmet in alien world abstract oil painting, greg rutkowski, detailed face", args.num_pred_steps, args.guide, "astronautrutkowski"],
+        "5": ["photo portrait of " + token_class_str + " as firefighter, helmet, ultrarealistic, leica 30mm", args.num_pred_steps, args.guide, "firefighter"],
+        "6": ["photo portrait of " + token_class_str + " as steampunk warrior, neon organic vines, digital painting", args.num_pred_steps, args.guide, "steampunk"],
+        "7": ["impressionist portrait painting of " + token_class_str + " by Daniel F Gerhartz, (( " + token_class_str + " with painted in an impressionist style)), nature, trees", args.num_pred_steps, args.guide, "danielgerhartz"],
     }
 
     if args.ddim:
```
8886

8987
if args.ddim:

test.ipynb → test_param.ipynb

File renamed without changes.

test_prompt.ipynb

+22-2
```diff
@@ -103,13 +103,31 @@
     ").half()"
    ]
   },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "6a325c88",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# No additional feature string\n",
+    "feature_str = \"\"\n",
+    "fill_placeholders = lambda x: x.replace(\"__token__\", token_name).replace(\"__class__\", class_str).replace(\"__feature__\", feature_str)\n",
+    "for prompt in prompts:\n",
+    "    prompt = fill_placeholders(prompt)\n",
+    "    image = pipe(prompt, num_inference_steps=num_pred_steps, guidance_scale=guide, latents = latents).images[0]\n",
+    "    print(prompt)\n",
+    "    save_and_display(image, predict_path)"
+   ]
+  },
   {
    "cell_type": "code",
    "execution_count": null,
    "id": "24372f4d-b650-41d1-8717-c0faf4a5c5fa",
    "metadata": {},
    "outputs": [],
    "source": [
+    "# Use \"detailed face\" as the feature string\n",
     "feature_str = \", detailed face\"\n",
     "fill_placeholders = lambda x: x.replace(\"__token__\", token_name).replace(\"__class__\", class_str).replace(\"__feature__\", feature_str)\n",
     "for prompt in prompts:\n",
@@ -126,6 +144,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
+    "# Use \"blue Punk Mohawk\" as the feature string\n",
     "feature_str = \", blue Punk Mohawk\"\n",
     "fill_placeholders = lambda x: x.replace(\"__token__\", token_name).replace(\"__class__\", class_str).replace(\"__feature__\", feature_str)\n",
     "for prompt in prompts:\n",
@@ -142,7 +161,8 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "# Negative prompt\n",
+    "# Negative prompt (the feature you want to get rid off)\n",
+    "# e.g. use \"glasses\" if you want to get rid of the glasses\n",
     "negative_prompt=\"glasses\"\n",
     "feature_str = \", blue Punk Mohawk\"\n",
     "fill_placeholders = lambda x: x.replace(\"__token__\", token_name).replace(\"__class__\", class_str).replace(\"__feature__\", feature_str)\n",
@@ -160,7 +180,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "# repeat special token\n",
+    "# Repeat the special token and class string twice\n",
     "feature_str = \", blue Punk Mohawk\"\n",
     "fill_placeholders = lambda x: x.replace(\"__token__\", token_name + \" __class__, \" + token_name).replace(\"__class__\", class_str).replace(\"__feature__\", feature_str)\n",
     "for prompt in prompts:\n",
```

train.ipynb

+15-47
```diff
@@ -42,7 +42,9 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "# Run this command to train your DreamBooth Model\n",
+    "# Run this command to train your DreamBooth Model and generate a few inference results \n",
+    "\n",
+    "# Training (results will be saved to MODEL_DIR)\n",
     "!(python train_dreambooth.py \\\n",
     "    --config_file config.yaml \\\n",
     "    learning_rate=\"$LR\" \\\n",
@@ -54,68 +56,29 @@
     "    output_dir=\"$MODEL_DIR\" \\\n",
     "    max_train_steps=\"$MAX_NUM_STEPS\" \\\n",
     "    use_tf32=true \\\n",
-    ")"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "id": "bf2124e9-c670-4eb0-8eab-c32c31d95038",
-   "metadata": {},
-   "source": [
-    "# Validate the model with prepared prompts"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "id": "7d6cdc67-44aa-4612-8ff1-abb21e028b43",
-   "metadata": {
-    "scrolled": true,
-    "tags": []
-   },
-   "outputs": [],
-   "source": [
-    "# Results will be saved to PRED_DIR\n",
+    ")\n",
     "\n",
+    "# Inference (results will be saved to PRED_DIR)\n",
     "NUM_PRED=2 # number of predictions per prompt\n",
-    "\n",
     "!( python test_dreambooth.py \\\n",
     "    --model_path $MODEL_DIR \\\n",
     "    --pred_path $PRED_DIR \\\n",
     "    --num_preds $NUM_PRED \\\n",
     "    --ddim \\\n",
-    ")"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "id": "ef07089c-910b-4d97-8c08-708fa9d5580f",
-   "metadata": {
-    "scrolled": true,
-    "tags": []
-   },
-   "outputs": [],
-   "source": [
+    ")\n",
+    "\n",
     "import glob\n",
     "from IPython.display import Image, display\n",
     "for imageName in glob.glob(PRED_DIR +'/*.png'): #assuming JPG\n",
+    "    print(imageName)\n",
     "    display(Image(filename=imageName))\n",
-    "    print(imageName)"
+    "    "
    ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "id": "5f0b5535-8eb0-4eec-ad70-7fca3360d570",
-   "metadata": {},
-   "outputs": [],
-   "source": []
   }
  ],
  "metadata": {
   "kernelspec": {
-   "display_name": "Python 3",
+   "display_name": "Python 3.8.10 64-bit",
    "language": "python",
    "name": "python3"
   },
@@ -130,6 +93,11 @@
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.10"
+  },
+  "vscode": {
+   "interpreter": {
+    "hash": "e7370f93d1d0cde622a1f8e1c04877d8463912d04d973331ad4851f04de6915a"
+   }
   }
  },
 "nbformat": 4,
```
