This is the official repository for the OpenJourney Discord bot.
OpenJourney is a Discord bot that lets you generate your own images using Stable Diffusion.
To use OpenJourney, you need a Discord account. If you don't have one, you can create one here.
Once you have a Discord account, you can invite OpenJourney to your server by clicking here (works for all servers for 1 week from publishing).
After adding the bot, just type /help to get started.
Or go to the official guide page in Notion: OpenJourney Guide.
If you want to contribute to OpenJourney, you can fork this repository and make a pull request. If you want to add a new feature, please open an issue first. Name your forked repository OpenJourney-discord-&lt;feature&gt;. For example, if you want to add a new command, name your forked repository OpenJourney-discord-new-command.
- Create a Discord Application & Bot, Invite to your server
- Clone the repository
- Install NVIDIA Runtime
- Install Docker and docker-compose (optional)
- Install Nvidia Docker
- Set up the environment
- Build the image
- Run the container
- Go to the Discord Developer Portal and create a new application
- Go to the Bot tab and create a new bot
- Go to the OAuth2 tab and select the bot scope
- Select the permissions you want to give to the bot
- Copy the link and paste it in your browser
- Select the server you want to add the bot to
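The link generated by the OAuth2 tab typically has the following shape; the client_id and permissions values shown here are placeholders for your own application ID and the permissions integer you selected:

```
https://discord.com/api/oauth2/authorize?client_id=<YOUR_APPLICATION_ID>&permissions=<PERMISSIONS_INTEGER>&scope=bot
```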
Install git and curl if you don't have them:
sudo apt install git curl
git clone https://github.com/Ar4ikov/OpenJourney-discord.git
cd OpenJourney-discord
Install it as you would for any other project (Windows, Linux, macOS).
Here is a simple example of installing it with Conda on Linux:
conda install cudatoolkit=11.6 -c nvidia
sudo apt update && sudo apt install docker.io docker-compose
Or install Docker (docker-ce) with the convenience script:
curl https://get.docker.com | sh \
&& sudo systemctl --now enable docker
Link to: NVIDIA Docker
- Set up the package repository and the GPG key
distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
&& curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
&& curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
- Update the package repository
sudo apt-get update
- Install nvidia-docker2
sudo apt-get install -y nvidia-docker2
- Change the default runtime to nvidia
sudo nano /etc/docker/daemon.json
It should look like this:
{
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
- Restart the Docker daemon
sudo systemctl restart docker
At this point, you should be able to run GPU-enabled containers.
sudo docker run --rm --gpus all nvidia/cuda:11.6.0-base-ubuntu20.04 nvidia-smi
This should produce output similar to the following:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.06    Driver Version: 450.51.06    CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            On   | 00000000:00:1E.0 Off |                    0 |
| N/A   34C    P8     9W /  70W |      0MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
Environment variable | Description | Default value
---|---|---
DISCORD_TOKEN | Discord bot token | None
GUILD_ID | Discord server ID; -1 to sync commands globally | -1
SD_MODEL_ID_1 | Stable Diffusion model ID | dreamlike-art/dreamlike-photoreal-2.0
GPT_MODEL_ID | GPT-2 model ID for Magic Prompt generation | Ar4ikov/gpt2-650k-stable-diffusion-prompt-generator
NUM_GPUS | Number of GPUs to use | 1
NUM_THREADS_PER_GPU | Number of threads per GPU | 2
NSFW_GENERATE | Allow generating NSFW image content | True
You can use multiple Stable Diffusion models: just add SD_MODEL_ID_2, SD_MODEL_ID_3, and so on.
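Assuming the variable names from the table above, a filled-in .env might look like the sketch below; the token is a placeholder, and the second model ID is an arbitrary example, not a recommendation:

```shell
# Discord credentials (placeholders — replace with your own values)
DISCORD_TOKEN=your-bot-token-here
GUILD_ID=-1                      # -1 syncs slash commands globally

# Models
SD_MODEL_ID_1=dreamlike-art/dreamlike-photoreal-2.0
SD_MODEL_ID_2=dreamlike-art/dreamlike-diffusion-1.0   # optional extra model
GPT_MODEL_ID=Ar4ikov/gpt2-650k-stable-diffusion-prompt-generator

# Hardware
NUM_GPUS=1
NUM_THREADS_PER_GPU=2
NSFW_GENERATE=True
```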
cp .env_example .env
nano .env
source .env && docker-compose build
source .env && docker-compose up -d
docker-compose down
For every GPU running with FP16, expect roughly:
- ±16-20 GB of RAM (per 1 GPU & 2 threads)
- ±2 GB of VRAM in the background (per 1 GPU & 2 threads)
- ±6 GB of VRAM per thread in the active stage (per 1 GPU & 1 thread)
- ±12 GB of VRAM in the active stage (per 1 GPU & 2 threads)
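As a quick sanity check, the figures above can be turned into a back-of-the-envelope shell calculation; the constants are the rough estimates from the list, not measured values:

```shell
#!/bin/sh
# Rough peak-resource estimate from the figures above (a sketch, not exact).
NUM_GPUS=1
NUM_THREADS_PER_GPU=2

# ~2 GB of background VRAM per GPU, ~6 GB per active thread (FP16)
VRAM_GB=$(( NUM_GPUS * (2 + 6 * NUM_THREADS_PER_GPU) ))

# ~16-20 GB of RAM per GPU with 2 threads; take the upper bound
RAM_GB=$(( NUM_GPUS * 20 ))

echo "Estimated peak VRAM: ${VRAM_GB} GB, RAM: ${RAM_GB} GB"
# prints: Estimated peak VRAM: 14 GB, RAM: 20 GB
```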