
OperateGPT: Revolutionize Your Operations with One-Line Requests

  • Powered by large language models and multi-agent technology, OperateGPT turns a single request into marketing copy, images, and videos, and publishes them to multiple platforms with one click, enabling a rapid transformation of marketing operations.

OperateGPT Process

Supported Platforms

| Operate Platform | Supported | API | Notes |
| ---------------- | ----------- | ----------- | ----- |
| YouTube | Coming soon | Coming soon | |
| Twitter | Coming soon | Coming soon | |
| CSDN | Coming soon | Coming soon | |
| Bilibili | Coming soon | Coming soon | |
| Zhihu | Coming soon | Coming soon | |
| Wechat | Coming soon | Coming soon | |
| Douban | Coming soon | Coming soon | |
| TikTok | Coming soon | Coming soon | |

Supported LLMs

| LLM | Supported | Model Type | Notes |
| --- | --------- | ---------- | ----- |
| ChatGPT | Yes | Proxy | |
| Bard | Yes | Proxy | |
| Claude | Coming soon | Proxy | |
| Vicuna-13b-v1.5 | Coming soon | Local Model | |
| ChatGLM2-6B | Coming soon | Local Model | |
| Qwen-7b-Chat | Coming soon | Local Model | |
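
For the proxy-type models, OperateGPT calls a hosted API rather than a local checkpoint. As a rough illustration only (not OperateGPT's internal client code), a ChatGPT request with the official `openai` Python package (v1.x interface) looks like the sketch below; it assumes `OPEN_AI_KEY` from `.env` is available as an environment variable:

```python
# Illustrative sketch: how a proxy-type LLM (ChatGPT) is typically called.
# This only shows the general pattern, not OperateGPT's actual implementation.
import os
from openai import OpenAI  # requires openai>=1.0

client = OpenAI(api_key=os.environ["OPEN_AI_KEY"])  # key configured in .env

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write a short marketing blurb for MetaGPT."}],
)
print(response.choices[0].message.content)
```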

Supported Embedding Models

| Embedding Model | Supported | Notes |
| --------------- | ----------- | ----- |
| all-MiniLM-L6-v2 | Yes | |
| text2vec-large-chinese | Coming soon | |

Installation

First, download the required models (embedding, image, and video models).

mkdir models && cd models

# Size: 522 MB
git lfs install 
git clone https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2

# [Optional]
# Size: 94 GB, supports running in CPU mode (RAM > 14 GB). Using the stablediffusion-proxy service is recommended: https://github.com/xuyuan23/stablediffusion-proxy
git lfs install 
git clone https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0

# [Optional]
# Size: 16 GB, supports running in CPU mode (RAM > 16 GB). Using the Text2Video service is recommended: https://github.com/xuyuan23/Text2Video
git lfs install
git clone https://huggingface.co/cerspense/zeroscope_v2_576w
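
After downloading, you can sanity-check the embedding model with a short script. This is an optional check, not part of OperateGPT itself; it assumes the `sentence-transformers` package is installed (install it manually if it is not pulled in by `requirements.txt`) and that the model was cloned into `models/all-MiniLM-L6-v2`:

```python
# Optional sanity check: load the local embedding model and embed a sample sentence.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("models/all-MiniLM-L6-v2")  # path cloned above
embedding = model.encode("OperateGPT quick test")
print(embedding.shape)  # all-MiniLM-L6-v2 produces 384-dimensional vectors
```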

Then, install the dependencies and launch the project.

pip install -r requirements.txt

# Copy `.env.template` to a new file `.env`, then modify the parameters in `.env`.
cp .env.template .env 

# [Optional]
# Deploy the local Stable Diffusion service. If the StableDiffusion proxy is used, there is no need to run this step.
python operategpt/providers/stablediffusion.py

# [Optional]
# Deploy the local Text2Video service. If the Text2Video proxy server is used, there is no need to run this step.
python operategpt/providers/text2video.py

python main.py "what is MetaGPT?"

Configuration

  • By default, ChatGPT is used as the LLM, so you need to configure OPEN_AI_KEY in `.env`:
OPEN_AI_KEY=sk-xxx

# If you don't deploy the Stable Diffusion service, no images will be generated.
SD_PROXY_URL=127.0.0.1:7860

# If you don't deploy the Text2Video service, no videos will be generated.
T2V_PROXY_URL=127.0.0.1:7861
  • For more details, see the file `.env.template`
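
As a quick way to verify this configuration before running `main.py`, the sketch below reads the values with `python-dotenv` and checks whether the optional proxies are reachable at the TCP level. It is a diagnostic helper, not part of OperateGPT; the variable names match `.env.template` as shown above.

```python
# Diagnostic sketch: read .env and check whether the optional proxy services are reachable.
import os
import socket
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # loads OPEN_AI_KEY, SD_PROXY_URL, T2V_PROXY_URL from .env

def reachable(addr: str, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to 'host:port' succeeds."""
    host, _, port = addr.partition(":")
    try:
        with socket.create_connection((host, int(port)), timeout=timeout):
            return True
    except (OSError, ValueError):
        return False

print("OPEN_AI_KEY set:", bool(os.getenv("OPEN_AI_KEY")))
for name in ("SD_PROXY_URL", "T2V_PROXY_URL"):
    addr = os.getenv(name)
    # If the corresponding service is not deployed, images/videos are simply skipped.
    print(f"{name} = {addr}:", "reachable" if addr and reachable(addr) else "not reachable")
```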

Display

operateGPT_demo_low.mp4