Template for building your own custom ChatGPT style doc search powered by Fresh, Deno, OpenAI, and Supabase.
This starter takes all the `.mdx` files in the `docs` directory and processes them to use as custom context within OpenAI Text Completion prompts.
```bash
cp .env.example .env
```

Set the required env vars as outlined in the file.

```bash
supabase start
deno task embeddings
deno task start
```

This will watch the project directory and restart as necessary.
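Judging from the Actions secrets used for the embeddings workflow, the env file will contain at least something like the following (values are placeholders; `.env.example` is the authoritative list):

```
SUPABASE_URL=http://localhost:54321
SUPABASE_SERVICE_ROLE_KEY=your-service-role-key
OPENAI_KEY=your-openai-api-key
```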
- Create a new project on Supabase
- Link your project: `supabase link --project-ref=your-project-ref`
- Push up the migration: `supabase db push`
We're using a GitHub Action to generate the embeddings whenever we merge into the `main` branch.

- Get `SUPABASE_URL` and `SUPABASE_SERVICE_ROLE_KEY` from your Supabase Studio and set them as Actions secrets in GitHub.
- Set `OPENAI_KEY` as an Actions secret in GitHub.
- Push or merge into `main` to kick off the GitHub Action.
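A minimal workflow for this could look roughly like the sketch below. This is a hypothetical reconstruction built from the env vars and `deno task embeddings` command mentioned in this README; the actual workflow file under `.github/workflows/` is authoritative.

```yaml
# Hypothetical sketch — check the repo's real workflow file.
name: generate-embeddings
on:
  push:
    branches: [main]
jobs:
  embeddings:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: denoland/setup-deno@v1
      - run: deno task embeddings
        env:
          SUPABASE_URL: ${{ secrets.SUPABASE_URL }}
          SUPABASE_SERVICE_ROLE_KEY: ${{ secrets.SUPABASE_SERVICE_ROLE_KEY }}
          OPENAI_KEY: ${{ secrets.OPENAI_KEY }}
```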
These steps show you how to deploy your app close to your users at the edge with Deno Deploy.
- Clone this repository to your GitHub account.
- Sign into Deno Deploy with your GitHub account.
- Select your GitHub organization or user, repository, and branch.
- Select "Automatic" deployment mode and `main.ts` as the entry point.
- Click "Link", which will start the deployment.
- Once the deployment is complete, click "Settings", add the production environment variables, then hit "Save".
Voila, you've got your own custom ChatGPT!
Building your own custom ChatGPT involves four steps:
- [⚡️ GitHub Action] Pre-process the knowledge base (your `.mdx` files in your `docs` folder).
- [⚡️ GitHub Action] Store embeddings in Postgres with pgvector.
- [🏃 Runtime] Perform vector similarity search to find the content that's relevant to the question.
- [🏃 Runtime] Inject content into OpenAI GPT-3 text completion prompt and stream response to the client.
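The similarity search in step 3 boils down to cosine similarity between the query embedding and each stored section embedding. In this template, pgvector performs that in SQL with its distance operators; the sketch below only illustrates the underlying computation (function names are ours, not the template's):

```typescript
// Cosine similarity between two equal-length embedding vectors,
// e.g. the 1536-dimensional vectors returned by OpenAI's embedding API.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank stored sections by similarity to the query embedding, keep the top k.
function topKSections<T extends { embedding: number[] }>(
  query: number[],
  sections: T[],
  k: number,
): T[] {
  return [...sections]
    .sort((x, y) =>
      cosineSimilarity(query, y.embedding) - cosineSimilarity(query, x.embedding)
    )
    .slice(0, k);
}
```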
Steps 1 and 2 happen via a GitHub Action anytime we make changes to the `main` branch. During this time the `generate-embeddings` script is executed, which performs the following tasks:
```mermaid
sequenceDiagram
    participant GitHub Action
    participant DB (pgvector)
    participant OpenAI (API)
    loop 1. Pre-process the knowledge base
        GitHub Action->>GitHub Action: Chunk .mdx files into sections
        loop 2. Create & store embeddings
            GitHub Action->>OpenAI (API): create embedding for page section
            OpenAI (API)->>GitHub Action: embedding vector(1536)
            GitHub Action->>DB (pgvector): store embedding for page section
        end
    end
```
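The chunking step above can be sketched as a function that splits an `.mdx` file into sections at its markdown headings. This is a simplified illustration; the actual `generate-embeddings` script may also strip JSX and enforce token limits:

```typescript
// Split an .mdx document into sections at markdown headings, so that
// each section can be embedded separately. Simplified sketch only.
function chunkMdxIntoSections(mdx: string): { heading: string; content: string }[] {
  const sections: { heading: string; content: string }[] = [];
  let current = { heading: "", content: "" };
  for (const line of mdx.split("\n")) {
    if (/^#{1,6}\s/.test(line)) {
      // A new heading starts a new section; flush the previous one.
      if (current.heading || current.content.trim()) sections.push(current);
      current = { heading: line.replace(/^#{1,6}\s*/, ""), content: "" };
    } else {
      current.content += line + "\n";
    }
  }
  if (current.heading || current.content.trim()) sections.push(current);
  return sections;
}
```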
In addition to storing the embeddings, this script generates a checksum for each of your `.mdx` files and stores this in another database table to make sure the embeddings are only regenerated when the file has changed.
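That change-detection flow can be sketched as below. The real script presumably uses a proper digest (e.g. MD5 or SHA-256); a simple FNV-1a hash stands in here to keep the sketch dependency-free, and `shouldRegenerate` is a hypothetical helper name:

```typescript
// Quick FNV-1a checksum of a file's contents. Stand-in for the
// cryptographic digest the actual script likely uses.
function checksum(contents: string): string {
  let hash = 0x811c9dc5;
  for (let i = 0; i < contents.length; i++) {
    hash ^= contents.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash.toString(16);
}

// Only regenerate an embedding when the stored checksum differs
// (or when no checksum has been stored yet).
function shouldRegenerate(contents: string, storedChecksum: string | null): boolean {
  return checksum(contents) !== storedChecksum;
}
```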
Steps 3 and 4 happen at runtime, anytime the user submits a question. When this happens, the following sequence of tasks is performed:
```mermaid
sequenceDiagram
    participant Client
    participant Edge Function
    participant DB (pgvector)
    participant OpenAI (API)
    Client->>Edge Function: { query: lorem ipsum }
    critical 3. Perform vector similarity search
        Edge Function->>OpenAI (API): create embedding for query
        OpenAI (API)->>Edge Function: embedding vector(1536)
        Edge Function->>DB (pgvector): vector similarity search
        DB (pgvector)->>Edge Function: relevant docs content
    end
    critical 4. Inject content into prompt
        Edge Function->>OpenAI (API): completion request prompt: query + relevant docs content
        OpenAI (API)-->>Client: text/event-stream: completions response
    end
```
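The prompt injection in step 4 can be sketched as assembling the matched docs sections above the user's question before sending the completion request. The wording and function name below are illustrative, not what the template ships with:

```typescript
// Build a text completion prompt that injects the matched docs sections
// as context above the user's question. Wording is illustrative only.
function buildPrompt(query: string, sections: string[]): string {
  const context = sections.join("\n---\n");
  return [
    "Answer the question using only the context below.",
    'If the answer is not in the context, say "Sorry, I don\'t know."',
    "",
    "Context:",
    context,
    "",
    `Question: ${query}`,
    "Answer:",
  ].join("\n");
}
```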
The relevant files for this are the `SearchDialog` (Client) component and the `vector-search` (Edge Function).
The initialization of the database, including the setup of the `pgvector` extension, is stored in the `supabase/migrations` folder, which is automatically applied to your local Postgres instance when running `supabase start`.
- Read the blog post on how we built ChatGPT for the Supabase Docs.
- [Docs] pgvector: Embeddings and vector similarity
- Watch Greg's "How I built this" video on the Rabbit Hole Syndrome YouTube Channel: