Skip to content

Commit

fix: make run images to webp, spacing (#270)
innnotruong authored May 23, 2024
1 parent 0471593 commit 7607e98
Showing 9 changed files with 5 additions and 6 deletions.
Binary file removed _memo/assets/ai-eco.png
Binary file not shown.
Binary file added _memo/assets/developing-rapidly-with-generative-ai_ai-eco.webp
Binary file not shown.
Binary file added _memo/assets/developing-rapidly-with-generative-ai_evaluating-prompts.webp
Binary file not shown.
Binary file added _memo/assets/developing-rapidly-with-generative-ai_llm-arch.webp
Binary file not shown.
Binary file added _memo/assets/developing-rapidly-with-generative-ai_llm-building-stages.webp
Binary file not shown.
Binary file removed _memo/assets/evaluating-prompts.png
Binary file not shown.
Binary file removed _memo/assets/llm-arch.png
Binary file not shown.
Binary file removed _memo/assets/llm-building-stages.png
Binary file not shown.
11 changes: 5 additions & 6 deletions _memo/developing-rapidly-with-generative-ai.md
@@ -8,19 +8,18 @@ description: Generative AI overview and the different stages of building an LLM-
authors:
- antran
menu: memo
type: null
type: ogif
hide_frontmatter: false
---
## Generative AI

![AI overview](assets/ai-eco.png)
![AI overview](assets/developing-rapidly-with-generative-ai_ai-eco.webp)

Generative AI is a subset of artificial intelligence that focuses on creating new content, such as images, text, or audio, based on patterns learned from existing data.

## Stages for Building LLM-powered Features

![Stages for Building LLM-powered Features](assets/llm-building-stages.png)

![Stages for Building LLM-powered Features](assets/developing-rapidly-with-generative-ai_llm-building-stages.webp)

### 1. Identify use cases
The first stage is to identify where generative AI can make an impact. Common challenges include:
@@ -43,10 +42,10 @@ Selecting an off-the-shelf LLM to use for the prototype. The general idea is tha

The key step at this stage is to create the right prompt. To do this, a technique known as [AI-assisted evaluation](https://arize.com/blog-course/llm-evaluation-the-definitive-guide/) can help pick the prompts that lead to better-quality outputs, by scoring each candidate prompt against metrics that measure performance.

![How AI-assisted evaluation works](assets/evaluating-prompts.png)
![How AI-assisted evaluation works](assets/developing-rapidly-with-generative-ai_evaluating-prompts.webp)
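
Not part of the memo itself, but as an illustration: below is a minimal Python sketch of AI-assisted evaluation, assuming the OpenAI Python client, a placeholder model name, and hypothetical candidate prompts and test inputs. An evaluator model scores the outputs produced by each candidate prompt, and the highest-scoring prompt is kept.

```python
# Minimal sketch of AI-assisted prompt evaluation (illustrative only).
# Assumptions: the `openai` v1 Python client, OPENAI_API_KEY in the
# environment, and placeholder prompts/tickets/model names.
from openai import OpenAI

client = OpenAI()

candidate_prompts = [
    "Summarize the following support ticket in one sentence: {ticket}",
    "You are a support lead. Write a one-sentence summary of: {ticket}",
]
test_tickets = ["My invoice from May was charged twice.", "App crashes on login."]

def generate(prompt_template: str, ticket: str) -> str:
    # Produce an output for one test input using a candidate prompt.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt_template.format(ticket=ticket)}],
    )
    return resp.choices[0].message.content

def judge(ticket: str, summary: str) -> float:
    # Ask an evaluator model to grade the output on a 1-5 scale.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"Rate 1-5 how faithful and concise this summary is.\n"
                       f"Ticket: {ticket}\nSummary: {summary}\n"
                       "Answer with a single number.",
        }],
    )
    return float(resp.choices[0].message.content.strip())

# Average the evaluator's scores per prompt and keep the best one.
scores = {
    p: sum(judge(t, generate(p, t)) for t in test_tickets) / len(test_tickets)
    for p in candidate_prompts
}
best_prompt = max(scores, key=scores.get)
print(best_prompt, scores[best_prompt])
```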

### 4. Deploying at Scale
![A high-level architecture for an LLM application](assets/llm-arch.png)
![A high-level architecture for an LLM application](assets/developing-rapidly-with-generative-ai_llm-arch.webp)
This involves setting up the infrastructure to handle the expected load, monitoring the system's performance, and ensuring that the feature continues to meet the requirements set in the previous stages. There are two deployment options to consider:

- **Using commercial LLMs**: this gives access to top-notch models without having to set up the tech yourself, but the expenses can add up quickly (a minimal call is sketched below).
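
As an illustration (not from the memo), here is a minimal Python sketch of the commercial-LLM route, assuming the OpenAI Python client; the model name, prompt, and retry policy are placeholders. The point is that deployment largely reduces to an authenticated API call, plus the latency logging and retries you would want when monitoring at scale.

```python
# Illustrative sketch of calling a hosted (commercial) LLM with basic
# retries and latency logging. Assumptions: `openai` v1 client,
# OPENAI_API_KEY in the environment, placeholder model name.
import logging
import time

from openai import OpenAI

logger = logging.getLogger("llm")
client = OpenAI(timeout=30.0)  # per-request timeout for the hosted API

def complete(prompt: str, retries: int = 2) -> str:
    for attempt in range(retries + 1):
        start = time.perf_counter()
        try:
            resp = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{"role": "user", "content": prompt}],
            )
            latency = time.perf_counter() - start
            logger.info("llm_call ok attempt=%d latency=%.2fs", attempt, latency)
            return resp.choices[0].message.content
        except Exception:
            logger.warning("llm_call failed attempt=%d", attempt, exc_info=True)
            if attempt == retries:
                raise
            time.sleep(2 ** attempt)  # simple exponential backoff
```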
