[RAG-366] Fix Docs Colab links #473

Merged 4 commits on Jun 11, 2024
7 changes: 6 additions & 1 deletion docs/modules/ROOT/pages/quickstart.adoc
@@ -1,6 +1,11 @@
= Quickstart
:navtitle: Quickstart
:page-layout: tutorial
:page-icon-role: bg-[var(--ds-neutral-900)]
:page-toclevels: 1
:keywords: Machine Learning Frameworks, Embedding Services, Data Warehouses, SDK
:page-colab-link: https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/quickstart.ipynb

image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/quickstart.ipynb"]

This quickstart demonstrates a basic RAG pattern using RAGStack and the vector-enabled {db-serverless} database to retrieve context and pass it to a language model for generation.
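The retrieve-then-generate pattern the quickstart describes can be sketched in plain Python. This is a toy, self-contained sketch: the in-memory `store`, the keyword-overlap `retrieve`, and `stub_llm` are hypothetical stand-ins for the Astra DB vector store and chat model the actual notebook uses.

```python
# Minimal retrieve-then-generate sketch; all components are stubs.
store = {
    "RAGStack": "RAGStack is an out-of-the-box RAG solution.",
    "Astra": "Astra DB Serverless supports vector search.",
}

def retrieve(question):
    # Toy retrieval: keyword match instead of vector similarity.
    return [text for key, text in store.items() if key.lower() in question.lower()]

def stub_llm(prompt):
    # Stand-in for a chat model call; reports how much context it was given.
    return f"Answering from {prompt.count('CONTEXT')} context block(s)."

def rag_answer(question):
    # Retrieve context, build a prompt, pass it to the model for generation.
    context = retrieve(question)
    prompt = "\n".join(f"CONTEXT: {c}" for c in context) + f"\nQUESTION: {question}"
    return stub_llm(prompt)

print(rag_answer("What is RAGStack?"))  # → Answering from 1 context block(s).
```

In the real pipeline the `retrieve` step is a vector similarity search against {db-serverless}, but the prompt-assembly shape is the same.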

8 changes: 6 additions & 2 deletions docs/modules/ROOT/pages/what-is-rag.adoc
@@ -1,6 +1,10 @@
= What is RAG?

image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/quickstart.ipynb"]
:navtitle: What is RAG?
:page-layout: tutorial
:page-icon-role: bg-[var(--ds-neutral-900)]
:page-toclevels: 1
:keywords: Machine Learning Frameworks, Embedding Services, Data Warehouses, SDK
:page-colab-link: https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/quickstart.ipynb

Retrieval-Augmented Generation (RAG) is a popular machine learning technique that retrieves prior context from a memory system to construct a prompt that is passed to a model.

10 changes: 6 additions & 4 deletions docs/modules/examples/pages/advanced-rag.adoc
@@ -1,8 +1,10 @@
= Advanced RAG: MultiQuery and ParentDocument

image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/advancedRAG.ipynb"]

This page demonstrates two different advanced RAG techniques: MultiQueryRAG and ParentDocumentRAG.
:navtitle: Advanced RAG: MultiQuery and ParentDocument
:page-layout: tutorial
:page-icon-role: bg-[var(--ds-neutral-900)]
:page-toclevels: 1
:keywords: Machine Learning Frameworks, Embedding Services, Data Warehouses, SDK
:page-colab-link: https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/advancedRAG.ipynb

In *MultiQueryRAG*, an LLM is used to automate the process of prompt tuning, to generate multiple queries from different perspectives for a given user input question.
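That multi-perspective idea can be sketched as follows. This is a toy illustration with a stubbed query rewriter and a keyword retriever; the names `rewrite_queries` and `multi_query_retrieve` are hypothetical, not from the notebook, which uses an LLM to generate the alternative phrasings.

```python
# Toy MultiQuery sketch: rewrite the question from several perspectives,
# retrieve for each rewrite, and merge the deduplicated results.
def rewrite_queries(question):
    # Stand-in for an LLM prompt that asks for alternative phrasings.
    return [question, f"background on {question}", f"details about {question}"]

CORPUS = {
    "doc1": "vector search background",
    "doc2": "details of indexing",
    "doc3": "unrelated cooking tips",
}

def retrieve(query):
    # Toy retrieval: any query word appearing in the document text.
    return [d for d, text in CORPUS.items() if any(w in text for w in query.split())]

def multi_query_retrieve(question):
    seen, merged = set(), []
    for q in rewrite_queries(question):
        for doc in retrieve(q):
            if doc not in seen:   # deduplicate across the query variants
                seen.add(doc)
                merged.append(doc)
    return merged

print(multi_query_retrieve("indexing"))  # → ['doc2', 'doc1']
```

The payoff is visible even in the toy: the original query alone misses `doc1`, but one of the rewrites surfaces it.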

8 changes: 6 additions & 2 deletions docs/modules/examples/pages/colbert.adoc
@@ -1,6 +1,10 @@
= ColBERT in RAGStack with Astra

image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/RAGStackColBERT.ipynb"]
:navtitle: ColBERT in RAGStack with Astra
:page-layout: tutorial
:page-icon-role: bg-[var(--ds-neutral-900)]
:page-toclevels: 1
:keywords: Machine Learning Frameworks, Embedding Services, Data Warehouses, SDK
:page-colab-link: https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/RAGStackColBERT.ipynb

Use ColBERT, Astra DB, and RAGStack to:

8 changes: 6 additions & 2 deletions docs/modules/examples/pages/flare.adoc
@@ -1,6 +1,10 @@
= Forward-Looking Active REtrieval (FLARE)

image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/FLARE.ipynb"]
:navtitle: Forward-Looking Active REtrieval (FLARE)
:page-layout: tutorial
:page-icon-role: bg-[var(--ds-neutral-900)]
:page-toclevels: 1
:keywords: Machine Learning Frameworks, Embedding Services, Data Warehouses, SDK
:page-colab-link: https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/FLARE.ipynb

FLARE is an advanced retrieval technique that combines retrieval and generation in LLMs.
It enhances the accuracy of responses by iteratively predicting the upcoming sentence to anticipate future content when the model encounters a token it is uncertain about.
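The generate-check-retrieve loop can be sketched like this. Everything here is stubbed: real FLARE inspects LLM token log-probabilities, while `generate`, `retrieve`, and the confidence lists below are hypothetical placeholders for illustration only.

```python
# Toy FLARE loop: draft a sentence; if any token's confidence falls below a
# threshold, retrieve using the draft as the query and regenerate.
def generate(prompt):
    # Returns (sentence, per-token confidences) -- stubbed; real FLARE
    # reads these confidences from the model's token logprobs.
    if "Paris" in prompt:
        return "The capital is Paris.", [0.9, 0.9, 0.9, 0.95]
    return "The capital is MASK.", [0.9, 0.9, 0.9, 0.2]

def retrieve(query):
    # Stand-in for a vector-store lookup keyed on the tentative sentence.
    return "France's capital is Paris."

def flare_answer(question, threshold=0.5, max_steps=3):
    prompt = question
    for _ in range(max_steps):
        sentence, confidences = generate(prompt)
        if min(confidences) >= threshold:
            return sentence  # every token is confident enough; stop
        # Low-confidence token found: retrieve with the draft and retry.
        prompt = retrieve(sentence) + " " + question
    return sentence

print(flare_answer("What is the capital of France?"))  # → The capital is Paris.
```

The first draft contains an uncertain token, which triggers a retrieval; the second draft, grounded in the retrieved context, passes the confidence check.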
34 changes: 18 additions & 16 deletions docs/modules/examples/pages/index.adoc
@@ -3,6 +3,7 @@
This section contains examples of how to use RAGStack.
We're actively updating this section, so check back often!


<<langchain-astra,LangChain and {db-serverless}>>

<<llama-astra,LlamaIndex and {db-serverless}>>
@@ -11,40 +12,40 @@ We're actively updating this section, so check back often!

[[langchain-astra]]
.LangChain and Astra DB Serverless
[options="header"]
[cols="3*",options="header"]
|===
| Description | Colab | Documentation

| Perform multi-modal RAG with LangChain, {db-serverless}, and a Google Gemini Pro Vision model.
a| image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/langchain_multimodal_gemini.ipynb"]
| https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/langchain_multimodal_gemini.ipynb[Open in Colab]
| xref:langchain_multimodal_gemini.adoc[]

| Build a simple RAG pipeline using https://catalog.ngc.nvidia.com[NVIDIA AI Foundation Models].
a| image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/nvidia.ipynb"]
| Build a simple RAG pipeline using NVIDIA AI Foundation Models.
| https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/nvidia.ipynb[Open in Colab]
| xref:nvidia_embeddings.adoc[]

| Build a hotels search application with RAGStack and {db-serverless}.
a| image::https://gitpod.io/button/open-in-gitpod.svg[align="left",110,link="https://gitpod.io/#https://github.com/hemidactylus/langchain-astrapy-hotels-app"]
| https://gitpod.io/#https://github.com/hemidactylus/langchain-astrapy-hotels-app[Open in Gitpod]
| xref:hotels-app.adoc[]

| Vector search with the Maximal Marginal Relevance (MMR) algorithm.
a| image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/CassioML/cassio-website/blob/main/docs/frameworks/langchain/.colab/colab_qa-maximal-marginal-relevance.ipynb"]
| https://colab.research.google.com/github/CassioML/cassio-website/blob/main/docs/frameworks/langchain/.colab/colab_qa-maximal-marginal-relevance.ipynb[Open in Colab]
| xref:mmr.adoc[]

| Evaluate a RAG pipeline using LangChain's QA Evaluator.
a| image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/langchain_evaluation.ipynb"]
| https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/langchain_evaluation.ipynb[Open in Colab]
| xref:langchain-evaluation.adoc[]

| Evaluate the response accuracy, token cost, and responsiveness of MultiQueryRAG and ParentDocumentRAG.
a| image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/advancedRAG.ipynb"]
| https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/advancedRAG.ipynb[Open in Colab]
| xref:advanced-rag.adoc[]

| Orchestrate the advanced FLARE retrieval technique in a RAG pipeline.
a| image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/FLARE.ipynb"]
| https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/FLARE.ipynb[Open in Colab]
| xref:flare.adoc[]

| Build a simple RAG pipeline using Unstructured and {db-serverless}.
a| image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/langchain-unstructured-astra.ipynb"]
| https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/langchain-unstructured-astra.ipynb[Open in Colab]
| xref:langchain-unstructured-astra.adoc[]

|===
@@ -56,11 +57,11 @@ a| image::https://colab.research.google.com/assets/colab-badge.svg[align="left",
| Description | Colab | Documentation

| Build a simple RAG pipeline using LlamaIndex and {db-serverless}.
a| image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/llama-astra.ipynb"]
| https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/llama-astra.ipynb[Open in Colab]
| xref:llama-astra.adoc[]

| Build a simple RAG pipeline using LlamaParse and {db-serverless}.
a| image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/llama-parse-astra.ipynb"]
| https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/llama-parse-astra.ipynb[Open in Colab]
| xref:llama-parse-astra.adoc[]

|===
@@ -72,16 +73,17 @@ a| image::https://colab.research.google.com/assets/colab-badge.svg[align="left",
| Description | Colab | Documentation

| Create ColBERT embeddings, index embeddings on Astra, and retrieve embeddings with RAGStack.
a| image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/RAGStackColBERT.ipynb"]
| https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/RAGStackColBERT.ipynb[Open in Colab]
| xref:colbert.adoc[]

| Implement a generative Q&A over your own documentation with {db-serverless} Search, OpenAI, and CassIO.
a| image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/QA_with_cassio.ipynb"]
| https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/QA_with_cassio.ipynb[Open in Colab]
| xref:qa-with-cassio.adoc[]

| Store external or proprietary data in {db-serverless} and query it to provide more up-to-date LLM responses.
a| image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/RAG_with_cassio.ipynb"]
| https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/RAG_with_cassio.ipynb[Open in Colab]
| xref:rag-with-cassio.adoc[]

|===



7 changes: 5 additions & 2 deletions docs/modules/examples/pages/langchain-evaluation.adoc
@@ -1,6 +1,9 @@
= Evaluating RAG Pipelines with LangChain

image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/langchain_evaluation.ipynb"]
:navtitle: Evaluating RAG Pipelines with LangChain
:page-layout: tutorial
:page-icon-role: bg-[var(--ds-neutral-900)]
:page-toclevels: 1
:page-colab-link: https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/langchain_evaluation.ipynb

This notebook demonstrates how to evaluate a RAG pipeline using LangChain's QA Evaluator. This evaluator helps measure the correctness of a response given some context, making it ideally suited for evaluating a RAG pipeline. At the end of this notebook, you will have a measurable QA model using RAG.
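The "measurable QA" idea can be illustrated without an LLM judge. This is a minimal stand-in, not LangChain's QA Evaluator (which grades answers with a model); the `token_f1` helper below is a hypothetical name, and token-overlap F1 is only a rough proxy for answer correctness.

```python
# Grade a generated answer against a reference by token-overlap F1.
def token_f1(prediction, reference):
    pred, ref = prediction.lower().split(), reference.lower().split()
    # Count tokens shared between prediction and reference (with multiplicity).
    common = sum(min(pred.count(t), ref.count(t)) for t in set(pred))
    if common == 0:
        return 0.0
    precision = common / len(pred)
    recall = common / len(ref)
    return 2 * precision * recall / (precision + recall)

print(round(token_f1("Paris", "Paris"), 2))       # → 1.0
print(round(token_f1("Lyon maybe", "Paris"), 2))  # → 0.0
```

Any scorer with this shape (prediction, reference → number) makes a RAG pipeline's quality comparable across runs, which is the point of the evaluation notebook.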

7 changes: 5 additions & 2 deletions docs/modules/examples/pages/langchain-unstructured-astra.adoc
@@ -1,6 +1,9 @@
= RAG with Unstructured.io and {db-serverless}

image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/langchain-unstructured-astra.ipynb"]
:navtitle: RAG with Unstructured.io and {db-serverless}
:page-layout: tutorial
:page-icon-role: bg-[var(--ds-neutral-900)]
:page-toclevels: 1
:page-colab-link: https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/langchain-unstructured-astra.ipynb

Build a RAG pipeline with RAGStack, {db-serverless}, and Unstructured.io.

7 changes: 5 additions & 2 deletions docs/modules/examples/pages/langchain_multimodal_gemini.adoc
@@ -1,6 +1,9 @@
= Multi-modal RAG

image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/langchain_multimodal_gemini.ipynb"]
:navtitle: Multi-modal RAG
:page-layout: tutorial
:page-icon-role: bg-[var(--ds-neutral-900)]
:page-toclevels: 1
:page-colab-link: https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/langchain_multimodal_gemini.ipynb

This notebook demonstrates using LangChain, {db-serverless}, and a Google Gemini Pro Vision model to perform multi-modal Retrieval-Augmented Generation (RAG).

7 changes: 5 additions & 2 deletions docs/modules/examples/pages/llama-astra.adoc
@@ -1,6 +1,9 @@
= RAG with LlamaIndex and {db-serverless}

image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/llama-astra.ipynb"]
:navtitle: RAG with LlamaIndex and {db-serverless}
:page-layout: tutorial
:page-icon-role: bg-[var(--ds-neutral-900)]
:page-toclevels: 1
:page-colab-link: https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/llama-astra.ipynb

Build a RAG pipeline with RAGStack, {db-serverless}, and LlamaIndex.

7 changes: 5 additions & 2 deletions docs/modules/examples/pages/llama-parse-astra.adoc
@@ -1,6 +1,9 @@
= RAG with LlamaParse and {db-serverless}

image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/llama-parse-astra.ipynb"]
:navtitle: RAG with LlamaParse and {db-serverless}
:page-layout: tutorial
:page-icon-role: bg-[var(--ds-neutral-900)]
:page-toclevels: 1
:page-colab-link: https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/llama-parse-astra.ipynb

Build a RAG pipeline with RAGStack, {db-serverless}, and LlamaIndex.

7 changes: 5 additions & 2 deletions docs/modules/examples/pages/mmr.adoc
@@ -1,6 +1,9 @@
= VectorStore QA with MMR

image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/CassioML/cassio-website/blob/main/docs/frameworks/langchain/.colab/colab_qa-maximal-marginal-relevance.ipynb"]
:navtitle: VectorStore QA with MMR
:page-layout: tutorial
:page-icon-role: bg-[var(--ds-neutral-900)]
:page-toclevels: 1
:page-colab-link: https://colab.research.google.com/github/CassioML/cassio-website/blob/main/docs/frameworks/langchain/.colab/colab_qa-maximal-marginal-relevance.ipynb

This page demonstrates using RAGStack and a vector-enabled {db-serverless} database to perform vector search with the *Maximal Marginal Relevance (MMR)* algorithm.
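The MMR selection itself can be sketched in pure Python. This is a toy implementation with hand-rolled cosine similarity over 2-D vectors; the `mmr` helper and its `lam` trade-off parameter are illustrative, not the retriever API the notebook uses.

```python
def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def mmr(query, candidates, k=2, lam=0.5):
    """Greedily pick k indices balancing relevance to the query (weight lam)
    against redundancy with already-selected candidates (weight 1 - lam)."""
    selected = []
    remaining = list(range(len(candidates)))
    while remaining and len(selected) < k:
        def score(i):
            relevance = cosine(query, candidates[i])
            redundancy = max(
                (cosine(candidates[i], candidates[j]) for j in selected),
                default=0.0,
            )
            return lam * relevance - (1 - lam) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

docs = [[1.0, 0.0], [0.9, 0.1], [0.1, 0.9]]
print(mmr([1.0, 0.2], docs, k=2))  # → [1, 2]
```

Pure relevance ranking would return the two near-duplicate documents (indices 1 and 0); MMR trades some relevance for diversity and picks index 2 instead.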

7 changes: 5 additions & 2 deletions docs/modules/examples/pages/nvidia_embeddings.adoc
@@ -1,6 +1,9 @@
= Nvidia Embeddings and Models

image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/nvidia.ipynb"]
:navtitle: Nvidia Embeddings and Models
:page-layout: tutorial
:page-icon-role: bg-[var(--ds-neutral-900)]
:page-toclevels: 1
:page-colab-link: https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/nvidia.ipynb

This notebook demonstrates how to set up a simple RAG pipeline using
https://catalog.ngc.nvidia.com[NVIDIA AI Foundation Models]. At the end
7 changes: 5 additions & 2 deletions docs/modules/examples/pages/qa-with-cassio.adoc
@@ -1,6 +1,9 @@
= Knowledge Base Search on Proprietary Data powered by {db-serverless}

image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/QA_with_cassio.ipynb"]
:navtitle: Knowledge Base Search on Proprietary Data powered by {db-serverless}
:page-layout: tutorial
:page-icon-role: bg-[var(--ds-neutral-900)]
:page-toclevels: 1
:page-colab-link: https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/QA_with_cassio.ipynb

This notebook guides you through setting up RAGStack using https://docs.datastax.com/en/astra-serverless/docs/vector-search/overview.html[{db-serverless} Search], https://platform.openai.com[OpenAI], and https://cassio.org/[CassIO] to implement a generative Q&A over your own documentation.

9 changes: 5 additions & 4 deletions docs/modules/examples/pages/rag-with-cassio.adoc
@@ -1,8 +1,9 @@
= RAGStack with CassIO
:toc: macro
:toc-title:

image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/RAG_with_cassio.ipynb"]
:navtitle: RAGStack with CassIO
:page-layout: tutorial
:page-icon-role: bg-[var(--ds-neutral-900)]
:page-toclevels: 1
:page-colab-link: https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/RAG_with_cassio.ipynb

Large Language Models (LLMs) have a data freshness problem. The most powerful LLMs in the world, like GPT-4, have no idea about recent world events.
