diff --git a/docs/modules/ROOT/pages/quickstart.adoc b/docs/modules/ROOT/pages/quickstart.adoc index 31bfef008..0951595a5 100644 --- a/docs/modules/ROOT/pages/quickstart.adoc +++ b/docs/modules/ROOT/pages/quickstart.adoc @@ -1,6 +1,11 @@ = Quickstart +:navtitle: Quickstart +:page-layout: tutorial +:page-icon-role: bg-[var(--ds-neutral-900)] +:page-toclevels: 1 +:keywords: Machine Learning Frameworks, Embedding Services, Data Warehouses, SDK +:page-colab-link: https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/quickstart.ipynb -image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/quickstart.ipynb"] This quickstart demonstrates a basic RAG pattern using RAGStack and the vector-enabled {db-serverless} database to retrieve context and pass it to a language model for generation. diff --git a/docs/modules/ROOT/pages/what-is-rag.adoc b/docs/modules/ROOT/pages/what-is-rag.adoc index e3a11d5e1..f123cf1c2 100644 --- a/docs/modules/ROOT/pages/what-is-rag.adoc +++ b/docs/modules/ROOT/pages/what-is-rag.adoc @@ -1,6 +1,10 @@ = What is RAG? - -image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/quickstart.ipynb"] +:navtitle: What is RAG? +:page-layout: tutorial +:page-icon-role: bg-[var(--ds-neutral-900)] +:page-toclevels: 1 +:keywords: Machine Learning Frameworks, Embedding Services, Data Warehouses, SDK +:page-colab-link: https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/quickstart.ipynb Retrieval-Augmented Generation (RAG) is a popular machine learning technique that retrieves prior context from a memory system to construct a prompt that is passed to a model. 
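For reviewers, the retrieve-then-generate pattern the quickstart page describes can be sketched in a few lines. This is a toy illustration, not the RAGStack API: the keyword-overlap `retrieve` stands in for a real vector search against {db-serverless}, and all names are invented for this sketch.

```python
def retrieve(question: str, corpus: list[str], k: int = 1) -> list[str]:
    """Toy retriever: rank documents by word overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(question: str, context_docs: list[str]) -> str:
    """Pack retrieved context and the question into a single LLM prompt."""
    context = "\n".join(context_docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

corpus = [
    "RAGStack pairs LangChain with an Astra DB vector store.",
    "Bananas are a good source of potassium.",
]
docs = retrieve("What does RAGStack pair with?", corpus)
prompt = build_prompt("What does RAGStack pair with?", docs)
```

In a real pipeline, the prompt would then be passed to a language model for generation; here it is just a string.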
diff --git a/docs/modules/examples/pages/advanced-rag.adoc b/docs/modules/examples/pages/advanced-rag.adoc index 8e9ed5dfd..a82d8e263 100644 --- a/docs/modules/examples/pages/advanced-rag.adoc +++ b/docs/modules/examples/pages/advanced-rag.adoc @@ -1,8 +1,10 @@ = Advanced RAG: MultiQuery and ParentDocument - -image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/advancedRAG.ipynb"] +:navtitle: Advanced RAG: MultiQuery and ParentDocument +:page-layout: tutorial +:page-icon-role: bg-[var(--ds-neutral-900)] +:page-toclevels: 1 +:keywords: Machine Learning Frameworks, Embedding Services, Data Warehouses, SDK +:page-colab-link: https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/advancedRAG.ipynb This page demonstrates two different advanced RAG techniques: MultiQueryRAG and ParentDocumentRAG. In *MultiQueryRAG*, an LLM is used to automate the process of prompt tuning, to generate multiple queries from different perspectives for a given user input question. 
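The MultiQueryRAG idea described above (an LLM generates several rephrasings of the user's question, and retrieval runs once per variant) can be sketched as follows. `llm` and `retriever` are caller-supplied callables invented for this sketch, not RAGStack or LangChain APIs.

```python
def multi_query_retrieve(question, llm, retriever, n=3):
    """Rephrase the question n ways via the LLM, retrieve for each
    variant, and return the de-duplicated union of results."""
    prompt = (
        f"Rewrite the question below in {n} different ways, one per line:\n"
        f"{question}"
    )
    variants = [question] + [
        line.strip() for line in llm(prompt).splitlines() if line.strip()
    ]
    seen, results = set(), []
    for variant in variants:
        for doc in retriever(variant):
            if doc not in seen:  # keep each document once across variants
                seen.add(doc)
                results.append(doc)
    return results
```

Because each rephrasing can surface documents the original wording misses, the union is typically broader than a single retrieval pass.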
diff --git a/docs/modules/examples/pages/colbert.adoc b/docs/modules/examples/pages/colbert.adoc index 1026bc513..0e20311ca 100644 --- a/docs/modules/examples/pages/colbert.adoc +++ b/docs/modules/examples/pages/colbert.adoc @@ -1,6 +1,10 @@ = ColBERT in RAGStack with Astra - -image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/RAGStackColBERT.ipynb"] +:navtitle: ColBERT in RAGStack with Astra +:page-layout: tutorial +:page-icon-role: bg-[var(--ds-neutral-900)] +:page-toclevels: 1 +:keywords: Machine Learning Frameworks, Embedding Services, Data Warehouses, SDK +:page-colab-link: https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/RAGStackColBERT.ipynb Use ColBERT, Astra DB, and RAGStack to: diff --git a/docs/modules/examples/pages/flare.adoc b/docs/modules/examples/pages/flare.adoc index 07dcf5cc6..12ab052db 100644 --- a/docs/modules/examples/pages/flare.adoc +++ b/docs/modules/examples/pages/flare.adoc @@ -1,6 +1,10 @@ = Forward-Looking Active REtrieval (FLARE) - -image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/FLARE.ipynb"] +:navtitle: Forward-Looking Active REtrieval (FLARE) +:page-layout: tutorial +:page-icon-role: bg-[var(--ds-neutral-900)] +:page-toclevels: 1 +:keywords: Machine Learning Frameworks, Embedding Services, Data Warehouses, SDK +:page-colab-link: https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/FLARE.ipynb FLARE is an advanced retrieval technique that combines retrieval and generation in LLMs. It enhances the accuracy of responses by iteratively predicting the upcoming sentence to anticipate future content when the model encounters a token it is uncertain about. 
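The FLARE control flow described above (draft the next sentence, and when the model is uncertain about it, retrieve using the draft as the query and regenerate with context) can be sketched as a loop. `generate` and `retrieve` are hypothetical stand-ins for a model call with token-confidence scores and a retriever; none of these names come from the FLARE notebook.

```python
def flare_answer(question, generate, retrieve, threshold=0.6, max_steps=8):
    """FLARE-style loop (sketch): grow the answer sentence by sentence,
    retrieving fresh context whenever a drafted sentence is low-confidence."""
    answer, context = "", []
    for _ in range(max_steps):
        sentence, confidence = generate(question, answer, context)
        if sentence is None:        # model signals the answer is complete
            break
        if confidence < threshold:  # uncertain: use the draft to look ahead
            context = retrieve(sentence)
            sentence, confidence = generate(question, answer, context)
        answer += sentence
    return answer
```

The key design point is that retrieval is triggered by the *anticipated* next sentence rather than the original question, which is what distinguishes FLARE from single-shot RAG.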
diff --git a/docs/modules/examples/pages/index.adoc b/docs/modules/examples/pages/index.adoc index 6a514674e..fcf0a0ad7 100644 --- a/docs/modules/examples/pages/index.adoc +++ b/docs/modules/examples/pages/index.adoc @@ -3,6 +3,7 @@ This section contains examples of how to use RAGStack. We're actively updating this section, so check back often! + <> <> @@ -11,40 +12,40 @@ We're actively updating this section, so check back often! [[langchain-astra]] .LangChain and Astra DB Serverless -[options="header"] +[cols="3*",options="header"] |=== | Description | Colab | Documentation | Perform multi-modal RAG with LangChain, {db-serverless}, and a Google Gemini Pro Vision model. -a| image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/langchain_multimodal_gemini.ipynb"] +| https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/langchain_multimodal_gemini.ipynb[Open in Colab] | xref:langchain_multimodal_gemini.adoc[] -| Build a simple RAG pipeline using https://catalog.ngc.nvidia.com[NVIDIA AI Foundation Models]. -a| image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/nvidia.ipynb"] +| Build a simple RAG pipeline using NVIDIA AI Foundation Models. +| https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/nvidia.ipynb[Open in Colab] | xref:nvidia_embeddings.adoc[] | Build a hotels search application with RAGStack and {db-serverless}. 
-a| image::https://gitpod.io/button/open-in-gitpod.svg[align="left",110,link="https://gitpod.io/#https://github.com/hemidactylus/langchain-astrapy-hotels-app"] +| https://gitpod.io/#https://github.com/hemidactylus/langchain-astrapy-hotels-app[Open in Gitpod] | xref:hotels-app.adoc[] | Vector search with the Maximal Marginal Relevance (MMR) algorithm. -a| image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/CassioML/cassio-website/blob/main/docs/frameworks/langchain/.colab/colab_qa-maximal-marginal-relevance.ipynb"] +| https://colab.research.google.com/github/CassioML/cassio-website/blob/main/docs/frameworks/langchain/.colab/colab_qa-maximal-marginal-relevance.ipynb[Open in Colab] | xref:mmr.adoc[] | Evaluate a RAG pipeline using LangChain's QA Evaluator. -a| image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/langchain_evaluation.ipynb"] +| https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/langchain_evaluation.ipynb[Open in Colab] | xref:langchain-evaluation.adoc[] | Evaluate the response accuracy, token cost, and responsiveness of MultiQueryRAG and ParentDocumentRAG. -a| image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/advancedRAG.ipynb"] +| https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/advancedRAG.ipynb[Open in Colab] | xref:advanced-rag.adoc[] | Orchestrate the advanced FLARE retrieval technique in a RAG pipeline. 
-a| image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/FLARE.ipynb"] +| https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/FLARE.ipynb[Open in Colab] | xref:flare.adoc[] | Build a simple RAG pipeline using Unstructured and {db-serverless}. -a| image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/langchain-unstructured-astra.ipynb"] +| https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/langchain-unstructured-astra.ipynb[Open in Colab] | xref:langchain-unstructured-astra.adoc[] |=== @@ -56,11 +57,11 @@ a| image::https://colab.research.google.com/assets/colab-badge.svg[align="left", | Description | Colab | Documentation | Build a simple RAG pipeline using LlamaIndex and {db-serverless}. -a| image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/llama-astra.ipynb"] +| https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/llama-astra.ipynb[Open in Colab] | xref:llama-astra.adoc[] | Build a simple RAG pipeline using LlamaParse and {db-serverless}. 
-a| image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/llama-parse-astra.ipynb"] +| https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/llama-parse-astra.ipynb[Open in Colab] | xref:llama-parse-astra.adoc[] |=== @@ -72,16 +73,17 @@ a| image::https://colab.research.google.com/assets/colab-badge.svg[align="left", | Description | Colab | Documentation | Create ColBERT embeddings, index embeddings on Astra, and retrieve embeddings with RAGStack. -a| image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/RAGStackColBERT.ipynb"] +| https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/RAGStackColBERT.ipynb[Open in Colab] | xref:colbert.adoc[] | Implement a generative Q&A over your own documentation with {db-serverless} Search, OpenAI, and CassIO. -a| image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/QA_with_cassio.ipynb"] +| https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/QA_with_cassio.ipynb[Open in Colab] | xref:qa-with-cassio.adoc[] | Store external or proprietary data in {db-serverless} and query it to provide more up-to-date LLM responses. 
-a| image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/RAG_with_cassio.ipynb"] +| https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/RAG_with_cassio.ipynb[Open in Colab] | xref:rag-with-cassio.adoc[] - |=== +|=== + diff --git a/docs/modules/examples/pages/langchain-evaluation.adoc b/docs/modules/examples/pages/langchain-evaluation.adoc index 0822e6e41..e038df73b 100644 --- a/docs/modules/examples/pages/langchain-evaluation.adoc +++ b/docs/modules/examples/pages/langchain-evaluation.adoc @@ -1,6 +1,9 @@ = Evaluating RAG Pipelines with LangChain - -image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/langchain_evaluation.ipynb"] +:navtitle: Evaluating RAG Pipelines with LangChain +:page-layout: tutorial +:page-icon-role: bg-[var(--ds-neutral-900)] +:page-toclevels: 1 +:page-colab-link: https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/langchain_evaluation.ipynb This notebook demonstrates how to evaluate a RAG pipeline using LangChain's QA Evaluator. This evaluator helps measure the correctness of a response given some context, making it ideally suited for evaluating a RAG pipeline. At the end of this notebook, you will have a measurable QA model using RAG. 
diff --git a/docs/modules/examples/pages/langchain-unstructured-astra.adoc b/docs/modules/examples/pages/langchain-unstructured-astra.adoc index c19dab497..2597e4130 100644 --- a/docs/modules/examples/pages/langchain-unstructured-astra.adoc +++ b/docs/modules/examples/pages/langchain-unstructured-astra.adoc @@ -1,6 +1,9 @@ = RAG with Unstructured.io and {db-serverless} - -image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/langchain-unstructured-astra.ipynb"] +:navtitle: RAG with Unstructured.io and {db-serverless} +:page-layout: tutorial +:page-icon-role: bg-[var(--ds-neutral-900)] +:page-toclevels: 1 +:page-colab-link: https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/langchain-unstructured-astra.ipynb Build a RAG pipeline with RAGStack, {db-serverless}, and Unstructured.io. diff --git a/docs/modules/examples/pages/langchain_multimodal_gemini.adoc b/docs/modules/examples/pages/langchain_multimodal_gemini.adoc index 61a81c895..9c5c7335b 100644 --- a/docs/modules/examples/pages/langchain_multimodal_gemini.adoc +++ b/docs/modules/examples/pages/langchain_multimodal_gemini.adoc @@ -1,6 +1,9 @@ = Multi-modal RAG - -image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/langchain_multimodal_gemini.ipynb"] +:navtitle: Multi-modal RAG +:page-layout: tutorial +:page-icon-role: bg-[var(--ds-neutral-900)] +:page-toclevels: 1 +:page-colab-link: https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/langchain_multimodal_gemini.ipynb This notebook demonstrates using LangChain, {db-serverless}, and a Google Gemini Pro Vision model to perform multi-modal Retrieval-Augmented Generation (RAG). 
diff --git a/docs/modules/examples/pages/llama-astra.adoc b/docs/modules/examples/pages/llama-astra.adoc index b35f5ddd0..02e524bc1 100644 --- a/docs/modules/examples/pages/llama-astra.adoc +++ b/docs/modules/examples/pages/llama-astra.adoc @@ -1,6 +1,9 @@ = RAG with LlamaIndex and {db-serverless} - -image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/llama-astra.ipynb"] +:navtitle: RAG with LlamaIndex and {db-serverless} +:page-layout: tutorial +:page-icon-role: bg-[var(--ds-neutral-900)] +:page-toclevels: 1 +:page-colab-link: https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/llama-astra.ipynb Build a RAG pipeline with RAGStack, {db-serverless}, and LlamaIndex. diff --git a/docs/modules/examples/pages/llama-parse-astra.adoc b/docs/modules/examples/pages/llama-parse-astra.adoc index f064ad6d8..fd5a5514f 100644 --- a/docs/modules/examples/pages/llama-parse-astra.adoc +++ b/docs/modules/examples/pages/llama-parse-astra.adoc @@ -1,6 +1,9 @@ = RAG with LlamaParse and {db-serverless} - -image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/llama-parse-astra.ipynb"] +:navtitle: RAG with LlamaParse and {db-serverless} +:page-layout: tutorial +:page-icon-role: bg-[var(--ds-neutral-900)] +:page-toclevels: 1 +:page-colab-link: https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/llama-parse-astra.ipynb Build a RAG pipeline with RAGStack, {db-serverless}, and LlamaIndex. 
diff --git a/docs/modules/examples/pages/mmr.adoc b/docs/modules/examples/pages/mmr.adoc index f69a47087..d76199868 100644 --- a/docs/modules/examples/pages/mmr.adoc +++ b/docs/modules/examples/pages/mmr.adoc @@ -1,6 +1,9 @@ = VectorStore QA with MMR - -image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/CassioML/cassio-website/blob/main/docs/frameworks/langchain/.colab/colab_qa-maximal-marginal-relevance.ipynb"] +:navtitle: VectorStore QA with MMR +:page-layout: tutorial +:page-icon-role: bg-[var(--ds-neutral-900)] +:page-toclevels: 1 +:page-colab-link: https://colab.research.google.com/github/CassioML/cassio-website/blob/main/docs/frameworks/langchain/.colab/colab_qa-maximal-marginal-relevance.ipynb -This page demonstrates using RAGStack and an vector-enabled {db-serverless} database to perform vector search with the *Maximal Marginal Relevance (MMR)* algorithm. +This page demonstrates using RAGStack and a vector-enabled {db-serverless} database to perform vector search with the *Maximal Marginal Relevance (MMR)* algorithm. diff --git a/docs/modules/examples/pages/nvidia_embeddings.adoc b/docs/modules/examples/pages/nvidia_embeddings.adoc index ce727b66d..f9c713183 100644 --- a/docs/modules/examples/pages/nvidia_embeddings.adoc +++ b/docs/modules/examples/pages/nvidia_embeddings.adoc @@ -1,6 +1,9 @@ = Nvidia Embeddings and Models - -image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/nvidia.ipynb"] +:navtitle: Nvidia Embeddings and Models +:page-layout: tutorial +:page-icon-role: bg-[var(--ds-neutral-900)] +:page-toclevels: 1 +:page-colab-link: https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/nvidia.ipynb This notebook demonstrates how to set up a simple RAG pipeline using https://catalog.ngc.nvidia.com[NVIDIA AI Foundation Models]. 
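The MMR selection rule used by the page above can be sketched in pure Python: each pick maximizes `lambda * sim(query, d) - (1 - lambda) * max_j sim(d, selected_j)`, trading relevance against redundancy. This is an illustrative re-implementation under that standard formula, not the vector store's actual search code.

```python
import math

def cosine(a, b):
    """Cosine similarity of two non-zero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def mmr(query, docs, k=2, lambda_mult=0.5):
    """Greedily pick k document indices by Maximal Marginal Relevance."""
    selected, candidates = [], list(range(len(docs)))
    while candidates and len(selected) < k:
        best, best_score = None, -math.inf
        for i in candidates:
            relevance = cosine(query, docs[i])
            redundancy = max(
                (cosine(docs[i], docs[j]) for j in selected), default=0.0
            )
            score = lambda_mult * relevance - (1 - lambda_mult) * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        candidates.remove(best)
    return selected
```

With `lambda_mult` near 1 the picks are pure nearest neighbors; lowering it penalizes near-duplicates of already-selected documents, which is the diversity effect MMR exists for.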
At the end diff --git a/docs/modules/examples/pages/qa-with-cassio.adoc b/docs/modules/examples/pages/qa-with-cassio.adoc index 7c7539f79..2b2a05b90 100644 --- a/docs/modules/examples/pages/qa-with-cassio.adoc +++ b/docs/modules/examples/pages/qa-with-cassio.adoc @@ -1,6 +1,9 @@ = Knowledge Base Search on Proprietary Data powered by {db-serverless} - -image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/QA_with_cassio.ipynb"] +:navtitle: Knowledge Base Search on Proprietary Data powered by {db-serverless} +:page-layout: tutorial +:page-icon-role: bg-[var(--ds-neutral-900)] +:page-toclevels: 1 +:page-colab-link: https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/QA_with_cassio.ipynb This notebook guides you through setting up RAGStack using https://docs.datastax.com/en/astra-serverless/docs/vector-search/overview.html[{db-serverless} Search], https://platform.openai.com[OpenAI], and https://cassio.org/[CassIO] to implement a generative Q&A over your own documentation. diff --git a/docs/modules/examples/pages/rag-with-cassio.adoc b/docs/modules/examples/pages/rag-with-cassio.adoc index 3a1159584..ed5dc30b5 100644 --- a/docs/modules/examples/pages/rag-with-cassio.adoc +++ b/docs/modules/examples/pages/rag-with-cassio.adoc @@ -1,8 +1,9 @@ = RAGStack with CassIO -:toc: macro -:toc-title: - -image::https://colab.research.google.com/assets/colab-badge.svg[align="left",link="https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/RAG_with_cassio.ipynb"] +:navtitle: RAGStack with CassIO +:page-layout: tutorial +:page-icon-role: bg-[var(--ds-neutral-900)] +:page-toclevels: 1 +:page-colab-link: https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/examples/notebooks/RAG_with_cassio.ipynb Large Language Models (LLMs) have a data freshness problem. 
The most powerful LLMs in the world, like GPT-4, have no idea about recent world events.