Find the current version over on Boston Azure AI here: https://github.com/BostonAzureAI/semantic-kernel-dev-workshop
TO BE ARCHIVED
Welcome to the second edition of this workshop (signups are closed).
These instructions will walk you through the workshop labs.
In this workshop, we provide hands-on experience to help you understand how to AI-enable your applications or create new AI-powered services. The toolbox will be Semantic Kernel (examples and labs in C#) and AI Large Language Models (LLMs) running as services in the Azure AI Foundry. We'll start with the basics of Semantic Kernel, move on to implementing RAG patterns using Azure SQL DB's vector search capabilities, and then have a look at building AI Agents.
The workshop day is a mix of explanatory lectures intermingled with hands-on labs.
After this workshop, you will be able to:
- Create a working AI application using Semantic Kernel and backed by Azure AI Foundry services
- Access telemetry data (e.g., token usage stats) available through Semantic Kernel's OpenTelemetry support
- Make tools available to your Semantic Kernel application by creating function prompts and native functions
- Apply semantic searching and other modern AI techniques to integrate custom or proprietary data sources backed by Azure SQL DB vector search (now in public preview)
- Put AI to work for your organization in a more sophisticated way as an AI Agent
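As a preview of where the labs end up, a minimal Semantic Kernel console app backed by Azure AI Foundry looks roughly like this. This is a sketch: the deployment name, endpoint, and API key are placeholders you supply, and it assumes the `Microsoft.SemanticKernel` NuGet package.

```csharp
// Minimal sketch — requires the Microsoft.SemanticKernel NuGet package
// and your own Azure OpenAI deployment name, endpoint, and API key.
using Microsoft.SemanticKernel;

var builder = Kernel.CreateBuilder();
builder.AddAzureOpenAIChatCompletion(
    deploymentName: "gpt-4o",                          // your model deployment name
    endpoint: "https://<your-resource>.openai.azure.com/",
    apiKey: "<your-api-key>");
var kernel = builder.Build();

// Send a one-off prompt to the model and print the reply.
var result = await kernel.InvokePromptAsync("Say hello to the workshop attendees.");
Console.WriteLine(result);
```

The labs build on exactly this pattern, layering on prompt functions, plugins, RAG, and agents.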
Attendees are assumed to have explored using or creating chatbots with LLMs and to have a sense of what a prompt is, what prompt engineering is, and what some of the possibilities are for using LLMs in applications.
We of course hope that you attend one of our in-person workshops and are using this repo to work through the labs with the other attendees. However, we also want you to be able to do the labs at your own pace and revisit them when you are building your own projects in the future. We also recognize you may not be able to attend an in-person event - so we've included everything you need from the workshop in this repo.
- Workshop Intro and Semantic Kernel Overview
- Retrieval Augmented Generation (RAG)
- Agentic AI: Unlocking the Power of Multi-Agent Systems
- Lab 0: Running a trivial Semantic Kernel app where a successful `dotnet run` is proof our APIs are working
- Lab 1: Getting Started with Semantic Kernel
- Lab 2: Creating Semantic Kernel Plugins
- Lab 3: Using WebRetrieverPlugin to create a RAG application
- Lab 4: Creating a RAG application to Search a PDF
- Lab 5: Putting it all together
- Lab 6: Semantic Kernel Agent Lab
- Focus: Accessing APIs and running a simple SK console app.
- Objectives: Get local copies of API keys, run a simple SK console app.
- Additional Exercises: Experiment with different API endpoints.
- Further Ideas: Explore different API authentication methods.
- Focus: Adding Semantic Kernel to an application, using Azure OpenAI, and creating prompt functions.
- Objectives: Demonstrate how to add Semantic Kernel to an existing application, use Semantic Kernel to chat with the Azure OpenAI LLM, define a prompt function and use it in an application, recognize the need for chat history and how to add it.
- Additional Exercises: Experiment with different Temperature values to see their influence.
- Further Ideas: Explore different prompt engineering techniques.
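A prompt function with an explicit `Temperature` setting - the kind of thing Lab 1 has you build and experiment with - can be sketched as follows. The prompt text and settings values are illustrative, and the sketch assumes a configured `Kernel` (as in the Azure OpenAI setup above).

```csharp
// Sketch of a prompt function with execution settings; assumes a configured
// Kernel and the Microsoft.SemanticKernel.Connectors.OpenAI namespace for
// the OpenAIPromptExecutionSettings type.
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

var summarize = kernel.CreateFunctionFromPrompt(
    "Summarize the following text in one sentence:\n{{$input}}",
    new OpenAIPromptExecutionSettings { Temperature = 0.2 }); // lower = more deterministic

var summary = await kernel.InvokeAsync(summarize,
    new KernelArguments { ["input"] = "Semantic Kernel is an SDK that ..." });
Console.WriteLine(summary);
```

Raising `Temperature` toward 1.0 and re-running the same prompt is a quick way to see its influence on output variability.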
- Focus: Creating native plugins and using web search plugins.
- Objectives: Implement a plugin with native C# code, use a plugin to give an LLM additional information, create a plugin that uses an LLM to rewrite a user query, utilize a Semantic Kernel plugin to perform a web search.
- Additional Exercises: Experiment with different plugin functions.
- Further Ideas: Explore different ways to integrate plugins with Semantic Kernel.
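A native plugin is just a C# class whose methods are annotated so the kernel can describe them to the LLM. A minimal sketch (the plugin name and function body are illustrative):

```csharp
// Sketch of a native C# plugin; the attributes come from the
// Microsoft.SemanticKernel package and System.ComponentModel.
using System.ComponentModel;
using Microsoft.SemanticKernel;

public class TimePlugin
{
    [KernelFunction, Description("Gets the current UTC date and time.")]
    public string GetCurrentUtcTime() => DateTime.UtcNow.ToString("R");
}

// Register it so the LLM can discover and call it:
// kernel.Plugins.AddFromType<TimePlugin>("Time");
```

The `Description` text matters: it is what the LLM reads when deciding whether to call the function.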
- Focus: Creating a RAG application using web search results.
- Objectives: Build a plugin to combine the rewriting of a user's query and a web search, write a prompt to perform a basic RAG pattern call to an LLM, implement a simple chatbot loop, demonstrate the usefulness of a RAG implementation.
- Additional Exercises: Experiment with different web search engines.
- Further Ideas: Explore different ways to integrate web search results with Semantic Kernel.
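The core of a basic RAG call is injecting retrieved text into the prompt as grounding context. A hedged sketch - `webRetriever.RetrieveAsync` here is a hypothetical method standing in for the `WebRetrieverPlugin` built in the lab:

```csharp
// Sketch of a basic RAG-style prompt; "RetrieveAsync" is hypothetical,
// standing in for the lab's WebRetrieverPlugin. Assumes a configured Kernel.
string context = await webRetriever.RetrieveAsync(userQuestion); // hypothetical helper

var answer = await kernel.InvokePromptAsync(
    """
    Answer the question using ONLY the context below.
    If the context does not contain the answer, say you don't know.

    Context:
    {{$context}}

    Question: {{$question}}
    """,
    new KernelArguments { ["context"] = context, ["question"] = userQuestion });
Console.WriteLine(answer);
```

The "use ONLY the context" instruction is what keeps the LLM grounded in the retrieved results rather than its training data.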
- Focus: Creating a RAG application to search a PDF using a vector store.
- Objectives: Configure a vector store to use with the application; read, chunk, and ingest a PDF file; implement logic to perform a similarity search on the vector store; create a plugin to perform RAG using the memory store.
- Additional Exercises: Experiment with different PDF files.
- Further Ideas: Explore different ways to integrate PDF search results with Semantic Kernel.
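Before embedding, extracted PDF text has to be split into chunks. One common approach - fixed-size chunks with overlap so sentences straddling a boundary appear in both - can be sketched like this. The sizes are illustrative, and the lab's actual chunking strategy may differ:

```csharp
// A minimal fixed-size chunker with overlap — one common way to split
// extracted PDF text before computing embeddings. Assumes overlap < chunkSize.
static List<string> ChunkText(string text, int chunkSize = 1000, int overlap = 200)
{
    var chunks = new List<string>();
    for (int start = 0; start < text.Length; start += chunkSize - overlap)
    {
        int length = Math.Min(chunkSize, text.Length - start);
        chunks.Add(text.Substring(start, length));
        if (start + length >= text.Length) break; // reached the end of the text
    }
    return chunks;
}
```

Each chunk is then embedded (e.g., with text-embedding-ada-002) and stored in the vector store for similarity search.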
- Focus: Integrating all previous labs and adding logging and user intent determination.
- Objectives: Use filters to add logging and understand the call flows, have the LLM determine which plugin functions to call, create a plugin to determine the user's intent, dynamically control the functions available to the LLM depending on the user's intent.
- Additional Exercises: Experiment with different logging techniques.
- Further Ideas: Explore different ways to integrate logging and user intent determination with Semantic Kernel.
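Semantic Kernel's filter interfaces let you observe every function invocation, which is how the lab adds logging around call flows. A sketch using `IFunctionInvocationFilter` (the log messages are illustrative):

```csharp
// Sketch of a function-invocation filter for logging, using Semantic Kernel's
// IFunctionInvocationFilter interface from the Microsoft.SemanticKernel package.
using Microsoft.SemanticKernel;

public class LoggingFilter : IFunctionInvocationFilter
{
    public async Task OnFunctionInvocationAsync(
        FunctionInvocationContext context,
        Func<FunctionInvocationContext, Task> next)
    {
        Console.WriteLine($"Invoking {context.Function.PluginName}.{context.Function.Name}");
        await next(context); // run the actual function (or the next filter)
        Console.WriteLine($"Result: {context.Result}");
    }
}

// Registration on the kernel builder's service collection:
// builder.Services.AddSingleton<IFunctionInvocationFilter, LoggingFilter>();
```

Because the filter wraps `next`, it sees both automatic function calls chosen by the LLM and explicit invocations from your code.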
- Focus: Building agents with Semantic Kernel.
- Objectives: Create an agent with reasoning capabilities to solve domain-specific requests, build an agent with skills to get the current weather of a city by calling a public API, create a team of agents to collaboratively solve more complex problems.
- Additional Exercises: Experiment with different agent skills.
- Further Ideas: Explore different ways to integrate agents with Semantic Kernel.
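A single agent in Semantic Kernel pairs a persona (instructions) with a kernel that carries its tools. A hedged sketch using `ChatCompletionAgent` from the `Microsoft.SemanticKernel.Agents` packages - the name, instructions, and weather plugin are illustrative:

```csharp
// Sketch of a ChatCompletionAgent; assumes the Microsoft.SemanticKernel.Agents
// packages and a kernel already configured with a (hypothetical) weather plugin.
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Agents;

var weatherAgent = new ChatCompletionAgent
{
    Name = "WeatherAgent",
    Instructions = "You answer questions about current weather. " +
                   "Use the available weather tools to look up real data.",
    Kernel = kernel // a kernel with a weather-API plugin registered
};
```

The multi-agent portion of the lab composes several such agents so they can collaborate on problems none could solve alone.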
Please install this software ahead of the workshop:
- VS Code (Windows, Mac, Linux) or Visual Studio (Windows). See More information on VS Code and VS setup
- LLM API credentials from Azure AI Foundry (formerly known as Azure AI or Azure OpenAI). The labs use both GPT-4o and text-embedding-ada-002 models.
NOTE: For simplicity, we plan to provide credentials for Azure OpenAI services to use during the workshop, after which they will stop working.
- Database Connection string to a vector search-enabled Azure SQL DB
NOTE: For simplicity, we plan to provide credentials for an Azure SQL database to use during the workshop, after which they will stop working.
You are also welcome to create your own resources and use them in the workshop. Since you are paying for them, you can decide when to decommission the associated resources. See Instructions for creating LLM Connection String
- We may recommend additional tools, VS Code extensions, NuGet packages, or code samples as part of the workshop experience.
- Semantic Kernel
- Single Page Semantic Kernel for C# - 2024 .NET version 1.0 concept-to-code mapping
- There is a generic Bing Text Search feature available in Azure.
Bill is an AI consultant with significant Azure and cybersecurity expertise. He is an engaged member of the local tech community as a Boston Azure AI co-organizer, sporadic public speaker, and occasional blogger.
Microsoft MVP. CISSP. Author of the book Cloud Architecture Patterns.
Connect on LinkedIn, Twitter/X, or Bluesky.
Jason is an independent Full Stack Solution Architect with a deep focus on Azure and .NET technologies. He is currently focused on helping customers integrate Gen AI functionality into their .NET applications.
LinkedIn | Twitter | Jason's Blog | Email
Juan Pablo is an AI Partner Solution Architect working with Microsoft, focusing on AI solutions with the Global Software Company partners group. He has more than 24 years of experience in the tech industry.