Dive deep into the world of generative AI foundation models, exploring their transformative potential across scientific disciplines through a hands-on, accessible approach.
By the end of this workshop, participants will:
- Develop a comprehensive understanding of generative AI foundation models
- Acquire practical skills for integrating AI technologies into research workflows
- Demonstrate proficiency in prompt engineering across multiple disciplines
- Critically evaluate and apply multimodal AI tools to complex research challenges
- Build confidence in navigating and deploying AI technologies
- Create innovative research approaches using generative AI methodologies
Prompt Engineering
- Crafting precise, context-specific prompts
- Extracting maximum value from foundation models
- Developing discipline-specific interaction strategies

Model Deployment
- Understanding AI model architectures
- Managing computational resources
- Scaling AI applications from local to HPC environments

Multimodal Integration
- Integrating text-, image-, and data-based models
- Developing cross-modal research techniques
- Solving interdisciplinary research challenges

Practical Implementation
- Implementing AI tools in research workflows
- Applying performance optimization techniques
- Handling model limitations and biases

AI-Assisted Coding
- Utilizing AI for research code development
- Debugging and improving computational methods
- Automating repetitive research tasks

Ethics and Responsible Use
- Recognizing and mitigating AI biases
- Ensuring research integrity
- Using AI responsibly across disciplines

High-Performance Computing
- Deploying AI models in advanced computing environments
- Managing and optimizing resources
- Scaling computational research capabilities
Foundation Models Overview
- Explore large language models and multimodal AI systems
- Examine key models: GPT, BERT, DALL-E, Stable Diffusion
- Analyze model architectures, capabilities, and limitations
- Understand transfer learning and model adaptability

Why Generative AI for Research
- Bridging interdisciplinary research challenges
- Transforming data analysis and hypothesis generation
- Expanding computational research capabilities
- Democratizing advanced AI technologies

Prompt Engineering Techniques
- Crafting effective prompts across disciplines
- Extracting maximum value from foundation models
- Developing discipline-specific interaction strategies
- Handling complex research queries

Multimodal Research Techniques
- Integrating text-, image-, and data-based models
- Applying cross-modal research techniques
- Practical implementation strategies
- Solving interdisciplinary research challenges

Ethics and Integrity
- Understanding model biases
- Ensuring research integrity
- Responsible AI deployment
- Ethical considerations in AI-assisted research

Deployment and Scaling
- Local to high-performance computing deployments
- Resource management strategies
- Scaling AI model applications
- Performance optimization techniques

Who Should Attend
- Graduate students across all disciplines
- Researchers seeking AI integration
- Academics exploring computational technologies
- Interdisciplinary innovation seekers

Expected Outcomes
- Confident foundation model utilization
- Advanced research methodology skills
- Computational thinking transformation
- Practical AI deployment capabilities
Empowering researchers to leverage generative AI as a powerful, flexible research companion across scientific domains.
Instructors: Nick Eddy / Carlos Lizárraga / Enrique Noriega
- Register to attend in person or online.
- When: Thursdays at 1 PM
- Where: Albert B. Weaver Science-Engineering Library, Room 212

(Program subject to change.)
Calendar
Date | Title | Topic Description | Instructor
---|---|---|---
01/30/2025 | Scaling up Ollama: Local, CyVerse, HPC | In this hands-on workshop, participants will learn to deploy and scale large language models with Ollama across computational environments, from laptops to supercomputing clusters. | Enrique Noriega
02/06/2025 | Using AI Verde | A practical introduction to using U of A Generative AI Verde for academic research, writing, and problem-solving. Participants will learn to harness AI Verde's capabilities while gaining a clear understanding of its limitations and ethical implications. | Nick Eddy
02/13/2025 | Best Practices of Prompt Engineering Using AI Verde | A hands-on session teaching practical prompt engineering techniques to optimize U of A Generative AI Verde's performance for academic and professional applications. | Nick Eddy
02/20/2025 | Quick RAG Application Using AI Verde / HPC | A hands-on session demonstrating how to build a basic Retrieval-Augmented Generation (RAG) system with the U of A Generative AI Verde API. Participants will learn to enhance AI responses by integrating custom knowledge bases. | Enrique Noriega
02/27/2025 | Multimodal Q&A + OCR in AI Verde | A hands-on technical session exploring U of A Generative AI Verde's multimodal capabilities, combining vision and text processing for enhanced document analysis and automated question answering with OCR. | Nick Eddy
03/06/2025 | SQL-Specialized Query Code Generation | A hands-on session teaching participants how to use large language models to craft, optimize, and validate complex SQL queries, emphasizing real-world database operations and industry best practices. | Enrique Noriega
03/13/2025 | No Session | Spring Break |
03/20/2025 | Function Calling with LLMs | Function calling can be implemented with open-source LLMs in two ways: natively, when the model supports it, or, when it does not, by combining prompt engineering, fine-tuning, and constrained decoding. | Enrique Noriega
03/27/2025 | Code Generation Assistants | Large language models now serve as powerful code generation assistants, streamlining software development. They generate code snippets, propose solutions, and translate code between programming languages. | Nick Eddy
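As a small preview of the prompt engineering session, structured prompts are often assembled from role, task, context, and output-format parts. The sketch below is illustrative only; the function and field names are our own and are not part of AI Verde's API.

```python
def build_prompt(role: str, task: str, context: str, output_format: str) -> str:
    """Assemble a structured prompt from role/task/context/format parts."""
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Respond as: {output_format}"
    )

# Example: a prompt a participant might send to a chat model.
prompt = build_prompt(
    role="a data science tutor",
    task="explain overfitting in two sentences",
    context="audience: first-year graduate students",
    output_format="plain text",
)
print(prompt)
```

Keeping the parts separate makes it easy to vary one element (say, the audience in the context) while holding the rest of the prompt fixed.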
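The RAG session builds on a simple idea: retrieve relevant passages, then prepend them to the prompt so the model answers from supplied context. A minimal sketch, with naive word-overlap scoring standing in for the embedding search a real system (or the AI Verde API) would use:

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (a toy stand-in
    for embedding-based similarity search)."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_rag_prompt(query: str, documents: list[str]) -> str:
    """Prepend retrieved passages so the model answers from them."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Ollama runs large language models locally.",
    "The HPC cluster schedules jobs in batches.",
    "RAG augments prompts with retrieved passages.",
]
print(build_rag_prompt("how does rag work", docs))
```

The resulting prompt would then be sent to an LLM; swapping the overlap score for vector similarity over a custom knowledge base is the step the workshop covers.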
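The workaround described in the function-calling session, prompting a model without native tool support to emit a JSON tool call that your code then executes, can be sketched as follows. The model reply here is hard-coded for illustration; in practice it would come from an LLM constrained to produce JSON.

```python
import json

def get_weather(city: str) -> str:
    """A toy tool the model can 'call'; a real tool would hit an API."""
    return f"Weather report for {city}"

# Registry mapping tool names (as exposed to the model) to functions.
TOOLS = {"get_weather": get_weather}

def dispatch(model_reply: str) -> str:
    """Parse the model's JSON tool call and invoke the matching function."""
    call = json.loads(model_reply)
    func = TOOLS[call["name"]]
    return func(**call["arguments"])

# Hard-coded stand-in for what a prompted LLM would return.
reply = '{"name": "get_weather", "arguments": {"city": "Tucson"}}'
print(dispatch(reply))  # -> Weather report for Tucson
```

Constrained decoding (restricting generation to valid JSON matching a schema) is what makes this parse step reliable when the model lacks built-in function calling.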
Created: 06/10/2024 (C. Lizárraga)
Updated: 01/23/2025 (C. Lizárraga)
DataLab, Data Science Institute, University of Arizona.