Mastering Generative AI Foundation Models for Research

Workshop Overview

Dive deep into the world of generative AI foundation models, exploring their transformative potential across scientific disciplines through a hands-on, accessible approach.

Learning Objectives

By the end of this workshop, participants will:

  • Develop a comprehensive understanding of generative AI foundation models
  • Acquire practical skills for integrating AI technologies into research workflows
  • Demonstrate proficiency in prompt engineering across multiple disciplines
  • Critically evaluate and apply multimodal AI tools to complex research challenges
  • Build confidence in navigating and deploying AI technologies
  • Create innovative research approaches using generative AI methodologies

Key Skills Developed

Advanced Prompt Engineering

  • Crafting precise, context-specific prompts
  • Extracting maximum value from foundation models
  • Developing discipline-specific interaction strategies

Computational AI Infrastructure Management

  • Understanding AI model architectures
  • Managing computational resources
  • Scaling AI applications from local to HPC environments

Multimodal AI Applications

  • Integrating text, image, and data-based models
  • Cross-modal research technique development
  • Solving interdisciplinary research challenges

Practical AI Deployment Strategies

  • Implementing AI tools in research workflows
  • Performance optimization techniques
  • Handling model limitations and biases

Code Generation and Optimization

  • Utilizing AI for research code development
  • Debugging and improving computational methods
  • Automating repetitive research tasks

Ethical AI Implementation

  • Recognizing and mitigating AI biases
  • Ensuring research integrity
  • Responsible AI use across disciplines

High-Performance Computing (HPC) Integration

  • Deploying AI models in advanced computing environments
  • Resource management and optimization
  • Scaling computational research capabilities

Core Focus: Foundation Models in Research

Understanding Foundation Models

  • Explore large language models and multimodal AI systems
  • Examine key models: GPT, BERT, DALL-E, Stable Diffusion
  • Analyze model architectures, capabilities, and limitations
  • Understand transfer learning and model adaptability

Generative AI as a Research Catalyst

  • Bridging interdisciplinary research challenges
  • Transforming data analysis and hypothesis generation
  • Expanding computational research capabilities
  • Democratizing advanced AI technologies

Key Workshop Modules

1. Prompt Engineering for Research

  • Crafting effective prompts across disciplines
  • Extracting maximum value from foundation models
  • Developing discipline-specific interaction strategies
  • Handling complex research queries

2. Multimodal AI Applications

  • Integrating text, image, and data-based models
  • Cross-modal research techniques
  • Practical implementation strategies
  • Solving interdisciplinary research challenges

3. Ethical AI and Responsible Use

  • Understanding model biases
  • Ensuring research integrity
  • Responsible AI deployment
  • Ethical considerations in AI-assisted research

4. Computational Infrastructure

  • Local to high-performance computing deployments
  • Resource management strategies
  • Scaling AI model applications
  • Performance optimization techniques

Target Audience

  • Graduate students across all disciplines
  • Researchers seeking AI integration
  • Academics exploring computational technologies
  • Interdisciplinary innovation seekers

Learning Outcomes

  • Confident foundation model utilization
  • Advanced research methodology skills
  • Computational thinking transformation
  • Practical AI deployment capabilities

Workshop Vision

Empowering researchers to leverage generative AI as a powerful, flexible research companion across scientific domains.


Instructors: Nick Eddy / Carlos Lizárraga / Enrique Noriega

(Program subject to change.)

Calendar


Spring 2025

| Date | Title | Topic Description | Wiki | Instructor |
|------|-------|-------------------|------|------------|
| 01/30/2025 | Scaling up Ollama: Local, CyVerse, HPC | In this hands-on workshop, participants will learn to deploy and scale large language models with Ollama across computational environments, from laptops to supercomputing clusters (see the first sketch below the calendar). | | Enrique Noriega |
| 02/06/2025 | Using AI Verde | This practical introduction shows how to use U of A Generative AI Verde effectively for academic research, writing, and problem-solving. Participants will learn to harness AI Verde's capabilities while gaining a clear understanding of its limitations and ethical implications. | | Nick Eddy |
| 02/13/2025 | Best practices of Prompt Engineering using AI Verde | A hands-on session that teaches practical prompt engineering techniques to optimize U of A Generative AI Verde's performance for academic and professional applications. | | Nick Eddy |
| 02/20/2025 | Quick RAG application using AI Verde / HPC | A hands-on session demonstrating how to build a basic Retrieval-Augmented Generation (RAG) system with the U of A Generative AI Verde API. Participants will learn to enhance AI responses by integrating custom knowledge bases. | | Enrique Noriega |
| 02/27/2025 | Multimodal Q&A + OCR in AI Verde | A hands-on technical session exploring U of A Generative AI Verde's multimodal capabilities, combining vision and text processing with OCR for enhanced document analysis and automated question answering. | | Nick Eddy |
| 03/06/2025 | SQL specialized query code generation | A hands-on session teaching participants how to use Large Language Models to craft, optimize, and validate complex SQL queries, emphasizing real-world database operations and industry best practices. | | Enrique Noriega |
| 03/13/2025 | NO Session | Spring Break | | |
| 03/20/2025 | Function calling with LLMs | There are two ways to implement function calling with open-source large language models (LLMs): use a model with native tool support, or, when a model lacks it, combine prompt engineering, fine-tuning, and constrained decoding (see the second sketch below the calendar). | | Enrique Noriega |
| 03/27/2025 | Code generation assistants | Large Language Models (LLMs) now serve as powerful code generation assistants, streamlining and enhancing software development. They generate code snippets, propose solutions, and translate code between programming languages. | | Nick Eddy |
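As a rough companion to the Ollama session, the sketch below sends a single prompt to a locally running Ollama server over its REST API. The model name, host, and prompt are illustrative assumptions (any model you have pulled will do), and the same call pattern carries over when Ollama is served on CyVerse or an HPC node.

```python
# Minimal sketch: query a locally running Ollama server over its REST API.
# Assumes Ollama is installed, serving on the default port (11434), and that
# the chosen model has already been pulled (e.g. `ollama pull llama3`).
import requests

def ask_ollama(prompt, model="llama3", host="http://localhost:11434"):
    """Send one prompt to a local Ollama instance and return the reply text."""
    response = requests.post(
        f"{host}/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    response.raise_for_status()
    return response.json()["response"]

print(ask_ollama("Summarize retrieval-augmented generation in two sentences."))
```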

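For the function-calling session, one common pattern when a model lacks native tool support is to describe the available function in the prompt, constrain the model to emit JSON, then parse and dispatch the call in ordinary Python. The sketch below illustrates that prompt-engineering route, using Ollama's JSON output mode as a simple stand-in for constrained decoding; the `get_weather` tool, model name, and endpoint are assumptions made up for illustration.

```python
# Minimal sketch of prompt-engineered function calling for a model without
# native tool support: the tool is described in the prompt, the model is asked
# to reply only with JSON, and the caller parses and dispatches the call.
import json
import requests

TOOL_PROMPT = """You can call exactly one tool:
get_weather(city: str) -> current weather for a city.
Reply ONLY with JSON of the form {{"function": "get_weather", "arguments": {{"city": "..."}}}}.

User request: {question}
"""

def get_weather(city):
    # Placeholder tool; a real implementation would query a weather service.
    return f"Sunny and 22 C in {city}"

def call_llm(prompt, model="llama3"):
    # "format": "json" asks Ollama to constrain decoding to valid JSON.
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False, "format": "json"},
        timeout=300,
    )
    r.raise_for_status()
    return r.json()["response"]

raw = call_llm(TOOL_PROMPT.format(question="What's the weather in Tucson?"))
call = json.loads(raw)  # e.g. {"function": "get_weather", "arguments": {"city": "Tucson"}}
if call.get("function") == "get_weather":
    print(get_weather(**call["arguments"]))
```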
Created: 06/10/2024 (C. Lizárraga)

Updated: 01/23/2025 (C. Lizárraga)

DataLab, Data Science Institute, University of Arizona.

CC BY-NC-SA 4.0
