Mastering Generative AI Foundation Models for Research
When
1 – 2 p.m., March 20, 2025
There are two ways to implement function calling with open-source large language models (LLMs): use a model that supports it natively, or, when an LLM doesn't natively support function calling, combine prompt engineering, fine-tuning, and constrained decoding to produce structured tool calls.
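As a preview of the prompt-engineering route, here is a minimal sketch: the model is told to answer only with JSON matching a small tool schema, Ollama's JSON output mode constrains decoding, and the parsed result is dispatched to a local Python function. The server address, the model name (llama3), and the get_weather tool are illustrative assumptions, not part of the workshop materials.

```python
# Minimal sketch of prompt-engineered "function calling" with a model that has
# no native tool support. Assumes an Ollama server at http://localhost:11434
# and a pulled model named "llama3" (swap in whichever model you have).

import json
import requests

def get_weather(city: str) -> str:
    """Hypothetical tool the model can 'call'."""
    return f"Sunny and 22 C in {city}"

TOOLS = {"get_weather": get_weather}

SYSTEM_PROMPT = (
    "You can call one tool: get_weather(city). "
    'Respond ONLY with JSON like {"tool": "get_weather", "arguments": {"city": "..."}}.'
)

def call_llm(user_message: str) -> dict:
    # Ask the model for a tool call; format="json" constrains decoding to valid JSON.
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "llama3",  # assumed model name
            "stream": False,
            "format": "json",
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": user_message},
            ],
        },
        timeout=120,
    )
    resp.raise_for_status()
    return json.loads(resp.json()["message"]["content"])

if __name__ == "__main__":
    call = call_llm("What's the weather like in Tucson?")
    # Dispatch the parsed tool call to the matching Python function.
    result = TOOLS[call["tool"]](**call["arguments"])
    print(result)
```

Fine-tuning on tool-call traces can make this pattern more reliable, but the prompt-plus-constrained-JSON approach works with any model an Ollama server can run.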
Workshop Vision
Empowering researchers to leverage generative AI as a powerful, flexible research companion across scientific domains.
Core Focus: Foundation Models in Research
Key Skills Developed
Target Audience
Key Workshop Modules
Learning Outcomes
SERIES: Mastering Generative AI Foundation Models for Research
When: Thursdays, 1:00 - 2:00 PM, January 30 - March 27, 2025
Where: Weaver Science-Engineering Library, Rm 212, and on Zoom (register to receive the Zoom link)
Instructor: Enrique Noriega
YouTube: UArizona DataLab channel (session links)
Workshop Sessions:
- 1/30 Scaling up Ollama: Local, CyVerse, HPC
- 2/6 Using AI Verde
- 2/13 Best practices of Prompt Engineering using AI Verde
- 2/20 Quick RAG application using AI Verde / HPC
- 2/27 Multimodal Q&A+OCR in AI Verde
- 3/6 SQL specialized query code generation
- 3/20 Function calling with LLMs
Contacts
Enrique Noriega