Training and Testing Models

Functional Open Science Skills for AI/ML Applications

When: 2 – 3 p.m., March 4, 2025

This session covers critical concepts like data splitting (training, validation, and test sets), evaluating model performance, and hyperparameter tuning. Participants will explore common pitfalls and best practices for achieving reliable results, using concepts and code developed in previous sessions.
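As a preview of these ideas, below is a minimal sketch, assuming Python and scikit-learn purely for illustration (the session's own code and libraries may differ): the data are split into training, validation, and test sets, a hyperparameter is tuned on the validation set, and the selected model is evaluated once on the held-out test set.

# Minimal illustrative sketch: train/validation/test split,
# simple hyperparameter tuning, and final evaluation.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

# Hold out a test set first, then carve a validation set from the remainder.
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.25, random_state=42, stratify=y_trainval)

# Tune the regularization strength C using only the validation set.
best_C, best_acc = None, 0.0
for C in [0.01, 0.1, 1.0, 10.0]:
    model = LogisticRegression(C=C, max_iter=1000).fit(X_train, y_train)
    acc = accuracy_score(y_val, model.predict(X_val))
    if acc > best_acc:
        best_C, best_acc = C, acc

# Retrain on train + validation with the chosen hyperparameter,
# then report performance once on the untouched test set.
final_model = LogisticRegression(C=best_C, max_iter=1000).fit(X_trainval, y_trainval)
print("Validation-selected C:", best_C)
print("Test accuracy:", accuracy_score(y_test, final_model.predict(X_test)))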

This workshop series helps graduate students at public universities develop the skills and learn the tools required for today's AI/ML-focused science.

Ranging from the basic moving parts of AI/ML to AI's role in Open Science, the series covers where to obtain compute, software environments and reproducibility, and the role of workflows, building toward an end-to-end Machine Learning (ML) workflow.

SERIES: Functional Open Science Skills for AI/ML Applications
Where: Register for Zoom Link
Instructor: Michele Cosi and Carlos Lizárraga
YouTube: UArizona DataLab and session links

Contacts

Michele Cosi