Practical AI for Research: LLMs, RAG & Agentic Systems

This five-session workshop offers hands-on training in practical AI methods for research. Participants learn to run large language models (LLMs) locally with frameworks such as Ollama and LM Studio, and to access open-source models through platforms like AI VERDE. The curriculum covers Retrieval Augmented Generation (RAG) for improving factual accuracy and reducing hallucinations (Lewis et al., 2020), tool calling for integrating models with external APIs, and automated text-to-SQL generation.
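As a taste of the RAG material, the sketch below retrieves the most relevant passage from a small in-memory document set and passes it to a locally served model as context. It assumes an Ollama server on its default port (http://localhost:11434) with a model such as "llama3" already pulled; the keyword-overlap retriever and the document snippets are illustrative stand-ins for a real vector store, not the workshop's reference implementation.

```python
# Minimal RAG sketch: toy retrieval + a locally served LLM via Ollama.
# Assumes `ollama serve` is running and a model (e.g. "llama3") has been pulled.
import requests

DOCS = [
    "AI VERDE provides researchers with access to hosted open-source language models.",
    "Retrieval Augmented Generation grounds model answers in retrieved documents.",
    "Ollama and LM Studio can run open-weight LLMs on a local workstation.",
]

def retrieve(question: str, docs: list[str]) -> str:
    """Return the document sharing the most words with the question (toy retriever)."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def answer(question: str, model: str = "llama3") -> str:
    context = retrieve(question, DOCS)
    prompt = (
        "Answer the question using only the context below.\n"
        f"Context: {context}\n"
        f"Question: {question}"
    )
    resp = requests.post(
        "http://localhost:11434/api/generate",   # default Ollama REST endpoint
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(answer("How can I run an open-source LLM on my own machine?"))
```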
Participants will also practice AI-assisted coding, using language models for code completion, debugging, and optimization to speed up development and improve code quality, and will explore "vibe coding", an emerging style of conversational programming in which working prototypes are built iteratively from natural-language specifications.
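For a flavor of the AI-assisted coding workflow, the short sketch below asks the same locally served model to review a deliberately buggy function; the prompt wording and the snippet are invented for illustration only.

```python
# Sketch of AI-assisted debugging: ask a locally served model to review a function.
# Assumes the same local Ollama setup as above; the buggy snippet is made up for illustration.
import requests

BUGGY_SNIPPET = '''
def mean(values):
    total = 0
    for v in values:
        total += v
    return total / len(values)   # fails on an empty list
'''

prompt = (
    "Review this Python function, point out any bugs or unhandled edge cases, "
    "and suggest a corrected version:\n" + BUGGY_SNIPPET
)

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": prompt, "stream": False},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```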
The workshop culminates with an introduction to agentic systems, in which an LLM reasons, plans, and executes sequences of actions to achieve a goal. This final session covers basic agent architectures such as ReAct (Reason + Act), the agent loop (observe, think, act), and how agents use tools, and includes an overview of simple agent development with frameworks like LangChain Agents or a conceptual design exercise. Agentic systems sit at the current frontier of applied AI, enabling sophisticated problem-solving through iterative interaction between an agent and its environment.
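To make the agent loop concrete, here is a minimal ReAct-style sketch: the model is prompted to emit either an Action line naming a tool or a final Answer, the named tool is executed, and its result is fed back as an Observation. The tool set, prompt format, and parsing are simplified assumptions for illustration; frameworks such as LangChain Agents package this loop with more robust parsing and tool abstractions.

```python
# Minimal ReAct-style agent loop (observe, think, act) against a local Ollama model.
# The prompt format, tools, and parsing below are simplified assumptions for illustration.
import re
import requests

def llm(prompt: str, model: str = "llama3") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

# Toy tools the agent may call; a real agent would wrap APIs, databases, or search.
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # demo only
    "lookup": lambda term: {"AI VERDE": "a platform for accessing open-source models"}.get(term, "unknown"),
}

SYSTEM = (
    "You can use tools. Respond with either:\n"
    "Action: <tool>[<input>]   (tools: calculator, lookup)\n"
    "or\n"
    "Answer: <final answer>\n"
)

def run_agent(question: str, max_steps: int = 5) -> str:
    transcript = SYSTEM + f"Question: {question}\n"
    for _ in range(max_steps):
        reply = llm(transcript)                                  # think
        transcript += reply + "\n"
        match = re.search(r"Action:\s*(\w+)\[(.*?)\]", reply)
        if match and match.group(1) in TOOLS:                    # act
            observation = TOOLS[match.group(1)](match.group(2))
            transcript += f"Observation: {observation}\n"        # observe
        elif "Answer:" in reply:
            return reply.split("Answer:", 1)[1].strip()
    return "No answer within the step limit."

if __name__ == "__main__":
    print(run_agent("What is 17 * 23?"))
```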