Practical AI for Research: LLMs, RAG & Agentic Systems

This session dives into Retrieval Augmented Generation (RAG) to enhance LLM responses with external, up-to-date information. It will cover the core components: document loading and chunking, creating embeddings (e.g., using Sentence Transformers), setting up a vector store (e.g., FAISS, ChromaDB), and the retrieval-then-generation pipeline. The goal is to show how RAG mitigates hallucinations and grounds LLMs in specific knowledge domains.
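The retrieve-then-generate pipeline described above can be sketched end to end. This is a minimal toy, assuming stand-ins for the real components: a bag-of-words counter replaces a Sentence Transformers embedding model, and a plain Python list replaces a FAISS or ChromaDB vector store; the final "generation" step just builds the grounded prompt an LLM would receive.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words embedding: a stand-in for a real model
    such as Sentence Transformers (illustrative assumption)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "Vector store": pre-chunked documents with precomputed embeddings.
# FAISS or ChromaDB would replace this list in practice.
chunks = [
    "Ollama runs large language models locally.",
    "FAISS is a library for vector similarity search.",
    "RAG grounds model answers in retrieved documents.",
]
index = [(c, embed(c)) for c in chunks]

def retrieve(query, k=1):
    """Return the k chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda p: cosine(q, p[1]), reverse=True)
    return [c for c, _ in ranked[:k]]

def generate(query):
    """Retrieval-then-generation: prepend retrieved context to the prompt.
    A real LLM call would consume the returned prompt string."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(generate("What does RAG do?"))
```

Swapping the toy pieces for real ones changes only `embed` and the index; the retrieve-then-generate control flow stays the same, which is why hallucinations drop: the model answers from retrieved text rather than from its parameters alone.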
This five-session workshop offers hands-on training in modern AI methods. Participants will learn to run Large Language Models locally with frameworks such as Ollama and LM Studio, and to access open-source models through platforms like AI VERDE. The curriculum covers Retrieval Augmented Generation (RAG) for improved factual accuracy and reduced hallucination (Lewis et al., 2020), tool calling for integrating external APIs, and automated text-to-SQL generation.
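Tool calling, mentioned above, boils down to letting the model emit a structured request that your code routes to a real function. A minimal sketch, assuming an illustrative JSON shape and a stubbed `get_weather` function (neither is a specific framework's API):

```python
import json

# Hypothetical tool: in practice this would hit an external API.
def get_weather(city: str) -> str:
    return f"22 C and sunny in {city}"  # stubbed response

# Registry mapping tool names the model may emit to Python callables.
TOOLS = {"get_weather": get_weather}

def dispatch(tool_call_json: str) -> str:
    """Route a model-emitted tool call to the matching function."""
    call = json.loads(tool_call_json)
    func = TOOLS[call["name"]]
    return func(**call["arguments"])

# In a real loop the LLM would produce this JSON; here it is hard-coded.
model_output = '{"name": "get_weather", "arguments": {"city": "Tucson"}}'
print(dispatch(model_output))  # → 22 C and sunny in Tucson
```

Text-to-SQL works the same way at a higher level: the model emits a SQL string instead of a JSON tool call, and the application validates and executes it against the database.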
The workshop also covers AI-assisted coding: using language models for code completion, debugging, and optimization to speed up development and improve code quality. Participants will additionally explore vibe coding, an emerging style of conversational programming in which rapid prototypes are built and refined iteratively from natural-language specifications.
The workshop concludes with agentic systems, in which an LLM autonomously carries out multi-step tasks: it decomposes a goal into steps, calls tools, observes the results, and iterates until the task is done. These systems represent the current frontier of applied AI, combining planning with repeated agent-environment interaction.
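The plan-act-observe loop behind such agents can be sketched in a few lines. This is a toy, assuming a scripted `policy` function as a stand-in for the LLM that would normally choose the next action, and two fake tools; only the loop structure is the point.

```python
def policy(goal, observations):
    """Stand-in for the LLM's decision step (illustrative assumption):
    choose the next action based on the goal and what has been seen so far."""
    if not observations:
        return ("search", goal)                 # step 1: gather information
    if len(observations) == 1:
        return ("summarize", observations[0])   # step 2: act on the findings
    return ("finish", observations[-1])         # step 3: goal reached, stop

def run_tool(action, arg):
    """Fake tools; real agents would call search APIs, code, databases, etc."""
    if action == "search":
        return f"notes about {arg}"
    if action == "summarize":
        return f"summary of {arg}"

def run_agent(goal, max_steps=5):
    """Iterate plan -> act -> observe until the policy decides to finish."""
    observations = []
    for _ in range(max_steps):
        action, arg = policy(goal, observations)
        if action == "finish":
            return arg
        observations.append(run_tool(action, arg))
    return observations[-1]  # step budget exhausted

print(run_agent("local LLMs"))  # → summary of notes about local LLMs
```

Replacing `policy` with a model call (and the fake tools with real ones) turns this skeleton into the kind of agent the final session builds; the `max_steps` cap is the usual guard against a loop that never decides to finish.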