Locally training and comparing small, task-specific language models: A hands-on guide to fine-tuning and RLHF

In this practical session, we will focus on the local training and comparison of small, task-specific language models. Attendees will learn how to fine-tune LLMs using Reinforcement Learning from Human Feedback (RLHF) on their own machines, enabling them to create and test models tailored to specific tasks. Through a hands-on demo, participants will train two small LLMs using different approaches and compare their performance, gaining insights into the impact of various techniques on model outcomes. We will cover best practices for data preparation and model evaluation, empowering attendees to optimize their locally trained models. The session will also highlight real-world applications and discuss future trends in task-specific LLM development. Whether you’re a developer, researcher, or enthusiast, this session will provide you with the skills and knowledge to train, compare, and deploy small, task-specific LLMs locally, opening up new possibilities for your projects.
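The comparison step described above can be sketched in plain Python. Note that this is a hypothetical illustration, not material from the session: the `win_rate` helper and the toy scoring function are placeholders, where in a real RLHF setup each output list would come from a fine-tuned LLM and the score from a reward model or human preference labels.

```python
# Minimal sketch of comparing two locally trained models on the same prompts.
# All names here are hypothetical; in practice outputs_a/outputs_b would be
# generated by the two fine-tuned LLMs and `score` would be a reward model.

def win_rate(outputs_a, outputs_b, score):
    """Fraction of prompts on which model A's output scores strictly higher than B's."""
    wins = sum(1 for a, b in zip(outputs_a, outputs_b) if score(a) > score(b))
    return wins / len(outputs_a)

# Toy stand-in for a reward model: prefer shorter answers.
score = lambda text: -len(text)

outputs_a = ["42", "Paris", "blue"]
outputs_b = ["The answer is 42.", "Paris", "The sky is blue."]

print(f"Model A win rate: {win_rate(outputs_a, outputs_b, score):.2f}")
```

Ties count for neither model, so win rates for A and B need not sum to one; reporting both (plus the tie rate) gives a fuller picture when the two models often produce similar outputs.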

@GetSparked A
By Ben Selleslagh
WORKSHOP