Flyte School: Learn Your Codebase - Fine-Tuning CodeLlama with Flyte... to Learn Flyte
Today, foundation LLMs can only be trained by the handful of organizations with the compute resources required to pre-train models of more than a hundred billion parameters on internet-scale data. These foundation models are then fine-tuned by the wider ML community for specific applications. Even though fine-tuning can be more compute- and memory-efficient than full-parameter training, a significant challenge to fine-tuning is provisioning the appropriate infrastructure.
In this session, Niels will demonstrate how to use Flyte, a Linux Foundation open-source orchestration platform, to fine-tune an LLM on the Flyte codebase itself. Flyte allows for the declarative specification of the infrastructure needed for a broad range of ML workloads, including fine-tuning LLMs with limited resources by leveraging multi-node, multi-GPU distributed training.
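To give a flavor of what "declarative infrastructure" means here, below is a minimal sketch of a Flyte task definition, assuming the `flytekitplugins-kfpytorch` Elastic plugin for `torchrun`-style multi-node launches. The task name, resource sizes, and output path are illustrative placeholders, not the actual configuration used in the talk.

```python
from flytekit import task, Resources
from flytekitplugins.kfpytorch import Elastic  # requires flytekitplugins-kfpytorch

# Hypothetical fine-tuning task: the training logic is elided; the decorator
# is the point, showing infrastructure declared right next to the code.
@task(
    task_config=Elastic(nnodes=2, nproc_per_node=4),  # 2 nodes x 4 GPUs, torchrun-style elastic launch
    requests=Resources(gpu="4", mem="64Gi", cpu="16"),
)
def fine_tune(base_model: str, dataset_dir: str) -> str:
    # ... load the base model, attach LoRA adapters, train with FSDP ...
    return "s3://my-bucket/adapters/latest"  # placeholder output location
```

When this task runs on a Flyte cluster, the platform provisions the requested nodes and GPUs for the duration of the task and tears them down afterward, which is what keeps multi-node fine-tuning affordable.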
What You’ll Learn
Attendees will gain hands-on experience using Flyte to leverage state-of-the-art deep learning tools such as `torchrun` distributed training, LoRA, 4/8-bit quantization, and FSDP, while benefiting from Flyte’s reproducibility, versioning, and cost management capabilities. At the end of this talk, you’ll be able to take the code and adapt it to your own codebase to help answer user-support questions, generate boilerplate starter code, or tackle whatever downstream task you’re interested in!
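As a rough illustration of how LoRA and 4-bit quantization fit together inside such a task, the sketch below loads a CodeLlama checkpoint with the Hugging Face `transformers`, `peft`, and `bitsandbytes` stack. The checkpoint name, hyperparameters, and target modules are assumptions for demonstration, not the exact settings covered in the session.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base_model = "codellama/CodeLlama-7b-hf"  # assumed base checkpoint

# Load the base model in 4-bit to keep GPU memory usage low.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(base_model, quantization_config=bnb_config)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Attach small trainable LoRA adapters instead of updating all weights.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # common choice for Llama-style attention
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights are trainable
```

Wrapping this kind of setup in a Flyte task is what lets you rerun, version, and scale the fine-tuning job without hand-managing the underlying machines.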
Prerequisite Knowledge
- Intermediate Python
- Intermediate Machine Learning
- Familiarity with Command-line Tools