AI Fine-Tuning and Data Preparation for Pre-Trained Models
Guaranteed to Run
Price
$2,495.00
Duration
3 Days
Delivery Methods
Virtual Instructor-Led, Private Group
Delivery
Virtual
Course Description
You will develop the skills to gather, clean, and organize data for fine-tuning pre-trained LLMs and generative AI models. Through a combination of lectures and hands-on labs, you will use Python to fine-tune open-source Transformer models, gain practical experience with LLM frameworks, learn essential training techniques, and explore advanced topics such as quantization. During the labs, you will work on a GPU-accelerated server with industry-standard tools and frameworks.
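The data-preparation workflow the course describes (gathering, cleaning, and organizing raw text into training data) can be sketched in plain Python. This is an illustrative outline, not course material: the thresholds, the JSONL field name (`"text"`), and the helper names are assumptions.

```python
import json
import re

def clean_records(raw_texts, min_chars=40):
    """Normalize, deduplicate, and filter raw text into fine-tuning records.

    min_chars and the record layout are illustrative choices, not values
    prescribed by the course.
    """
    seen = set()
    records = []
    for text in raw_texts:
        # Collapse runs of whitespace into single spaces.
        cleaned = re.sub(r"\s+", " ", text).strip()
        # Drop fragments too short to be useful training examples.
        if len(cleaned) < min_chars:
            continue
        # Deduplicate on a case-folded key.
        key = cleaned.lower()
        if key in seen:
            continue
        seen.add(key)
        records.append({"text": cleaned})
    return records

def write_jsonl(records, path):
    # One JSON object per line -- a format most fine-tuning
    # toolkits accept directly.
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec, ensure_ascii=False) + "\n")
```

The JSONL output can then be loaded by common fine-tuning toolchains without further conversion.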
Course Objectives
By the end of this course, participants will be able to:
- Clean, prepare, and curate data for AI fine-tuning
- Establish guidelines and best practices for acquiring raw training data
- Transform large, unstructured datasets into clean, usable training data
- Fine-tune AI models using PyTorch
- Understand core AI architectures, including Transformer models
- Explain tokenization, word embeddings, and their role in model performance
- Install, configure, and use open-source models and frameworks such as Llama 3
- Perform parameter-efficient fine-tuning using LoRA and QLoRA
- Apply model quantization techniques to optimize performance and efficiency
- Deploy fine-tuned models and maximize inference performance
Who Should Attend?
- Project Managers
- Architects
- Developers
- Data Acquisition Specialists
Course Prerequisites
- Python or Equivalent Experience
- Familiarity with Linux