Full text tutorial (requires MLExpert Pro):

Learn how to fine-tune the Llama 2 7B base model on a custom dataset using a single T4 GPU. We'll use the QLoRA technique to train an LLM for text summarization of conversations between support agents and customers on Twitter.

Discord:
Prepare for the Machine Learning interview:
Subscribe:
GitHub repository:

Join this channel to get access to the perks and support my work:

00:00 - When to Fine-tune an LLM?
00:30 - Fine-tune vs Retrieval Augmented Generation (Custom Knowledge Base)
03:38 - Text Summarization (our example)
04:14 - Text Tutorial on
04:47 - Dataset Selection
05:36 - Choose a M
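A minimal sketch of the kind of QLoRA setup described above, assuming the Hugging Face transformers, peft, and bitsandbytes libraries; the LoRA hyperparameters and target modules here are illustrative choices, not values from the video:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

MODEL_NAME = "meta-llama/Llama-2-7b-hf"  # gated base model; requires access approval

# 4-bit NF4 quantization so the 7B model fits in a single T4 GPU's memory
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,  # T4 has no bfloat16 support
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Low-rank adapters on the attention projections (illustrative values)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

From here, the quantized model plus adapters can be trained on the summarization dataset with a standard Trainer loop; only the small LoRA weights are updated, which is what makes single-GPU fine-tuning feasible.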