Fine-tuning Llama 2 on Your Own Dataset | Train an LLM for Your Use Case with QLoRA on a Single GPU

Full text tutorial (requires MLExpert Pro):

Learn how to fine-tune the Llama 2 7B base model on a custom dataset using a single T4 GPU. We'll use the QLoRA technique to train an LLM for text summarization of conversations between support agents and customers on Twitter.

Discord:
Prepare for the Machine Learning interview:
Subscribe:
GitHub repository:
Join this channel to get access to the perks and support my work:

00:00 - When to Fine-tune an LLM?
00:30 - Fine-tune vs Retrieval Augmented Generation (Custom Knowledge Base)
03:38 - Text Summarization (our example)
04:14 - Text Tutorial on
04:47 - Dataset Selection
05:36 - Choose a M
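The full notebook lives in the linked GitHub repository; as a rough sketch of what a QLoRA setup for fitting Llama 2 7B on a single T4 looks like with the Hugging Face transformers, peft, and bitsandbytes libraries (model name, LoRA rank, and target modules here are illustrative assumptions, not taken from the video):

```python
# Minimal QLoRA loading sketch: 4-bit quantized base model + LoRA adapters.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

MODEL_NAME = "meta-llama/Llama-2-7b-hf"  # base model (gated; requires access approval)

# 4-bit NF4 quantization so the 7B model fits in a T4's 16 GB of memory
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# LoRA adapters: only these small low-rank matrices are trained;
# the quantized base weights stay frozen.
lora_config = LoraConfig(
    r=16,                                 # illustrative rank
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],  # illustrative choice of attention projections
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```

From here, the model can be passed to a standard Trainer (or trl's SFTTrainer) together with the prompt-formatted summarization dataset; the exact training arguments used in the tutorial are covered in the full text version.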
