🎯 Uploaded automatically via bot
🚫 Original video:
📺 This video belongs to the channel «Andrej Karpathy» (@AndrejKarpathy). It is shared in our community solely for informational, scientific, educational, or cultural purposes. Our community claims no rights to this video. Please support the author by visiting his original channel.
✉️ If you have a copyright claim regarding this video, please contact us at support@ and we will remove it immediately.

📃 Original description:

The Tokenizer is a necessary and pervasive component of Large Language Models (LLMs), where it translates between strings and tokens (text chunks). Tokenizers are a completely separate stage of the LLM pipeline: they have their own training sets, training algorithms (Byte Pair Encoding), and after training implement two fundamental functions: encode() from strings to tokens, and decode() back from tokens to strings. In this lecture we build from scratch the Tokenizer used in the GPT series from OpenAI. In the process, we will see that a lot of weird behaviors and problems of LLMs actually trace back to tokenization. We'll go through a number of these issues, discuss why tokenization is at fault, and why someone out there ideally finds a way to delete this stage entirely.

Chapters:
intro: Tokenization, GPT-2 paper, tokenization-related issues
tokenization by example in a Web UI (tiktokenizer)
strings in Python, Unicode code points
Unicode byte encodings, ASCII, UTF-8, UTF-16, UTF-32
daydreaming: deleting tokenization
Byte Pair Encoding (BPE) algorithm walkthrough
starting the implementation
counting consecutive pairs, finding most common pair
merging the most common pair
training the tokenizer: adding the while loop, compression ratio
tokenizer/LLM diagram: it is a completely separate stage
decoding tokens to strings
encoding strings to tokens
regex patterns to force splits across categories
tiktoken library intro, differences between GPT-2/GPT-4 regex
GPT-2 released by OpenAI walkthrough
special tokens, tiktoken handling of, GPT-2/GPT-4 differences
minbpe exercise time! write your own GPT-4 tokenizer
sentencepiece library intro, used to train Llama 2 vocabulary
how to set vocabulary set? revisiting transformer
training new tokens, example of prompt compression
multimodal [image, video, audio] tokenization with vector quantization
revisiting and explaining the quirks of LLM tokenization
final recommendations
??? :)

Exercises:
Advised flow: reference this document and try to implement the steps before I give away the partial solutions in the video. The full solutions, if you're getting stuck, are in the minbpe code.

Links:
Google colab for the video:
GitHub repo for the video: minBPE
Playlist of the whole Zero to Hero series so far:
our Discord channel:
my Twitter:

Supplementary links:
tiktokenizer
tiktoken from OpenAI:
sentencepiece from Google
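
As a companion to the BPE chapters above (counting consecutive pairs, merging the most common pair, the training while loop, compression ratio), here is a minimal sketch of that training loop in the spirit of minbpe. The helper names get_stats and merge and the toy text are illustrative, not the lecture's exact code:

```python
# Minimal BPE training sketch over raw UTF-8 bytes (illustrative, not verbatim minbpe).

def get_stats(ids):
    """Count how often each consecutive pair of tokens occurs."""
    counts = {}
    for pair in zip(ids, ids[1:]):
        counts[pair] = counts.get(pair, 0) + 1
    return counts

def merge(ids, pair, idx):
    """Replace every occurrence of `pair` in `ids` with the new token `idx`."""
    out, i = [], 0
    while i < len(ids):
        if i < len(ids) - 1 and (ids[i], ids[i + 1]) == pair:
            out.append(idx)
            i += 2
        else:
            out.append(ids[i])
            i += 1
    return out

text = "aaabdaaabac"                  # toy training text
ids = list(text.encode("utf-8"))      # start from raw bytes: tokens 0..255
num_merges = 3
merges = {}                           # (pair) -> new token id, in training order

for i in range(num_merges):
    stats = get_stats(ids)
    pair = max(stats, key=stats.get)  # most common consecutive pair
    idx = 256 + i                     # mint a new token id
    ids = merge(ids, pair, idx)
    merges[pair] = idx

print(f"compression ratio: {len(text.encode('utf-8')) / len(ids):.2f}X")
```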
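
The decoding/encoding chapters then build both directions on top of the learned merges table. A sketch continuing from the snippet above, again illustrative rather than the exact minbpe implementation:

```python
# decode()/encode() built from the `merges` table trained above.

vocab = {i: bytes([i]) for i in range(256)}
for (p0, p1), idx in merges.items():
    vocab[idx] = vocab[p0] + vocab[p1]

def decode(ids):
    """Tokens -> string; errors='replace' guards against invalid UTF-8 boundaries."""
    return b"".join(vocab[i] for i in ids).decode("utf-8", errors="replace")

def encode(text):
    """String -> tokens: repeatedly apply the earliest-learned applicable merge."""
    ids = list(text.encode("utf-8"))
    while len(ids) >= 2:
        stats = get_stats(ids)
        # pick the pair that was learned first (lowest merge index)
        pair = min(stats, key=lambda p: merges.get(p, float("inf")))
        if pair not in merges:
            break  # nothing left to merge
        ids = merge(ids, pair, merges[pair])
    return ids

assert decode(encode("aaabdaaabac")) == "aaabdaaabac"  # round-trip holds
```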
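
For the chapter on regex patterns that force splits across categories: GPT-2 first chunks text with a pattern separating letters, numbers, punctuation, and whitespace, then runs BPE within each chunk so merges never cross category boundaries. A sketch with the GPT-2 split pattern; it needs the third-party regex package, since the stdlib re module lacks the \p{L} / \p{N} Unicode classes:

```python
# GPT-2's pre-tokenization split pattern (pip install regex).
import regex as re

gpt2pat = re.compile(
    r"""'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+"""
)

print(re.findall(gpt2pat, "Hello world123 how's it going!!!"))
# -> ['Hello', ' world', '123', ' how', "'s", ' it', ' going', '!!!']
```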
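
The tiktoken chapter compares the GPT-2 and GPT-4 encodings. A quick usage example with OpenAI's tiktoken library (pip install tiktoken); the sample string is arbitrary:

```python
# Comparing GPT-2 vs GPT-4 tokenization with tiktoken.
import tiktoken

gpt2 = tiktoken.get_encoding("gpt2")         # BPE used by GPT-2
gpt4 = tiktoken.get_encoding("cl100k_base")  # BPE used by GPT-4

s = "    hello world!!!"
print(gpt2.encode(s))  # GPT-2's regex/merges handle the leading whitespace one way
print(gpt4.encode(s))  # GPT-4's differ, e.g. in how whitespace and case are grouped
print(gpt4.decode(gpt4.encode(s)) == s)  # round-trip: True
```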
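
And for the sentencepiece chapter: a tiny self-contained training run with Google's sentencepiece library (pip install sentencepiece). The file names, toy corpus, and vocabulary size are made up for illustration; model_type="bpe" matches the algorithm used for the Llama 2 vocabulary:

```python
# Training a toy sentencepiece BPE model (illustrative settings throughout).
import sentencepiece as spm

# Write a small toy corpus to disk; sentencepiece trains directly on raw text files.
with open("toy.txt", "w", encoding="utf-8") as f:
    f.write("byte pair encoding merges the most frequent pair of tokens\n" * 20)

# Train a tiny BPE model; writes toy_bpe.model and toy_bpe.vocab.
spm.SentencePieceTrainer.train(
    input="toy.txt",
    model_prefix="toy_bpe",
    vocab_size=45,
    model_type="bpe",
)

sp = spm.SentencePieceProcessor(model_file="toy_bpe.model")
print(sp.encode("encoding tokens", out_type=str))  # pieces, e.g. ['▁encoding', ...]
```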