Inside the LLM: Visualizing the Embeddings Layer of Mistral-7B and Gemma-2B

Uploaded By: Myvideo

We look deep inside the AI and examine how the embeddings layer of a Large Language Model such as Mistral-7B or Gemma-2B actually works. You will learn how tokens and embeddings work, and you will even extract the embeddings layer from Gemma and Mistral and load it into your own simple model, which we then use to visualize it. You will see how an LLM clusters similar words together and builds connections that cover not just similar words but also groupings of concepts such as colors, hotel chains, and programming terms. If you really want to understand how an LLM works, or even build your own, then the first layer of a generative AI model is the best place to start. Github -----------
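The extract-and-visualize workflow described above can be sketched in a few lines. This is a minimal illustration, not the video's actual code: it uses a small random matrix as a stand-in for the real embedding weights (with Hugging Face transformers, the real matrix would typically come from `model.get_input_embeddings().weight`, which for Mistral-7B or Gemma-2B requires downloading the full model), and projects it to 2D with PCA so the token vectors can be plotted and inspected for clusters.

```python
import numpy as np

# Stand-in for an LLM embedding layer: one d_model-dim vector per vocab token.
# In practice this matrix would be extracted from the real model's weights.
rng = np.random.default_rng(0)
vocab_size, d_model = 1000, 64
embeddings = rng.normal(size=(vocab_size, d_model))

def project_2d(emb):
    """Project embedding vectors onto their top-2 principal components (PCA)."""
    centered = emb - emb.mean(axis=0)
    # SVD of the centered matrix: rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T

def nearest_tokens(emb, idx, k=5):
    """Indices of the k tokens most similar to token idx by cosine similarity."""
    norms = np.linalg.norm(emb, axis=1)
    sims = (emb @ emb[idx]) / (norms * norms[idx])
    # Exclude the token itself, then take the k highest-similarity indices.
    order = np.argsort(-sims)
    return [i for i in order if i != idx][:k]

coords = project_2d(embeddings)       # (vocab_size, 2) points, ready to scatter-plot
neighbors = nearest_tokens(embeddings, idx=0)
```

Plotting `coords` (e.g. with matplotlib) and labeling points with their token strings is what reveals the clusters of related words and concepts the video explores; `nearest_tokens` is the non-visual equivalent, listing a token's closest neighbors in embedding space.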
