How LLMs give semantic meaning to a prompt | ngrok blog

In summary, embeddings are points in n-dimensional space that you can think of as the semantic meaning of the text they represent. During training, each token gets moved within this space to be close to other, similar tokens. The more dimensions, the more complex and nuanced the LLM’s representation of each token can be.
— Read on ngrok.com/blog/prompt-caching/
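To make the "points in n-dimensional space" idea concrete, here is a minimal sketch in Python. The vectors are invented 3-dimensional toy values (real models use hundreds or thousands of dimensions), and cosine similarity is one common way to measure how "close" two embeddings are; none of this is taken from the ngrok post itself.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: values near 1.0
    mean the vectors point in nearly the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" with made-up values, purely for illustration.
embeddings = {
    "cat":    [0.90, 0.10, 0.30],
    "kitten": [0.85, 0.15, 0.35],
    "bridge": [0.10, 0.90, 0.20],
}

# Semantically similar tokens end up close together in the space...
print(cosine_similarity(embeddings["cat"], embeddings["kitten"]))  # ~0.996
# ...while unrelated tokens end up farther apart.
print(cosine_similarity(embeddings["cat"], embeddings["bridge"]))  # ~0.27
```

In this sketch, "cat" and "kitten" score high because training would have pushed their embeddings toward similar directions, while "bridge" lands elsewhere in the space; with more dimensions, the model has room to encode many more such distinctions at once.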

