In summary, embeddings are points in n-dimensional space that you can think of as the semantic meaning of the text they represent. During training, each token's embedding is nudged through this space so that tokens with similar meanings end up close together. The more dimensions, the more complex and nuanced the LLM's representation of each token can be.
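A minimal sketch of that idea, using made-up 4-dimensional vectors (real models use hundreds or thousands of dimensions) and cosine similarity as the measure of "closeness":

```python
import numpy as np

# Toy 4-dimensional embeddings; the values are invented for illustration,
# not taken from any real model.
embeddings = {
    "dog":    np.array([0.8, 0.1, 0.6, 0.2]),
    "puppy":  np.array([0.7, 0.2, 0.7, 0.1]),
    "rocket": np.array([0.1, 0.9, 0.0, 0.8]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """How closely two embeddings point in the same direction (1.0 = identical)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Tokens with related meanings sit near each other in the space...
print(cosine_similarity(embeddings["dog"], embeddings["puppy"]))   # high
# ...while unrelated tokens sit far apart.
print(cosine_similarity(embeddings["dog"], embeddings["rocket"]))  # low
```

The geometry is the whole point: "similar meaning" becomes "small angle between vectors", which is something a model can compute cheaply.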