Things we learned about LLMs in 2024

An amazing summary, including:

The GPT-4 barrier was comprehensively broken
Some of those GPT-4 models run on my laptop
LLM prices crashed, thanks to competition and increased efficiency
Multimodal vision is common, audio and video are starting to emerge
Voice and live camera mode are science fiction come to life
Prompt driven app generation is a commodity already
Universal access to the best models lasted for just a few short months
“Agents” still haven’t really happened yet
Evals really matter
Apple Intelligence is bad, Apple’s MLX library is excellent
The rise of inference-scaling “reasoning” models
Was the best currently available LLM trained in China for less than $6m?
The environmental impact got better
The environmental impact got much, much worse
The year of slop
Synthetic training data works great
LLMs somehow got even harder to use
Knowledge is incredibly unevenly distributed
LLMs need better criticism
— Read on simonwillison.net/2024/Dec/31/llms-in-2024/

