Tag: Concepts
-
Liberals and Conservatives See Different Victims: Moral Disagreement Is Explained by Different Assumptions of Vulnerability
All forms of judgment—including moral judgment—are processes of categorization (McHugh et al., 2022). When people wonder whether an act is immoral, they are questioning how well it belongs in the category of “immorality.” For 50 years, we have known that categorization judgments involve the process of template comparison, where our minds compare a potential example…
-
10 things I learned from burning myself out with AI coding agents – Ars Technica
with LLMs, context is everything, and in language, context changes meaning. Take the word “bank” and add the words “river” or “central” in front of it, and see how the meaning changes. In a way, words act as addresses that unlock the semantic relationships encoded in a neural network. So if you put “checkerboard” and…
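The “bank” example can be sketched as a toy disambiguation: represent each sense of “bank” and each context word as a vector, and let the context word “unlock” the nearest sense. The vectors and sense labels below are invented for illustration, not taken from any real model:

```python
import math

def cosine(a, b):
    """Angle-based similarity between two vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Invented 3-dimensional vectors, one per sense of "bank".
senses = {
    "bank/riverside": [0.9, 0.1, 0.0],
    "bank/finance":   [0.0, 0.1, 0.9],
}

# Invented vectors for the context words from the quote.
context_vectors = {
    "river":   [0.95, 0.05, 0.0],
    "central": [0.05, 0.0, 0.95],
}

def disambiguate(context_word):
    # The context word acts as an "address": it selects the sense of
    # "bank" whose vector points in the most similar direction.
    ctx = context_vectors[context_word]
    return max(senses, key=lambda s: cosine(senses[s], ctx))

print(disambiguate("river"))    # → bank/riverside
print(disambiguate("central"))  # → bank/finance
```

Real LLMs do this with learned, high-dimensional contextual embeddings rather than hand-picked sense vectors, but the mechanism sketched here, context shifting a word toward one region of meaning-space, is the same idea.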
-
Seven (Or, How to Play a Game Without Rules) by Joanna Kavenna
The hapless narrator’s job is to help facilitate her work in “box philosophy”: “the study of categories, the ways we organise reality into groups and sets […] the ways we end up thinking inside the box, even when we are trying to think outside the box”. https://www.theguardian.com/books/2026/jan/12/seven-by-joanna-kavenna-review-a-madcap-journey-to-the-limits-of-philosophy Sounds right up my alley!
-
The hard part: A quote from Jason Gorman
The hard part of computer programming isn’t expressing what we want the machine to do in code. The hard part is turning human thinking — with all its wooliness and ambiguity and contradictions — into computational thinking that is logically precise and unambiguous, and that can then be expressed formally in the syntax of a…
-
How LLMs give semantic meaning to a prompt | ngrok blog
In summary, embeddings are points in n-dimensional space that you can think of as the semantic meaning of the text they represent. During training, each token gets moved within this space to be close to other, similar tokens. The more dimensions, the more complex and nuanced the LLM’s representation of each token can be. —…
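The “points in n-dimensional space” picture can be made concrete with a toy similarity check: hand-made 4-dimensional vectors (invented here purely for illustration; real embeddings have hundreds or thousands of dimensions) and cosine similarity to measure how close two tokens sit:

```python
import math

# Toy 4-dimensional "embeddings" — invented vectors, not from a real model.
embeddings = {
    "cat":   [0.90, 0.80, 0.10, 0.00],
    "dog":   [0.85, 0.75, 0.20, 0.05],
    "tokyo": [0.05, 0.10, 0.90, 0.80],
}

def cosine_similarity(a, b):
    """How closely two points align in n-dimensional space (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Tokens that trained near each other score high; unrelated tokens score low.
print(cosine_similarity(embeddings["cat"], embeddings["dog"]))    # high (close points)
print(cosine_similarity(embeddings["cat"], embeddings["tokyo"]))  # low (distant points)
```

Adding dimensions gives each vector more independent axes along which to differ, which is what the quote means by a more “complex and nuanced” representation per token.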
-
To really understand a concept, you have to “invent” it yourself in some capacity.
To really understand a concept, you have to “invent” it yourself in some capacity. Understanding doesn’t come from passive content consumption. It is always self-built. It is an active, high-agency, self-directed process of creating and debugging your own mental models. — Read on simonwillison.net/2025/Oct/30/francois-chollet/