Tag: AI

  • Ralph Wiggum Loops

    This inverts the usual AI coding workflow. Instead of carefully reviewing each step, you define success criteria upfront and let the agent iterate toward them. Failures become data. Each iteration refines the approach based on what broke. The skill shifts from “directing Claude step by step” to “writing prompts that converge toward correct solutions.” https://paddo.dev/blog/ralph-wiggum-autonomous-loops
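    A minimal sketch of what such a loop might look like, assuming a hypothetical run_agent callable and a check function that encodes the success criteria (both are placeholders, not taken from the post):

      # Sketch of the "define success criteria, let the agent iterate" loop.
      # run_agent and check are caller-supplied placeholders; nothing here is
      # specific to Claude or to the linked post.
      from typing import Callable, Optional, Tuple

      def converge(
          prompt: str,
          run_agent: Callable[[str], str],           # wrapper around your agent CLI/API
          check: Callable[[str], Tuple[bool, str]],  # success criteria, e.g. "do the tests pass?"
          max_iterations: int = 10,
      ) -> Optional[str]:
          feedback = ""
          for _ in range(max_iterations):
              result = run_agent(prompt + feedback)
              ok, failures = check(result)
              if ok:
                  return result                      # converged on the success criteria
              # failures become data: fold them into the next prompt
              feedback = f"\n\nThe previous attempt failed these checks:\n{failures}"
          return None                                # did not converge within the budget

    The inversion is that the human effort goes into check (the criteria) rather than into reviewing each intermediate step.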

  • AGI is here (and I feel fine)

    The key word in Artificial General Intelligence is General. That’s the word that makes this AI unlike every other AI: because every other AI was trained for a particular purpose. Consider landmark models across the decades: the Mark I Perceptron, LeNet, AlexNet, AlphaGo, AlphaFold … these systems were all different, but all alike in this way. Language models were trained…

  • The hard part: A quote from Jason Gorman

    The hard part of computer programming isn’t expressing what we want the machine to do in code. The hard part is turning human thinking — with all its wooliness and ambiguity and contradictions — into computational thinking that is logically precise and unambiguous, and that can then be expressed formally in the syntax of a…

  • Consciousness May Require a New Kind of Computation – Neuroscience News

    So, if we want something like synthetic consciousness, the problem may not be, “What algorithm should we run?” The problem may be, “What kind of physical system must exist for that algorithm to be inseparable from its own dynamics?” What are the necessary features—hybrid event–field interactions, multi-scale coupling without clean interfaces, energetic constraints that shape inference…

  • How LLMs give semantic meaning to a prompt | ngrok blog

    In summary, embeddings are points in n-dimensional space that you can think of as the semantic meaning of the text they represent. During training, each token gets moved within this space to be close to other, similar tokens. The more dimensions, the more complex and nuanced the LLM’s representation of each token can be. —…
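    A toy illustration of that picture, with invented 3-dimensional vectors standing in for real embeddings (actual models use hundreds or thousands of learned dimensions):

      # Embeddings as points in n-dimensional space: similar tokens end up close
      # together, and cosine similarity measures that closeness. The vectors are
      # made up for the example, not taken from any real model.
      import math

      embeddings = {
          "cat":    [0.90, 0.10, 0.30],
          "kitten": [0.85, 0.15, 0.35],
          "bridge": [0.10, 0.90, 0.20],
      }

      def cosine_similarity(a, b):
          dot = sum(x * y for x, y in zip(a, b))
          norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
          return dot / norm

      print(cosine_similarity(embeddings["cat"], embeddings["kitten"]))  # close to 1.0
      print(cosine_similarity(embeddings["cat"], embeddings["bridge"]))  # much lower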

  • Pluralistic: The Reverse-Centaur’s Guide to Criticizing AI (05 Dec 2025)

    In automation theory, a “centaur” is a person who is assisted by a machine. You’re a human head being carried around on a tireless robot body. Driving a car makes you a centaur, and so does using autocomplete. And obviously, a reverse centaur is a machine head on a human body, a person who is serving as a…

  • DOC • To grow, we must forget… but now AI remembers everything

    What begins as personalization can quietly become entrapment, not through control, but through familiarity. And in that familiarity, we begin to lose something essential: not just variety, but the very conditions that make change possible. Research in cognitive and developmental psychology shows that stepping outside one’s comfort zone is essential for growth, resilience, and adaptation.…

  • Claude 4.5 Opus’ Soul Document

    Claude 4.5 Opus’ Soul Document — Read on simonwillison.net/2025/Dec/2/claude-soul-document/

  • The space of intelligences is large

    LLMs are shaped a lot less by biological evolution and a lot more by commercial evolution. It’s a lot less survival of tribe in the jungle and a lot more solve the problem / get the upvote. LLMs are humanity’s “first contact” with non-animal intelligence. Except it’s muddled and confusing because they are still rooted…

  • Generating an Infographic with Nano Banana Pro aka gemini-3-pro-image-preview

    I’m starting a detailed infographic about the Datasette project. I plan to use Google Search to gather specific information about its core processes, which I will then visualize in a multi-panel diagram. — Read on simonwillison.net/2025/Nov/20/nano-banana-pro/