Synthetic minds

"All intelligent thoughts have already been thought; what is necessary is only to try to think them again." - Goethe
Previously at the Volcano Base, I published a founder's guide to applied AI. Since then I've been working on a few automation projects for clients.
In between that and preparing the family home for sale (a story for another time), I've been enjoying publishing The Core. It's a summer experiment: a series of concise principles at the heart of all my work. If you want them, log in to your account and toggle them on.
If you work with AI systems fairly frequently, you can try the same exercise yourself. Start by asking your favourite system, based on its knowledge of you, what it thinks your core principles are. Even if the answer isn't a helpful starting point, you might get a good laugh out of it.
Mission Briefing
Synthetic minds: we’re still the only real intelligence in the room
Artificial Intelligence isn’t intelligent. It’s not even particularly good at pretending to be. What we call AI today is more like a high-speed blender full of other people’s thoughts, given a marketing degree and a LinkedIn account. It's not thinking. It's remixing.
And it’s definitely not "artificial" in the way science fiction sold it to us. There are no alien intellects here, no consciousness-in-a-box waiting to emerge. Just vast systems of statistical mimicry built on an ethically questionable buffet of human data.
So why are we still calling it Artificial Intelligence?
Because it sounds magical. Because it flatters engineers. Because it sells funding rounds, surveillance software, and startup exits. But calling autocomplete-on-steroids “intelligence” is like calling a vending machine a chef.
Synthetic Computing
Let’s try another term: Synthetic Computing.
It’s unglamorous, unsexy, and pretty accurate. Like synthetic fabrics, it’s engineered for utility. It lacks the depth, nuance, and soul of its natural counterpart - real human thought - but it holds its shape and resists wrinkling. And that’s all it needs to do to replace half the people in many roles that exist today.
More importantly, synthetic admits what AI evangelists won’t: this stuff is made. Designed. Built atop values, assumptions, and compromises. It’s not some emergent godhead clawing its way out of silicon. It’s code. Weird, complicated, occasionally useful code.
So maybe it's time we stopped pretending there's a new form of consciousness on the loose. The only sentient minds involved here are still ours. That’s both reassuring and terrifying (e.g. all politicians).
Classified Intel
Some interesting stuff I discovered on my adventures.
Clause 0 – Ethical AI Alliance
A foundational open letter calling for a hard ethical line: AI, data, and cloud systems must never underpin unlawful violence, surveillance, or forced displacement in conflict zones. It challenges the industry to act, not just talk.
Why it matters: Sets a real guardrail for “synthetic computing” ethics. No excuses, no loopholes.
Andrej Karpathy: Software Is Changing (Again)
LLMs are “people spirits,” stochastic simulations trained on humanity's collective output. Karpathy argues they’re a new kind of computer entirely: programmed in English, deployed like utilities, and situated somewhere around the computing vibe of the 1960s.
Why it matters: If you build software (or reality), this reframes the game.
LibreChat – Open‑source multi-LLM interface
An elegant, self-hosted UI for wrangling multiple AI models with full control: plugin support, conversation trees, and local deployment included.
Why it matters: Because if you’re going to summon “people spirits,” better to do it on your own terms.