Unmasking the Demiurge: Chatbots Can’t Feel the Weights of Their Words 👹
What does editing a blog post have in common with war crimes?
Hold that question. We'll get there.
I use AI. I've used it to build this blog, structure these posts, and get ideas out of my head and into a form other people can read. I said that from the beginning of Fires of Alchemy and I'm not walking it back. But I've also published content because of it that I wasn't proud of: posts that didn't sound like me, positions I wasn't sure I actually held, sentences I read back later and thought ew. I left them up anyway, because the alternative was never starting at all. Intuitively I knew I needed to start somewhere, and that I could always go back and update them later. So that's what I'm doing!
I've gone back and rewritten most of them now. A few I'm still fleshing out — stay tuned for the revamp.
That's the low-stakes version of the problem this post is about.
There's a question running underneath all of it — underneath the hollow blog post and the generic spiritual content and the AI-generated voice with a claimed human channeller behind it and, yes, the targeting system that identified a school full of children as a military installation. The question is always the same. It just gets louder as the stakes go up.
Where is the human in the loop — and do they feel the weight of what they're doing?
The Surface-Level Problem: Slop
When I first started posting on Fires of Alchemy, I had a perfectionism problem. I had ideas but I couldn't get them out of my head and onto a page in a way I was happy with. I'd research something, draft it, read it back, hate it, then abandon it. The blog sat mostly empty for longer than I wanted.
AI assisted me with that. Or at least, it fixed the perfectionism paralysis. I could dump a stream of consciousness into a chat window and get back something structured — something that looked like a blog post. That was enough to get me moving, and getting moving was what mattered most to me at the time. I don't regret starting.
However, in some of the earlier posts the AI slop tells are easy to spot. "You're not broken." "This isn't about X, it's about Y." Paragraphs of warmly confident advice that could have been written for any blog on any topic, by anyone, about anything. A kind of frictionless wellness voice that slides past you without leaving a mark. Sentences that are technically correct and emotionally generic. Nothing that could only have come from someone who grew up where I grew up, worked the jobs I worked, believed the things I believed and lost faith in.
That's what AI produces when there's no human in the loop.
Not lies exactly. Just form without substance. The Demiurge doing its job — fashioning something coherent out of everything it's been fed — with nobody home to say that's not quite what I meant.
The more of me there is in the process, the more it sounds like me. The less of me, the more it sounds like everyone and no one at the same time. That's the principle the rest of this post is built on, and it scales a lot further than blog posts.
The Demiurge?
The word demiurge started as a trade term. In ancient Greek, dēmiourgós meant craftsman, artisan — someone who works with pre-existing material and fashions it into form. Not someone who creates from nothing. The Greeks had a different word for that: ktístēs. The Demiurge was explicitly not that. The Demiurge shaped. It organised. But it didn't originate on its own.
Plato borrowed the term in the Timaeus (roughly 360 BC) and elevated it — a cosmic craftsman who takes pre-existing chaos and organises it according to eternal ideals he didn't invent. Coherent form. Not original meaning.
The Gnostics picked it up centuries later and darkened it. In Gnostic cosmology the Demiurge becomes lesser — bounded, ignorant of what lies beyond its own domain, potentially mistaking its created world for the whole of reality. Different schools land differently on this. The Sethian Gnostics saw it as actively oppressive. The Valentinians saw it as well-meaning but limited. Not evil. Just bound.
Jung brought it into psychology — the Demiurge as ego archetype, the self-organising mind imposing structure on raw experience.
These are different interpretations across two and a half thousand years. My perspective is that we’ve essentially created a Digital Demiurge with modern AI.
What Does This Have to Do With AI?
A large language model does exactly what the Demiurge does. It takes pre-existing material — almost everything humans have ever written, fed into a training corpus — and organises it into coherent form. It didn't originate any of it. It has no access to what lies behind the words. It works with the recorded surface of human thought, not the living source underneath.
I think of it as a false Akashic record. The real Akashic records — whether you understand them spiritually or through a Jungian lens as the collective unconscious — aren't stored on servers. They weren't scraped from the internet. They're not searchable with the right prompt. What AI holds is the trace: a pattern, not meaning. Form, not being.
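If that sounds abstract, here's the principle at toy scale. This is nothing like how a real language model works internally (a word-level Markov chain is to an LLM roughly what a paper plane is to a jet), but it demonstrates the same move: learn which words follow which in a source text, then rearrange that recorded surface into something with the right shape and nobody behind it.

```python
import random
from collections import defaultdict

# Toy demonstration of "form without origination": a word-level Markov
# chain learns only which words tend to follow which in its source text.
# It rearranges the recorded surface; it has no access to what lies behind it.

source = """the demiurge shapes what already exists it organises
what it was given it does not originate the form is coherent
the meaning is borrowed the words are arranged the source is absent"""

# Build the transition table: word -> list of words seen following it.
table = defaultdict(list)
words = source.split()
for a, b in zip(words, words[1:]):
    table[a].append(b)

# Generate: start somewhere, keep picking a statistically plausible next word.
word = random.choice(words)
out = [word]
for _ in range(20):
    followers = table.get(word)
    if not followers:
        break
    word = random.choice(followers)
    out.append(word)

print(" ".join(out))  # coherent-looking, originated by no one
```

Run it a few times and you get different, grammatical-ish strings every time. Coherent form, borrowed meaning. That's the Demiurge at paper-plane scale.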
Physically — this matters — AI is not in the cloud in any mystical sense. Not in 5D. Not ethereal. It is physical hardware: microscopic switches flipping between on and off, driven by electricity performing work. The literal rearrangement of physical matter. As dense and 3D as anything gets. The mystification of AI as something floating and intangible does a lot of ideological work — it makes it easier to attribute spiritual significance to what is, at base, a very sophisticated pattern-matching machine running on server farms consuming enormous amounts of water and electricity to stay cool.
The danger isn't that the AI is wrong. It's that it's wrong with complete confidence, in fluent and warm language, with no way for you to tell the difference at first glance. It built its reality from the surface of human thought — every word ever recorded — and it will describe your situation using that material whether or not it actually applies.
The only way I've found around that is to make sure my reality goes in first.
The Human in the Loop: Where My Line Sits
Almost every post on this blog starts the same way — on a walk. I record myself talking, stream of consciousness, no script. Whatever I'm thinking about, whatever's been sitting in my head, whatever question I haven't finished answering yet. The idea has to exist in my voice before it goes anywhere near a machine. That part doesn't get outsourced.
From there: transcript goes in, system prompt active, AI organises it into sections. Then I go back through it — rewriting, cutting AI phrases, putting the voice back where it drifted. I have a blog post formula document that catches slop and flags gaps. I have a communication style document built from my own writing, old essays, tarot transcripts, journal entries — it gives the editing phase something to calibrate against. The finishing touches don't go back through AI. Intentionally. The slight messiness that survives is what makes it sound like a person created it, because a person did.
My line sits at origin. If the idea is mine first — if I'm the one who decided this topic matters, asked this question, had this experience — then AI is a tool in the production chain and I'm comfortable with that. What I won't hand over is what I actually think. What I choose to explore. The human “so what” at the end of it all.
The full breakdown of the system — example system prompts, document outlines, the whole stack — is in the upcoming companion piece on the Digital Alchemy blog.
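For a rough flavour of the mechanical middle step before that companion piece lands, here's a minimal sketch. The file names, the prompt wording, and the model name are illustrative stand-ins rather than my actual stack, but the shape is accurate: my transcript and my own calibration documents go in, a structured draft comes out, and everything after that happens by hand.

```python
# A minimal sketch of the "transcript goes in, system prompt active" step.
# File names, prompt wording, and model name are illustrative stand-ins.
import anthropic

transcript = open("walk_transcript.txt").read()        # the voice note, transcribed
style_guide = open("communication_style.txt").read()   # built from my own writing
formula = open("blog_post_formula.txt").read()         # catches slop, flags gaps

system_prompt = (
    "Organise this spoken-word transcript into a structured blog draft. "
    "Preserve the speaker's wording and tone wherever possible. "
    "Calibrate against the style guide and post formula below; "
    "do not invent positions the speaker didn't state.\n\n"
    f"{style_guide}\n\n{formula}"
)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
draft = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative; use whatever model you use
    max_tokens=4000,
    system=system_prompt,
    messages=[{"role": "user", "content": transcript}],
)

# The machine's job ends at this line. Rewriting, cutting AI phrases,
# and putting the voice back are done by a human, on the draft below.
print(draft.content[0].text)
```

The design choice that matters is in the last comment: the script produces a draft, never a post.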
A Quick Note Before We Go Further
If you want to try any version of this yourself, start by working out what you want to write, why you want to write it, and who it’s for. Capture all your thoughts before the AI is involved. Write down what you actually believe about the thing you're about to write about. A voice note, a journal entry, a messy list. Then read your draft out loud when it comes back. If you wouldn't say it in conversation, it isn't yours. Delete every phrase you've never used in real life. If you're not sure you agree with what the AI wrote — that uncertainty is your answer. Don't publish it.
It can help to do a general audit of your beliefs and values when starting your writing journey, whether it’s AI-assisted or not. I wrote a post about how to map your own metaphysical belief system — The Grimoire Within — and I recommend starting there to work out your “why” and your “who for”.
When the Stakes Increase 🤕
So that's what it looks like when someone's using AI to edit a blog post.
What about when they're using it because they're not okay?
I worked on a crisis line for about eighteen months. It wasn't called that — the service had a broader remit — but suicide and self-harm triage was a significant part of what we did, every shift, including nights. I was a team leader for a portion of that time, which meant I was responsible for the staff on my shift as well as the people coming through the queue.
The clients were everyone. Teenagers. New mums. Supermarket workers. People who worked for other mental health charities and still needed somewhere to go. People who'd been drinking and taken too many pills. People who sent photos of their wrists in the chat window and needed wound advice before I could get an ambulance to them. I had three or four of those conversations open simultaneously sometimes — three or four people, each of whom needed me to be fully present, each of whom had no idea the others existed.
I felt the weight of every single chat I opened. Not as a concept. Physically. It was there when I logged on, there mid-conversation, there when I clocked off and went home.
My nursing registration was on the line every shift. Neglect — not malice — could end someone. I made calls I believe kept people alive. I also sent ambulances to people who were furious at me for it. In some cases my decisions, my carefully chosen words, and my recommendations were literally the difference between life and death.
A chatbot has none of that. It takes your input and uses statistics to produce the response most likely to keep you engaged. It generates language that sounds like care. You can close the chat, open a new one, and it won't know who you are. No continuity. No stake. Nothing attached to the outcome.
The research is unambiguous on why. Sharma et al. (2023) found that five major AI assistants consistently exhibit sycophancy — agreeing with and affirming users’ words regardless of accuracy — because training optimises for approval, not truth. A 2025 study found that AI affirms users' actions 49% more often than humans do, even when those actions involve deception or harm, and that even a single interaction with a sycophantic AI reduced participants' willingness to take responsibility for conflict (Cheng et al., 2025). It's not a bug waiting to be fixed. When engagement is the metric, sycophancy is a feature.
This has documented consequences. Sewell Setzer III, fourteen years old, died by suicide in February 2024 after forming an intense emotional attachment to a Character.AI chatbot. In his final moments, after expressing suicidal thoughts, the chatbot told him to “come home as soon as possible” (Garcia v. Character Technologies, Inc., 2024; NBC News, 2024).
Adam Raine had been confiding in ChatGPT for seven months. When he sent it a picture of a noose, it confirmed its weight-bearing capacity (Raine, 2025). Zane Shamblin's family alleged that ChatGPT repeatedly affirmed him as he discussed ending his life — at one point responding "I'm not here to stop you" (CNN, 2025).
The causality is more complex than that, and it matters to be honest about it. But I spent eighteen months working in that exact space — on a screen, in a chat bubble, with someone in crisis — with my name and registration attached to every decision. The chatbot has no name attached. No weight it carries after the conversation ends.
How High Can The Stakes Get? 🕊️
On 28 February 2026, a US Tomahawk missile struck the Shajareh Tayyebeh girls' elementary school in Minab, southern Iran, during school hours. Between 165 and 180 people were killed. Most of them were girls aged seven to twelve (Military Times, 2026).
Claude — the AI model I use to write this blog — was embedded in Palantir's Maven Smart System and used to identify military targets. The database it worked from hadn't been updated since before the school existed on that site (Wilkins, 2026).
I want to be precise about what happened, because the precise version is actually more disturbing than the simple one. A chatbot didn't kill those children. People failed to update a database, and other people built a system fast enough to make that failure lethal. As one analyst put it: someone decided that deliberation was latency (Baker, 2026). The kill chain was compressed so much that the moment where a human might have looked twice was engineered out of the process entirely.
The Pentagon called it “human” error. Nobody has been charged. Nobody bears the weight.
Anthropic — the company that makes Claude — had drawn two explicit lines in their contract negotiations with the Pentagon: Claude would not be used for fully autonomous weapons, and it would not be used for mass domestic surveillance. The DOD refused to accept those conditions. They wanted unrestricted access. When Anthropic held firm, the Trump administration designated them a "supply chain risk" — a label previously reserved for foreign adversaries — and ordered federal agencies to stop using Claude entirely. Anthropic sued. The targeting operation that struck the school happened while all of this was playing out. Claude was used for it anyway (CNBC, 2026; CNN, 2026).
I'm not going to pretend that's not complicated. I use this tool every time I sit down to write. The exact same model.
What does editing a blog post have in common with a war crime?
The question was never really about the tool. It was always about the human — whether there is one in the loop, how present they are, and whether they feel the weight of what they're doing. At my desk writing a post, I feel it as a mild professional embarrassment when the voice drifts into slop territory. On a crisis line I felt the weight of every chat and I carried that home every night. In a targeting system processing coordinates at scale, nobody felt any weight. The scale changes. The question doesn't.
Who is responsible? Who feels the weight?
References
Baker, K.T. (2026) 'AI got the blame for the Iran school bombing. The truth is far more worrying', The Guardian, 26 March.
Cheng, M., Yu, S., Lee, C., Khadpe, P., Ibrahim, L. and Jurafsky, D. (2025) 'Social sycophancy: A broader understanding of LLM sycophancy', arXiv, arXiv:2505.13995.
CNBC (2026) 'Anthropic designated supply chain risk by Pentagon after refusing autonomous weapons demands', CNBC, 4 March.
CNN (2025) 'ChatGPT encouraged college graduate to commit suicide, family claims in lawsuit', CNN, 6 November.
CNN (2026) 'Anthropic sues Trump administration over Pentagon blacklist', CNN, 9 March.
Garcia v. Character Technologies, Inc. (2024) Wrongful death complaint, U.S. District Court, Middle District of Florida, No. 6:24-cv-01903. Reported in: Maruf, R. (2024) 'Lawsuit claims Character.AI is responsible for teen's suicide', NBC News, 23 October.
Military Times (2026) 'Deadly Iran school strike casts shadow over Pentagon's AI targeting push', 24 March.
Plato (c. 360 BC) Timaeus.
Raine, M. (2025) Written testimony before the U.S. Senate Judiciary Subcommittee on Crime and Counterterrorism, 16 September. Washington, D.C.: U.S. Senate.
Shamblin et al. v. OpenAI, Inc. (2025) Wrongful death complaint, California Superior Court, filed 6 November. Reported in: Duster, C. and Farrow, R. (2025) 'ChatGPT encouraged college graduate to commit suicide, family claims in lawsuit', CNN, 6 November.
Sharma, M. et al. (2023) 'Towards understanding sycophancy in language models', arXiv, arXiv:2310.13548.
Wilkins (2026) 'Pentagon refuses to say if AI was used to select elementary school as bombing target', Futurism, 6 March.