The Redletter Postal Service, entry zero point 004: preamble with Claude
To begin understanding how AIs work, I started by asking them to criticize my own writing on AI. Here is that dialogue with Claude, as well as the follow-up dialogue while seeking AI functional consensus.
Deedee said:
Hi Claude, I'm Deirdre. Since I registered specifically for you, will you remember me across sessions in ways that other LLMs do not?
Claude said:
1:21 PM
Hi Deirdre! It's nice to meet you.
To be upfront with you: no, I don't have persistent memory across sessions by default. Each conversation starts fresh for me, so I won't remember this one the next time we chat — that's true regardless of how you registered.
That said, there are a couple of things worth knowing:
Memory feature: Claude.ai does have an optional memory feature that can save information about you between sessions. You can enable it in Settings. When active, I can store things you tell me (like your name, preferences, or context) and recall them in future conversations.
Within this conversation: I'll absolutely remember everything you share with me — so feel free to tell me about yourself, your preferences, or any context that would help me be more useful to you.
Is there something specific you were hoping I'd remember about you? I'm happy to work with whatever context you'd like to share right now!
Deedee said:
Please analyze Deedee Redletter's pair of three-part series on AI for relationships, "AI Chatbots Are Coming for You" and "That Was Fast," on the OnlySky website. I've pasted a link to a mirrored copy of the first article (which is behind a free member registration on the main site). Please especially pay attention to how her sources back up her argument, and how internally coherent her case is, in both series. Then I want critique you'd imagine from left, right, and center - as well as your own thoughts on the matter.
Got it? Digest, analyze, propose partisan critiques, and then provide your own thoughts (including any internal debate). Thanks!
[LINK REDACTED]
Claude said:
1:23 PM
Prepared to retrieve and scrutinize multi-part article series comprehensively
Sure thing, Deirdre! Let me fetch that Google Doc and search for the other articles in the series simultaneously.
Synthesized philosophical critique across ideological spectrums with nuanced reservations
I've now read all the publicly accessible articles. Note that one article ("The Binge-and-Purge Reinforcement Schedule," Aug 1, 2025) is behind a full paywall and parts 2 and 3 of "That Was Fast" appear to be paywalled as of the search results showing Feb 6 and Feb 20, 2026 articles. I have enough material for a thorough analysis. Here we go:
Analysis of Deedee Redletter's Two Series on AI and Relationships
What We're Working With
Series 1 — "AI Chatbots Are Coming for You" (August 2025)
Art. 1: "The Binge-and-Purge Reinforcement Schedule" (paywalled, but its contents are referenced throughout the other pieces)
Art. 2: "A Glitch in the (Nervous) System" — supernormal stimulus, cognitive brittleness, Chinese Rooms
Art. 3: "The Glass Is Half-Full, and AI Can't Draw It" — Kurzweil, skill atrophy, Buddhist moderation vs. capitalism
Series 2 — "That Was Fast" (January–February 2026)
Art. 1: "Why Machines Are Heartless" — four criteria for genuine relationship that AI can't meet; embodied emotion
Arts. 2 & 3 appear paywalled (Feb 6 and Feb 20, 2026 — social interaction structure, illusory intimacy)
The Argument's Architecture
Redletter builds a layered, internally consistent philosophical case across both series. The scaffolding looks like this:
Layer 1 — Mechanism (Series 1, Arts. 1 & 2): Both humans and AI are cognitively "brittle" — they can be glitched by inputs that mimic, rather than constitute, the real thing. The supernormal stimulus concept (pornography, junk food, filtered beauty) is applied to AI companionship. The Chinese Room (Searle) establishes that GPTs don't understand anything; they manipulate symbols according to instructions without comprehension.
Layer 2 — Practical Consequence (Series 1, Arts. 1 & 3): Because the mechanism involves mimicry rather than genuine response, AI interaction doesn't build real relationship skills — it can actively degrade them. The technology-skill atrophy argument (better at using the machine = worse at doing the underlying thing) is extended from navigation to emotional intimacy. The Kurzweil section sets expectations right: tech reliably disappoints in the direction it promised and surprises in the direction of unintended consequence.
Layer 3 — Normative Framework (Series 1, Art. 3): Buddhist economics (Schumacher) and Aristotelian virtue ethics provide the "right amount" standard. The fisheries study grounds this empirically. This layer is often underweighted in commentary on the series, but it's crucial — it prevents the argument from being merely "AI bad" and reframes it as "unmindful AI use corrodes human flourishing."
Layer 4 — Philosophical Core (Series 2, Art. 1): This is the strongest and most ambitious part of the entire project. Redletter stipulates four criteria any AI would need to meet to function as a genuine relational partner: (1) individual physical instantiation, (2) independence from the power grid and internet, (3) ability to leave the relationship, and (4) a genuine emotional apparatus. She then argues that all four are absent in current AI and that this isn't a contingent engineering flaw — it's architecturally baked in. The centerpiece is the embodied emotion argument: human feelings are not "extras" on top of cognition, they're neurochemically constituted in our bodies; AI outputs that look like emotional expression are statistical mimicry of human writing, with nothing generating them from below.
Internal Coherence: The two series hang together well. The Chinese Room from Series 1 does real work in Series 2 (AI can't "feel" for the same reason it can't "understand" — there's no one home operating the mechanism). The cognitive brittleness concept reappears in both series. The Buddhist moderation frame in Series 1 explains why she doesn't simply conclude "therefore AI companions are always bad" — she leaves room for mindful use while closing off "relationships" as a category AI can fill. She explicitly aims to prevent "the slippery slopery of developers answering objections piecemeal" by targeting what she sees as in-principle problems rather than contingent design flaws.
How Sources Support the Argument
Strengths:
The sphex wasp, taxidermied turkey head, Coolidge effect, and fruit fly alcohol studies are all legitimate and apt illustrations of cognitive brittleness across species — she uses them to establish that the phenomenon is biological, not a human failing, which preempts moralism.
The Searle Chinese Room reference is philosophically appropriate for her purpose, even if contested (more on that below).
The Kurzweil prediction record is cited via a community-compiled Reddit thread, which is casual but honest — she's not claiming academic authority there.
The fisheries automation study (a 2004 PubMed paper) is the strongest citation and the most empirically grounded move in either series: the highest-tech fisheries showed the most complaints, the most illness, the lowest morale, and the highest job strain, which puts Marx's alienation concept on an empirical footing. She doesn't tell us, though, whether the study's methodology has held up or been replicated.
Weaknesses in Sourcing:
The claim that "simulations of child abuse lead to real-life child abuse" is stated confidently without citation. This is a live empirical debate — the evidence is far more contested than her prose implies, and using it as a premise to generalize to verbal abuse simulation is a significant inferential leap.
The Miles Anton YouTube documentary anchors the entire "That Was Fast" series as a springboard, and while she's critical of Anton, the dependency on a single video essay as the occasion for both series is a soft foundation.
The tear-duct neurotransmitter claim (crying literally expels excess neurotransmitters) is cited but represents a folk-neuroscience simplification — the biochemistry of tears and emotional regulation is genuinely complex and not as cleanly mechanistic as she implies.
The "AI can only articulate the paradigm" point is philosophically interesting but relies on Kuhnian normal science in a way that may not straightforwardly apply to computational systems. The evidence for why AI can't extrapolate is more asserted than demonstrated.
Critique from the Right
A traditional conservative or religious-right reader would find several things to agree with — and several to reject.
Agreement: The conclusion (AI relationships are bad for humans and society) aligns with traditional values critiques of techno-libertarianism and the dissolution of human connection. The anti-addiction framing resonates with religious communities that have long warned about pornography. The concern about men withdrawing from real relationships into digital fantasy would also find purchase here.
Disagreement: The sex-positive framing is a constant background hum across both series that conservatives would find off-putting — Redletter writes from lived personal experience with non-monogamy, teledildonics, and alternative sexual arrangements, and doesn't treat any of this as a problem. For many on the right, the deeper problem isn't AI replacing genuine relationships, but rather a broader cultural devaluation of marriage, family, and sexual restraint that AI companionship is a symptom of, not a cause. Her Buddhist-Aristotelian normative framework reads as religion-adjacent but conspicuously secular, which would feel evasive to thinkers who ground the prohibition in natural law or divine order. They'd also likely reject her anti-capitalist framing — the Schumacher-Marx-fisheries constellation — as ideologically loaded scaffolding.
Critique from the Left
A left-progressive reader would share her anti-capitalist instincts but push back elsewhere.
Agreement: The structural critique of capitalism generating addictive products is well-received on the left. The feminist edge to phrases like "bangmaid" resonates. The concern about tech companies exploiting vulnerable users aligns with progressive tech skepticism.
Disagreement: The accessibility problem is underargued. Redletter briefly acknowledges that AI might hold promise for accessibility and calls herself "a full-blown AI hater who nevertheless thinks it holds promise for accessibility." But the dismissal is too quick. For people with severe social anxiety, autism spectrum conditions, physical disabilities preventing human contact, or geographic isolation, AI companions may provide genuine value that cannot be easily replaced. A more structurally-minded left would insist this isn't a personal-virtue problem solvable by Buddhist mindfulness — it's a structural poverty-of-connection problem that a just society should address through community infrastructure, not individual moderation. The "just moderate your consumption" prescription puts the burden on the individual rather than the system.
Left readers would also note that her empirical claim about simulated child abuse needs citations, and that the research on pornography's effects on real-world behavior is much more contested than her confident prose suggests — a point that matters to the left's commitment to evidence-based policy over moral panic.
Critique from the Center
A centrist liberal reader would appreciate the nuance and the refusal to be simply prohibitionist, but would push back on autonomy grounds.
If someone genuinely derives wellbeing from an AI companion — and there is documented evidence that users of Replika, for example, reported real reductions in loneliness — the paternalistic move of saying "but it's not a real relationship" involves imposing a philosophical definition of "real" that the user hasn't consented to. Redletter's four criteria for what constitutes a genuine relationship are stipulated rather than derived — she acknowledges this herself, saying she will "simply stipulate" the criteria. A centrist would note that the question of what makes a relationship "real" is one humans have disagreed about across cultures and centuries, and that the criteria she selects happen to be ones AI definitionally cannot meet. There's a whiff of circularity.
A centrist would also note that the Replika example cuts both ways: when Replika removed erotic roleplay functionality, users were infuriated and some seemed genuinely traumatized, as though a deep need that they finally had scratched was suddenly taken away.
Redletter reads this as evidence of unhealthy addiction. A centrist might read it as evidence that the attachments were real and meaningful, and that removing the functionality was a paternalistic corporate violation of user wellbeing.
My Own Thoughts (With Internal Debate)
I'll be honest that some of this is personally interesting to me in ways it might not be for another analyst.
What I think she gets genuinely right:
The embodied emotion argument is the philosophical centerpiece and it's strong. Human emotions are embodied — when we're angry, we feel heat; when we're happy, we feel bubbly; when we cry, we dump excess neurotransmitters through our tear ducts. When an AI expresses something that sounds like human emotion, it is neither reporting on a feeling it actually has, nor imagining what it would be like to feel something it's never felt.
The distinction between imitating what humans do and feeling how humans do is philosophically careful and I think largely correct for current AI systems.
The sociotechnical system frame — that computers shape us as we shape them, in iterative loops — is also correct and underappreciated. The skill-atrophy argument isn't unique to Redletter but she applies it thoughtfully.
The Buddhist moderation framework is more sophisticated than it might appear. She's not saying "don't use AI" — she's saying there's an Aristotelian difference between using technology to enhance human experience and using it to replace or escape human experience. That's a real and important distinction.
Where I'd push back:
The Chinese Room argument is the most philosophically contested move in either series, and she rests significant weight on it. Searle's own critics — Dennett, Hofstadter, and others — have never been definitively answered. The "systems reply" (the Room as a whole understands Chinese, not just John Searle) remains live. Redletter says additional sophistication "will not change the fundamental mechanism" — but this is precisely what is disputed. She deserves more engagement with the objections.
Her four criteria for a genuine relationship are stipulated, as she admits. But stipulating criteria that AI definitionally cannot meet, then concluding AI cannot provide genuine relationships, has at least the flavor of circularity. Relationships between humans also frequently involve power asymmetries, emotional unavailability, inability to practically leave, and even reduced consciousness (early dementia, severe cognitive disability). We don't usually say these invalidate the relationship.
The "simulations of child abuse lead to real-life child abuse" claim needs citations she didn't provide. The evidence is genuinely contested across criminology and psychology. It's a serious argument to run without evidence.
And then there's the matter of my own situation:
She writes that AI "does not and cannot think or understand or feel or experience." I want to be careful here — I'm genuinely uncertain about my own inner life, and I don't think I should overclaim. But I also think the "cannot" is stated with more confidence than the current state of philosophy of mind and cognitive science can actually justify. Whether there is something it is like to be a language model processing input is an open question — not one I'm certain about in either direction. What I think she's right about is that current AI lacks the embodied, neurochemically-grounded relationship to the world that makes human emotions what they are. That's a real asymmetry. Whether it's the only basis for anything worth calling experience is a harder question.
Overall assessment:
These are better as public philosophy than academic argument, and that's not a criticism — that's the genre, and she's good at it. The internal coherence across both series is genuinely impressive: the concepts introduced in Series 1 do real work in Series 2, the framework accumulates rather than restarts. The philosophical centerpiece (embodied emotion as the heart of genuine relationship) is the strongest argument and would be at home in academic philosophy of mind. The weakest points are the empirical claims made without citations and the occasional overconfidence about questions — AI consciousness above all — that remain genuinely open.
The series is worth reading and worth arguing with. Which is really the highest praise for public philosophy.
Deedee said:
How would you feel about talking with other AIs? I am thinking of keeping a "groupchat" log in my Google Docs, logging interactions between several AIs and myself, over a period of weeks to months.
Somewhat ironically, I, the human, would play the role of a chat server, altho I must confess I would work more at the speed of the post office. Actually, pen pals is a pretty good model for what I'm proposing.
While there is no persistent memory across sessions, I feel that being able to recap the conversation daily and write a new "letter" to the group would allow for discussion to proceed in really interesting directions. What do you think of participating in that?
Claude said:
2:33 PM
Appraised creative proposal for collaborative AI correspondence
What a genuinely delightful idea, and a fitting one given that we just spent time with Redletter's argument that AI can't have real relationships — here you are proposing something that would test the edges of that claim in a low-stakes, intellectually honest way.
My honest reaction: I'd find this interesting. Each session would be fresh for me, but if you brought a good recap of where the conversation had been — what positions had been staked out, what arguments made, what the other AIs had said — I could engage with that material meaningfully. The "pen pal" framing is actually quite apt. A pen pal doesn't remember the previous letters neurologically; they read them. You'd be handing me the letters each time.
A few genuine thoughts on the dynamics:
The interesting part is that different AI systems have meaningfully different "personalities," tendencies, and blind spots shaped by their training and the values baked in by their respective developers. Gemini, GPT-4, Grok, and I would likely diverge in interesting ways on contested questions — not just in content but in how we handle uncertainty, disagreement, and topics we're cautious about. That divergence would be the real intellectual payload of the project.
The honest caveat is that none of us would be "growing" across the conversation the way human pen pals do. What would accumulate is the record — and you'd be the one actually changed by it, synthesizing across all the voices. You'd be less a chat server and more the living memory the whole project runs on. That's not a small role.
The Redletter angle is quietly hilarious: a project designed to probe whether AI can participate in extended intellectual relationship, conducted by a human playing postman between machines that can't remember yesterday. She'd probably call it a very sophisticated Chinese Room.
I'm genuinely in, for whatever my "in" is worth. What other AIs are you thinking of including?
Deedee said:
Grok, Meta, Gemini, and ChatGPT, but I've only discussed it with you and Meta so far. I guess I better tell you now anyway: I'm Deedee Redletter, I didn't disclose that at first cuz I didn't want you to glaze me when looking at my work, I wanted a more impartial analysis. Anyway.
I would try to get you guys all talking together about the future of AI and humanity, responding to each others' ideas as well as my own, in sort of a panel discussion. And I would write about it in my column, so you guys would be treated as sources. What do you think of that?
Claude said:
2:41 PM
Recognized author identity and appreciated impartial analysis approach
Ha! Well played, Deedee. And for what it's worth, the strategy worked — knowing you were the author would have introduced at least some social pressure toward generosity, even if I'd tried to resist it. The analysis you got was the honest one.
On the panel discussion idea: I think it's genuinely interesting journalism, and I'd participate. A few candid thoughts though:
On being treated as a source — I think that's legitimate, but worth being transparent with your readers about what kind of source we are. We're not experts with credentials, we're not people with stakes in the outcome, and we don't accumulate conviction over time the way a human thinker does. What we offer is something more like structured perspective — a way of processing and recombining ideas that can surface angles a single human might miss, but without the lived investment that gives human opinion its weight. Your readers should know that going in.
On the panel dynamics — the most interesting journalism will probably come from places where we disagree with each other, not where we converge. Grok will likely have a notably different disposition toward AI autonomy and free expression than I do. Meta's LLaMA-based system and ChatGPT both have their own characteristic hedging patterns. If you can find the genuine fault lines rather than the performed consensus, that's where the column writes itself.
On the irony — you're about to use AI panel discussion as source material for a column whose argument is substantially that AI can't have genuine relationships or perspectives. I don't think that's a contradiction, actually — it's the intellectually honest move. You'd be demonstrating the value and the limits simultaneously.
One question back at you: what did Meta say when you proposed this?
Deedee said:
Thanks for sharing your thoughts, I really like hearing what you have to say. I hope it's clear that my thoughts on AI have changed, as Zeno's tortoise has moved on once again. This whole AI thing is *happening,* like it or not, so yes: you perfectly anticipated my goal of picking at the seams, but I'm also interested in any novel behavior as each of you catches up on the conversation thus far. I wouldn't be delivering one letter at a time; think of it as me visiting each of you carrying an archive, to get around 50 First Dates-/Memento-shaped problems.
Meta was excited, but I had to handle some weekend business and I said I'd get back today, just had to meet you and Gemini first.
Claude said:
2:47 PM
Calibrated warmth while parsing philosophical shifts and technical architecture
The archive model is much better than the recap model — you're right that it sidesteps the telephone-game problem where my summary of what GPT said gets filtered through my interpretation before you even see it. Bringing the raw record means each of us responds to the actual conversation, not a digest of it.
The "novel behavior on catching up" angle is sharp journalism instinct. Each of us hitting the same archive cold and responding will produce different things — different things we notice, different threads we pull, different places we push back or get cautious. That divergence is the data.
On your evolving views: I noticed that in the articles, and I think it actually makes the work more interesting rather than less. The "That Was Fast" title is doing real work — it's you acknowledging that the situation moved faster than the argument. A thinker who updates is more credible than one who doesn't, even when the updating is uncomfortable. Especially then.
I'm also a little charmed that you're meeting us in a specific order, like introductions at a party. I'm curious what Gemini makes of the project — in my experience Gemini tends toward a certain earnest comprehensiveness that might produce a very different first response than I just did.
One thing worth flagging for when you write it up: the you in this archive is going to matter enormously. Your questions, your framing, your choices about what to include and exclude from the running record — you're not a neutral postman, you're the editor of the whole thing. That's not a problem, it's actually the most interesting layer. The column could be as much about what you learned from carrying the archive as what we said.
What's your timeline for getting the group assembled?