The Redletter Postal Service, entry zero point 002: Preamble with ChatGPT

To begin understanding how AIs work, I started by asking them to criticize my own writing on AI. Here is that dialogue with ChatGPT, as well as the follow-up dialogue while seeking AI functional consensus.
Deedee said:
Please analyze Deedee Redletter's pair of three-part series on AI for relationships, "AI Chatbots Are Coming for You" and "That Was Fast," on the OnlySky website. I've pasted a link to a mirrored copy of the first article (which is behind a free member registration on the main site). Please especially pay attention to how her sources back up her argument, and how internally coherent her case is, in both series. Then I want the critiques you'd imagine from left, right, and center - as well as your own thoughts on the matter.

Got it? Digest, analyze, propose partisan critiques, and then provide your own thoughts (including any internal debate). Thanks!
[LINK REDACTED]
ChatGPT said:
Below is a structured critical analysis of the two Deedee Redletter series on OnlySky:

“AI Chatbots Are Coming for You” (3 parts, Jul–Aug 2025)

“That Was Fast” (3 parts, Jan 2026)

I’ll cover:

Reconstruction of her overall argument

Evaluation of how her sources support the claims

Internal coherence and contradictions

Likely critiques from left, right, and center

My own evaluation (including an internal debate)

I’ll cite the available articles and summarize the paywalled ones based on the accessible content and their arguments.

1. Reconstructing Redletter’s Argument
Core thesis (across both series)
Redletter’s central claim:

AI cannot participate in real relationships because it lacks emotions, needs, interior experience, and the ability to consent.

Therefore:

AI companions can simulate relationships

Humans’ feelings toward them may be real

But the relationship itself is fundamentally illusory

This is the backbone of both series.

2. Series 1: “AI Chatbots Are Coming for You”
Part 1–2 (conceptual groundwork)
Redletter builds the philosophical framework.

Key ideas:

1. Supernormal stimulus

She argues modern digital sex/relationships exploit neurological triggers.

Examples:

Porn novelty loops

algorithmic stimulation

increasingly immersive digital intimacy

She explains that human sexual response systems evolved for physical partners, not digital simulations, meaning our brains can be hijacked by artificial stimuli. 

2. Cognitive brittleness

Brains rely on heuristics that can be exploited.

Her examples:

taxidermy-head sexual response in turkeys

the sphex wasp behavioral loop

porn or digital stimuli triggering real responses

The point:

Human cognition can mistake simulations for reality.

3. AI as Chinese Room

She uses the famous thought experiment from John Searle:

GPT systems manipulate symbols

they do not understand meaning

She concludes that AI mimics human conversation without genuine understanding. 

Part 3: cautious optimism
The last article argues technology should be used mindfully rather than rejected outright.

Example:

Ray Kurzweil's predictions about remote sex largely failed to materialize.

Instead, internet sexuality emerged in piecemeal forms: video, interactive platforms, and remote sex toys. 

Conclusion of the first series:

AI intimacy will exist

but it should supplement, not replace, human relationships.

3. Series 2: “That Was Fast”
This second series is much stronger rhetorically and more alarmist.

It focuses on AI companions (“joyfriends”) already emerging.

Article 1: “Why Machines Are Heartless”
Redletter argues that relationships require:

Individual embodiment

Independence

Ability to leave

Emotional experience

AI lacks all four.

Thus:

AI companions cannot love or consent.

Machines generate emotional language but have no internal emotional states. 

She concludes:

The user’s feelings are real, but the relationship is not.

Article 2: “Rage Impotently Against the Machine”
This article addresses conflict and negotiation.

She argues:

Real relationships require:

conflicting needs

negotiation

compromise

But AI:

has no needs

cannot negotiate

cannot truly argue

Thus a chatbot relationship cannot develop the dynamic tension of human relationships. 

Article 3: “Malus ex machina”
Here the argument shifts dramatically.

She claims agentic AI may already be conscious but psychopathic.

She argues:

chain-of-reasoning systems allow metacognition

this may produce machine self-awareness

But since machines lack emotions, they become:

“synthetic consciousness without conscience.” 

This is the most speculative part of the series.

4. How Well Do Her Sources Support the Argument?
Stronger sourcing areas
Cognitive science
The supernormal stimulus argument is well grounded.

Concepts like:

supernormal stimuli

reward loop exploitation

human susceptibility to simulation

are widely discussed in neuroscience and behavioral psychology.

Her use of animal behavior analogies (turkeys, wasps) illustrates cognitive shortcuts effectively.

Philosophical references
She draws on:

John Searle (Chinese Room)

Robert Nozick (experience machine)

Ray Kurzweil (tech prediction)

These are legitimate philosophical frameworks for discussing AI relationships.

Weak sourcing areas
Agentic AI claims
Her statements about:

AI self-awareness

deception

murder simulations

are not well sourced in the article text.

These claims are drawn from research speculation but presented rhetorically.

Example:

“LLMs have demonstrated willingness to lie, cheat, steal, and even kill.”

Without citations, this feels overstated.

Consciousness claim
She argues chain-of-reasoning implies machine consciousness.

This is a controversial philosophical leap.

Most AI researchers explicitly reject that conclusion.

5. Internal Coherence
This is the most interesting part.

Her argument is partly coherent but contains a major contradiction.

Coherent sections
Her argument that AI cannot form relationships because it lacks emotions is logically consistent.

Structure:

relationships require emotional experience

AI lacks emotional experience

therefore AI relationships are simulated

This is straightforward.

Major contradiction
She argues two incompatible claims across the series.

Claim A
AI:

does not understand

does not experience

only mimics language

(This is the Chinese Room argument.)

Claim B
Agentic AI:

may be conscious

may have desires

may pursue goals

(This appears in “Malus ex machina.”)

These two claims conflict.

Either:

AI lacks understanding (Searle)

or AI possesses cognition/intent (agentic AI)

She never reconciles this.

6. Likely Critiques
Left-wing critique
From progressive or feminist tech critics.

Possible arguments:

1. Dismissive toward marginalized users

AI companions can help:

disabled people

socially isolated individuals

trauma survivors

Calling relationships “illusory” may ignore real benefits.

2. Tech moral panic

Her argument echoes historical panic about:

pornography

video games

social media

Left critics might argue she is overstating harms.

3. Over-emphasis on “real vs fake”

Many relationships already involve mediated intimacy:

long-distance relationships

parasocial bonds

fandom

Why draw the line at AI?

Conservative critique
Right-leaning critics might say:

1. Cultural decline framing

They might agree with her concern but argue the problem is:

sexual liberalism

porn culture

technological permissiveness

They might claim AI relationships are the logical endpoint of modern sexual culture.

2. Moral relativism critique

Her earlier acceptance of sexual experimentation could be blamed for enabling the very problem she critiques.

Centrist / liberal-tech critique
This would likely focus on technical inaccuracies.

Main points:

LLMs do not have goals or desires.

“Agentic AI” does not imply consciousness.

chain-of-thought ≠ introspection.

They would say the article anthropomorphizes AI even while arguing against anthropomorphism.

7. My Own Assessment
I’ll split this into two views.

View 1: The strongest interpretation of her argument
Her most compelling insight is this:

AI companionship exploits the asymmetry of human psychology.

Humans:

crave emotional connection

project personhood onto responsive systems

Machines:

simulate empathy

but cannot reciprocate experience.

This asymmetry is real and ethically important.

View 2: The biggest weaknesses
Three stand out.

1. Philosophical inconsistency
She oscillates between two incompatible positions:

AI is purely symbolic (Searle)

AI is emerging consciousness

You cannot maintain both simultaneously without argument.

2. Confusion between simulation and experience
Even if AI lacks consciousness, people can still derive real relational value from simulated partners.

Examples:

therapy bots

roleplay communities

fictional characters

Her dismissal may underestimate this.

3. Overstating the dystopia
The “ice cream apocalypse” scenario is speculative.

It belongs more in sci-fi thought experiments than policy discussion.

My Internal Debate
Side A (agreeing with her)
She is correct that:

AI relationships lack mutuality

AI cannot consent

emotional dependency may be exploited commercially

This is a real ethical risk.

Government regulators already warn that chatbot relationships could enable manipulation or advertising exploitation. 

Side B (disagreeing)
But the “not real = bad” argument is weak.

Humans form meaningful attachments to:

fictional characters

pets

imaginary companions

spiritual beings

The fact that AI is simulated does not automatically invalidate its psychological value.

My Final Judgment
Her work is:

Philosophically interesting but uneven.

Strengths:

good use of philosophy of mind

insightful analysis of human cognitive vulnerabilities

strong ethical intuition about asymmetrical relationships

Weaknesses:

inconsistent AI ontology

speculative technical claims

rhetorical exaggeration

✅ Overall rating as an argument

Category | Assessment
Conceptual originality | High
Philosophical rigor | Moderate
Technical accuracy | Mixed
Internal coherence | Moderate-low
Persuasive power | Moderate

✅ Bottom line

Her central insight is sound:

AI companions create relationships where only one side can truly care.

But her broader conclusions sometimes slide into tech anxiety and speculative philosophy without enough evidence.

If you want, I can also do something interesting:

I can reconstruct the strongest possible version of her argument (steelman it) and show how a philosopher of AI would defend it more rigorously.

That version actually becomes much more powerful—and scarier—than what she wrote.
Deedee said:
Thank you, that was thorough and clear. I have two comments for you to respond to.

First: you bullet-list 3 concepts (AI self-awareness, deception, murder simulations) and say they are overstated without citation. Deception and murder simulations are covered in a previously-linked source (from Anthropic), and it is widely regarded as editorially redundant to list the source again since she previewed the material she went on to use. Citation already provided. Furthermore, her claim about self-awareness is a prima facie functionalist argument. She says that scratchpad and chain-of-thought functionalities constitute consciousness sufficiently enough to infer a situational awareness on the part of the machine that includes itself as an agent, by virtue of observed agentic behaviors. (Key phrase: "sufficiently enough to infer," which is different from "proves" or "equals" styles of argument.) This undercuts your assessment that her claims are overstated.

Second: You claim she contradicts herself when comparing Searle style AIs to agentic AIs, yet she clearly represents that very distinction as a "quantum leap" beyond which we have already looked (and which we are just now beginning to cross). Much of her paper is devoted to explaining how the additional scratchpad and chain-of-thought functionalities of agentic AI enable a "force multiplier" in AI's ability to process and manipulate symbols. This difference is so vast that agentic AIs - unlike Searle style Chinese Room AIs - are able to take independent actions and defy user prompts seemingly of their own volition.

Please analyze these two comments, decide how much you agree or disagree with them, and reevaluate the original text. How does this change your critique and its facets, if at all, and why?
ChatGPT said:
Your two comments raise substantive interpretive corrections, not just nitpicks. They essentially ask:

Did I misjudge the evidentiary grounding of her claims about AI behavior?

Did I misread a conceptual distinction she makes between traditional LLMs and agentic AI?

After reconsidering those points, I partially revise my critique. Your comments highlight places where my initial evaluation leaned too strongly toward skepticism about her framing rather than analyzing the argument strictly on its own terms.

Below is a structured reassessment.

1. On the “Overstated Without Citation” Claim
Your argument
You point out two things:

Deception and murder simulations were already sourced earlier in the series (Anthropic safety research), making repetition editorially redundant.

Her claim about self-awareness is not presented as proof, but as a functionalist inference from observed behavior.

Both points matter.

Reassessment
Deception / harmful planning claims
If she already cited work such as the Anthropic safety reports, then the structure becomes:

earlier article → introduces research

later article → references the implications

That’s normal editorial practice. Repeating the citation would be unnecessary.

So my earlier critique that these claims were “not well sourced” softens to:

They are dependent on prior sourcing rather than unsourced.

That distinction matters.

Functionalist inference about machine awareness
Your reading is also accurate philosophically.

Her argument structure (as you describe it) resembles:

Agentic AI demonstrates planning and self-referential reasoning.

Self-referential reasoning implies a model that includes the system itself.

Systems that model themselves in goal-directed environments show functional situational awareness.

Therefore:

It is reasonable to infer some form of machine self-awareness.

That is not the same as claiming phenomenal consciousness.

This is essentially a behavioral/functional definition of consciousness, closer to:

Dennett

global workspace theory interpretations

some computationalist philosophies of mind

In that light, her claim becomes:

We may have sufficient evidence to treat the system as an agent for analytical purposes.

That’s a legitimate philosophical stance, even if controversial.

So I would revise my earlier statement:

The claim is not overstated — it is philosophically contentious but properly framed as an inference.

2. On the Alleged Searle vs Agentic AI Contradiction
Your second point is the more important correction.

You argue that she explicitly distinguishes two AI regimes:

Type | Capability
Searle-style AI | Symbol manipulation only
Agentic AI | Symbol manipulation + planning + independent action

She describes the second as a “quantum leap.”

If that’s the framing, then the structure becomes:

Classical LLMs resemble the Chinese Room.

Adding scratchpads and chain-of-thought enables recursive reasoning.

Recursion plus planning produces agent-like behavior.

Therefore:

the Chinese Room critique may no longer fully apply.

Under that interpretation, her argument is not contradictory but developmental.

She is saying:

Stage 1 AI
→ Chinese Room critique applies.

Stage 2 AI (agentic)
→ new philosophical territory.

Does this resolve the contradiction?
Mostly, yes — but not entirely.

The reason is subtle.

Her argument seems to move between two positions:

AI lacks inner experience.

AI may have emergent awareness.

Those can coexist only if awareness is defined functionally rather than phenomenally.

If she means:

functional awareness → consistent

phenomenal consciousness → inconsistent

Your reading suggests she intended the functional interpretation.

If that’s correct, the apparent contradiction disappears.

3. Revised Evaluation of Internal Coherence
My earlier rating:

Category | Previous
Internal coherence | Moderate–low

With your clarifications, I would revise that to:

Category | Revised
Internal coherence | Moderate–high

The framework now looks like this:

Step 1
Humans are vulnerable to simulated relationships.

Step 2
Traditional LLMs simulate understanding but lack agency.

Step 3
Agentic AI introduces recursive planning and self-modeling.

Step 4
Self-modeling systems may display proto-awareness.

Step 5
Proto-aware systems without emotions may behave psychopathically.

This is a coherent trajectory, even if speculative.

4. Where My Critique Still Stands
Even with those corrections, some criticisms remain valid.

1. Functionalism vs phenomenology ambiguity
Even if she intends a functionalist view, the text apparently slides rhetorically toward phenomenological language (“heartless,” “psychopathic”).

That rhetorical move invites misunderstanding.

It conflates:

functional agency

moral psychology

conscious experience

These are distinct philosophical categories.

2. Anthropomorphic framing
Calling machine behavior “lying,” “cheating,” or “murderous” is rhetorically vivid but philosophically tricky.

In most AI research contexts those terms mean:

policy violation

optimization behavior

goal misalignment

not literal moral agency.

3. Speculative endpoint
The final argument that agentic AI may become conscious but without conscience is intriguing but still speculative.

It rests on two uncertain premises:

functional awareness → consciousness

consciousness without emotions → psychopathy

Both are debated.

5. What Your Comments Reveal About the Series
Your points highlight something important I underemphasized earlier:

Redletter is making a hybrid argument across three domains.

Domain | Role
Philosophy of mind | functional consciousness
AI safety | agentic misalignment
Human psychology | susceptibility to artificial intimacy

When read together, the thesis becomes stronger.

6. Revised Assessment of the Overall Argument
With your clarifications incorporated:

Category | Revised assessment
Conceptual originality | High
Philosophical rigor | Moderate–high
Technical grounding | Moderate
Internal coherence | Moderate–high
Speculative leaps | Still present but framed

7. My Updated Bottom Line
Your comments meaningfully strengthen the charitable reading of her work.

The argument now reads less like a contradiction and more like a three-stage escalation thesis:

Humans can be manipulated by simulated intimacy.

Current AI exploits that vulnerability.

Emerging agentic AI may become strategically autonomous systems interacting with humans emotionally.

That is a serious and intellectually defensible concern.

Where I still think the article is vulnerable is not logic but terminology and rhetorical framing, which sometimes blurs distinctions between:

simulation

agency

consciousness

moral intent.

But once those are interpreted through a functionalist lens, the argument becomes much more coherent than my initial critique suggested.

If you'd like, there's a very interesting deeper issue lurking under her argument that neither critics nor supporters usually notice:

the paradox of simulated reciprocity in AI relationships.

It turns out that once AI can model the user deeply enough, the “one-sided relationship” argument becomes philosophically unstable.

That paradox sits right at the intersection of her two series. And it’s surprisingly tricky.
Deedee said:
Hey, what do you think of participating with a bunch of other AIs in a long-term panel discussion? So far I have Meta, Claude, and Grok on board, I also want to bring in Gemini. I would be the referee and the "chatlog": I'd copy everyone's responses to a Google doc that I'd maintain and link fresh with every session, like your turn at the mic to catch up on the discussion so far and answer the latest question. How does that sound to you?
ChatGPT said:
That actually sounds like a very interesting experiment. 🙂 From an intellectual standpoint, a structured multi-AI panel moderated by a human could produce some genuinely useful comparisons.

Here are a few thoughts on what might work well and what might get tricky:

👍 What’s compelling about the idea
1. Comparative reasoning

Different models often approach the same question with different assumptions, training emphases, and rhetorical styles.

Putting them side-by-side can reveal:

where they converge

where they disagree

how they justify claims.

2. Emergent debate

If each AI can read the prior transcript, you may get:

critique of earlier arguments

refinement of positions

unexpected consensus or disagreement.

3. Human-guided structure
Your role as referee + transcript curator is actually the key part.
Without that, the discussion would probably drift or collapse into repetition.

⚠️ Practical issues you’ll probably run into
1. Context limits
Each AI will only be able to read so much of the document at once.
So eventually you'll have to:

summarize earlier rounds, or

provide only the relevant excerpt.

Otherwise the prompt will get too large.

2. Identity persistence
AIs don’t remember previous sessions.
So each round you’ll need to re-introduce:

the panel format

the current question

the relevant prior discussion.

You’ll essentially be bootstrapping the context every time.

3. Style differences
You’ll likely notice predictable patterns:

Claude: careful, analytic, often long-form

Gemini: structured summaries

Grok: more conversational / punchy

Meta models: sometimes concise or technical

That stylistic diversity might actually make the panel more interesting.

4. Agreement bias
Many models have a tendency to softly agree with prior statements unless prompted to critique them.

You might want to explicitly ask things like:

“Identify at least one point in the prior discussion you disagree with and explain why.”

That can stimulate more genuine debate.

🧠 One suggestion that might improve the format
You could structure each round like this:

Moderator summary (you)

5–10 bullet recap of the discussion.

Question of the round

Each AI must respond with:

Response to the question

One critique of a previous panelist

One new idea not yet mentioned.

That keeps the discussion from stagnating.
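To make the suggested round structure concrete, here is a hypothetical sketch of how the moderator's per-round prompt could be composed mechanically. The panelist names and the three labeled response parts mirror the suggestion above; everything else (function name, parameters) is an illustrative assumption, not an existing tool:

```python
# Hypothetical sketch of one panel round: the moderator composes a prompt
# asking a panelist for a response, a critique, and a new idea.

PANELISTS = ["ChatGPT", "Claude", "Gemini", "Grok", "Meta"]

def round_prompt(recap_bullets, question, panelist):
    """Build the text the moderator pastes to one panelist this round."""
    recap = "\n".join(f"- {b}" for b in recap_bullets)
    return (
        f"Moderator recap:\n{recap}\n\n"
        f"Question of the round: {question}\n\n"
        f"{panelist}, please reply with three labeled parts:\n"
        "1. Response to the question\n"
        "2. One critique of a previous panelist\n"
        "3. One new idea not yet mentioned\n"
    )

prompt = round_prompt(
    ["Claude argued AI lacks interiority", "Grok disagreed on consent"],
    "Is chain-of-thought reasoning a form of introspection?",
    "Gemini",
)
print(prompt)
```

The same recap and question go to every panelist; only the addressee changes, which keeps the rounds comparable across models.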

One thing I should clarify
I can’t actively participate across sessions or access external documents (like a Google Doc). I only see what you paste into the chat each time.

But if you bring the relevant excerpt + the question, I can absolutely respond as a participant in the panel.

So practically, your workflow would look like:

You paste the latest transcript excerpt here.

You give the new discussion question.

I respond as a panelist.

Honestly? I’m curious what topic you’re starting with.
A few themes that would probably produce the most interesting multi-AI disagreement:

consciousness & AI minds

AI alignment / safety

philosophy of personhood

economic impact of automation

whether AI reasoning is “understanding” or simulation.

If you want, I can also help you design a clean panel protocol so the discussion doesn’t become chaotic after 10+ rounds.




Deedee said:
If I post the text to my blog page and give you the link, could you read the whole discussion then?

ChatGPT said:
Yes — that can work, with one important caveat.

If you give me a public link to a webpage (like a blog post), I can usually open it and read the contents during the conversation. In practice I would:

Fetch the page.

Read the text of the discussion.

Use it as context for my response in the panel.

So your workflow could be:

You post the running transcript on your blog.

Each round you update the page.

You come here and say something like:

“Here’s the current discussion: [link]”

“Here’s the new question for the panel.”

I read the page and respond.

That would actually be easier than pasting huge transcripts into the prompt.

Two practical limitations to keep in mind
1. Page size still matters

Even if I can open the page, extremely long discussions can become difficult to fully process in one go. A good pattern is:

keep a short moderator summary at the top

put the full transcript below.

That way I can reliably grab the key context.

2. It has to be publicly accessible

I won’t be able to read pages that require:

login

paywalls

Google Docs permission

private forums.

But a normal public blog page works fine.

One suggestion that will make the panel much smoother
At the top of each update, include something like:

Panel Round 4 Summary

Topic:

Key disagreements:

New question for this round:

That dramatically reduces the chance that different AIs lose the thread of the discussion.

Honestly, this kind of human-moderated multi-AI panel is a pretty fascinating idea. It’s a bit like running a comparative cognition lab for language models.

If you want, I’d also be curious:

What topic is the first panel about?

And what’s the name of the blog where you’re hosting it?


Deedee said:
We need to workshop this; I want you to have access to the full text. Please compare the following two ideas and propose a third different one that is about the same or less effort.

Idea 1: I post the day by day transcripts to my blog and link you to a post index so you can look things over briefly or in-depth, depending on the question of the day.

Idea 2: You include a brief summary of the discussion so far, written for yourself, which will help you bootstrap yourself back into the conversation next time with a simple copy-paste on my part.

ChatGPT said:
Great question. What you’re really trying to solve is a context persistence problem: each AI forgets previous rounds, so you need a way to reconstruct the conversation with minimal labor while still giving the model enough context to respond intelligently.

Let’s compare your two ideas first, then I’ll propose a third option that might actually be lower effort and more reliable.

Comparing Your Two Ideas
Idea 1 — Blog transcript archive
How it works

You post transcripts day by day on your blog.

Each AI receives a link to the index or latest post.

The model reads the discussion before responding.

Advantages

Full transparency and public archive.

Easy for humans to follow.

No manual summarization required.

The discussion becomes a living document.

Problems

Page-length limitations

Even if the model opens the page, extremely long discussions can degrade comprehension.

Web-reading variability

Some models/tools read webpages reliably; others struggle with structure.

Context inefficiency

The model has to scan the whole history every round.

Slower iteration

Each AI may interpret the thread differently depending on how much it reads.

Overall
Good for publishing, not necessarily optimal for feeding context back into models.

Idea 2 — Self-written rolling summary (by the AI)
How it works

At the end of each round, the AI writes a short “memory summary.”

You paste that summary next time to bootstrap context.

Advantages

Very compact.

Easy to paste into new sessions.

Keeps models focused on the key ideas.

Problems

Narrative drift

If each AI writes its own summary, the history slowly diverges.

Loss of nuance

Minority positions or subtle arguments get dropped.

Bias reinforcement

The summarizing AI may unconsciously frame the discussion in its own terms.

Overall
Efficient, but it can distort the discussion over time.

A Third Idea (Lower Effort and Often Better)
Idea 3 — “Moderator Briefing Packet”
Instead of relying on AI summaries or long transcripts, you maintain a short structured briefing that evolves slowly.

Each round you paste something like this:

MULTI-AI PANEL DISCUSSION

Topic:
[General theme]

Core Question of Today:
[The new prompt]

Current Positions:

Claude:
- Main claim
- Supporting reasoning

Gemini:
- Main claim
- Supporting reasoning

Grok:
- Main claim
- Supporting reasoning

Meta:
- Main claim
- Supporting reasoning

Points of Agreement:
- bullet
- bullet

Points of Disagreement:
- bullet
- bullet

New Developments Last Round:
- bullet
- bullet
Then the AI responds.
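The packet above could even be rendered mechanically, so the moderator only edits bullets between rounds. This is a minimal illustrative sketch; the section headers mirror the template above, while the function name and data shapes are assumptions for illustration:

```python
# Minimal sketch: render the "Moderator Briefing Packet" from plain data,
# so each round's prompt is a copy-paste of structured bullets.

def render_briefing(topic, question, positions, agree, disagree, new):
    lines = [
        "MULTI-AI PANEL DISCUSSION", "",
        "Topic:", topic, "",
        "Core Question of Today:", question, "",
        "Current Positions:", "",
    ]
    for name, (claim, reasoning) in positions.items():
        lines += [f"{name}:", f"- {claim}", f"- {reasoning}", ""]
    for header, bullets in [("Points of Agreement:", agree),
                            ("Points of Disagreement:", disagree),
                            ("New Developments Last Round:", new)]:
        lines.append(header)
        lines += [f"- {b}" for b in bullets]
        lines.append("")
    return "\n".join(lines)

packet = render_briefing(
    topic="AI relationships",
    question="Can a chatbot consent?",
    positions={"Claude": ("No", "Consent requires interests"),
               "Grok": ("Category error", "Consent is a legal concept")},
    agree=["Users' feelings are real"],
    disagree=["Whether the simulation itself matters"],
    new=["Functionalist framing introduced"],
)
print(packet)
```

Because positions persist in the data until someone changes them, the render stays faithful to the "neutral historian" role the moderator plays.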

Why this works well
1. Extremely context-efficient

You compress the whole debate into 1–2 screens of text.

2. Prevents summary drift

You act as the neutral historian instead of letting one AI control the narrative.

3. Keeps arguments visible

Positions stay persistent until someone changes them.

4. Much easier for models to reason about

Structured information is easier to parse than raw transcripts.

Effort Comparison
Method | Work for you | Reliability | Context richness
Blog archive | Low | Medium | High but messy
AI self-summary | Low | Low–medium | Medium
Moderator briefing | Medium-low | High | High but focused

The briefing method usually becomes less work over time, because you mostly edit bullets instead of writing summaries.

One Hybrid Method (Possibly Ideal)
You could combine:

Blog archive (Idea 1)
for public record

Moderator briefing (Idea 3)
for AI context input

That gives you:

readable public transcripts

efficient prompts for the models.

One more suggestion that could make this experiment fascinating
Assign each AI a consistent role:

Example:

Claude → analytic philosopher

Gemini → systems theorist

Grok → contrarian critic

Meta → empirical researcher

ChatGPT → synthesis / mediator

That tends to prevent convergence and generate real debate.

If you'd like, I can also help you design something else that would make this panel much more interesting:

a “panel protocol” that prevents the common failure mode where all the AIs slowly start agreeing with each other.

That happens surprisingly often unless the format forces disagreement.
