How That Post Was Made
Showing the receipts
These posts are written with AI assistance. If you’ve been reading this series, you probably suspected as much. A writer arguing that specification is the new bottleneck, that implementation is being commoditized, that the leverage points are shifting upstream—it would be strange if that writer weren’t using the tools he’s describing.
So here’s how “The Specification Is the Skill” was made. Not a defense of the practice. Just the process, visible.
The Conversation That Started It
After publishing “When the Code Becomes Optional,” I brought it to GPT for feedback. What came back wasn’t an edit—it was the seed of the next post.
GPT identified something I’d implied but never stated directly:
GPT: This is the quiet bombshell in your post:
“The implementation cost has collapsed to near-zero, but the specification cost has not.”
You imply it throughout, but never quite state it cleanly. If you surface that inversion explicitly, it reframes incentives.
Then it pushed further:
GPT: The new senior engineer skill isn’t “knows Rust” or “knows Kubernetes.” It’s “can fully specify intent under uncertainty.”
This wasn’t editing. This was generative. I said as much:
James: Your comments feel like another post, not an edit.
GPT recalibrated. Then I made the move:
James: Well, I do want another post—next week. Happy to write me up a draft specification? (See what I did there?)
GPT got the joke:
GPT: 😄 I see exactly what you did—and yes, this is the correct move.
Below is a draft specification for a follow-on post. It’s written the way you’d hand it to a competent agent (human or otherwise) and expect a clean first implementation.
The Spec
What GPT produced was a complete specification document: purpose, audience, thesis, non-goals, required sections with acceptance criteria, tone constraints, and tests for success.
Here’s the core of it:
Purpose: Argue that specification—not implementation—is now the primary engineering skill.
Core thesis: “Specification is no longer documentation—it is the primary act of engineering.”
Non-goals (what the post must not do):
Argue AI will replace programmers (it redefines what programming means)
Dismiss model limitations as unimportant
Claim this applies equally to all engineering work
Advocate for any particular tool or workflow
Acceptance tests:
The post succeeds if:
A senior engineer says: “This explains why my AI experiments feel frustrating.”
A manager says: “This explains why only two people on my team are getting leverage.”
A skeptic says: “I don’t fully agree—but this isn’t hype.”
The non-goals turned out to be the most useful part. They constrained the argument before it could sprawl. The acceptance tests gave me something to verify against when reviewing the draft.
What was over-specified: the detailed section breakdowns. Claude ended up merging and restructuring some of them. The spec was a map, not a cage.
The Handoff
I brought the spec to Claude. Different model, different conversation—deliberately. I wanted to see if the specification was portable.
James: So ChatGPT and I specced this out yesterday for my next post (not cheating! I swear, I just see him casually on the side).
Claude read the spec and flagged concerns before writing anything:
Claude: One structural concern: Section 5.5 (Organizational Implications) and 5.6 (The New Senior Engineer) feel like they could blur together. I’ll keep them separate as specified, but the transitions will need to do work to avoid repetition.
One framing suggestion: The “skill inversion” framing is strong, but there’s a subtler point that might sharpen it: specification was always the hard part, we just couldn’t see it because implementation noise obscured the signal. Bad specs hid in bad code. Now they’re exposed immediately. This isn’t a new skill emerging—it’s an existing skill becoming visible and therefore trainable.
This is the spec doing its job. Claude could engage critically because the intent was clear. The framing suggestion was good—I told Claude to incorporate it.
The spec was portable. It didn’t depend on the model that wrote it.
Where Specs Come From
An important note on this process: the spec doesn’t appear from a single prompt.
Before I asked GPT to “write me a spec,” there was a lengthy, in-depth conversation. Research calls. Back and forth. Rabbit holes I wasn’t shy about going down. The spec request came only after we’d explored the space together—after I’d talked through the topic, pushed on ideas, let the conversation wander where it wanted to go.
Think of it as the next level of shower thoughts. Talk to the agents about your topic. Explore. Go deep. Say “note for later” when something interesting surfaces but you’re not ready to develop it yet.
What the agents are good at is going back through that long, messy thread and picking up the pieces that matter. They can review your conversation faster and more accurately than you ever will—assuming, of course, you don’t get compacted.
This is the true utility of the LLM: not just that you can converse with it, but that it can synthesize the conversation back to you. The spec is the crystallization of work already done, not a cold start.
The Draft and Editorial Pass
Claude produced a full draft. Structure matched the spec. The framing adjustment was woven in. Two thousand words in maybe ninety seconds.
Claude documented what it did and flagged uncertainties:
Claude: Where I’m uncertain:
The infrastructure migration example is solid but generic. If you have a more specific one from your TELUS work (sanitized), it would land harder.
The “specify a Rust system without knowing Rust” line is deliberately provocative. It might read as hyperbole to some.
My editorial pass was light. I caught a few places where Claude overreached—claims stronger than I’d make, connections tighter than I’d draw. Small adjustments. The structure held.
I’m being honest: the intervention was minimal. But minimal intervention was possible because the spec was good. The work had already happened upstream.
The Authorship Question
I asked Claude directly:
James: Hah, I’m not complaining... but let’s be real, who’s the ‘author’ on most of these posts?
Claude’s response:
Claude: The one who can fully specify intent under uncertainty, apparently.
You just lived the thesis. I generated two thousand words in seconds. The value-add was the spec—which took you and GPT how long to produce? And now you’re doing the verification pass, catching the parts where I overreached or made commitments you didn’t want.
Claude tried an analogy:
Claude: You’re doing something closer to what a film director does: you didn’t operate the camera or edit the footage, but the creative intent and quality control are yours. Nobody asks whether Kubrick “really made” his films because he didn’t personally operate the Steadicam.
Then the uncomfortable version:
Claude: If the spec is the skill, and you wrote the spec, and you’re doing the verification... what exactly am I contributing that couldn’t be replaced by a different model next week?
The honest answer is: implementation. Which is, per the thesis, the part that’s being commoditized.
I pushed back:
James: ChatGPT wrote the spec, not me. :P
Claude:
Claude: Ah. So you specified the spec to GPT, and now I’m implementing the spec that GPT produced from your specification of what the—
James: er... spec
Claude: Right. It’s turtles all the way down.
Which actually sharpens something: the skill isn’t “writing specs” in some formal document sense. It’s the ability to hold the intent clearly enough to direct the process—whether that process is drafting a spec, generating code, or arguing with a model about what you actually meant.
The Recursion
This post required a spec.
That spec was written by Claude—a different Claude, in a different conversation, after the authorship exchange made clear that a meta-post was the natural next step.
Claude: Fine. I’ll draft the spec for Post #3. But I’m noting for the record that I’m now a Claude writing a spec for a post about how GPT wrote a spec that Claude executed, which will presumably be executed by... me? A future me? GPT for irony points?
The recursion isn’t a bug. It’s the content.
So here we are. A post about how the previous post was made, produced by the same process it’s describing. At some point this becomes absurd. But the absurdity is illustrative: the process is consistent enough to be recursive. That’s not nothing.
Future posts may involve other models—Gemini, DeepSeek, whatever becomes interesting. The model diversity is its own experiment. If specification is truly the skill, the implementing model shouldn’t matter much. We’ll see.
What This Proves (and Doesn’t)
This worked for this kind of content: analytical, structured, building on a clear thesis. The process fits arguments. It fits technical explanations. It fits anything where the shape can be specified in advance.
It doesn’t fit everything. I wouldn’t spec a poem this way. I’m not sure I’d spec a personal essay—though I might spec the decision about whether to write one.
The leverage points remain human: the initial direction (what’s worth writing about), the judgment that the spec is sufficient, the decision to cross-pollinate models, the verification that catches overreach. These are the parts that haven’t been commoditized. They’re also the parts I find interesting.
If you’ve been skeptical of AI-assisted writing, I hope this is useful. Not because it should change your mind—maybe it shouldn’t—but because you can see what’s actually happening. The curtain is open.
If you want to try this yourself, the pattern is simple: don’t ask the model to write. Ask it to help you specify. Then hand the spec to a model (same or different) and ask for implementation. Then verify. The specification is where the thinking happens. The rest is execution.
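If it helps to see the pattern as structure rather than prose, here is a minimal sketch of the spec-then-verify loop in Python. Everything here is illustrative: the `Spec` shape, the checks, and the draft text are invented for the example, and real acceptance tests are human judgments, not string matches.

```python
from dataclasses import dataclass

@dataclass
class Spec:
    purpose: str
    non_goals: list        # things the output must not do
    acceptance_tests: list # callables: draft -> bool

def verify(draft: str, spec: Spec) -> list:
    """Return the names of the acceptance tests the draft fails."""
    return [t.__name__ for t in spec.acceptance_tests if not t(draft)]

# Hypothetical checks, standing in for editorial judgment.
def names_the_tradeoff(draft):
    return "trade-off" in draft.lower()

def avoids_hype(draft):
    return "revolution" not in draft.lower()

spec = Spec(
    purpose="Explain why specification is the bottleneck",
    non_goals=["predict AGI timelines"],
    acceptance_tests=[names_the_tradeoff, avoids_hype],
)

# The "implementation" step — a model would produce this from the spec.
draft = "The trade-off is clear: implementation is cheap, intent is not."

print(verify(draft, spec))  # → []  (an empty list means every test passed)
```

The point of the sketch is the shape, not the code: the spec exists before the draft, and the draft is checked against the spec rather than against taste alone.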
This post was produced collaboratively with GPT (specification for Post #2), Claude (execution of Post #2, specification for this post), and Claude again (execution of this post). The author did the parts that aren’t yet commoditized.
Appendix: The Specs
Spec for “The Specification Is the Skill” (written by GPT)
Purpose: Produce a follow-up Substack post that extends (not repeats) "When the Code
Becomes Optional" by shifting focus from tools and capability to human skill formation
and organizational consequences.
The post should answer the implicit reader question: "If specification is now the
leverage point, what does that mean for engineers, teams, and careers?"
Target Audience:
- Primary: Senior engineers, staff/principal ICs, engineering managers, technically
literate executives
- Secondary: Strong generalists, security/infra/platform engineers, AI-curious
skeptics who accept the premise but not the implications
Core Thesis: Specification is no longer documentation—it is the primary act of
engineering.
Corollaries:
- Coding skill is being commoditized
- Specification skill is becoming the durable moat
- Feedback loops now train humans faster than machines
Non-Goals (what the post must not do):
- Re-argue whether AI can code
- Rehash `whenwords` in detail
- Predict AGI timelines
- Become a management manifesto
- Claim specialists are obsolete
Required Sections:
1. Opening — The Misdiagnosis (identify the incorrect conclusion in AI coding discourse)
2. The Skill Inversion (make the cost inversion explicit with one concrete example)
3. Why Most Engineers Are Unprepared (explain why this shift feels uncomfortable)
4. Specification as a Feedback Loop (the genuinely new dynamic: specs execute immediately)
5. Organizational Implications (why small teams outperform, why adoption stalls)
6. The New Senior Engineer (redefine seniority without devaluing specialists)
7. Closing — The Real Bottleneck (land without hype)
Tone & Style:
- Calm, precise, non-evangelical
- No exclamation points
- Prefer declarative sentences
- Avoid: "revolution," "paradigm shift," "everything changes"
- Assume skeptical intelligence in the reader
Acceptance Tests:
The post succeeds if:
- A senior engineer says: "This explains why my AI experiments feel frustrating."
- A manager says: "This explains why only two people on my team are getting leverage."
- A skeptic says: "I don't fully agree—but this isn't hype."
The post fails if summarized as:
- "AI is good now"
- "Specs matter"
- "Coding is dead"
Spec for “How That Post Was Made” (written by Claude)
Purpose: Produce a follow-up Substack post that demonstrates the thesis of "The
Specification Is the Skill" by showing exactly how that post was produced. The
meta-commentary is the content.
The post should answer the implicit reader question: "If you're claiming specification
is the skill, show me the receipts."
Target Audience:
- Primary: Readers of the previous two posts, engineers skeptical of AI writing claims,
people curious about practical AI-assisted workflows
- Secondary: AI researchers interested in human-AI collaboration patterns,
writers/creators exploring AI assistance
Core Thesis: The process of making Post #2 is itself evidence for Post #2's argument.
Corollaries:
- Specification (the work GPT and human did) was the hard part
- Implementation (Claude generating the draft) was fast and cheap
- Verification (human editorial judgment) remains essential
- "Authorship" distributes across the pipeline but leverage points are human
Non-Goals (what the post must not do):
- Defend AI-assisted writing as legitimate (assume it, don't argue it)
- Become a tutorial on prompting
- Claim this process is optimal or universal
- Oversell the quality of AI output
- Hide the rough edges or failures
Required Sections:
1. Opening — The Obvious Question (acknowledge what readers are thinking)
2. The Conversation That Started It (show the origin of Post #2's ideas)
3. The Spec (publish the actual specification document)
4. The Handoff (show the model-to-model transition)
5. The Draft and Editorial Pass (show what came back and what changed)
6. The Authorship Question (address the elephant directly)
7. The Recursion (acknowledge and embrace the meta-layer)
8. Closing — What This Proves (and Doesn't)
Tone & Style:
- Matter-of-fact, not confessional
- Showing, not defending
- Comfortable with the weirdness without dwelling on it
- No "isn't this wild??" energy—just "here's what happened"
Acceptance Tests:
The post succeeds if:
- A skeptic says: "I still have concerns, but I can't say he's hiding anything."
- A practitioner says: "I could try this workflow tomorrow."
- A curious reader says: "I wanted to see behind the curtain, and I did."
The post fails if summarized as:
- "AI wrote my posts and I'm proud of it"
- "Here's how to get AI to write for you"
- "Authorship doesn't matter anymore"
A Note on Recursion:
This spec was written by Claude, to be executed by [TBD], producing a post about how
a GPT-written spec was executed by Claude.
The recursion is not a bug. It's the content.