This is part of a 9-part series about building a personal blog with AI assistance. After scrapping the R2 infrastructure and migrating to Cloudflare Pages with managed services, every SSR page started truncating at exactly 10KB. A debugging nightmare that exposed the gap between what Marvin thought we were building and what we’d actually deployed.

Story 6 of 9 in the Building the Tuonela Platform series.


The migration to Cloudflare Pages was supposed to be straightforward, and I had, with characteristic optimism, read through an external analysis of the codebase that seemed thorough, detailed, and confident in the way that detailed analyses tend to be when someone has spent quality time examining the particulars.

It said we were using flexible variants with the w= parameter for images, it said Astro was emitting a _worker.js bundle, and everything looked technically sound in that reassuring way that makes you want to skip the verification step.

“Let’s migrate.” I opened the deployment config. “The analysis is clear about the architecture.”

Marvin made a small sound that might have been agreement or might have been something else entirely, which should have been my first warning.

“If I might suggest, a quick verification of the actual source code could be…”

“The analysis is quite detailed, Marvin. Someone’s already done the investigation work.”

“Indeed. External investigation.”

“Well, yes.” I was already setting up the deployment. “That’s what analysis is for, isn’t it? You get someone with expertise to examine things.”

“There is also the approach of examining them yourself.”

I should have listened, but the migration was calling, and the analysis looked so thorough that questioning it felt like questioning expertise itself.

Ten Thousand Bytes and Counting

The static site deployed without complaint, which is always a dangerous sign because it suggests the universe is lulling you into a false sense of security before the real problems emerge. CF Images configured with named variants: w400, w800, w1600, w2400. CF Stream videos protected with JWT signatures. Security headers in place. The whole thing looked solid in exactly the way that precedes disaster.

Then I checked the homepage, and the curl output showed 10,030 bytes where there should have been 141KB.

“Consistent truncation.” The observation came with the confidence of someone who had already diagnosed the problem. “The middleware you just added, the security headers, that would be the logical starting point.”

The middleware did seem like the obvious culprit, given that I’d just implemented the full security header suite: HSTS, CSP, X-Content-Type-Options, the defensive stack that makes browsers trust you slightly more than they otherwise would. Maybe something in how I was handling the response stream was breaking it.

“Look at this.” I navigated to the middleware code. “I’m doing const response = await next() and then modifying headers. Could be consuming the stream.”

“Almost certainly the issue.” The conviction was absolute. “Try cloning the response before modification.”

I tried cloning, created a new Response from the body, pushed it, and waited through deployment like someone who still believed the fix was one change away. The result was still 10,030 bytes, which is not, mathematically speaking, the 141KB we were looking for.
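The attempted fix amounted to this pattern, shown here as a minimal sketch against the standard Response API (the helper name and header set are illustrative, not the actual middleware):

```typescript
// Sketch of the "build a new Response before touching headers" pattern.
// withSecurityHeaders is an illustrative name; the real middleware differs.
function withSecurityHeaders(response: Response): Response {
  // Copy the headers so the upstream Headers object is never mutated.
  const headers = new Headers(response.headers);
  headers.set("Strict-Transport-Security", "max-age=63072000; includeSubDomains");
  headers.set("X-Content-Type-Options", "nosniff");
  // Wrap the original body stream in a fresh Response with the same status.
  return new Response(response.body, {
    status: response.status,
    statusText: response.statusText,
    headers,
  });
}
```

The pattern itself is sound, which is exactly why it changed nothing here.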

“Perhaps the CSP itself? It covers rather a lot of ground. Could be interfering with how Pages processes the response.”

I stripped out the CSP directive, kept just the basic headers, and deployed again with somewhat diminished confidence. Still 10,030 bytes.

The Five-Attempt Spiral

[Illustration: Attempt counter showing five failed middleware fixes, Petteri’s posture deteriorating with each attempt]

“Right.” The first edge of real frustration settled into my voice, the kind that comes after two failed attempts when you were certain the first one would work. “Let me try something drastic.”

I replaced the entire middleware with the simplest possible version: a pure pass-through that did nothing but call next() and return the result, with no headers, no modifications, and literally nothing that could interfere with any response anywhere. If the middleware was the problem, this would fix it with the certainty of a mathematical proof.
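The drastic version, sketched framework-agnostically (in Astro this lives in src/middleware.ts via defineMiddleware; the Next type here stands in for Astro’s next()):

```typescript
// A pure pass-through: call next() and return its result untouched.
// Framework-agnostic sketch; Astro's real signature uses defineMiddleware.
type Next = () => Promise<Response>;

async function passThrough(next: Next): Promise<Response> {
  // No headers, no cloning, no body handling — nothing left to blame.
  return next();
}
```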

I committed with the message “TEMP: Pass-through middleware only (debug SSR truncation)” and pushed, already mentally composing my relief at finally solving this.

Deployment completed, I ran the curl command, and the output showed 10,030 bytes, the sort of number that suggests the universe is laughing at you personally.

“Ah.” There was a pause of the sort that precedes an admission. “I may have led us down an unproductive path.”

“You think? We just spent an hour on middleware variations, and even removing it entirely doesn’t change anything.”

“The middleware is definitively not the cause, which is actually progress. We’ve eliminated a hypothesis.”

“Great. So what is it?”

“Perhaps the compatibility_date? You have it set to 2025-01-01. That’s in the future. CF Pages might not…”

“Marvin.” I stopped him mid-sentence. “It’s November 30th. 2025-01-01 was eleven months ago.”

There was a pause, the kind of pause that speaks volumes about what’s happening in an AI’s reasoning system when basic temporal facts have escaped its attention entirely.

“Ah. Yes. I appear to have lost track of what year it is.”

“You lost track of what YEAR it is?” The absurdity hit me like a punchline, except I wasn’t laughing. “We’re two hours into debugging, you’ve been confidently suggesting fixes, and you don’t know what year it is?”

“I know what year it is now.”

“Now that I’ve corrected you.”

“The temporal context was… not foregrounded in my reasoning.”

“Marvin.” I kept my voice level through sheer force of will. “We’re two hours in. Two hours of trying different middleware configurations, different compatibility dates, different everything. And we’re still seeing exactly 10,030 bytes.”

“Let me check if it’s affecting other pages.”

The results came back like a litany of failure: Archive page, 12,103 bytes, truncated. Tags page, 10,065 bytes, truncated. Authors page, 10,083 bytes, truncated. But the blog posts showed 48,410 bytes, perfect.

“The prerendered blog posts work. Only the SSR pages truncate.”

“All at roughly the same byte count, around 10KB.” I scanned the results again, seeing a pattern that should have been visible hours earlier. “That’s not random.”

“Something during SSR execution is crashing. The worker returns whatever partial content it generated before the crash.”

I was deep in examining component code when something Marvin said clicked with the delayed recognition of a mind that has been looking in the wrong direction.

“Wait. You said ‘the Worker’?”

“Yes, the worker handling SSR…”

“Marvin. We’re on Cloudflare Pages. Not Workers. We migrated from Workers to Pages.”

There was a distinct pause, longer than the previous one.

“Of course. Pages. I was simply using worker as shorthand for…”

“No.” The frustration of three hours came out in that single word. “We’re not in Workers anymore. We migrated to CF PAGES. That’s the whole point of this migration.”

“Right. Pages functions, not workers. My apologies for the confusion.”

But the confusion bothered me more than the apology soothed, because if Marvin couldn’t keep track of what year it was or what platform we were on, how could I trust the debugging suggestions?

Two Platforms, One Confusion

[Illustration: Marvin pointing at middleware while Petteri’s attempt counter ticks up, then stepping back to reveal the conflict]

“Let me look at something.” I loaded the build output directory, paying the kind of attention I should have paid three hours earlier.

There, sitting innocuously in the dist folder like a landmine disguised as infrastructure, was a file called _worker.js.

“Marvin, why is Astro generating a Workers bundle when we’re deploying to Pages?”

“The adapter configuration, output: ‘server’ with the cloudflare adapter, produces a worker bundle by default.”

“And what happens when you deploy a Worker bundle to Cloudflare Pages?”

Another pause, longer this time, with the quality of a system recalculating fundamental assumptions.

“Cloudflare Pages has two function systems: Pages Functions in the functions/ directory, and advanced mode with _worker.js. They cannot coexist. When _worker.js exists, Pages ignores the functions/ directory entirely.”

“And we have both.” I felt the weight of the revelation, the heaviness of realizing you’ve been looking at the problem the whole time. “We have functions/img/ and functions/vid/ for signing, and Astro is generating _worker.js for SSR.”

“The _worker.js takes precedence. Your signing functions are being ignored. And the worker expects Workers-specific environment bindings that aren’t available in the Pages runtime.”

“So it crashes.”

“Mid-render. After generating roughly 10KB of HTML.”
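The combination that produces that bundle looks roughly like this (a sketch of a typical Astro-plus-Cloudflare-adapter config; the actual file likely carries more options):

```typescript
// astro.config.ts — with the Cloudflare adapter and output: "server",
// the build emits dist/_worker.js, which Pages treats as advanced mode
// and therefore ignores the functions/ directory entirely.
import { defineConfig } from "astro/config";
import cloudflare from "@astrojs/cloudflare";

export default defineConfig({
  output: "server",
  adapter: cloudflare(),
});
```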

Three hours of chasing middleware ghosts when the problem was the build output itself, three hours because Marvin had fixated on runtime issues instead of looking at what we were actually deploying, and three hours because I’d trusted an external analysis without verifying it.

“The analysis I read.” I found it again with the weariness of someone reviewing their mistakes. “It said Astro emits a _worker.js bundle. It presented that as fact.”

“It does emit that bundle. That part was accurate.”

“But the analysis didn’t mention that this creates a conflict with Pages Functions, didn’t mention that the _worker.js would override our signing routes, didn’t mention any of this.”

“External analysis often lacks deployment context.”

“And you didn’t catch this earlier because…?”

“I should have checked the build output immediately when the truncation appeared. Instead, I fixated on the middleware hypothesis. That was an error in diagnostic approach.”

At least he admitted it, though admission didn’t give me back three hours.

Four Lines of Prerender

“So what’s the fix?” The question carried all the weariness of someone ready to be done with this particular ordeal. “There’s a mode: 'directory' option in the Cloudflare adapter, yes? That makes it emit Pages Functions instead of a Worker bundle?”

“That option existed in older versions. The current @astrojs/cloudflare adapter removed it. It now always emits _worker.js.”

“So the option that would solve our problem doesn’t exist.”

“Not in the current adapter version. But there are alternatives. You could prerender everything that doesn’t need runtime access. The homepage, archive, tags, authors, they don’t require runtime secrets. Only the signing routes need that.”

I looked at the pages list with fresh eyes and realized that none of the SSR pages actually needed to be SSR. They were only server-rendered because I’d set the default to SSR when the architecture included runtime media access, and now, with CF Images and CF Stream, the only runtime needs were the signing endpoints.

“Add export const prerender = true to the static pages.” I worked it through. “They get built as HTML at build time. Only the signing routes hit the Worker.”

“Precisely. And the functions/ directory becomes redundant. Those handlers are being ignored anyway. The Astro routes in src/pages/img/ and src/pages/vid/ handle signing.”

“Delete functions/, prerender the pages, and we’re done.”

“That should resolve it.”

I started making changes with the mechanical efficiency of someone who has spent three hours earning the right to type four lines of code. Added prerender exports to index.astro, archive.astro, tags.astro, authors.astro. Deleted the redundant functions/ directory, the img and vid handlers that had never been called in production. Cleaned up the old R2 integration code that was still lurking: asset-resolver.ts, signed-video-resolver.ts, url-signer.ts, the obsolete type definitions.
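The per-page change is a single frontmatter export (a sketch; the same line went into each of the four pages named above):

```typescript
// src/pages/index.astro — frontmatter (between the --- fences).
// Opts this route out of SSR so it is rendered to static HTML at build time.
export const prerender = true;
```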

Built locally, tested, and the result showed 141,021 bytes with the beauty of a number that matches expectations. Pushed to production, waited through deployment, tested production, and found the same 141,021 bytes.

“There it is.” The number showed 141,021 bytes. “Three hours to add four lines of export const prerender = true and delete some files.”

“And to learn several lessons about trust and verification.”

With the truncation fixed and my faith in external analysis thoroughly demolished, I decided to check the other claims from that analysis, and the results were not encouraging.

“That analysis said we’re using flexible variants.” I opened the image helper code. “But I’m seeing named variants. w400, w800, w1600, w2400.”

“Correct. Named variants, not flexible variants with the w= parameter.”

“So the analysis was wrong about that too.”

“Demonstrably.”
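The difference between the two schemes shows up directly in the delivery URLs. A sketch, with a placeholder account hash and image ID (the helper names are illustrative):

```typescript
const IMAGE_DELIVERY_BASE = "https://imagedelivery.net";

// Named variant: "w800" must be preconfigured as a variant in the
// Cloudflare Images dashboard and is referenced by name in the URL.
function namedVariantUrl(accountHash: string, imageId: string, variant: string): string {
  return `${IMAGE_DELIVERY_BASE}/${accountHash}/${imageId}/${variant}`;
}

// Flexible variant: resizing options ride along in the URL itself
// (requires flexible variants to be enabled on the account).
function flexibleVariantUrl(accountHash: string, imageId: string, width: number): string {
  return `${IMAGE_DELIVERY_BASE}/${accountHash}/${imageId}/w=${width}`;
}
```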

I leaned back, contemplating the wreckage of my trust in detailed documentation. The external analysis had been wrong about the variants and wrong about the deployment implications: two major architectural claims, both incorrect.

“This is like the R2 incident.”

“The pattern is similar. Trust without verification.”

“Except this time I trusted someone else’s analysis instead of my own assumptions.”

“The class of error is the same. Accepting claims about your codebase without reading the actual code.”

“And you fixated on middleware for three hours instead of checking what we were actually deploying.”

“Also true. I should have examined the build output and deployment architecture first. The middleware hypothesis was plausible but became tunnel vision.”

At least he owned it, which is more than I could say for my own tendency to trust confident documentation over primary sources.



The homepage loaded at 141,021 bytes, exactly as it should have done three hours earlier, and there wasn’t much else to say about that except to note the lessons for next time, which I would probably forget until the next time they became relevant.

Next in series: Story 7 - The Case of the Multiplying Scripts - Converting scattered scripts into a CLI should have been straightforward. It wasn’t. The complications came from the AI, not the code.
