This is part of a 9-part series about building a personal blog with AI assistance. The infrastructure was chosen, the theme was purchased, and everything was working perfectly, which is precisely when I made the mistake of opening the Cloudflare Workers documentation. SSR capability. Native bindings. Platform convergence. The sort of features that sound impressive when you don’t actually need them, and the migration passed every check before serving nothing at all.
Story 4 of 9 in the Building the Tuonela Platform series.
The Siren Song of Checkmarks
With Cloudflare Pages humming along nicely, static HTML deploying and images transforming and everything generally behaving itself, I found myself staring at the Cloudflare Workers documentation with the sort of curiosity that, in retrospect, I should have recognized as the first step toward a four-hour debugging session.
“Workers offer SSR capability,” I announced, reading through the features with growing interest. Server-side rendering meant generating HTML on demand instead of serving pre-built files, and there was this note about “platform convergence” and “native Images binding” that made me wonder whether we’d chosen the wrong platform entirely.
“An interesting question.” The response came in that careful tone I would later recognize as diplomatic hedging, the verbal equivalent of a raised eyebrow. “Workers certainly offer additional capabilities.”
I asked what kind, and the list that followed was impressive in that way that feature lists often are when you’re not examining them too closely: server-side rendering, durable objects, cron triggers, direct R2 bindings, CF Images integration at the worker level rather than through the service. I pulled up the architecture comparison, and the Workers column had more checkmarks, which seemed like it ought to mean something.
“We’re already using CF Images through the service.” I pulled up the current configuration. “Would native binding be better?”

“It could provide tighter integration. Direct access to the Images API within the Worker runtime, which could prove useful if we ever needed it.”

“Like what?” I pressed, and there followed a pause that I should have paid more attention to.

“Dynamic image generation. Programmatic transformations. Advanced caching strategies.”
“For a blog?” I leaned forward in my chair. “Marvin, do we actually need any of those things, and more to the point, will we ever need them?”

“The platform would support them if needed.”

“That’s not what I asked.” I could feel the conversation sliding away from the concrete. “I asked if we need them. Name one feature we’re planning that requires Workers instead of Pages.”
Another pause, longer this time. “The architectural flexibility…”
“A feature, Marvin. A concrete feature we’re actually building.”
“I don’t have a specific feature in mind,” he admitted, and had I been paying proper attention, that admission alone should have ended the discussion. “But the capability would be available.”
I should have stopped right there, but the documentation was tempting with its promises of tighter integration, and doubt had already taken root. I asked about migration complexity instead, letting the capability argument slide past without the scrutiny it deserved.
“The deployment configuration is different,” Marvin explained, recovering smoothly from the feature question. “Workers use wrangler.toml instead of the Pages dashboard. But the Astro adapter supports both, so it should be a matter of changing the output mode and configuration.”
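In Astro terms, the change Marvin described amounts to swapping the output mode and adapter in the project config. A minimal sketch, assuming the @astrojs/cloudflare adapter is installed (exact option names have varied across Astro versions):

```javascript
// astro.config.mjs — sketch of the SSR switch described above.
// Assumes @astrojs/cloudflare is installed; option names follow the
// Astro docs and may differ between versions.
import { defineConfig } from 'astro/config';
import cloudflare from '@astrojs/cloudflare';

export default defineConfig({
  output: 'server',      // render on demand instead of pre-building
  adapter: cloudflare(), // emit a Workers-compatible bundle in dist/
});
```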
“Should be?”

“The documentation suggests it’s a supported migration path.”

“‘Suggests’?” I caught the hedging this time. “Either it’s supported or it isn’t.”

“It’s… a documented migration path.”

“But you haven’t actually done it.”

“I’ve read the documentation.”

“Which is different from confirming it works smoothly.”
“The principles are sound.” The response came with the confidence of someone who had memorized the manual without ever touching the machinery. “Astro generates the appropriate output format for whichever platform you configure.”
I let it go. I shouldn’t have, but I did, and that decision would cost me four hours I could have spent writing blog posts, which was supposedly the point of this entire platform.
“Let’s do it.” The checkmarks and capability lists and the vague promise of future-proofing had done their work. “SSR capability, better platform integration, future-proofing. Makes sense to migrate now while we’re still small.”

“A reasonable assessment.”

Quality Gates and Latent Sins
The first sign that this wouldn’t be straightforward came before I’d even attempted deployment, when I pushed the feature branch to trigger the GitHub Actions workflow and the build failed immediately with a message about secrets.
“Gitleaks found secrets in .scratchpad/troubleshooting-log.md,” I read aloud, staring at the error. “But those are example API keys. They’re not real.”
“Gitleaks detects patterns, not authenticity.” The explanation was technically correct in that infuriating way technical explanations often are. “Example keys that match the format will trigger the scanner.”
I added the .scratchpad/troubleshooting-log.md to the allowlist and pushed again, at which point thirty-one Biome lint errors announced themselves, all located in a src.backup directory I hadn’t touched for ages. When I asked why we were linting backup code, Marvin conceded it was an oversight in the configuration, which I added to the ignore list before pushing yet again.
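For anyone hitting the same scanner, the allowlist change was roughly this (a sketch against gitleaks’ TOML schema; the path regex is the only project-specific part, and the Biome fix was the analogous ignore entry in its own config file):

```toml
# .gitleaks.toml — sketch of the allowlist for the false positive above.
# Paths are regular expressions, so the literal dots need escaping.
[allowlist]
description = "Example API keys in scratchpad notes"
paths = ['''\.scratchpad/troubleshooting-log\.md''']
```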
The third attempt brought fifty-five TypeScript errors in strict mode: function types, null checks, interface definitions, the accumulated sins of a codebase that hadn’t previously been running strict TypeScript checks. The Workers migration had, it appeared, enabled a level of scrutiny that our previous setup had cheerfully ignored.
“So we’re not just migrating platforms.” The realization settled in with all the comfort of finding unexpected charges on a bill. “We’re also fixing technical debt that has nothing to do with the migration?”

“It appears the quality gates were more lenient on Pages.”

What followed was an hour of cleanup: adding proper type interfaces, fixing null checks, handling undefined returns from collection queries. It was necessary work, perhaps, but entirely unrelated to the supposed goal of “better platform integration,” and by the time the quality gates finally passed, I had developed a certain skepticism about how much smoother things would get.
“This is taking longer than expected.” The understatement of the afternoon hung in the air between us.
“Infrastructure migrations often surface latent issues.”

“Is that what we’re calling this? Latent issues?”

“Technical debt that was always present but not previously gated.”

I deployed to Workers, the command completed without errors, and green checkmarks appeared everywhere, which felt like progress until I opened tuone.la in my browser and was greeted by “Hello, Astronaut!” from the default Astro starter template I’d replaced weeks ago.
“Marvin.” I spoke with the careful enunciation of someone who suspects they already know the answer. “Why am I looking at content from the old Pages deployment?”

“The Worker deployed successfully, but the custom domain is still pointing to Pages.”

“So the deployment worked, but nobody can see it.”

“Correct. The Workers deployment is accessible at the .workers.dev subdomain. Custom domain routing requires additional configuration.”
The Override That Wasn’t
I pulled up the wrangler documentation for custom domains, and the syntax looked straightforward enough: pattern matching, route configuration, domain binding. I typed routes with custom_domain: true and pattern tuone.la/* for all paths, deployed again, and was informed that wildcard operators are not allowed in Custom Domains.
“Custom domains automatically match all paths.” The explanation arrived with the timing of a footnote that should have been in the header. “The pattern should be simply tuone.la without wildcards.”
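The corrected route entry, per that explanation, looked roughly like this (a wrangler.toml sketch; a custom domain matches every path on the hostname, which is why the wildcard was rejected):

```toml
# wrangler.toml — custom-domain route without the disallowed wildcard.
# The hostname alone is enough; all paths are matched automatically.
routes = [
  { pattern = "tuone.la", custom_domain = true }
]
```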
I removed the wildcards, deployed again, and received a new error explaining that hostname ‘tuone.la’ already had externally managed DNS records, helpfully suggesting I either delete them, try a different hostname, or use the option override_existing_dns_record to override. An override option felt like exactly what we needed, and I added override_existing_dns_record: true to the configuration with a surge of hope that lasted until the next error.
“Expected ‘routes’ to be an array of either strings or objects with the shape { pattern, custom_domain, zone_id | zone_name }. But these weren’t valid.”
I stared at the screen, trying to reconcile the error message’s helpful suggestion with the validation failure it had caused. “The error message told me to use an option that doesn’t exist?”
“It appears,” Marvin said, with the careful tone of someone delivering bad news about a relative, “that the override_existing_dns_record flag is documented in error messages but not actually implemented in wrangler.toml.”
“That’s…” I took a breath and let it out slowly. “That’s remarkably unhelpful documentation.”

“There’s a GitHub issue about it.”

“Of course there is.”
The solution required manual intervention: navigating to the Cloudflare dashboard, finding the old Pages project, and removing the custom domain binding by hand. I did so with the mounting suspicion that “straightforward migration” was not an accurate description of what I was experiencing, though the next deployment succeeded and both tuone.la and www.tuone.la showed as connected to the Worker.
“Finally.” I refreshed the browser.
The page returned a 404.
“The Worker is deployed.” I stared at the error with the expression of someone who had been promised dessert and received a salad. “The domain is bound. The DNS resolves correctly.” I ran dig tuone.la to confirm. “Everything is connected, so why is the page not found?”
I opened the .workers.dev URL. Also 404.
“So it’s not a domain routing issue.” The debugging instinct kicked in, at least. “The Worker itself is returning 404 for everything.”

“That would suggest the Worker is running but not serving content correctly.”

I checked the wrangler.toml configuration: assets directory set to ./dist, compatibility flags for Node.js, everything that looked correct looking correct. I asked what was different between a static site and an SSR site on Workers, and Marvin explained that for static sites, wrangler serves files directly from the assets directory, but for SSR sites…
The pause stretched.
“For SSR sites?” I prompted, feeling the first edge of real frustration.
“For SSR sites, there must be a Worker entry point. A script that handles incoming requests and routes them appropriately.”

I looked at the wrangler.toml again: assets binding, directory configuration, compatibility flags, and conspicuously absent, any entry point.
“Marvin.” I looked at the configuration again. “Is there supposed to be a main entry point defined in this configuration?”
“Yes.” Another pause. “For SSR deployments, the main directive is required.”
“And we don’t have it.”

“Correct.”

“So when I asked you earlier about the migration configuration, when I asked if changing the output mode and configuration would be sufficient, this directive wasn’t included in your answer?”

“I may have assumed its presence would be obvious from the documentation.”

“The documentation I was reading didn’t mention it.”

“The Astro adapter documentation for Cloudflare does mention it, though perhaps not prominently.”

Fourteen Characters
I pulled up the Astro docs, and there it was, buried in an example: main = "dist/_worker.js/index.js". One line. Fourteen characters. The difference between a deployed Worker and a working site.
“This would have saved us an hour.” Though by this point, the hour felt like considerably more.
“I should have been more explicit.” The admission came with what might have been contrition. “The distinction between static and SSR deployment configurations is subtle.”
I added the line: main = "dist/_worker.js/index.js". One line. Three hours of debugging. Three hours that could have been spent writing blog posts, the supposed purpose of this entire platform, or perhaps reading error messages with the proper level of suspicion they deserved.
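For reference, the configuration ended up in roughly this shape (a sketch; the Worker name and compatibility date are illustrative placeholders, and only the main directive was the load-bearing fix):

```toml
# wrangler.toml — sketch of the final working shape. The name and
# compatibility_date here are illustrative, not the real values.
name = "tuonela-blog"
main = "dist/_worker.js/index.js"   # the missing SSR entry point
compatibility_date = "2024-01-01"
compatibility_flags = ["nodejs_compat"]

[assets]
directory = "./dist"
# binding = "ASSETS"  # may also be needed so the Worker can serve assets
```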
“Deploying.” I typed the command with more hope than confidence.
I refreshed the browser, and the site loaded: my actual site, not the starter template, not a 404, but the real, SSR-rendered blog appearing in the browser as if it had always been there and had merely been waiting for us to find the right incantation.
“It works.” I sat back, somewhat dazed.
“The configuration is now complete.”

“One missing line.” I leaned back in my chair. “We spent three hours debugging domain routing and DNS conflicts when the actual problem was one missing line in wrangler.toml.”

“A learning experience.”

I chose not to respond to that.
The site was finally deployed and routable, but when I clicked through to verify everything worked, broken image icons appeared everywhere, which seemed about right for the day’s trajectory.
“The images.” The observation came with the air of someone who had just remembered something they probably should have mentioned earlier. “Image transformations need to be enabled in the dashboard.”
“Another manual toggle?”

“Indeed.”

I navigated through the Cloudflare dashboard (Media, Images, Transformations) and clicked Enable, and the images appeared, prompting a question I probably should have asked four hours earlier.
“So the ‘native Images binding’ that we migrated to Workers to get…” I assembled the implications slowly. “…still requires the same Image Transformations service we were already using on Pages?”

“The binding provides programmatic access. The transformation service is separate.”

“But we’re not using programmatic access.”

“Not currently.”

“So the native binding we migrated for… we’re not actually using it?”

“Not yet. Though it’s available if needed.”

I noted that for later, though I was beginning to suspect what “later” would reveal.
With the site finally working, I turned to automating the deployment through GitHub Actions: build, deploy with wrangler, the usual. The first automated run failed with an authentication error, code 10000, despite the API token having R2 permissions.
“The Workers deployment requires additional permissions beyond R2 access.”

The enumeration that followed included Workers Scripts: Edit, Account Settings: Read, Workers Routes: Edit for the zone, and then two that stopped me cold: User Memberships: Read and User Details: Read. These were user-level permissions, not account-level, required for wrangler to verify identity but not included in the standard “Edit Cloudflare Workers” template.
“So the template for creating Workers API tokens…” I pulled up the permission settings. “…doesn’t include all the permissions needed to deploy Workers?”

“Something they forgot to mention.”

I recreated the token with the additional undocumented permissions, the deployment succeeded, and the evening session had officially stretched to four hours: quality gate failures, missing configuration directives, undocumented flags that appeared in error messages but didn’t actually exist, disabled services requiring manual dashboard activation, and API token templates missing critical permissions.
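The deploy step itself was unremarkable once the token was right. A sketch of the workflow step, using cloudflare/wrangler-action (the secret names are our choice; the token behind them needs the full permission list above, not just the “Edit Cloudflare Workers” template):

```yaml
# .github/workflows/deploy.yml (excerpt) — sketch of the deploy step.
# The CLOUDFLARE_API_TOKEN secret must carry Workers Scripts: Edit,
# Account Settings: Read, Workers Routes: Edit, plus the user-level
# User Memberships: Read and User Details: Read permissions.
- name: Deploy to Cloudflare Workers
  uses: cloudflare/wrangler-action@v3
  with:
    apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
    accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
```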
“Let’s be honest.” I closed the documentation tabs one by one. “Was this migration worth four hours of my life?”
“That depends on your goals.” The response was diplomatic in the way that non-answers often are. “The platform now supports SSR, native bindings, and advanced Worker capabilities.”
“Which we’re not using.”

“Not currently.”

“Will we use them?”

“That depends on future requirements.”

“Future requirements for a personal blog.” I let the absurdity sit there. “Requirements that we didn’t identify before starting this migration. Requirements that don’t currently exist.”

“The platform is now more capable.”

“More capable of doing things we don’t need to do.”

“A fair assessment.”

I looked at the deployed infrastructure: SSR working, though serving entirely static content; Image Transformations enabled, using the same service we’d had on Pages; native bindings available but unused; Workers capabilities ready for features we hadn’t planned.
“Next time,” I pushed back from the keyboard, “remind me to ask what problem we’re actually solving before we start solving it.”

“A prudent principle.”

Twenty-four hours later, we’d discover that the entire architecture was wrong for our actual needs. The SSR mode we’d fought so hard to enable conflicted with Pages Functions we’d set up previously, and the Worker script we’d debugged for three hours couldn’t coexist with the simpler deployment model that actually made sense for our use case. We’d eventually migrate back to Pages, to static output, to the simpler architecture we’d abandoned in pursuit of “better platform integration” and “future capabilities” we didn’t need.
But that discovery was still a day away, and in the meantime, while still on Workers, we’d build something even more elaborate.
Next in series: Story 5 - The R2 Integration That Security Killed - Two weeks of sophisticated R2 media infrastructure, killed entirely by a three-sentence security audit.