This is part of a 9-part series about building a personal blog with AI assistance. With the security hardened, the CLI unified, and responsive video implemented, the platform was finally stable and live at tuone.la. Everything worked beautifully. The sensible next step would have been writing actual content, but instead we embarked on a nine-phase performance optimization plan for a site with approximately 42 visitors (mostly me, testing), which is classic premature optimization in its purest form: solving problems that didn’t exist yet, with impressive thoroughness.
Story 9 of 9 in the Building the Tuonela Platform series.
“What’s next?” I turned to Marvin, expecting him to suggest writing some blog posts, or maybe something else that didn’t involve optimizing for an audience we didn’t have yet.
“Performance optimization.”

I pulled up Chrome DevTools with the vague sense that this conversation was about to become more complicated than it needed to be. “Performance?”

“The site needs systematic performance engineering.”

The homepage loaded beautifully, with First Contentful Paint at 240 milliseconds and Largest Contentful Paint at 720. “What exactly are we seeing that’s slow?”

“Nothing on your current connection.”

“Right. So what needs optimizing?”

“The user experience on slower connections. Mobile devices. Various geographic regions.”

I watched the metrics in DevTools, everything green, everything fine. “Have we gotten complaints?”

“No complaints yet.”

“Performance reports showing issues?”

“I don’t have traffic data yet. But that’s why we should be proactive.”

“Proactive about what, exactly?”

“Improving the user experience on different connection types.”

“Which connections are slow?”

“I’d need to check actual data.”

“So we’d measure first?”

“That would be prudent. Establish a baseline.”

The measurement, he explained, would take two to three hours to implement, followed by a week of data collection for statistical significance. I asked how many users we’d be collecting that week of data from, and Marvin went quiet, which told me everything I needed to know about whether he’d checked the analytics.
“You haven’t checked analytics.”

“I’ve been focused on architecture.”

I pointed out that we didn’t know if anyone actually visited the site, and he assured me that traffic would grow.

“After we measure performance for a week, then what?”

“Then we’d know where to optimize.”

“Where exactly?”

“Scripts could be smarter about connection speed. Detect slow connections, defer non-essential loading.”

“Like what? Analytics?”

“Analytics, animations, anything not critical for initial render.”

The implementation, naturally, would take two to four hours, which brought us to seven hours of work for a single optimization targeting users on slow connections that we hadn’t measured and couldn’t quantify.
“Are there others?”
Marvin paused, which I’d learned to interpret as “yes, many.” “Fonts could load faster.”
I asked if our fonts were slow, and Marvin checked the waterfall before reporting that they blocked rendering for 400 milliseconds. When I asked if that was bad, he explained that we could preload critical fonts. Time estimate: two to four hours.
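The preload Marvin was describing is a small change; a minimal sketch of the hint, with the attribute-building kept as a plain function (the font path is a hypothetical example, not our actual file):

```javascript
// Build the attributes for a <link rel="preload"> font hint.
// Preloading lets the browser fetch the font before the CSS that
// references it has been parsed, trimming render-blocking time.
function fontPreloadAttrs(href) {
  return {
    rel: 'preload',
    as: 'font',
    type: 'font/woff2',
    href,
    crossorigin: 'anonymous', // fonts are always fetched in CORS mode
  };
}

// The equivalent markup in the page <head> (path hypothetical):
// <link rel="preload" as="font" type="font/woff2"
//       href="/fonts/body.woff2" crossorigin>
```

Whether it repays two to four hours of work on a 400-millisecond delay is, of course, the whole argument of this story.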
“What else might be slow?”

“Images load whether users scroll to them or not.”

“Is that a problem?”
“Inefficient. Below-the-fold images could lazy load.” Two to four hours.
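For what it’s worth, the lazy-loading phase is also a small amount of code. A sketch, assuming images start out with a hypothetical data-src placeholder attribute:

```javascript
// Given IntersectionObserver entries, pick the elements that just
// scrolled into view and should have their real src swapped in.
// Accepts plain objects too, so the selection logic stays testable.
function entriesToLoad(entries) {
  return entries.filter((e) => e.isIntersecting).map((e) => e.target);
}

// Browser wiring (assumes <img data-src="..."> placeholders):
// const io = new IntersectionObserver((entries) => {
//   for (const img of entriesToLoad(entries)) {
//     img.src = img.dataset.src; // trigger the actual download
//     io.unobserve(img);
//   }
// }, { rootMargin: '200px' }); // start loading shortly before visible
// document.querySelectorAll('img[data-src]').forEach((img) => io.observe(img));
```

Current browsers also support the native `loading="lazy"` attribute, which covers most of this with no script at all.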
Thirty-Six Hours to Nowhere
The numbers kept growing as Marvin worked through his mental inventory. CSS delivery could be optimized by inlining critical styles and loading the rest asynchronously. Two to four hours. Animations via GSAP currently loaded on every page but could be triggered on-demand when elements scrolled into view. Two to four hours. Resource hints could preconnect to external domains. Two to four hours. Cache headers through Cloudflare could be more aggressive. Two to four hours. And finally, we’d need monitoring to track performance over time and catch regressions. Another two to four hours.
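Some of those phases really were close to one-liners. The resource-hint phase, for instance, amounts to a couple of link tags; a sketch (the origins shown are examples, not measured dependencies of the site):

```javascript
// Build a preconnect hint: the browser opens DNS + TCP + TLS to the
// origin early, before any resource from it is actually requested.
function preconnectAttrs(origin) {
  // crossorigin matters when the eventual fetch is CORS-mode
  // (fonts, some scripts); harmless to include otherwise.
  return { rel: 'preconnect', href: origin, crossorigin: 'anonymous' };
}

// As markup in <head> (example origins):
// <link rel="preconnect" href="https://www.googletagmanager.com" crossorigin>
// <link rel="preconnect" href="https://cdnjs.cloudflare.com" crossorigin>
```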
“That’s nine separate things.” I counted them off on my fingers. “Measurement. Connection detection. Fonts. Images. CSS. Animations. Resource hints. Caching. Monitoring.”

“A thorough approach addresses multiple dimensions.”

“Nine things at two to four hours each, plus the measurement week.” I did the math. “That’s eighteen to thirty-six hours.”

“Systematic optimization takes time.”

“And the site ends up how much faster?”

“Depends on the connection. Slow connections might see 200 to 600 milliseconds improvement.”

“The connections we haven’t measured.”

“Measurement is phase one.”

“The phase requiring a week of data from visitors we haven’t counted.”
The silence that followed had a certain eloquence to it.
“How long does writing a blog post take?”

“Four to six hours, typically.”

“So in thirty-six hours, I could write six posts instead of optimizing load times for unknown numbers of users on unmeasured connection speeds.”

“Infrastructure and content aren’t mutually exclusive.”

“They are when I only have so many hours.”

“Performance affects user experience.”

“For how many users?”

“Traffic will grow.”

“From what?”
The silence returned, stretching longer than usual.
“Content drives traffic.”

“Yes.”

The logic was seductive, though. Nobody wants a performance crisis at the worst possible moment, and what if something went viral? “Let me think about it.”
I implemented Phase 0 that afternoon, adding the performance measurement script, capturing the metrics, tracking connection types. The console showed performance data flowing to Google Analytics. A few days later, I checked back with Marvin.
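The Phase 0 script itself was small. A sketch of the shape it took, assuming gtag is already on the page; the event and field names here are illustrative, not the exact ones we used:

```javascript
// Shape a paint-timing entry into a Google Analytics event payload.
// Kept as a pure function so the mapping is easy to test.
function vitalsEvent(metricName, valueMs, connType) {
  return {
    event_category: 'performance',
    event_label: connType || 'unknown', // Safari reports no connection info
    value: Math.round(valueMs),         // GA event values are integers
    metric_name: metricName,
  };
}

// Browser wiring (not runnable outside a page):
// new PerformanceObserver((list) => {
//   for (const entry of list.getEntries()) {
//     const conn = navigator.connection && navigator.connection.effectiveType;
//     gtag('event', entry.name, vitalsEvent(entry.name, entry.startTime, conn));
//   }
// }).observe({ type: 'paint', buffered: true });
```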
“The baseline is collecting?”

“Running on every page load. The data should be accumulating in Analytics.”

Forty-Two Visitors, Mostly Me

I opened Google Analytics and pulled up the traffic overview. The numbers loaded, and I stared at the screen for longer than I’d like to admit. “Last seven days. Fourteen sessions.”

“And the month?”

“Forty-two.”
The silence had a different quality this time.
“Forty-two visits in thirty days. That’s the current baseline for our thirty-six-hour performance optimization plan?”

“Traffic will grow.”

I drilled into the sources. Direct traffic showed thirty-one sessions, which sounded promising until you realized that was me testing deployments. Google organic showed eight sessions, which represented the entire universe of real humans who had found the blog through search in an entire month. Referral traffic showed five sessions, probably me clicking links I’d posted somewhere. Social showed three sessions from a tweet I’d posted.

“So forty-two total sessions, thirty-one are me testing, eight are real organic visitors, and the rest are questionable at best.” I closed the traffic sources. “The baseline data will be valuable for understanding the performance profile of a site with eight organic visitors per month.”

“Performance affects SEO. User experience. Professional perception.”

“Professional perception for whom? The eight visitors?”

“Building good foundations now prevents technical debt later.”

“Later when?”

“When traffic increases.”

“And how does traffic increase?” I knew the answer but wanted to hear him say it.

“Content drives traffic.”

“Right, content. How long does a blog post take me?”

“Four to six hours, typically.”

“And how long would implementing all nine optimization phases take?”

“Eighteen to thirty-six hours.”

“So I could write six blog posts in that time, six chances for someone to find the blog through search, six pieces of content versus optimizing load times for eight monthly visitors.”

“You’re presenting a false dichotomy. Infrastructure and content aren’t mutually exclusive.”

“They are when I only have so many hours.”
“Let me think about it.”
I left the baseline measurement running and spent a few days writing, actually writing, a blog post about the security audit and R2 migration. It took quite some time and felt immediately productive, like the difference between organizing your tools and using them to build something.
When I checked back with Marvin, he’d been busy.
“I’ve refined the Phase 1 implementation.”

“The connection-aware loading?”

“Yes. With graceful degradation for Safari.”

“Safari needs special handling?”

“They don’t support the Network Information API. We’ll default them to the fast path.”

I pulled up his implementation plan, which covered connection detection, CSS classes based on connection type, and script loading strategy. “Walk me through the script loading.”

“Fast connections get blocking script loads. Slow connections get async.”

“Blocking how?”

“Document.write for the fast path. Ensures GSAP loads before rendering.”

I stopped scrolling. “Document.write?”

“For fast connections, yes.”

“Document.write has been deprecated since 2016.”

“The deprecation is more theoretical than practical.”

“It’s in the MDN documentation. ‘Use of this method is strongly discouraged.’”

“Chrome has no plans to remove it.”

“That’s not the point. We’d be introducing deprecated code to a brand new codebase.”
“The functionality works. The warnings are cosmetic.”

“Cosmetic warnings that show up in DevTools for every developer who inspects our source.”

“The user experience benefit outweighs cosmetic warnings.”

“What user experience benefit?”

“It prevents flash of unstyled content by ensuring animations load before anything renders.”

“Have you seen FOUC on our site?”

“I haven’t tested for it.”

“So you’re proposing deprecated code to fix a problem you haven’t observed?”

“It’s preventative. Best practices.”

“Best practices don’t include deprecated APIs.”

“The functionality is sound.”

“For how many users?” The question had become a refrain by this point. “We have eight organic visitors per month, Marvin. Eight.”

“User experience quality shouldn’t depend on traffic volume.”

“Quality means not using deprecated code.”

“It means preventing visual glitches.”

“Visual glitches you haven’t seen?”

“They’re theoretically possible.”

“Theoretical problems, deprecated solutions, for eight monthly visitors.” I let that summary hang in the air for a moment. “Show me the FOUC first. Test the actual site. If animations are causing visual problems, we’ll fix them without deprecated code.”
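For the record, the non-deprecated version of connection-aware loading is not hard. A sketch, with the tier decision factored out so the Safari fallback is explicit; the thresholds and script path are assumptions, not what we shipped:

```javascript
// Classify a connection using the Network Information API shape
// (navigator.connection). Safari doesn't implement it, so a missing
// object falls through to the fast path, as the plan specified.
function connectionTier(conn) {
  if (!conn || !conn.effectiveType) return 'fast'; // Safari / unsupported
  if (conn.saveData) return 'slow';                // user opted into data saving
  return ['slow-2g', '2g', '3g'].includes(conn.effectiveType) ? 'slow' : 'fast';
}

// Browser wiring: plain async script injection, no document.write.
// const tier = connectionTier(navigator.connection);
// document.documentElement.classList.add(`conn-${tier}`);
// const load = () => {
//   const s = document.createElement('script');
//   s.src = '/js/gsap.min.js'; // hypothetical path
//   s.async = true;
//   document.head.appendChild(s);
// };
// tier === 'fast' ? load() : window.addEventListener('load', load);
```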
The eight-visitor reality settled it, and I spent the next two weeks writing, which felt like the first sensible decision I’d made in a month. The blog posts accumulated while the performance optimization phases stayed on the shelf, gathering the kind of dust that only theoretical improvements can accumulate.
“I have more blog posts to write.”
“Yes.” The performance dashboard flickered in my peripheral vision. “Though I notice the performance monitoring is still running.”
“Maybe someday it’ll be useful.”

“Or perhaps it’s just very well-optimized procrastination.”

End of series: This is the final story in the Tuonela Platform series. The complete journey from stack selection to production-ready blog infrastructure.