Web Publish

An IDE-first publishing platform built around content-as-code, AI-assisted authoring and preserving my identity online.

Updated 11 April 2026

A stylised graphic representing a web page being published to cyberspace.

This isn't the first time I've built a blog site. In fact, I think you'd need more than two hands to count them all.

I've tried everything before: WordPress, Ghost, Publii, Orchid, Hugo and others. Every few years I'd find myself creating yet another iteration. I'd set up the platform, write maybe one or two posts, and then nothing. I've published far more on Medium, but I wanted to feel more ownership over my content and my brand.

So rather than trying to find the perfect platform, I decided to see what would happen if I built my own.

Why build your own

Well, the simple answer is that I could. AI coding assistants like Cursor make this more feasible than ever before. It really just started with a few prompts to see if it would work, and then I kept going.

The vision that propelled me was that the coding IDE is actually a great place to write. Cursor is built on VS Code, which has a very healthy ecosystem of extensions, and it adds a built-in agent experience on top. The extensions help with spelling and formatting, and the agent editing workflow provides a first-class diff experience. We usually ask these agents to write code only because that's what we usually do in these environments; we can just as easily ask them to help us write content.

When you then layer the content-as-code approach on top, you get many of the benefits of a CMS with the flexibility of a static site generator. You can write in your favourite editor, version control your content, deploy to any hosting provider you want, and still use the full power of the platform to help you write and edit your content.

Content as code

The foundation of the platform is MDX files in Git. This gives me the ability to write mostly in plaintext with simple formatting, but I can also then embed React components and other rich content if I need to.

Each entry — article or project — lives in its own folder alongside any colocated assets: hero images, diagrams, supporting files. The folder name encodes just enough metadata to be useful (date-prefixed for articles, plain slugs for projects) and everything else lives in YAML frontmatter validated by Zod schemas at load time.
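
As a minimal sketch of the folder-name convention, parsing might look something like this. The exact pattern (a `YYYY-MM-DD-` prefix for articles, a bare slug for projects) and the function name are assumptions for illustration, not the platform's actual code:

```typescript
// Illustrative sketch: derive entry metadata from a folder name.
// Date-prefixed folders are articles; plain slugs are projects.

type EntryMeta =
  | { kind: "article"; date: string; slug: string }
  | { kind: "project"; slug: string };

function parseFolderName(name: string): EntryMeta {
  // Assumed convention: "2026-04-11-web-publish" → article
  const match = name.match(/^(\d{4}-\d{2}-\d{2})-(.+)$/);
  if (match) {
    return { kind: "article", date: match[1], slug: match[2] };
  }
  // Anything else (e.g. "web-publish") is treated as a project slug.
  return { kind: "project", slug: name };
}
```

Everything this doesn't capture (title, description, tags) would live in the frontmatter, where schema validation can reject a malformed entry at build time rather than at render time.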

This was a deliberate choice over a CMS or database. Content diffs are visible in pull requests. The editorial history is the Git history. And I get build time validation of my content, so no more hidden errors or partial content.

The trade-off here is that this won't really work for a team or multiple authors, but that's not what I'm building this for.

IDE-first authoring

One of the more exciting design choices is that the entire editorial workflow is optimised for Cursor, my IDE of choice. Not just the code — the writing itself.

The repository includes editorial prompts for AI-assisted drafting: structural critique, prose tightening, metadata generation, and internal linking suggestions. Cursor rules encode the content model, spelling conventions, and hydration safety patterns so that the AI agent has the same context a human collaborator would.

I wanted an authoring experience where I was still the primary author: not using AI to write for me, but using AI to help me write better. And because this is a development environment, I can use scripts to generate article images, check links and catch other content issues.
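
A link check in this setup can be a small pure function over the MDX source. This is a sketch under assumed conventions: the internal link shape (`/articles/<slug>`) and the function name are illustrative, not the real scripts:

```typescript
// Illustrative sketch: scan MDX source for internal article links
// and report any that don't resolve to a known entry slug.
// The "/articles/<slug>" URL shape is an assumption for this example.

function findBrokenInternalLinks(
  source: string,
  knownSlugs: Set<string>
): string[] {
  const broken: string[] = [];
  // Matches the tail of a markdown link: "](/articles/some-slug)"
  const linkPattern = /\]\(\/articles\/([a-z0-9-]+)\)/g;
  for (const match of source.matchAll(linkPattern)) {
    if (!knownSlugs.has(match[1])) {
      broken.push(match[1]);
    }
  }
  return broken;
}
```

Run at build time over every entry, a check like this turns a silently broken cross-reference into a failing build.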

The goal is to keep the entire authoring experience from blank page to published entry inside a single tool. The friction of context-switching between editor, CMS dashboard, image tool, and terminal adds up. Removing those seams makes writing more likely to happen.

Reaching beyond the website

The most recent addition is integration with the AT Protocol through Standard.site — a way of registering publications and documents in a decentralised identity layer.

The website remains the canonical reading destination. The AT Protocol records serve a different purpose: they tie this publication to a verified identity, make the content discoverable through protocol-native tooling, and decouple the publication's identity from any single hosting provider.

A CLI publisher handles the synchronisation. It reads the same MDX entries, builds AT Protocol records with content fingerprints, and only creates or updates records when content has actually changed. A mapping file tracks AT URIs separately from frontmatter — keeping the editorial YAML clean and the operational state explicit.
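
The change-detection idea can be sketched in a few lines: hash the entry's content, compare against the fingerprint recorded in the mapping file, and only publish on a mismatch. The interface shape and function names here are assumptions for illustration, not the actual CLI's internals:

```typescript
import { createHash } from "node:crypto";

// Illustrative sketch of fingerprint-based change detection.
// Each mapping entry pairs an AT URI with the fingerprint of the
// content as last published (field names are assumed, not actual).
interface MappingEntry {
  atUri: string;
  fingerprint: string;
}

// A stable hash of the entry's content serves as its fingerprint.
function fingerprint(content: string): string {
  return createHash("sha256").update(content, "utf8").digest("hex");
}

// Publish when there's no record yet, or the content has changed.
function needsPublish(
  content: string,
  existing: MappingEntry | undefined
): boolean {
  return !existing || existing.fingerprint !== fingerprint(content);
}
```

Keeping this state in a separate mapping file, rather than writing AT URIs back into frontmatter, is what keeps the editorial YAML free of operational detail.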

This is speculative infrastructure. The AT Protocol ecosystem is young. But the integration cost was low, and if decentralised publishing identity gains traction, the plumbing is already in place.

Building for yourself

There's a running joke about developers spending more time bikeshedding their blog platform than actually writing blog posts, and I've done my fair share of that.

But with the availability of AI coding agents and flexible frameworks, it's never been easier to build exactly what you want.

I'll keep working on this one, and see how it goes.