how this blog works
a short tour of the static site behind these notes, and the recent move from a CMS to plain markdown in Obsidian
I want to document how this site works as of April 19, 2026, after a refactor.
the stack
The site is a statically generated Next.js app using the App Router and React 18. Every page is statically generated at build time, so what gets deployed is a folder of HTML and assets.
Styling is done via Tailwind. Hosting is Netlify, with the @netlify/plugin-nextjs plugin handling the build output and image optimization.
There's no database, no runtime API, no headless CMS. Just files in a git repo.
content as files
Every note lives as a markdown file in posts/, with YAML frontmatter at the top:
---
title: thinking through soil
date: 2026-03-01T12:00:00.000-05:00
excerpt: a website for a soil adventurer
published: true
tags:
- code
- portfolio
featuredImage: /images/gsd_hdp_thinking_through_soil_174_175.webp
---
lib/posts.js reads the directory at build time, parses the frontmatter with gray-matter, filters out anything with published: false, and sorts by date. The post body is rendered in components/MDXContent.jsx using react-markdown with remark-gfm and rehype-raw. The custom <img> mapping pipes images through Next.js's <Image> component for sizing and lazy loading.
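The filter-and-sort step can be sketched as a pure function. This is a simplification: in the real lib/posts.js, fs and gray-matter produce these `{ data }` objects from the files on disk, and the function name here is illustrative.

```javascript
// Sketch of the filter-and-sort step in lib/posts.js (simplified).
// In the real loader, fs.readdirSync + gray-matter turn each markdown
// file into an object whose `data` field holds the parsed frontmatter.
function sortPublished(posts) {
  return posts
    .filter((p) => p.data.published !== false) // drop drafts
    .sort((a, b) => new Date(b.data.date) - new Date(a.data.date)); // newest first
}
```

Because this all happens at build time, a draft with published: false simply never makes it into the generated HTML.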
Frontmatter can also declare richer media — gallery, videos, audio — which are rendered by dedicated components (ImageGallery, VideoPlayer, AudioPlayer). Galleries use yet-another-react-lightbox for the zoom-and-swipe view.
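For instance, a note with a gallery might carry frontmatter along these lines — the exact field shape is defined by the components, so this is an illustrative sketch, not the actual schema:

```yaml
---
title: some note with a gallery
date: 2026-03-01T12:00:00.000-05:00
published: true
gallery:
  - /images/example_one.webp
  - /images/example_two.webp
---
```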
The homepage and archive are themselves markdown (content/homepage.md, content/archive.md), read by their own thin loaders. The route map is small: / for the homepage, /notes/[slug] for posts, /archive for the long view.
images
Images live in public/images/ and are committed to the repo. At request time, the Netlify Image CDN serves them in modern formats (AVIF, WebP) at appropriate sizes — a custom loader in lib/netlify-image-loader.js handles the URL rewriting.
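The loader itself is small. Here's a sketch of the idea, assuming the documented /.netlify/images endpoint with its url, w, and q query parameters — the real lib/netlify-image-loader.js may differ in its details:

```javascript
// Sketch of a Next.js custom image loader targeting the Netlify Image CDN.
// The real lib/netlify-image-loader.js default-exports something like this.
function netlifyImageLoader({ src, width, quality }) {
  const params = new URLSearchParams({ url: src, w: String(width) });
  if (quality) params.set('q', String(quality));
  return `/.netlify/images?${params.toString()}`;
}
```

Next.js calls the loader once per candidate size, so a single committed WebP can be served at many widths without any extra build work.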
To keep the repo from bloating with original PNGs and JPEGs, scripts/images-to-webp.mjs (powered by sharp) converts everything in public/images/ to WebP and rewrites any markdown references that pointed at the old extensions. It runs as a prebuild step, so Netlify builds always start from optimized assets even if I forget to run it locally.
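The reference-rewriting half of that script is simple string surgery. A hedged sketch — the function name and regex here are illustrative, not the script's actual code:

```javascript
// Illustrative sketch of the markdown-rewrite step in scripts/images-to-webp.mjs:
// point any /images/ reference with an old raster extension at its .webp twin.
function rewriteImageRefs(markdown) {
  return markdown.replace(/(\/images\/[^\s)"]+)\.(?:png|jpe?g)/gi, '$1.webp');
}
```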
how a post gets published
This is the part that I just changed.
Previously, posts were authored and managed via Decap CMS: a small admin UI at /admin that authenticated against GitHub via OAuth and committed markdown files back to the repo. That worked, but I didn't particularly enjoy using it, and I didn't need it. The content was always going to be markdown in posts/; the admin was just a typing surface I'd sometimes use. Just as often I'd author everything locally and push all the assets up myself, to avoid the awkward publish / commit / deploy cycles that Decap tends to produce.
So I removed it. The repo is now an Obsidian vault. I open the project folder in Obsidian, write notes directly into posts/, paste images (which land in public/images/), and use the Properties view to edit the frontmatter as form fields. When a post reaches a reasonable state to publish, I commit and push. Netlify picks up the push, runs npm run build (which runs the image conversion first), and the new note is live a minute or two later.
The whole authoring loop is now: open Obsidian → write → git push. No admin UI, no auth flow, no CMS schema to keep in sync with the code.
403'ing AI crawlers
There's a small piece of middleware.js that returns a 403 to a list of AI crawler user agents (GPTBot, anthropic-ai, ClaudeBot, and friends). The site stays human-readable; it just doesn't volunteer itself for training corpora. Whether that's effective is debatable, but it's the gesture I want to make. Perhaps I will try building a tarpit next.
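In outline, the middleware looks something like this — the bot list below is illustrative, and the real middleware.js has its own:

```javascript
// Sketch of middleware.js: refuse known AI crawler user agents.
// The real file exposes this as `export function middleware`.
const AI_BOTS = ['GPTBot', 'anthropic-ai', 'ClaudeBot', 'CCBot', 'Google-Extended'];

function middleware(request) {
  const ua = request.headers.get('user-agent') || '';
  if (AI_BOTS.some((bot) => ua.includes(bot))) {
    return new Response('Forbidden', { status: 403 });
  }
  // returning nothing lets the request fall through to the page as usual
}
```

Substring matching on the user agent is crude, but these bots identify themselves plainly; anything spoofing a browser UA gets through regardless.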