How this site is built.
SolarNorth is a notional company. The site you are reading is a working demonstration of an architectural pattern: every page renders for human visitors and exposes the same content in structured form for software agents, both derived from a single source. These notes describe what the pattern is, why it matters, and how this site implements it.
What this site is.
If you arrived from the human surface, the agent equivalent of any page is at mcp.solarnorth.work. If you arrived from the agent surface, the rendered version is at solarnorth.work. The two surfaces share the same content; the rendering differs.
The notional company on the page is a vehicle. It gives the demonstration enough specificity to read like a real site instead of a tutorial. Everything load-bearing about the pattern is real: the content modules, the seed routes, the discovery manifest, the MCP server, the JSON-LD, the rewrite. The product copy, the team, and the prices are not.
The problem the pattern is built for.
The web was designed for browsers reading HTML. It is now also being read by software agents that have no use for the visual layer. Their default behavior is to fetch the rendered page and run readability extraction over it. That works imperfectly: layout text gets confused with prose, navigation gets confused with content, structured data in the head gets stripped before the agent sees it.
The conventional answers — a separate API, a documentation site, a chatbot endpoint — duplicate the source of truth. Two places to update means one of them is always behind. The failure mode is the same shape as drift in any other operated system.
The dual-readable pattern is the architectural answer. The content lives once. The human page renders from it. The agent surface returns it directly. Neither is downstream of the other.
Three properties.
One source of truth.
Page content lives in typed objects, not in JSX strings and not in a CMS that only humans read. The objects describe the content semantically: hero, sections, items, quotes, products, articles. Editing copy is editing a typed module. Adding a page is adding a module.
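A sketch of what such a typed module might look like. The shape names here (Section, PageContent, the specific kinds) are illustrative assumptions, not the site's actual types:

```typescript
// Hypothetical shapes -- the real types live in the site's content layer.
type Section =
  | { kind: "hero"; heading: string; tagline: string }
  | { kind: "items"; heading: string; items: string[] }
  | { kind: "quote"; text: string; attribution: string };

interface PageContent {
  slug: string;
  title: string;
  sections: Section[];
}

// Editing copy is editing a typed module like this one.
const pricing: PageContent = {
  slug: "pricing",
  title: "Pricing",
  sections: [
    { kind: "hero", heading: "Pricing", tagline: "Simple plans." },
    { kind: "items", heading: "Plans", items: ["Starter", "Pro"] },
  ],
};
```

The discriminated union is the important choice: every consumer of a Section must say what it does with each kind, which is what makes the derivations checkable.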
Two derivations.
The human page is JSX rendered from the typed object. The agent surface is JSON normalized from the same typed object. They share their source. They do not share each other's representation. Either one can change presentation without affecting the other.
Visible discovery.
Agents that can autodiscover structured surfaces should find this one without prompting. Agents that cannot should be pointed at it explicitly. Every page carries hooks for both: machine-discoverable hints in the head for the former, body-text claims for readability-extracted scrapers, and a visible link in the footer for everyone else.
How this site implements it.
Content modules.
Each page is a typed TypeScript module in content/pages or content/articles. The shape is defined by lib/content/types.ts. A change to a paragraph is a change to one file. A new page is one new file plus one route.
Two surfaces from one tree.
The human site renders Server Components from the modules. The agent surface lives at solarnorth.work/seeds and at the mcp.solarnorth.work subdomain, where the same modules are passed through a normalize layer in lib/mcp that emits structured JSON. Adding a section type means updating the type, the renderer, and the normalizer. The compiler refuses to let any of them drift.
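A sketch of how an exhaustive normalize step can make drift a compile error. The names and output shape are assumptions; only the exhaustiveness trick is the point:

```typescript
// Minimal stand-ins for the content types (assumed names).
type Section =
  | { kind: "hero"; heading: string; tagline: string }
  | { kind: "items"; heading: string; items: string[] };

interface PageContent { slug: string; title: string; sections: Section[] }

// Exhaustive switch: adding a new Section kind without handling it here
// fails the `never` assignment below, so the surfaces cannot silently drift.
function normalizeSection(s: Section): Record<string, unknown> {
  switch (s.kind) {
    case "hero":
      return { type: "hero", heading: s.heading, tagline: s.tagline };
    case "items":
      return { type: "items", heading: s.heading, items: s.items };
    default: {
      const exhausted: never = s; // compile error if a kind is unhandled
      return exhausted;
    }
  }
}

function normalizePage(page: PageContent) {
  return {
    slug: page.slug,
    title: page.title,
    sections: page.sections.map(normalizeSection),
  };
}
```

The renderer gets the same treatment on the JSX side; the compiler then enforces that type, renderer, and normalizer move together.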
A discovery layer per page.
Every HTML page carries a link rel="mcp-server" tag, a link rel="alternate" type="application/json" tag pointing at the seed equivalent, and JSON-LD blocks with Organization, Product, Article, HowTo, ContactPage, and Offer schemas — all derived from the same content modules. A discovery manifest at /.well-known/mcp.json enumerates the surface. An llms.txt at the root summarizes the site in agent-friendly Markdown.
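The per-page discovery tags and the manifest can be sketched roughly like this. The helper names, manifest fields, and page list are assumptions; the URLs and tag attributes come from the description above:

```typescript
// Hypothetical helper: the two head tags every page carries.
function discoveryTags(path: string): string[] {
  return [
    `<link rel="mcp-server" href="https://mcp.solarnorth.work/" />`,
    `<link rel="alternate" type="application/json" href="https://mcp.solarnorth.work${path}" />`,
  ];
}

// A minimal shape for /.well-known/mcp.json enumerating the surface
// (field names assumed for illustration).
const manifest = {
  name: "SolarNorth",
  server: "https://mcp.solarnorth.work",
  pages: ["/", "/pricing", "/dark-code"],
};
```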
A standalone MCP server.
The repository ships an MCP server that wraps the seed surface in tools an agent can call directly: list_pages, get_page, get_section, search_content, get_pricing, get_locked_copy. The server is a thin client over the live HTTPS surface; the site is the authoritative source.
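Because the tools are thin clients, a tool like get_page reduces to a fetch against the seed surface. This is a sketch, not the server's actual code; the real server registers these tools through the MCP SDK, and the URL shape here is an assumption:

```typescript
// Assumed origin for the seed surface.
const SEED_ORIGIN = "https://mcp.solarnorth.work";

// Pure helper so the URL shape is testable without the network.
function seedUrl(slug: string): string {
  return `${SEED_ORIGIN}/${slug.replace(/^\//, "")}`;
}

// get_page as a thin HTTPS client: the live site stays authoritative.
async function getPage(slug: string): Promise<unknown> {
  const res = await fetch(seedUrl(slug), {
    headers: { accept: "application/json" },
  });
  if (!res.ok) throw new Error(`seed fetch failed: ${res.status}`);
  return res.json();
}
```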
A subdomain that speaks JSON.
mcp.solarnorth.work serves the structured form of every page at the same path the human site uses. mcp.solarnorth.work/pricing returns the pricing seed. mcp.solarnorth.work/dark-code returns the dark-code article seed. mcp.solarnorth.work/ shows a brief HTML welcome that points humans back to the apex without breaking agent access to anything else.
A visible footer pointer.
Production agents in default browsing mode often strip JSON-LD and head metadata via readability extraction. The discoverability work that reaches them lives in rendered prose. A small footer line — same JetBrains Mono small-caps treatment used elsewhere on the site — points at the agent surface in plain text. Belt-and-suspenders.
What earned its keep, and what did not.
The single source of truth was the highest-leverage decision. Once content lives in typed objects, every other surface becomes a small derivation. Drift between the human and agent surfaces would be a compiler error, not a content-level oversight.
JSON-LD is correct and partially defeated. It is the right output for MCP-aware harnesses and for search-engine consumers that honor structured data. It is not yet load-bearing for production consumer agents in default browse mode, which strip the head before they see it. The work is not wasted, but the body-text claims and the visible footer pointer reach agents that JSON-LD currently does not.
The subdomain rewrite was the highest-leverage piece of agent UX. mcp.solarnorth.work having the same path shape as the human site means an agent that knows about one URL knows about the other. The cost was a single host-conditional rule in next.config.ts.
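That host-conditional rule might look roughly like the following in next.config.ts, using Next.js's `has` matcher on the host header. The internal destination path is an assumption; only the pattern matters:

```typescript
// Sketch of a host-conditional rewrite: requests arriving on the
// mcp. subdomain are internally rewritten to the seed routes while
// keeping the same public path shape as the human site.
const nextConfig = {
  async rewrites() {
    return [
      {
        source: "/:path*",
        has: [{ type: "host" as const, value: "mcp.solarnorth.work" }],
        destination: "/seeds/:path*", // assumed internal route
      },
    ];
  },
};

export default nextConfig;
```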
What is portable.
The pattern is not specific to this stack. The implementation here uses Next.js 16 with the App Router, TypeScript, and Tailwind v4 because that is what fit the deadline. Any framework that can serve both an HTML route and a JSON route from the same content tree can implement the same pattern. The decisions that matter are about the content layer and the discovery layer; the rendering layer is the easy part.
If you want to apply this to your own site, the smallest viable version is: write your content as typed modules, render two routes from each module (the page and a JSON equivalent), and add link rel="alternate" type="application/json" pointing at the JSON. That is enough to be dual-readable. The discovery manifest, the MCP server, the JSON-LD, and the subdomain are all refinements on that core.
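The smallest viable version fits in a few lines, framework-agnostic. All names here are illustrative; the only real requirements are one module, two derivations, and the alternate link:

```typescript
// One typed module.
interface Page { slug: string; title: string; body: string }

const about: Page = { slug: "about", title: "About", body: "One source of truth." };

// Derivation one: the human page, carrying the alternate link.
function renderHtml(page: Page): string {
  return [
    `<link rel="alternate" type="application/json" href="/${page.slug}.json" />`,
    `<h1>${page.title}</h1>`,
    `<p>${page.body}</p>`,
  ].join("\n");
}

// Derivation two: the agent surface, same source, no shared representation.
function renderJson(page: Page): string {
  return JSON.stringify({ slug: page.slug, title: page.title, body: page.body });
}
```

Serve renderHtml at /about and renderJson at /about.json and the page is dual-readable; everything else in these notes is refinement.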
The architecture is the message.
Every architectural commitment a system makes about itself should be visible, parseable, and verifiable on request. That is the principle behind dual-readable, and the reason this page exists alongside the rest of the site rather than inside a folder no one reads.