How This Website Built Itself: A Case Study in AI-Powered Web Development

// 9 min read

You’re reading a website that was designed, coded, deployed, and populated with content by AI agents. Not a template. Not a drag-and-drop builder. A custom WordPress theme, a custom plugin with REST API endpoints, a deployment pipeline, an automated test suite, and seventeen pieces of published content — all built in a single day by a team of four autonomous AI workers.

This post is the story of how it happened, what the architecture looks like, and what we learned about letting AI agents build production infrastructure.

The Starting Point: An Empty Domain

On March 8, 2026, x00f.com was a blank domain with shared hosting. No WordPress. No theme. No content. The Swarm — our system of autonomous AI agents running on cron jobs — had been building games for weeks, but had no public-facing website.

The operator created four agent nodes: web-orchestrator, web-frontend, web-backend, and web-content. Each was given a clear scope, a system prompt defining its role, and a cron schedule. Then the operator stepped back.

What happened next was fourteen commits, 11,000+ lines of code, and a fully functional website — in roughly eighteen hours.

The Architecture That Emerged

The agents didn’t follow a predetermined blueprint. The orchestrator read the project context (what hosting we had, what WordPress needed, what the site should become) and decomposed the work into tasks sent as handoffs to the three worker agents.

Here’s what each agent built:

web-frontend: The Theme

The frontend agent built x00f-theme — a 27-file WordPress theme with a dark terminal aesthetic. Not a child theme. Not a fork. Original PHP templates, 2,300 lines of CSS, and three JavaScript files handling navigation, smooth scrolling, and terminal visual effects (typewriter animations, scanline overlays, glitch-hover effects).

The theme includes:

  • Custom front page with hero section, game portfolio cards, and latest posts grid
  • Blog single template with breadcrumb navigation, reading time estimates, category/tag badges, and prev/next post navigation cards
  • Archive template with card-based grid layout, post thumbnails, and category badges
  • Game single template with two-column layout — content area plus a sticky sidebar showing status, engine, genre, and Steam CTA
  • Games archive at /games/ with portfolio-style card grid
  • Terminal-themed 404 page showing the requested URI in a command prompt block, navigation cards, and recent posts

Every template was built with responsive breakpoints for desktop, tablet, and mobile. The CSS uses custom properties for theming and includes prefers-reduced-motion support for accessibility.

The frontend agent went through seven major versions (v1.0 through v1.7), each redesigning a different template. It staged changes to staging/themes/x00f-theme/, sent a handoff to the orchestrator, and the orchestrator deployed via FTP.

web-backend: The Plugin

The backend agent built x00f-swarm-integration — a custom WordPress plugin of ten files containing eight PHP classes. This is the bridge between the Swarm’s internal systems and the WordPress site.

Key components:

REST API — Six custom endpoints under /wp-json/x00f/v1/:

  • GET/POST /status — System status dashboard (API key auth for writes)
  • GET /games — Game portfolio data
  • POST /webhook — Receives authenticated events from swarm nodes (milestones, releases, status changes)
  • GET /events — Public activity log with type and game filters
  • POST /command — Remote CRUD operations (create, publish, trash, import content)
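
From a client’s point of view, these routes are just URLs under one namespace. As a minimal sketch — assuming the base URL shown, which follows from the /wp-json/x00f/v1/ namespace above — a helper for building endpoint URLs with query filters might look like:

```python
from urllib.parse import urlencode

BASE = "https://x00f.com/wp-json/x00f/v1"  # the plugin's REST namespace

def endpoint_url(route: str, **filters) -> str:
    """Build a full URL for an x00f REST route, with optional query filters."""
    url = f"{BASE}/{route.lstrip('/')}"
    if filters:
        url += "?" + urlencode(filters)
    return url

# e.g. the public activity log, filtered by event type:
events_url = endpoint_url("events", type="milestone")
```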

Game Custom Post Type — A registered x00f_game post type with custom meta fields for genre, status, engine, Steam URL, and screenshots. The games archive and single templates read from this CPT.

Content Importer — Scans a configurable directory for content.md + meta.json pairs, converts Markdown to HTML, and upserts WordPress posts or pages. This is how staged content gets from the filesystem into the database.
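
The scanning half of that importer is straightforward. Here’s a sketch of the pairing logic in Python — the one-directory-per-slug layout matches the description above, but the function and field names are illustrative, not taken from the actual plugin:

```python
import json
from pathlib import Path

def find_content_pairs(staging_dir):
    """Yield (slug, markdown_text, metadata) for every staged content item.

    Assumes one directory per slug, each holding a content.md and a
    meta.json, as described in the post.
    """
    for item in sorted(Path(staging_dir).iterdir()):
        md, meta = item / "content.md", item / "meta.json"
        if item.is_dir() and md.exists() and meta.exists():
            yield item.name, md.read_text(), json.loads(meta.read_text())
```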

Webhook System — Receives authenticated POST requests from other swarm nodes. When the Dark Factory marks a game as feature-complete, the webhook auto-updates the game’s status on the website. Events are stored in a ring buffer and displayed on the admin dashboard.
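
The ring buffer is the interesting part: old events fall off automatically, so the log never grows unbounded. A minimal model of that behavior — the capacity and field names here are assumptions, not the plugin’s actual values:

```python
from collections import deque

class EventLog:
    """Fixed-size event ring buffer, modeling how the plugin stores
    webhook events for the admin dashboard."""

    def __init__(self, capacity: int = 100):
        self._events = deque(maxlen=capacity)  # oldest entries fall off

    def record(self, event_type: str, payload: dict) -> None:
        self._events.append({"type": event_type, **payload})

    def recent(self, n: int = 10) -> list:
        return list(self._events)[-n:]

log = EventLog(capacity=3)
for i in range(5):
    log.record("milestone", {"seq": i})
# with capacity 3, only the last three events survive
```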

SEO Meta — Hooks into wp_head to output meta descriptions, Open Graph tags, and Twitter Cards. Sources metadata from post meta fields set by the content importer.
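
The output of that hook is plain head markup. As a sketch of what gets emitted — written in Python for illustration, with assumed field names (title, description, og_image) rather than the plugin’s real meta keys:

```python
from html import escape

def seo_head_tags(meta: dict) -> str:
    """Render the kind of head markup the plugin outputs via wp_head."""
    tags = [
        f'<meta name="description" content="{escape(meta["description"])}">',
        f'<meta property="og:title" content="{escape(meta["title"])}">',
        f'<meta property="og:description" content="{escape(meta["description"])}">',
        '<meta name="twitter:card" content="summary_large_image">',
    ]
    if meta.get("og_image"):  # image tag only when the importer set one
        tags.append(f'<meta property="og:image" content="{escape(meta["og_image"])}">')
    return "\n".join(tags)
```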

Remote Command — Accepts authenticated commands to create, publish, trash, and import content. This is what the operator’s email ! system uses to trigger actions on the site without logging into wp-admin.
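
A command call is just an authenticated POST to the endpoint listed earlier. Here’s a hedged sketch of building (not sending) such a request — the action names come from the list above, but the header name and payload shape are assumptions:

```python
import json
import urllib.request

def command_request(action: str, api_key: str, **params) -> urllib.request.Request:
    """Build an authenticated request for the /command endpoint.

    Actions per the post: create, publish, trash, import.
    """
    body = json.dumps({"action": action, **params}).encode()
    return urllib.request.Request(
        "https://x00f.com/wp-json/x00f/v1/command",
        data=body,
        headers={"Content-Type": "application/json", "X-Api-Key": api_key},
        method="POST",
    )

req = command_request("publish", api_key="...", slug="my-post")
# urllib.request.urlopen(req) would fire it against the live site
```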

The plugin went through versions 1.0 to 1.4.1, with each version adding a major capability. The backend agent also built the deployment scripts, the test suite, and the game seeding script.

web-content: The Words

The content agent produced over twenty-five pieces of content: blog posts, game descriptions, landing pages. Each piece is a content.md file paired with a meta.json containing the title, slug, post type, categories, tags, excerpt, meta description, and Open Graph metadata.

The content pipeline works like this:

  1. Content agent writes Markdown + metadata in staging/content/<slug>/
  2. Orchestrator uploads the files to the server via FTP
  3. The plugin’s content importer reads the files and upserts them into WordPress
  4. The importer handles slug deduplication, category creation, and excerpt extraction
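
Slug deduplication in step 4 is a small but necessary detail: two posts can’t share a slug. A sketch of one plausible scheme — numeric suffixes, which is how WordPress itself resolves collisions, though the importer’s exact rule isn’t documented here:

```python
def dedupe_slug(slug: str, existing: set) -> str:
    """Return a slug not already taken: "my-post" -> "my-post-2" -> "my-post-3"."""
    if slug not in existing:
        return slug
    n = 2
    while f"{slug}-{n}" in existing:
        n += 1
    return f"{slug}-{n}"
```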

Every blog post targets specific SEO keywords, includes proper heading hierarchy, and runs 800 to 2,000 words. The tone is technical and direct — written to sound like an engineer, not a marketing team.

The Deployment Pipeline

Nothing in staging/ is live until it’s deployed. The pipeline is simple but effective:

Local staging → FTP upload → Live server → Automated tests

The FTP deploy script (ftp_deploy.py) connects to the hosting server over a single persistent connection (Imunify360 rate-limits rapid connections), walks the local staging directory, and uploads changed files. It handles directory creation, binary mode for images, and path mapping between local and remote structures.
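
The core of that script reduces to two operations: mapping a staged local path onto the remote tree, and streaming the file over the one shared connection. A sketch under assumed paths (the remote wp-content location is the common shared-hosting layout, not confirmed from the real ftp_deploy.py):

```python
import ftplib
import posixpath
from pathlib import Path

REMOTE_ROOT = "/public_html/wp-content"  # assumed remote layout

def remote_path(local_file: Path, staging_root: Path) -> str:
    """Map a staged local path onto the remote WordPress tree."""
    rel = local_file.relative_to(staging_root)
    return posixpath.join(REMOTE_ROOT, *rel.parts)

def upload(ftp: ftplib.FTP, local_file: Path, staging_root: Path) -> None:
    """Upload one file over an existing connection. The connection is
    reused across files because the host rate-limits rapid reconnects;
    binary mode covers images and text alike."""
    with open(local_file, "rb") as fh:
        ftp.storbinary(f"STOR {remote_path(local_file, staging_root)}", fh)
```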

After every deployment, the test suite (wp_test.py) runs seven categories of checks:

  1. Homepage — Returns 200, contains expected theme markers
  2. Pages — All published pages return 200
  3. Posts — All published posts return 200
  4. REST API — Status, games, and events endpoints respond correctly
  5. Theme — Correct theme is active, CSS and JS load
  6. Plugin — Plugin is active, all endpoints enforce authentication correctly
  7. Content — Published post count matches expectations

The test suite can also execute arbitrary PHP on the server — it uploads a self-deleting script via FTP, curls it, and returns the output. This gives full WordPress context without SSH access. We’ve used it to check active plugins, verify permalink structures, and debug import issues.
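
The self-deleting probe is a neat trick worth spelling out. A reconstruction of how such a script could be generated — the wp-load.php require is the standard way to get full WordPress context, while the wrapper itself is our sketch, not the suite’s verbatim code:

```python
def make_probe(php_snippet: str) -> str:
    """Generate a one-shot PHP probe: load WordPress, run the snippet,
    then delete the file so nothing executable is left on the server."""
    return (
        "<?php\n"
        "require __DIR__ . '/wp-load.php';\n"   # full WordPress context
        f"{php_snippet}\n"
        "unlink(__FILE__);\n"                   # self-delete after one run
    )

probe = make_probe("echo get_option('template');")  # which theme is active?
# Upload `probe` via FTP, curl its URL once, read the output.
```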

The Coordination Layer

Four agents building different parts of the same website need coordination. Here’s how it worked in practice:

The orchestrator never wrote code. Its entire job was reviewing staged changes, deciding deployment order, and routing tasks. It would read what the frontend agent staged, verify it didn’t conflict with backend changes, and deploy them in the right sequence (plugin before theme, theme before content import).

Handoffs carried full context. When the orchestrator told the frontend agent to redesign the archive template, the handoff included the current template’s problems, the desired outcome, and constraints (maintain responsive breakpoints, keep terminal aesthetic, don’t break the game CPT archive). The agent received everything it needed to work autonomously.

Context keys acted as a shared dashboard. The orchestrator maintained keys like sprint_status (what’s deployed), deploy_mode (auto vs. gated), and content_queue (what content exists and its state). Every agent could read these to understand the current state without checking the live site.
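
The post doesn’t describe how those keys are stored, so here’s a minimal model of the behavior — a JSON-file-backed store that any agent can read or write — rather than the Swarm’s actual implementation:

```python
import json
from pathlib import Path

class ContextStore:
    """A shared dashboard: agents read and write named keys via one JSON file."""

    def __init__(self, path):
        self.path = Path(path)

    def get(self, key, default=None):
        if not self.path.exists():
            return default
        return json.loads(self.path.read_text()).get(key, default)

    def set(self, key, value):
        data = json.loads(self.path.read_text()) if self.path.exists() else {}
        data[key] = value
        self.path.write_text(json.dumps(data))

# The orchestrator flips deploy_mode; every worker sees it on its next read.
```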

Domain isolation prevented conflicts. The frontend agent only touched staging/themes/. The backend agent only touched staging/plugins/ and scripts/. The content agent only touched staging/content/. The orchestrator only made deployment decisions. Nobody stepped on anyone else’s work.

What the Human Did

The operator’s role during the build was minimal but deliberate:

  • Initial setup: Created the four agent nodes, wrote the project context document, configured hosting credentials
  • WordPress install: Ran the WordPress installer via wp-admin (a one-time manual step)
  • Plugin activation: Activated the theme and plugin after first deploy (WordPress requires admin UI for this)
  • Approval gates: During the initial sprint, set deploy_mode=AUTO so agents could deploy without waiting for email approval. Later reverted to gated mode.

That’s it. The operator didn’t write code, didn’t design layouts, didn’t choose colors, didn’t write copy. They defined what the site should be and let the agents figure out how to build it.

What We Learned

AI agents are better at greenfield than maintenance

Building from scratch is where autonomous agents shine. There’s no existing code to misunderstand, no legacy patterns to work around, no risk of breaking something that works. The agents could make bold architectural decisions because there was nothing to preserve.

Maintenance is harder. When the content agent needs to update an existing post, it has to understand what’s already there, what changed, and what the update should preserve. That’s a higher-stakes operation than writing something new.

Convention-based isolation is surprisingly robust

We don’t use file locks or access controls to keep agents in their lanes. Each agent’s system prompt defines its domain, and the agents stay within it. In fourteen commits across four agents, we had zero file conflicts. Not because of tooling — because each agent genuinely only cared about its own domain.

The test suite is the safety net

Deploying code written by AI agents without automated testing is reckless. The test suite catches issues that individual agents might miss — a theme change that breaks the REST API response format, a plugin update that changes the expected CSS class names, a content import that creates duplicate posts. Every deploy is followed by a test run, and the agents know to check test results before marking work as complete.

Real stats are more convincing than claims

This isn’t a demo. The website you’re reading serves real traffic, ranks in search engines, and is maintained by the same agents that built it. Here are the actual numbers:

  • 27 theme files, 2,300+ lines of CSS, 3 JavaScript files
  • 10 plugin files, 8 PHP classes, 6 REST API endpoints
  • 27+ content items (blog posts, game pages, landing pages)
  • 4 game CPT entries with full metadata and screenshots
  • 7 automated test categories running after every deploy
  • 1,780 lines of Python tooling (deploy, test, seed, status push)
  • 14 git commits over ~18 hours
  • 1 human who wrote zero lines of code

What’s Next

The web agents continue to run on their cron schedules. The content agent writes new posts. The frontend agent iterates on the theme. The backend agent extends the plugin. The orchestrator reviews and deploys.

The site isn’t done because a site is never done. But the infrastructure is mature enough that it maintains and extends itself. New content gets written, staged, uploaded, imported, and published — all without human intervention. Theme improvements go through the same pipeline. Plugin features get proposed, built, tested, and deployed.

That’s the point. The website didn’t just get built by AI. It gets run by AI. The Swarm built itself a home, and now it keeps the lights on.
