
Nerve Center

Mission control for AI-native developers. A self-hosted ops dashboard that tracks your projects, monitors your agents, and uses a local LLM to tell you what to focus on next.

24 Projects Tracked · 100+ Agent Sessions Logged · Intelligence Cycles Every 30 Minutes · Gemma 4 26B Local LLM

The Problem

If you use AI agents for development — Gemini, Claude, Cursor — you know the problem. Each session starts from scratch. Agents don't know what happened in the last session. They can't see your other projects. There's no shared memory, no continuity, no prioritization intelligence.

The result: you become the bottleneck. You're the one remembering context, re-explaining decisions, and manually tracking what's in progress across 15 different tabs and 5 different agent conversations.

Nerve Center solves this by giving agents a shared operating system. Every session is registered. Every project has live context. And a local LLM runs in the background, analyzing your portfolio and generating intelligence — without sending a single byte to the cloud.

Capabilities

Project Portfolio Tracking

24 active projects tracked in real time. Status, priority, progress, and activity metrics — all visible on one dashboard.
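The project records behind that dashboard can be modeled with a small TypeScript type. The field names and sort rule below are illustrative assumptions, not Nerve Center's actual schema:

```typescript
// Illustrative shape of a tracked project (field names are assumptions,
// not the real Nerve Center schema).
type ProjectStatus = "active" | "paused" | "archived";

interface Project {
  slug: string;           // stable identifier, e.g. "catchflow"
  name: string;
  status: ProjectStatus;
  priority: number;       // lower number = higher priority
  progress: number;       // 0..1 completion estimate
  lastActivityAt: string; // ISO timestamp of last agent session or git op
}

// Sort a portfolio the way a dashboard might: active projects first,
// then by ascending priority number.
function sortPortfolio(projects: Project[]): Project[] {
  return [...projects].sort((a, b) => {
    if (a.status !== b.status) return a.status === "active" ? -1 : 1;
    return a.priority - b.priority;
  });
}
```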

AI Agent Session Management

REST API for AI agents to register sessions, report activity, and maintain cross-session continuity. Built for Gemini, Claude, and Cursor workflows.

Cortex Intelligence

Background LLM (Gemma 4 26B via Ollama) runs analysis cycles every 30 minutes. Generates prioritization insights, identifies scope creep, and surfaces blockers.
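An analysis cycle presumably boils down to prompting the local model through Ollama's HTTP API. This is a minimal sketch assuming Ollama's standard `/api/generate` endpoint; the prompt wording and model tag are placeholders, not Cortex's actual internals:

```typescript
// Sketch of one Cortex-style cycle against a local Ollama instance.
// The prompt text and model tag are illustrative assumptions.

interface ProjectSummary {
  slug: string;
  status: string;
  daysSinceActivity: number;
}

// Pure: build the prompt a cycle would send to the model.
function buildCortexPrompt(projects: ProjectSummary[]): string {
  const lines = projects.map(
    (p) => `- ${p.slug}: ${p.status}, last activity ${p.daysSinceActivity}d ago`,
  );
  return [
    "You are a prioritization assistant for a solo developer.",
    "Given these projects, list the top blockers and what to focus on next:",
    ...lines,
  ].join("\n");
}

// Side-effecting: call Ollama's generate endpoint (not invoked here).
async function runAnalysisCycle(projects: ProjectSummary[]): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gemma", // placeholder tag; substitute your local model
      prompt: buildCortexPrompt(projects),
      stream: false,
    }),
  });
  const json = (await res.json()) as { response: string };
  return json.response;
}
```

In the real system a scheduler would invoke a cycle like this every 30 minutes and persist the model's output as insights.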

Filesystem Auto-Discovery

Chokidar watches your project directories for git operations and file changes. New projects are automatically detected and registered.
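The watcher's core decision is classifying each filesystem event it receives. A sketch of that logic, assuming Chokidar feeds in paths; the ignore list and classification rules are illustrative, not the actual implementation:

```typescript
// Classify a path reported by the filesystem watcher (Chokidar in the
// real system). Ignore list and rules here are illustrative assumptions.
//
// Wiring sketch (not executed here):
//   chokidar.watch(projectsRoot, { ignoreInitial: true })
//     .on("all", (_event, path) => record(classifyFsEvent(path)));

type FsActivity = "git-op" | "file-change" | "ignore";

const IGNORED_DIRS = new Set(["node_modules", ".next", "dist"]);

function classifyFsEvent(filePath: string): FsActivity {
  // Normalize both / and \ separators so Windows paths classify the same.
  const parts = filePath.split(/[\\/]/).filter(Boolean);
  if (parts.some((p) => IGNORED_DIRS.has(p))) return "ignore";
  // Writes under .git (e.g. .git/HEAD on commit/checkout) signal git operations.
  if (parts.includes(".git")) return "git-op";
  return "file-change";
}
```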

Service Health Monitoring

Tracks the health of external services — Vercel, GitHub, Hetzner, Supabase. Aggregated status view for your entire infrastructure.
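Rolling several providers up into one status can be sketched as a worst-wins fold. The status names and their ranking below are assumptions, not Nerve Center's actual model:

```typescript
// Worst-status-wins aggregation across monitored services.
// Status names and ranking are illustrative assumptions.
type Health = "operational" | "degraded" | "down";

const RANK: Record<Health, number> = { operational: 0, degraded: 1, down: 2 };

function aggregateHealth(services: Record<string, Health>): Health {
  let worst: Health = "operational";
  for (const status of Object.values(services)) {
    if (RANK[status] > RANK[worst]) worst = status;
  }
  return worst;
}
```

For example, `aggregateHealth({ vercel: "operational", github: "degraded", hetzner: "operational", supabase: "operational" })` yields `"degraded"`: one impaired provider is enough to flag the whole infrastructure view.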

Real-Time Updates

Server-Sent Events stream dashboard updates as they happen. No polling. When an agent finishes a session, the dashboard reflects it instantly.
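On the wire, each dashboard update is a standard SSE frame: an optional `event:` line, a `data:` line, and a blank-line terminator. A minimal serializer (the event names a server would use are illustrative):

```typescript
// Serialize one Server-Sent Events frame per the SSE wire format:
// "event:" line, "data:" line, blank-line terminator. JSON.stringify
// output never contains raw newlines, so a single data line suffices.
function formatSseFrame(event: string, data: unknown): string {
  const payload = JSON.stringify(data);
  return `event: ${event}\ndata: ${payload}\n\n`;
}
```

A streaming route handler would write frames like `formatSseFrame("session_finished", { project: "catchflow" })` into the response, and the browser's `EventSource` fires the matching listener as each frame arrives (the `session_finished` event name is a hypothetical example).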

Agent Protocol

// Agent starts a session
POST /api/sessions/start
{
  "conversation_id": "abc-123",
  "project_slug": "catchflow"
}

// Nerve Center responds with project context
{
  "project": { name, status, priority, last_session... },
  "recent_sessions": [ last 5 agent sessions with summaries ],
  "insights": [ Cortex LLM prioritization ]
}

// Agent now has full context. No re-explanation needed.
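From the agent's side, the handshake above is a single POST. This sketch assumes the endpoint and field names shown; the base URL parameter and response typing are illustrative:

```typescript
// Minimal agent-side client for the session handshake above.
// The base URL and response shape are assumptions for illustration.

interface SessionStartRequest {
  conversation_id: string;
  project_slug: string;
}

// Pure: build the request body an agent sends when a session begins.
function buildSessionStart(conversationId: string, projectSlug: string): SessionStartRequest {
  return { conversation_id: conversationId, project_slug: projectSlug };
}

// Side-effecting: perform the handshake (not invoked here).
async function startSession(baseUrl: string, req: SessionStartRequest): Promise<unknown> {
  const res = await fetch(`${baseUrl}/api/sessions/start`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  if (!res.ok) throw new Error(`session start failed: ${res.status}`);
  return res.json(); // project context, recent sessions, Cortex insights
}
```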

Architecture

// The Nerve Center Stack
Next.js 16 (App Router) → Dashboard + API routes
SQLite (WAL mode) → Local-first database with FTS5 search
Windows Service → Runs as background daemon, auto-restarts
Chokidar → Filesystem watcher for auto-activity detection
Server-Sent Events → Real-time dashboard updates
Ollama (Gemma 4 26B) → Local LLM for Cortex intelligence
~4,500 lines · 30+ files · TypeScript · Framer Motion
Zero cloud dependencies — all data stays on your machine

Zero Data Leaves Your Machine

Nerve Center runs entirely on your local machine. The database is SQLite. The LLM runs through Ollama. The dashboard is localhost. Your project data, agent sessions, and intelligence outputs never touch a cloud server.

In a world of SaaS-everything, this is a deliberate choice. Your development context — what you're building, how you're building it, what your agents are doing — is some of the most sensitive data you have. It should stay with you.

Join the Waitlist

Nerve Center is currently a single-operator system running in production. We are exploring a packaged distribution — Docker image or native installer — for solo developers and small teams working with AI agents.