Axiom Works

We build software with AI. Not just about AI.

Developer tools, security infrastructure, and MCP servers. One developer and a fleet of AI coding agents. Security first. Local-first where it matters.


Four products. Building in the open.

Each card gives the full picture: capabilities, tech stack, and roadmap.

Published
GreyMatter Solo

Your AI forgets everything after every conversation. GreyMatter doesn't. It remembers what you worked on, what you learned, and what matters — and it gets smarter the more you use it. Like having a second brain that actually works.

BSL-1.1 Python Go MCP


Capabilities

  • An intelligent harness that wraps your AI coding agent with persistent memory
  • A second brain that remembers decisions, patterns, and context across every session
  • Spaced repetition (FSRS-6) that surfaces knowledge before you forget it
  • Works with Claude, Codex, and Gemini through the MCP protocol
  • Local LLM inference via Ollama — works fully offline, no cloud required
  • Security scanning on every interaction via SecureLLM
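The spaced-repetition idea above can be sketched in a few lines. This is a simplified illustration in the style of the FSRS family of algorithms, using the published FSRS power forgetting curve; it is not GreyMatter's actual FSRS-6 implementation, and the function names are illustrative.

```python
# Simplified spaced-repetition scheduling in the style of the FSRS
# algorithm family. The constants follow the published FSRS power
# forgetting curve; this is an illustration, not the real FSRS-6 model.

DECAY = -0.5
FACTOR = 19 / 81  # chosen so retrievability is 90% when elapsed == stability

def retrievability(elapsed_days: float, stability: float) -> float:
    """Probability the item can still be recalled after `elapsed_days`."""
    return (1 + FACTOR * elapsed_days / stability) ** DECAY

def next_interval(stability: float, desired_retention: float = 0.9) -> float:
    """Days to wait until retrievability decays to `desired_retention`."""
    return stability / FACTOR * (desired_retention ** (1 / DECAY) - 1)

# At 90% desired retention, the next review interval equals the stability.
assert abs(next_interval(stability=10.0) - 10.0) < 1e-9
assert abs(retrievability(10.0, 10.0) - 0.9) < 1e-9
```

The key property: knowledge is resurfaced just before the model predicts you (or your agent's context) would forget it, rather than on a fixed schedule.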

Tech Stack

  • Python — core runtime and MCP server
  • SQLite — local knowledge graph and observation store
  • MCP (Model Context Protocol) — 17 tools
  • Ollama — local LLM inference for quality gates
  • Claude Opus, Sonnet, Haiku — primary frontier LLMs
  • macOS (launchd) and Linux (systemd) service support
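The SQLite-backed observation store mentioned above can be sketched with the standard library alone. Table and column names here are illustrative, not GreyMatter's actual schema, and a real install would use a file on disk rather than an in-memory database.

```python
import sqlite3

# Minimal sketch of a local observation store in the spirit of the
# SQLite-backed knowledge store described above. Schema is illustrative.
db = sqlite3.connect(":memory:")  # a real install persists to a local file
db.execute("""
    CREATE TABLE observations (
        id INTEGER PRIMARY KEY,
        session TEXT NOT NULL,
        content TEXT NOT NULL,
        created_at TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")

def remember(session: str, content: str) -> None:
    db.execute("INSERT INTO observations (session, content) VALUES (?, ?)",
               (session, content))
    db.commit()

def recall(session: str) -> list[str]:
    rows = db.execute(
        "SELECT content FROM observations WHERE session = ? ORDER BY id",
        (session,))
    return [content for (content,) in rows]

remember("2025-06-01", "Project uses Go 1.22 with sqlc for queries")
print(recall("2025-06-01"))
```

Because reads and writes go against local SQLite, everything works offline and nothing leaves the machine.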

What It Solves

  • Your AI starts from zero every conversation — Solo makes it remember
  • Corrections you give today are forgotten tomorrow — Solo makes them stick
  • Your code, your data, your context never leaves your machine
  • No internet? No problem. Full functionality offline with local LLMs
  • One install, one command — works in minutes, not days

Roadmap

Q1 '26
Knowledge graph, MCP tools, Ollama integration, installer
Q2 '26
Go coordinator, session dispatch, PyPI publish
Q3 '26
Voice integration, multi-LLM support
Q4 '26
iOS/tvOS app, next-gen IDE
GA Q4 2026
GreyMatter Teams

Your team's collective intelligence, always available. Every person keeps their own private brain. When they share knowledge with the team, it's encrypted, controlled, and retractable. No one loses ownership of their ideas.

BSL-1.1 Python Go mTLS


Capabilities

  • Every team member runs their own GreyMatter Solo — their private second brain
  • Share knowledge across the team with encryption and access controls
  • Retractable sharing — revoke access to anything you've shared, at any time
  • Quality gates that review AI-generated code before it merges
  • A fleet of specialist AI agents, each focused on a language or domain
  • Live dashboard with cluster topology and agent status

Tech Stack

  • Go — coordinator, dispatch engine, API server
  • Python — MCP server, knowledge graph, spaced repetition
  • SQLite — local persistence on every node
  • mTLS — mutual TLS on all inter-node communication
  • Raft consensus for high-availability coordination
  • macOS, Linux, and air-gapped deployment support
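The "mTLS on all inter-node communication" posture above can be sketched with Python's `ssl` module for illustration (the actual coordinator is written in Go). The certificate paths are placeholders; the essential point is `CERT_REQUIRED` on the server side, so both ends of every connection must present a certificate signed by the cluster CA.

```python
import ssl

# Sketch of a mutual-TLS server context: the server authenticates
# clients, not just the other way around. Paths are placeholders.
def mtls_server_context(certfile=None, keyfile=None, cafile=None):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.verify_mode = ssl.CERT_REQUIRED  # reject peers without a valid cert
    if certfile and keyfile:
        ctx.load_cert_chain(certfile, keyfile)  # this node's identity
    if cafile:
        ctx.load_verify_locations(cafile)       # cluster CA that signs peers
    return ctx

ctx = mtls_server_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
```

With `CERT_REQUIRED`, an unauthenticated peer fails the TLS handshake itself; no application code ever sees plaintext from an unverified node.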

What It Solves

  • AI agents on your team can't share what they've learned with each other
  • Sharing knowledge today usually means giving up control of it — with Teams, what you share stays yours
  • No governance exists for fleets of AI coding agents
  • Enterprise environments need encrypted, auditable AI operations

Roadmap

Q1 '26
mTLS cluster, OTEL tracing, knowledge sync
Q2 '26
Go coordinator, agent dispatch, quality gates
Q3 '26
Multi-cell federation, enterprise features
Q4 '26
Kubernetes operators, enterprise dashboard
GA Q3 2026
SecureLLM

You're already using AI at work. So is everyone on your team. SecureLLM makes sure nobody accidentally shares a social security number, an API key, or a customer's private data with an AI. It sits between your people and the AI, and it catches what humans miss.

BSL-1.1 Rust Tokio


Capabilities

  • Catches PII before it reaches the LLM — SSNs, credit cards, API keys, emails
  • Blocks prompt injection attempts before they hit the model
  • Classifies content safety on both requests and responses
  • Works with any LLM provider — Claude, GPT, Ollama, any OpenAI-compatible API
  • Deploys as an inline proxy or sidecar — no code changes needed
  • Full audit trail of every AI interaction
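The pattern-based PII detection above can be illustrated with a toy pre-send scanner. Real detection (as in SecureLLM's Rust engine) needs checksum validation, context, and far more patterns; these regexes are deliberately simplified, and the `sk-` key format is just one common example.

```python
import re

# Toy pre-send scanner illustrating pattern-based PII detection.
# Deliberately simplified; not SecureLLM's actual rule set.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan(prompt: str) -> list[str]:
    """Return the PII categories found in a prompt before it is sent."""
    return [name for name, rx in PATTERNS.items() if rx.search(prompt)]

findings = scan("My SSN is 123-45-6789, email me at dev@example.com")
print(findings)  # ['ssn', 'email']
```

An inline proxy runs a scan like this on every request and either blocks, redacts, or logs the finding before the prompt ever reaches the model.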

Tech Stack

  • Rust — 14 crates, 26.8K lines of code
  • Tokio — async I/O for high-throughput scanning
  • Zero runtime dependencies for the core engine
  • Provider abstraction — plug in any LLM backend
  • BSL-1.1 — source-available, free for individuals

What It Solves

  • People paste sensitive data into AI tools without thinking — SecureLLM catches it
  • Prompt injection is a real attack vector — SecureLLM blocks it before the model sees it
  • Compliance requires an audit trail for AI use — SecureLLM provides one
  • You shouldn't have to trust employees to never make a mistake with AI
  • Works silently in the background — no training, no behavior change required

Roadmap

Q1 '26
Core scanner, PII detection, 4 LLM providers
Q2 '26
Rust refactor, open source preparation
Q3 '26
crates.io publish, documentation
Q4 '26
Enterprise dashboard, policy engine
Published
YouTube MCP

68 MCP tools for YouTube and YouTube Music. Search, playlists, comments, analytics, library management. The most complete YouTube integration for Claude Code.

Apache 2.0 Python PyPI


Capabilities

  • 68 tools covering the full YouTube Data API v3 — not just transcripts
  • Full YouTube Music library management — playlists, likes, albums, artists
  • Search, upload, comment, moderate, analyze — everything the API supports
  • When the API has gaps, we document them and contribute upstream fixes
  • OAuth 2.0 with TV client flow for YouTube Music authentication
  • Works with Claude Code, Codex CLI, Gemini CLI, or any MCP client
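To make the "full YouTube Data API v3" claim concrete, here is the kind of call a search tool wraps: the real `search.list` endpoint. The function below only builds the request URL for illustration; a real tool sends it with a valid API key and parses the JSON response, and `YOUR_KEY` is a placeholder.

```python
from urllib.parse import urlencode

# Sketch of the request a YouTube search tool wraps: the YouTube Data
# API v3 `search.list` endpoint. Builds the URL only; no network call.
API_BASE = "https://www.googleapis.com/youtube/v3"

def search_url(query: str, max_results: int = 5,
               api_key: str = "YOUR_KEY") -> str:
    params = {
        "part": "snippet",   # return titles, descriptions, thumbnails
        "q": query,
        "type": "video",
        "maxResults": max_results,
        "key": api_key,
    }
    return f"{API_BASE}/search?{urlencode(params)}"

print(search_url("model context protocol"))
```

The MCP layer's job is exactly this translation: an agent's tool call in, a correctly parameterized API request out, structured results back.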

Tech Stack

  • Python — published on PyPI
  • Apache 2.0 — fully open source, use it however you want
  • Google YouTube Data API v3
  • ytmusicapi — unofficial YouTube Music API
  • MCP (Model Context Protocol) — standard tool interface
  • Active upstream contributions to ytmusicapi

What It Solves

  • Most YouTube MCP servers stop at transcripts — this one covers the whole API surface
  • YouTube Music had no MCP integration at all — now it does
  • Manage your entire YouTube presence from inside your AI coding agent
  • API gaps are documented, not hidden — and we fix what we can upstream
  • One install, one config — pip install and you're running

Roadmap

Q1 '26
68 tools, PyPI publish, YouTube Music support
Q2 '26
Upstream contributions, API gap documentation

Your data stays on your hardware.

Local First is a set of principles that makes your device the primary authority for your data — not a cloud server. Coined by Ink & Switch in 2019, it's how software should work: fast, private, and yours.

Fast
Reads and writes happen against local storage. No spinners, no round trips. Sub-millisecond response times because nothing leaves your machine.
Works Offline
Full read and write capability without internet. Not "gracefully degrades" — actually works. Your AI tools function wherever your laptop does.
Private by Architecture
We don't see your data. Not because of a privacy policy — because the architecture makes it impossible. Your code and context never leave your device.
You Own Your Data
If we disappear tomorrow, your tools and data still work. No sunset, no export deadline, no scramble. Local files, open formats, on your hardware.
Compliance by Default
GDPR, HIPAA, data sovereignty — local-first satisfies them structurally. No DPA needed with a cloud vendor if the data never leaves your jurisdiction.
Cost at Scale
Cloud inference charges per token, forever. Local inference costs hardware once and runs as many times as you need. At scale, local wins.

Part of the Local First movement.

Built for environments where compliance isn't optional.

Defense, law enforcement, financial services, healthcare. Our products are being built to meet the standards these industries require.

FIPS 140-3
Post-quantum algorithms implemented (ML-KEM, ML-DSA). Targeting FIPS 140-3 cryptographic module validation.
Algorithms Implemented
CJIS
Criminal Justice Information Services security policy compliance for law enforcement data.
In Progress
Air-Gapped Deployment
Designed for full functionality without internet. Local LLM inference via Ollama, on-premises knowledge graph, offline-first architecture.
Designed
Post-Quantum Encryption
ML-KEM and ML-DSA. Quantum-resistant key exchange and signatures. 26K+ LOC, zero dependencies.
Implemented
SOC 2
Security, availability, and confidentiality controls for enterprise SaaS deployments.
Planned
GDPR
Data residency, consent management, right to deletion. Built into GreyMatter Teams from day one.
In Progress
GovRAMP / FedRAMP
Cloud authorization frameworks for state and local (GovRAMP) and federal (FedRAMP) government deployments.
Planned
mTLS Everywhere
Mutual TLS on all inter-node communication. Certificate pinning. No plaintext in transit.
Implemented
SIGS
Shared Information Governance Standards for cross-agency data sharing in law enforcement.
Planned

Agentic AI software development. No shortcuts.

Axiom Works is a software development company that builds with AI, not just about AI. One developer and a fleet of AI agents build production software through a self-correcting pipeline with auto-generated architectural rules. Every work item is planned, executed, reviewed, and verified before it merges.

The products are local-first, developer-sovereign, and open source at the core. No cloud dependency. No vendor lock-in. No data leaves your network unless you say so.

Local-First
Your data, your hardware, your rules. Every product runs on-premises or air-gapped. Cloud is optional, never required.
Developer-Sovereign
No vendor lock-in. Open standards, open protocols, open source core. You own the stack.
Compliance-Ready Architecture
Post-quantum cryptography (ML-KEM, ML-DSA). Targeting FIPS 140-3 and CJIS. Built for defense, law enforcement, and regulated industries.
Self-Correcting Pipeline
A fleet of AI agents, quality gates, automated review. Work items flow from plan to production without manual intervention.