Maxim
Claude Code plugin · v1.0.0 · BSL 1.1

Your AI should sound like
a professional.

Claude is smart. Maxim makes it accountable. Every answer cites its source, every decision carries an audit trail, every session catches what the last one missed. Install free; upgrade from $19.99/month when the moat shows up in your work.

Free Starter tier forever. 90-day Pro Trial on install. Paid monthly via Stripe.

What you're missing without Maxim

Claude gets you 80%.
The last 20% is the part that ships.

Six gaps every operator hits with an off-the-shelf LLM. Each one is a task you probably do manually today. Each one is a task Maxim does automatically, on every output.

You can't tell when Claude is guessing

Generic Claude writes with the same confidence whether it's citing a fact or making one up. Maxim tags every output 🟢 / 🟡 / 🔴 with a four-line audit trail so you know exactly what is solid and what needs a second look.

Your brand voice drifts between sessions

Monday's landing page sounds nothing like Friday's follow-up email. Maxim loads your voice profile on every session and enforces it on every output, so your copy stays on-brand without you re-pasting guidelines.

Compliance checks are done after the fact

You ship a spec, legal flags it, you rewrite. Maxim enforces 14 compliance frameworks — GDPR, HIPAA, PCI-DSS, SOC2, and 10 more — at generation time. Regulated outputs either clear the gate or get blocked before they leave the model.

Memory resets every session

Claude's context window forgets. You re-explain your project, your customer, your stack. Maxim carries architecture, decisions, and handoff state across sessions so your second chat is sharper than your first. Your tenth is sharper still.

One generalist, not 88 specialists

Asking Claude to design a pitch deck, audit a privacy policy, and review a Go service in the same session gets you three mediocre outputs. Maxim routes each task to a specialist agent trained on its domain. You get CEO-level strategy, CTO-level architecture, CSO-level security review — from the same terminal.

Outputs look good but cite nothing

A generic LLM says "here's a landing page." Maxim says "here's a landing page applying Fogg B=MAP as the primary composition, with Prospect Theory in the CTA and pre-attentive attribute theory in the visual hierarchy." You get the mechanism, so you can evaluate the work.

Same prompts. Different outputs.
Same $20/month.

Six real tasks you probably ran this week. What a generic LLM gives you. What Maxim gives you. Decide for yourself which one ships.

Scenario 1 · You ask Claude to write a launch email
Generic LLM

You get a polished email. You have no idea if it applies any persuasion framework. You edit for brand voice by hand.

Maxim

You get a polished email composed with Fogg B=MAP, Maxim's Prospect-Theory CTA, and your brand voice already loaded. Confidence tag + gaps flagged.

Scenario 2 · You ask Claude to review a privacy policy
Generic LLM

You get a general review. GDPR and PIPEDA may or may not come up. You do the regulatory cross-check yourself.

Maxim

Claude routes to CSO · security-analyst. 14 compliance frameworks load automatically. Every clause is tagged against the relevant regulation with cited section numbers.

Scenario 3 · You open Claude two days later, same project
Generic LLM

Context window is empty. You re-paste your README, your sprint notes, your architecture doc. You re-explain who the customer is.

Maxim

Session-continuity loads your project's decisions, open handoffs, and moat claims on start. Ten drift checks run automatically. You continue, not restart.

Scenario 4 · You ship a product spec with a hallucinated library
Generic LLM

You find out in review. Or worse, in production. You rewrite the spec.

Maxim

Proactive Watch catches the reference against your dependency manifest before the spec leaves the session. Drift class: orphan-refs. Severity: 4.
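An orphan-refs check of this kind can be sketched in a few lines. This is a minimal illustration, not Maxim's actual implementation — the backtick-scanning heuristic and the requirements.txt-style manifest are assumptions:

```python
import re
from pathlib import Path

def find_orphan_refs(spec_text: str, manifest_path: str) -> list[str]:
    """Flag libraries a spec mentions that the dependency manifest
    does not declare (illustrative sketch, not Maxim's real check)."""
    declared = {
        line.split("==")[0].strip().lower()
        for line in Path(manifest_path).read_text().splitlines()
        if line.strip() and not line.startswith("#")
    }
    # Naive extraction: any backticked identifier in the spec text.
    mentioned = set(re.findall(r"`([a-z0-9_-]+)`", spec_text.lower()))
    return sorted(mentioned - declared)
```

Anything the function returns is a reference with no backing dependency — the "hallucinated library" case above.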

Scenario 5 · You review an AI output and want to know why it was confident
Generic LLM

There's no rubric. You trust it or you don't.

Maxim

Every output ends with Basis · Gap · Mitigation · Next. You know what grounds the answer, what's missing, how to close the gap, what would change the tag. Trust becomes mechanical.

Scenario 6 · Your team uses AI but your outputs all read differently
Generic LLM

Everyone has their own prompt library. Brand voice is a Google Doc nobody opens.

Maxim

Three-layer voice system loads Maxim base, operator overlay, and startup overlay on every session. Your team's outputs sound like one team, not five freelancers.

Why Maxim

Generic LLMs sound equally confident on every question.
Maxim tells you when to trust them.

Six reasons operators who use Claude every day install Maxim on day one. Each reason has a behavioral science framework behind it — footnoted, not shouted.

One bad AI output costs more than a year of Maxim.

A failed compliance audit. A brand voice that slips in a prospect email. A product spec that hallucinates a library. A pricing page that contradicts the contract. These cost real money, real trust, and real time to unwind. $19.99/month is what most teams spend on coffee for a week.

Framework: Prospect Theory (Kahneman & Tversky, 1979)

Installing Maxim takes one minute. Getting value takes one prompt.

Paste the install command. Hit Enter. Your very next output comes back with a confidence tag, a cited framework, and an audit trail. No retraining, no new workflow, no handoff between tools. You keep using Claude Code exactly the way you already do — the outputs just come out better.

Framework: COM-B (Michie, van Stralen & West, 2011)

Every answer tells you where it came from.

Generic Claude hands you confident prose. Maxim hands you prose plus the behavioral mechanism it applied and the anti-patterns it avoided. You evaluate the output against the claim. You can defend it to your boss, your customer, or a regulator. It stops being a chat — it starts being documentation.

Framework: Behavioral Moat Framing Doctrine (Maxim ADR-007)

Confidence is earned. 🟢 HIGH is not the default.

Every Maxim response ends with a four-line rubric: what grounds this answer, what is missing, what you should do to close the gap, what would move the confidence up or down. You stop guessing when to trust the model. Trust becomes a property of the output, not a property of your mood.

Framework: Technical Educator Rubric (Maxim ADR-010)

The free tier is a contract, not a marketing promise.

Starter is free forever. Not crippleware. Not a 30-day trap. A regression test in our build fails the release if the free tier ever silently narrows. Start free. Upgrade when the moat shows up in your work. Cancel any time.

Framework: Reciprocity (Cialdini, 2001) + Maxim ADR-004

Your second session picks up exactly where your first left off.

Claude's context window forgets. Maxim carries architecture, decisions, skill gaps, and handoff state across sessions. Ten drift checks run on every session start — docs vs code, counts vs reality, moat claims vs the ledger. Silent regressions surface before they ship, not after.

Framework: Zeigarnik Effect + Maxim ADR-002 Executable Contracts

This section was composed through Maxim's own behavioral intelligence skills. Every block names the framework it applied. A tool that claims behavioral rigor should demonstrate it visibly.

The moat isn't the tools.
It's what runs on top.

A generic LLM wrapper gives you prompts. Maxim gives you mechanism, citation, and a registry of anti-patterns — applied automatically.

MOAT-01

Mechanism, not vibes

Every output cites a peer-reviewed behavioral framework with author and year. Framework. Mechanism. Anti-pattern. Reviewed in SKILL.md before a pack ships.

ADR-007 · Behavioral Moat Framing Doctrine

MOAT-02

Audit trail as a feature

Confidence tags earn their color. 🟢 HIGH only when the skill matched, the framework fired, and the gap log is clean. 🟡 and 🔴 name the gap.

ADR-010 · Confidence Tag Technical Educator Rubric
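As a sketch, the gate described above amounts to a small decision rule. Parameter names here are hypothetical; the authoritative rubric lives in ADR-010:

```python
def confidence_tag(skill_matched: bool, framework_fired: bool,
                   open_gaps: int) -> str:
    """Illustrative decision rule for Maxim-style confidence tags.
    HIGH is earned only when every condition holds."""
    if skill_matched and framework_fired and open_gaps == 0:
        return "🟢 HIGH"
    if skill_matched or framework_fired:
        return "🟡 MEDIUM"
    return "🔴 LOW"
```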

MOAT-03

Compliance as enforcement

14 frameworks wired into the MCP layer. GDPR, HIPAA, PCI-DSS, SOC2, and ten more. Not a checklist — a blocking gate on outputs that touch regulated data.

14 frameworks · mxm-compliance MCP

MOAT-04

Drift detection on every session

Ten drift classes scanned on session start. Docs vs code. Counts vs filesystem. Moat claims vs MOAT_TRACKER. Silent regressions surface before they ship.

ADR-002 · Documents as Executable Contracts

Simple ladder. Clear anchor.

Four tiers. Annual billing saves two months. Team ($249, 5 seats) and vertical overlays on the full pricing page.

Starter

$0 forever

The full governance substrate. Permanent, not crippleware.

  • 88 agents · 34 domains · 38 commands
  • 4 drift-detection watch classes
  • 10 framework stubs
  • MemPalace local file mode
  • Basic compliance advisory

Solo

$19.99 per month

The behavioral intelligence specialist. MOAT anchor.

  • Everything in Starter
  • Unlimited behavioral_audit
  • All 64 behavioral frameworks
  • Nudge design · persuasion tools
  • TTM stage detection

Pro

$39 per month

Solo plus compliance depth and full watch coverage.

  • Everything in Solo
  • 14 compliance frameworks
  • Full Proactive Watch (10 classes)
  • MemPalace semantic
  • Brand overlay (20/mo)

Professional

$99 per month

Pro plus unlimited brand / design and priority support.

  • Everything in Pro
  • Full Brand & Design Pro (unlimited)
  • Unlimited voice
  • Priority support
ADR-004 · FREE TIER CONTRACT

Starter is an Executable Contract, not a marketing promise.

A regression test fixture verifies the Starter feature set against filesystem reality on every build. The free tier does not quietly narrow. If a paid tier absorbs a Starter capability, the commit fails. If we ever change scope, it requires a visible ADR amendment — never a silent release note.

Starter stays free forever. In four years it becomes Apache 2.0 per BSL 1.1.
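A build-time check like the one described above might look like this — a hypothetical sketch (feature names and directory layout invented for illustration), not the actual fixture:

```python
from pathlib import Path

# Hypothetical Starter capability list; the real contract is in ADR-004.
STARTER_FEATURES = {
    "agents", "drift_watch", "framework_stubs", "mempalace_local",
}

def starter_tier_intact(plugin_root: str) -> bool:
    """Return False if any Starter capability is missing on disk.
    Assumes each capability ships as a directory under the plugin root."""
    root = Path(plugin_root)
    missing = {f for f in STARTER_FEATURES if not (root / f).is_dir()}
    return not missing
```

Wired into CI as an assertion, a release that drops any Starter directory fails before it ships.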

Install free.
Upgrade when the moat shows up in your work.

90-day Pro Trial auto-activates on install. Keep using Starter afterward, or commit to Solo at $19.99/month.