AI context, engineered.

Aspect Code generates optimized context from your codebase, watches for drift, and learns what works — so AI coding agents make fewer errors on real code.

$ npm i -g aspectcode && aspectcode
aspectcode — my-project
analyzed 2,340 files, 15K dependency edges
generated AGENTS.md
generating probes to test agent behavior
evaluating behavior on test probes
found 5 gaps — editing AGENTS.md
watching for changes...
src/api/routes.ts saved — 3 dependents checked
auto-resolved: naming convention match (96%)
src/core/auth.ts saved — hub file, 12 dependents
learned: allow default exports in handlers
dream cycle: consolidated 5 corrections
AGENTS.md updated — 2 behaviors strengthened
49 changes · 12 learned · synced

Approach

Static analysis, iterative refinement, continuous learning.

01

Graph extraction

Tree-sitter parses the project into a dependency graph — hub files, naming conventions, entry points, module boundaries. No configuration or LLM calls required.

$ aspectcode
  Analyzing... — 2,340 files, 15K edges
  Done in 4.2s
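The graph-extraction idea can be sketched in a few lines. This is an illustrative stand-in, not Aspect Code's implementation: a regex stands in for Tree-sitter, file contents are inlined rather than read from disk, and all names (`buildDependentsGraph`, `hubFiles`, the threshold) are hypothetical.

```typescript
type Graph = Map<string, Set<string>>; // file -> set of files that import it

function buildDependentsGraph(files: Record<string, string>): Graph {
  const graph: Graph = new Map();
  for (const file of Object.keys(files)) graph.set(file, new Set());
  // Toy import matcher; the real tool parses with Tree-sitter instead.
  const importRe = /import\s+.*?from\s+['"](\.[^'"]+)['"]/g;
  for (const [file, source] of Object.entries(files)) {
    for (const m of source.matchAll(importRe)) {
      graph.get(resolveRelative(file, m[1]))?.add(file); // record `file` as a dependent
    }
  }
  return graph;
}

function resolveRelative(from: string, spec: string): string {
  // Resolve "./auth" or "../core/auth" against `from`, assuming a ".ts" extension.
  const parts = from.split("/").slice(0, -1);
  for (const seg of spec.split("/")) {
    if (seg === "..") parts.pop();
    else if (seg !== ".") parts.push(seg);
  }
  return parts.join("/") + ".ts";
}

// "Hub" files are simply those imported by many others.
function hubFiles(graph: Graph, minDependents = 2): string[] {
  return [...graph.entries()]
    .filter(([, dependents]) => dependents.size >= minDependents)
    .map(([file]) => file);
}
```

With three files where two import `src/core/auth.ts`, that file comes back as the lone hub, which is the signal the watcher later uses to check dependents on save.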
02

Probe-and-refine optimization

Generated context is tested with behavioral probes against the actual codebase. Failures are fed back to refine the output — the same methodology we validated on SWE-bench.

AGENTS.md                     written
.claude/rules/ac-hub-core.md  written
.cursor/rules/ac-hub-core.mdc written
Probe: 5/5 behaviors pass
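The loop itself is simple to state. The sketch below shows the shape of probe-and-refine; the `Probe` interface, `refine` callback, and round limit are illustrative assumptions, not Aspect Code's actual interfaces.

```typescript
interface Probe {
  name: string;
  run: (context: string) => boolean; // does the agent behave correctly given this context?
}

function probeAndRefine(
  initial: string,
  probes: Probe[],
  refine: (context: string, failures: string[]) => string,
  maxRounds = 3,
): { context: string; passed: number } {
  let context = initial;
  for (let round = 0; round < maxRounds; round++) {
    const failures = probes.filter((p) => !p.run(context)).map((p) => p.name);
    if (failures.length === 0) break; // all behaviors pass, stop early
    context = refine(context, failures); // feed failures back into the next draft
  }
  const passed = probes.filter((p) => p.run(context)).length;
  return { context, passed };
}
```

With a `refine` that patches the context to address each named failure, the loop converges to the "5/5 behaviors pass" state shown above in a handful of rounds.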
03

Continuous drift correction

A file watcher monitors changes against the dependency graph. Corrections accumulate and are periodically consolidated into stronger context by an offline refinement pass.

watching — 47 files indexed
src/api/routes.ts — 3 dependents checked
auto-resolved: naming match (96%)
learned: allow default exports in handlers
12 preferences · synced
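The consolidation step ("dream cycle" in the session above) can be sketched as promoting corrections that recur across files into durable rules. The `Correction` shape and the promotion threshold are illustrative assumptions, not Aspect Code's internals.

```typescript
interface Correction {
  rule: string; // e.g. "allow default exports in handlers"
  file: string; // where the correction was observed
}

// Count how often each correction recurs; promote the repeat offenders.
function consolidate(corrections: Correction[], minCount = 2): string[] {
  const counts = new Map<string, number>();
  for (const c of corrections) {
    counts.set(c.rule, (counts.get(c.rule) ?? 0) + 1);
  }
  return [...counts.entries()]
    .filter(([, n]) => n >= minCount)
    .map(([rule]) => rule);
}
```

A one-off correction stays a one-off; the same correction observed in two handler files graduates into context, which is the "learned" line in the watch log.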

Research-backed

Evaluated on SWE-bench

Probe-and-refine is the optimization loop behind Aspect Code. Tested on SWE-bench Verified — 500 real GitHub issues, end-to-end.

40% fewer errors

55 errors vs 92 baseline


Resolve rate

% of issues fully resolved

34.2%  probe-and-refine
27.4%  static KB
22.8%  baseline

SWE-bench Verified, 500 instances, Qwen3.5-35B-A3B at 200 agent steps. Absolute resolve rates reflect the constrained study model — the methodology applies to any model. Read the paper

Compatibility

Every major coding agent

Aspect Code generates an open format (AGENTS.md) that works with every agent out of the box.

GitHub Copilot
Cursor
Claude Code
Codex
Windsurf
Cline
Gemini
Aider

AGENTS.md is an open format. Any AI tool that reads project files will pick it up automatically.

Hosting

Use it free. We offer hosted compute if you want it.

Aspect Code is open-source. We run hosted inference so you don't have to bring your own key — but you always can.

Open source

Free

$0

Full analysis, watch mode, auto-resolve, scoped rules. Aspect Code is open-source — this isn't a limited trial, it's the real thing.

100K lifetime hosted tokens on Haiku 4.5.

Hosted inference

Pro

$8/mo

Everything in Free, plus recommended rules from indexed repos. Your agent gets context from projects like yours — architecture patterns, convention enforcement, common gotchas.

1M tokens/week, resets weekly. Billed monthly.

Self-hosted

Own Key

$0

Everything in Free with your own OpenAI or Anthropic key. Unlimited tokens, any model, unlimited iterations. Provider auto-detected from key prefix.

No community suggestions. Add apiKey to aspectcode.json or set ASPECTCODE_LLM_KEY.
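Prefix-based detection can be sketched as below. The prefixes follow the providers' documented key formats (`sk-ant-` for Anthropic, `sk-` for OpenAI); the function name and return values are illustrative, not Aspect Code's API.

```typescript
type Provider = "anthropic" | "openai" | "unknown";

function detectProvider(apiKey: string): Provider {
  if (apiKey.startsWith("sk-ant-")) return "anthropic"; // check the longer prefix first
  if (apiKey.startsWith("sk-")) return "openai";
  return "unknown";
}
```

Checking the longer prefix first matters: every Anthropic key also matches the bare `sk-` prefix, so testing OpenAI first would misclassify it.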