AI context, engineered.
Aspect Code generates optimized context from your codebase, watches for drift, and learns what works — so AI coding agents make fewer errors on real code.
npm i -g aspectcode && aspectcode
analyzed 2,340 files, 15K dependency edges
generated AGENTS.md
generating probes to test agent behavior
evaluating behavior on test probes
found 5 gaps — editing AGENTS.md
watching for changes...
src/api/routes.ts saved — 3 dependents checked
auto-resolved: naming convention match (96%)
src/core/auth.ts saved — hub file, 12 dependents
learned: allow default exports in handlers
dream cycle: consolidated 5 corrections
AGENTS.md updated — 2 behaviors strengthened
49 changes · 12 learned · synced
Approach
Static analysis, iterative refinement, continuous learning.
Graph extraction
Tree-sitter parses the project into a dependency graph — hub files, naming conventions, entry points, module boundaries. No configuration or LLM calls required.
$ aspectcode
Analyzing... — 2,340 files, 15K edges
Done in 4.2s
Probe-and-refine optimization
Generated context is tested with behavioral probes against the actual codebase. Failures are fed back to refine the output — the same methodology we validated on SWE-bench.
AGENTS.md written
.claude/rules/ac-hub-core.md written
.cursor/rules/ac-hub-core.mdc written
Probe: 5/5 behaviors pass
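The probe-and-refine loop described above can be sketched as a toy, self-contained snippet. Every name here (generate_probes, passes, refine) and the list-of-rules representation of "context" are illustrative stand-ins, not Aspect Code's actual API:

```python
# Toy sketch of a probe-and-refine loop: generate context, test it with
# behavioral probes, feed failures back, repeat until all probes pass.

def generate_probes():
    # Each probe is a behavior the generated context must cover to pass.
    # These two rules are borrowed from the examples on this page.
    return ["default exports in handlers", "check hub-file dependents"]

def passes(probe, context):
    # A probe passes if the context already covers that behavior.
    return probe in context

def refine(context, failures):
    # Feed failures back: extend the context with the missing rules.
    return context + failures

def probe_and_refine(max_rounds=5):
    context = []  # initial (empty) context draft
    for _ in range(max_rounds):
        failures = [p for p in generate_probes() if not passes(p, context)]
        if not failures:
            break  # all probes pass; context is done
        context = refine(context, failures)
    return context

print(probe_and_refine())
```

In the real tool the probes exercise agent behavior against the codebase and refinement is an LLM pass; the sketch only shows the shape of the loop.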
Continuous drift correction
A file watcher monitors changes against the dependency graph. Corrections accumulate and periodically consolidate into stronger context through an offline refinement pass.
watching — 47 files indexed
src/api/routes.ts — 3 dependents checked
auto-resolved: naming match (96%)
learned: allow default exports in handlers
12 preferences · synced
Research-backed
Evaluated on SWE-bench
Probe-and-refine is the optimization loop behind Aspect Code. Tested on SWE-bench Verified — 500 real GitHub issues, end-to-end.
40% fewer errors (55 errors vs 92 baseline)
Resolve rate (% of issues fully resolved)
SWE-bench Verified, 500 instances, Qwen3.5-35B-A3B at 200 agent steps. Absolute resolve rates reflect the constrained study model — the methodology applies to any model. Read the paper
Compatibility
Every major coding agent
Aspect Code writes AGENTS.md, an open format that works with every agent out of the box.
Any AI tool that reads project files will pick it up automatically.
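As an illustration, a generated AGENTS.md is just a markdown file of project rules. The contents below are a hypothetical sketch assembled from conventions mentioned elsewhere on this page, not real tool output:

```markdown
# AGENTS.md

## Conventions
- Handlers may use default exports
- src/core/auth.ts is a hub file with 12 dependents — check dependents before editing
- Follow the existing naming conventions in src/api/
```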
Hosting
Use it free. We offer hosted compute if you want it.
Aspect Code is open-source. We run hosted inference so you don't have to bring your own key — but you always can.
Open source
Free
Full analysis, watch mode, auto-resolve, scoped rules. Aspect Code is open-source — this isn't a limited trial, it's the real thing.
100K lifetime hosted tokens on Haiku 4.5.
Hosted inference
Pro
Everything in Free, plus recommended rules from indexed repos. Your agent gets context from projects like yours — architecture patterns, convention enforcement, common gotchas.
1M hosted tokens per week. Billed monthly.
Self-hosted
Own Key
Everything in Free with your own OpenAI or Anthropic key. Unlimited tokens, any model, unlimited iterations. Provider auto-detected from key prefix.
No community suggestions. Add apiKey to aspectcode.json or set ASPECTCODE_LLM_KEY.
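As a sketch of the config-file route, aspectcode.json with an apiKey field might look like this. The key value is a placeholder, and any structure beyond the apiKey field itself is an assumption:

```json
{
  "apiKey": "sk-ant-..."
}
```

Alternatively, set the ASPECTCODE_LLM_KEY environment variable; as noted above, the provider is auto-detected from the key's prefix.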