LLM Optimization

Iteratively evaluate and improve AGENTS.md using an LLM.

What is LLM optimization?

AGENTS.md can be generated through static analysis alone, but LLM optimization is where the real quality gains happen.


When you provide an OpenAI or Anthropic API key, Aspect Code runs a probe-and-refine loop:


  • Generate — an LLM produces a seed AGENTS.md from the knowledge base
  • Probe — test scenarios are generated and the AGENTS.md is evaluated against them
  • Diagnose — gaps and weak areas are identified
  • Refine — targeted improvements are applied
  • Repeat — the loop runs for multiple iterations until quality converges

Tip: Press [r] in watch mode to re-run the probe-and-refine loop at any time.
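The five stages above can be sketched as a small loop. Everything here is a hypothetical stand-in for illustration (the function names, the keyword-based probe, and the convergence check are assumptions, not Aspect Code's actual implementation):

```python
def generate(knowledge_base):
    # Seed AGENTS.md from the knowledge base (stub for illustration).
    return "\n".join(f"- {fact}" for fact in knowledge_base)

def probe(agents_md, scenarios):
    # Evaluate the document against each test scenario; here a scenario
    # "passes" if its text already appears in the document.
    return [scenario in agents_md for scenario in scenarios]

def diagnose(results, scenarios):
    # Identify the scenarios the current document fails to cover.
    return [s for ok, s in zip(results, scenarios) if not ok]

def refine(agents_md, gaps):
    # Apply targeted improvements for each diagnosed gap.
    return agents_md + "".join(f"\n- {gap}" for gap in gaps)

def optimize(knowledge_base, scenarios, max_iters=5):
    doc = generate(knowledge_base)
    for _ in range(max_iters):
        results = probe(doc, scenarios)
        if all(results):  # quality converged
            break
        doc = refine(doc, diagnose(results, scenarios))
    return doc

doc = optimize(["build with make"], ["build with make", "run tests with pytest"])
```

The real loop evaluates with an LLM rather than string matching, but the control flow is the same: probe, find gaps, patch, repeat until probes stop failing.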


Without an API key, you still get a complete AGENTS.md from static analysis. With a key, the output is significantly better.

Setting up LLM optimization

Set your API key as an environment variable:


    # OpenAI
    export OPENAI_API_KEY=sk-...
    
    # Or Anthropic
    export ANTHROPIC_API_KEY=sk-ant-...

Then run Aspect Code normally:


    aspectcode

When an API key is detected, the optimization loop runs automatically after static analysis. You'll see progress in the terminal dashboard.
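Detection amounts to checking the environment for a key. A minimal sketch, assuming OpenAI is checked first when both keys are set (the function name and precedence order are illustrative, not Aspect Code's actual logic):

```python
import os

def detect_llm_provider(env=os.environ):
    # Pick a provider based on which API key is present in the environment.
    if env.get("OPENAI_API_KEY"):
        return "openai"
    if env.get("ANTHROPIC_API_KEY"):
        return "anthropic"
    return None  # no key: fall back to static analysis only

provider = detect_llm_provider({"OPENAI_API_KEY": "sk-..."})
```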

API costs

LLM optimization typically uses a small number of API calls per run. Exact costs depend on your codebase size and the model used, but are generally minimal (cents per run).


The optimization loop is conservative: it only accepts changes that improve the instruction quality score.
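That acceptance rule can be sketched as a simple gate: propose a revision, score it, and keep it only on a strict improvement. Both `score` and `propose` are hypothetical callables here; Aspect Code's real scorer is LLM-based:

```python
def conservative_refine(doc, score, propose, iterations=5):
    # Keep a candidate revision only when it strictly improves the
    # quality score; otherwise the current best document is retained.
    best, best_score = doc, score(doc)
    for _ in range(iterations):
        candidate = propose(best)
        candidate_score = score(candidate)
        if candidate_score > best_score:
            best, best_score = candidate, candidate_score
    return best

# Toy demo: score by length, propose by appending a character.
result = conservative_refine("seed", score=len, propose=lambda d: d + "!")
```

A proposal that lowers the score is simply discarded, so the document can never get worse across iterations.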