LLM Optimization

Iteratively evaluate and improve AGENTS.md using an LLM.

What is LLM optimization?

AGENTS.md can be generated through static analysis alone, but LLM optimization is where the real quality gains happen.


When you provide an OpenAI or Anthropic API key, Aspect Code runs an optimization loop that iteratively improves the generated instructions:


  • Evaluate — An LLM reads the generated AGENTS.md and scores it
  • Improve — The LLM suggests specific improvements
  • Accept — Changes are applied if they improve the score

Without an API key, you still get a complete AGENTS.md from static analysis. With a key, the output is significantly better.
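The evaluate/improve/accept loop above amounts to a simple hill climb: keep a candidate revision only if it scores higher than the current best. The sketch below is illustrative, not Aspect Code's actual implementation; `score` and `improve` are hypothetical stand-ins for the LLM calls, replaced here with toy stubs.

```python
# Hypothetical sketch of the evaluate/improve/accept loop.
# score() and improve() stand in for LLM calls; toy stubs are used below.

def optimize(doc, score, improve, max_rounds=3):
    """Hill-climb: accept a candidate only if its score improves."""
    best = score(doc)
    for _ in range(max_rounds):
        candidate = improve(doc)           # Improve: propose a revision
        candidate_score = score(candidate) # Evaluate: score the revision
        if candidate_score > best:         # Accept: keep only improvements
            doc, best = candidate, candidate_score
    return doc, best

# Toy stand-ins: score by length, "improve" by appending a section.
doc, final = optimize(
    "AGENTS.md draft",
    score=len,
    improve=lambda d: d + "\n## Build commands",
)
```

Because rejected candidates leave the document untouched, the score never decreases between rounds, which is what makes the loop conservative.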

Setting up LLM optimization

Set your API key as an environment variable:


    # OpenAI
    export OPENAI_API_KEY=sk-...
    
    # Or Anthropic
    export ANTHROPIC_API_KEY=sk-ant-...
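A tool that supports multiple providers typically just checks which key is present in the environment. The snippet below is a guess at how such detection might look; the function name `detect_provider` and the OpenAI-first precedence are assumptions, not Aspect Code's documented behavior.

```python
import os

# Hypothetical sketch of provider detection from environment variables.
# The precedence order (OpenAI first) is an assumption for illustration.
def detect_provider():
    if os.environ.get("OPENAI_API_KEY"):
        return "openai"
    if os.environ.get("ANTHROPIC_API_KEY"):
        return "anthropic"
    return None  # No key: fall back to static analysis only

os.environ["OPENAI_API_KEY"] = "sk-..."  # normally set in your shell profile
print(detect_provider())  # prints "openai"
```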

Then run Aspect Code normally:


    aspectcode

When an API key is detected, the optimization loop runs automatically after static analysis. You'll see progress in the terminal dashboard.

API costs

LLM optimization typically makes only a handful of API calls per run. Exact cost depends on your codebase size and the model used, but it is generally minimal (cents per run).
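To see why the cost lands in the cents range, a back-of-the-envelope estimate helps. The call count, token count, and price below are illustrative assumptions, not published rates; check your provider's current pricing.

```python
# Back-of-the-envelope cost estimate. All numbers here are assumptions
# for illustration, not actual Aspect Code usage or provider pricing.
def estimate_cost(calls, tokens_per_call, price_per_million_tokens):
    return calls * tokens_per_call * price_per_million_tokens / 1_000_000

# e.g. 6 calls of ~4,000 tokens each at an assumed $3 per million tokens:
cost = estimate_cost(calls=6, tokens_per_call=4_000, price_per_million_tokens=3.0)
print(f"${cost:.3f}")  # prints "$0.072"
```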


The optimization loop is conservative: it only makes changes that improve the instruction quality score.