Methodology: How the Recommendation Engine Works
Transparency is a first-class requirement. This page documents exactly how the AAF toolchain produces trade-off analysis, design recommendations, and posture interpretations — including where AI is used and where it is not.
1. Source of truth
The framework documents are community-authored and community-governed. Every tool output — trade-off analysis, design recommendations, posture interpretations — is derived from these docs. The docs are the root.
2. Where AI is used
AI is used in exactly one place: extracting structured trade-off data from framework prose. Everything downstream is deterministic.
- A script (`extract-tradeoffs.js`) reads the pillar documents and sends them to an LLM with a strict extraction prompt.
- The extraction prompt is version-controlled in the repo — anyone can inspect exactly what the AI is instructed to do.
- The prompt explicitly constrains the AI: "Extract only. Do not add insights, opinions, or recommendations not explicitly stated in the source material."
- Every AI extraction includes the exact source quote from the document so reviewers can verify the extraction matches the prose.
- Items the AI considers implied rather than explicit are flagged as `confidence: "inferred"` for extra scrutiny during review.
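Because every extraction carries its source quote, the quote itself can be checked mechanically before a human ever looks at it. The sketch below is illustrative, not the real `extract-tradeoffs.js` API: the function name `verifyExtractions` and the field `quoteVerified` are assumptions, but the idea (a `sourceQuote` that cannot be found verbatim in the document is a red flag) follows directly from the rules above.

```javascript
// Hypothetical sketch: flag any extracted trade-off whose sourceQuote
// does not appear verbatim in the source document. Names are
// illustrative, not the real extract-tradeoffs.js interface.
function verifyExtractions(docText, items) {
  return items.map((item) => ({
    ...item,
    // false here means a reviewer should reject or re-check the entry
    quoteVerified: docText.includes(item.sourceQuote),
  }));
}

// Example: one faithful extraction, one fabricated quote.
const doc = "Cost optimization sits tightly adjacent to performance.";
const checked = verifyExtractions(doc, [
  { tension: "Gates cost tokens", sourceQuote: "Cost optimization sits tightly adjacent to" },
  { tension: "Invented claim", sourceQuote: "a quote not in the doc" },
]);
// checked[0].quoteVerified is true; checked[1].quoteVerified is false
```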
3. Where AI is NOT used (deterministic)
Everything downstream of the approved data model is purely deterministic:
- Pattern-matching engine — given your design choices, it looks up applicable trade-offs using rule-based matching. No AI. Same input always produces the same output.
- Doc block renderer — generates the "Design Recommendations & Trade-offs" sections at the bottom of each pillar doc directly from the approved data model.
- Design questionnaire — questions are extracted from pillar docs structurally (headings and bullet lists), not by AI.
- Posture scoring — heuristic checks match codebase patterns against known indicators. No AI involved.
- MCP tool responses — deterministic lookups against the approved data model.
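The rule-based matching can be pictured as a plain lookup: a trade-off applies when all of a rule's required design choices are present. This is a minimal sketch under assumed names (`rules`, `matchTradeoffs`, and the rule IDs are hypothetical, not the engine's real schema); the point is that nothing here is probabilistic.

```javascript
// Hypothetical sketch of the deterministic pattern-matching engine:
// a trade-off fires only when every required design choice is present.
// Same input always produces the same output.
const rules = [
  { requires: ["validation-gates", "cost-sensitive"], tradeoffId: "gates-cost-tokens" },
  { requires: ["multi-agent"], tradeoffId: "coordination-overhead" },
];

function matchTradeoffs(choices, ruleSet = rules) {
  const chosen = new Set(choices);
  return ruleSet
    .filter((rule) => rule.requires.every((req) => chosen.has(req)))
    .map((rule) => rule.tradeoffId);
}

// matchTradeoffs(["validation-gates", "cost-sensitive"]) → ["gates-cost-tokens"]
// matchTradeoffs(["cost-sensitive"]) → [] (partial matches never fire)
```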
4. The air gap
AI output never enters the data model automatically. Every change requires a maintainer-approved pull request.
The governance flow:
- Framework docs change (community PR merged to main)
- GitHub Action runs the AI extraction script against updated docs
- Script opens a PR with proposed changes to the trade-off data model
- Each proposed trade-off is shown alongside its source quote from the docs
- Items flagged `confidence: "inferred"` are highlighted for extra scrutiny
- Maintainers review: verify accuracy, adjust wording, reject overreach
- Only after a maintainer merges the PR does the data model update
- Git history shows who approved what and when
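One way the extraction PR could surface this for reviewers is to separate explicit and inferred entries in the PR description. The following is a sketch only; `renderPrBody` and its formatting are assumptions, not the actual GitHub Action's output.

```javascript
// Hypothetical sketch: build a PR body that lists each proposed
// trade-off with its source quote, setting inferred entries apart
// so maintainers give them extra scrutiny.
function renderPrBody(entries) {
  const explicit = entries.filter((e) => e.confidence === "explicit");
  const inferred = entries.filter((e) => e.confidence === "inferred");
  const lines = [];
  for (const e of explicit) lines.push(`- ${e.tension} (source: "${e.sourceQuote}")`);
  if (inferred.length > 0) {
    lines.push("### Inferred (needs extra scrutiny)");
    for (const e of inferred) lines.push(`- ${e.tension} (source: "${e.sourceQuote}")`);
  }
  return lines.join("\n");
}
```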
5. How to trace a recommendation
Every trade-off entry in the data model includes citation fields:
```json
{
  "tension": "Every validation gate costs tokens and time",
  "sourceQuote": "Cost optimization sits tightly adjacent to...",
  "source": {
    "doc": "docs/08-pillar-cost.md",
    "section": "7.8"
  },
  "confidence": "explicit"
}
```
- `source.doc` and `source.section` point to the exact framework document and section.
- `sourceQuote` contains the verbatim text from the document that supports the trade-off.
- `confidence` indicates whether the trade-off is explicitly stated or inferred from the text.
- The CLI and MCP tools include these citations in their output, so users can always trace a recommendation back to its source.
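Rendering such an entry into traceable output is a one-line transformation. The sketch below assumes the entry shape shown above; the function name `formatWithCitation` and the exact output format are illustrative, not the real CLI's.

```javascript
// Hypothetical sketch: attach the citation trail to a CLI output line
// so a recommendation can be traced to its document, section, and quote.
function formatWithCitation(entry) {
  const { doc, section } = entry.source;
  return `${entry.tension}\n  source: ${doc}, section ${section}: ` +
    `"${entry.sourceQuote}" (${entry.confidence})`;
}

// Example with the entry from above:
console.log(formatWithCitation({
  tension: "Every validation gate costs tokens and time",
  sourceQuote: "Cost optimization sits tightly adjacent to...",
  source: { doc: "docs/08-pillar-cost.md", section: "7.8" },
  confidence: "explicit",
}));
```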
6. How to challenge or improve
- The trade-off data model is version-controlled — open a PR to add, modify, or dispute any entry.
- The extraction prompt is also open for community review and improvement.
- Changes to the framework docs automatically trigger new AI extraction proposals, which go through the same maintainer review process.
- If you believe a trade-off is missing, inaccurate, or goes beyond what the docs say, open an issue or PR on the FrameworkCore repo.
7. Architecture overview
Five layers, clearly separated:
- Layer 1 — Community-authored framework: The pillar docs (`docs/*.md`). Source of truth. Community-governed.
- Layer 2 — AI extraction: Script reads docs, LLM proposes structured trade-offs with citations. Transparent, constrained, auditable.
- Layer 3 — Air gap: GitHub PR. Maintainers review, adjust, merge. Nothing enters the data model without human approval.
- Layer 4 — Approved data model:
trade-offs.json. Version-controlled, human-reviewed, every entry cited. - Layer 5 — Deterministic rendering: Pattern matcher, doc block renderer, MCP tools, CLI. No AI. Same input → same output. Always.
Questions about this methodology? Start a discussion on GitHub.