Every AVR score must be reconstructable from receipts.
citability.dev treats AI-answer visibility as an evidence problem. Scores summarize model outputs, but every score must drill down to the raw run, model version, prompt, extracted citations, classification logic, and hash.
V = visibility: whether the brand appears at all for the query and model.
R = recommendation quality and rank position: whether the answer frames the brand as a viable recommendation.
C = citation/link extraction quality: whether the answer links or cites the brand/domain as a source.
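The three components above might roll up into a single AVR like the sketch below. The weights are illustrative assumptions, not citability.dev's actual formula.

```python
def avr_rollup(v: float, r: float, c: float,
               weights: tuple[float, float, float] = (0.4, 0.35, 0.25)) -> int:
    """Blend V/R/C components (each 0-100) into one AVR score.

    The weights here are assumed for illustration; the real
    rollup may weight or transform the components differently.
    """
    wv, wr, wc = weights
    return round(wv * v + wr * r + wc * c)
```

Because the rollup is a weighted average of 0-100 components, the AVR stays on the same 0-100 scale as its inputs.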
Every score ships with a receipt.
A receipt is immutable, timestamped, model-specific, query-specific, hash-addressed, and suitable for export, appeal, and public verification.
Open sample public receipt:

{
  "id": "rcpt_01HXK9F2",
  "query": "best b2b saas payment processor",
  "verdict": "mixed",
  "avr": 62,
  "models": ["chatgpt/gpt-5.1", "claude/4.5", "perplexity/sonar", "gemini/2.5"],
  "signature": "sha256:a14f...b902"
}

ChatGPT: gpt-5.1
Claude: claude-4.5-sonnet
Perplexity: sonar-pro
Gemini: gemini-2.5-pro
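Public verification of a receipt might look like the sketch below: recompute the hash over the receipt body and compare it to the embedded signature. The canonicalization (sorted keys, compact JSON, signature field excluded) is an assumption; the real scheme may differ.

```python
import hashlib
import json

def verify_receipt(receipt: dict) -> bool:
    """Recompute a receipt's sha256 hash and compare to its signature.

    Assumes the hash covers the canonical JSON of every field except
    "signature" (sorted keys, compact separators). This is a sketch
    of the idea, not the documented scheme.
    """
    claimed = receipt["signature"].removeprefix("sha256:")
    body = {k: v for k, v in receipt.items() if k != "signature"}
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":"))
    actual = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    return actual == claimed
```

Any edit to the body changes the recomputed hash, so a tampered receipt fails verification even if its signature field is left intact.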
Normalize project, domain, models, query set, and schedule.
Dispatch prompt jobs across pinned model versions.
Persist raw responses, timing, sources, and provider metadata.
Extract linked citations, in-text mentions, and referenced domains.
Classify each answer as cited, mentioned, competitor-mentioned, or absent.
Calculate V/R/C components and AVR rollup.
Sign receipts and expose public verification when enabled.
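The classification step in the pipeline above might be sketched as follows. The precedence order (citation outranks mention, mention outranks competitor-only) and the substring matching are assumptions for illustration, not the production logic.

```python
from urllib.parse import urlparse

def classify_answer(answer_text: str, cited_urls: list[str],
                    brand: str, brand_domain: str,
                    competitors: list[str]) -> str:
    """Classify one model answer for one brand.

    Assumed precedence: cited > mentioned > competitor-mentioned > absent.
    Matching is naive substring/domain comparison, purely illustrative.
    """
    text = answer_text.lower()
    # Normalize cited URLs to bare domains for comparison.
    domains = {urlparse(u).netloc.removeprefix("www.") for u in cited_urls}
    if brand_domain in domains:
        return "cited"
    if brand.lower() in text:
        return "mentioned"
    if any(c.lower() in text for c in competitors):
        return "competitor-mentioned"
    return "absent"
```

A real extractor would also need entity disambiguation and fuzzy brand matching; this sketch only shows where the four labels come from before the V/R/C components are computed.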