
Why AI Recommends Your Competitors (And What to Do About It)

Chudi Nnorukam | ai-visibility, recommendation, avr-framework, competitive

AI tools like ChatGPT, Perplexity, and Claude know your brand exists. They still recommend your competitors. The gap between brand recognition and brand recommendation is where most businesses lose their AI search traffic, and it exists because AI systems do not recommend based on brand familiarity. They recommend based on structural trust signals, content freshness, and direct answerability. Your competitors likely hit those signals better. Here is the data and the fix.

The Recognition-Recommendation Gap Is Real, and the Numbers Are Unsettling

We ran AVR (Awareness, Visibility, Recommendability) baseline testing on citability.dev and chudi.dev across ChatGPT, Perplexity, and Claude. Both domains have 100% brand recognition. The AI knows they exist. The recommendation numbers are a different story.

| Domain | Brand Recognition | Recommendation Rate | Citability Rate |
|--------|-------------------|---------------------|-----------------|
| citability.dev | 100% | 40% | 12% |
| chudi.dev | 100% | 11% | 0% |

Recognition without recommendation is a liability, not an asset. It means the AI knows you are an option and still points users elsewhere. That is harder to fix than invisibility because it suggests a structural problem with how your content is packaged for AI extraction, not just a crawlability issue.

The queries where we saw the highest NOT_VISIBLE rates were recommendation-type queries: "best tool for X," "how to do Y," "what should I use for Z." These are the queries with buying intent. This is where the competitive gap matters most.

What Are the Three AVR Signals?

The AVR framework separates AI presence into three distinct measurements. Conflating them is how most businesses misdiagnose the problem.

Visibility is whether the AI can confirm your brand's existence when asked directly. "Does citability.dev exist?" or "Tell me about chudi.dev." This is the baseline. Every legitimate business with a working website eventually passes this. It tells you almost nothing about your competitive position.

Recommendability is whether the AI surfaces your brand unprompted when a user asks a question your product answers. "What tools exist for AI search optimization?" Your brand appearing in that answer, without the user asking for you specifically, is recommendability. This is the metric that maps to actual traffic and revenue. A 100% visible brand with 11% recommendability is functionally invisible to most buyers.

Citability is the strictest signal: whether the AI includes your URL as an attributed source in its response. Not just a name mention. A link. An "according to citability.dev" with a URL attached. This requires your content to be structured, fresh, and original enough that the AI treats it as a primary source rather than a brand it is merely aware of. Our data shows citability.dev at 12% and chudi.dev at 0%, despite both domains passing all 10 infrastructure checks.

The three signals are a funnel. You cannot be recommended without first being visible. You cannot be cited without first being recommended. Most businesses have a leak at the visibility-to-recommendation transition. Understanding which transition is breaking is the entire diagnostic.

Why Is Your Competitor Being Recommended Instead?

When an AI recommends your competitor over you, it is making a real-time evaluation of which source best answers the user's query. There are three common reasons your competitor wins that evaluation.

Their content answers the question faster. AI systems extract the most direct, concise answer available. If your competitor's page opens with a clear, specific claim and your page opens with background context, the AI uses their content. The user never sees that your content exists. This is the most common failure pattern and the fastest to fix. Read the opening paragraph of the page that is being recommended instead of you. Then read your opening paragraph. The gap is usually obvious.

Their content has structural signals yours lacks. JSON-LD structured data is how AI crawlers understand what a page is about without parsing ambiguous HTML. Article schema tells the AI who wrote it and when. FAQPage schema tells it which section answers which question. HowTo schema tells it the step structure. Without these, the AI is guessing. When two pages cover the same topic, the one with explicit schema wins the recommendation because the AI has higher confidence in its extraction.
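As a rough sketch, here is what a per-page Article block looks like in practice. Every field value below is a placeholder, not data from a real page:

```html
<!-- Placed inside <head>. Tells AI crawlers the content type, author, and freshness. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Why AI Recommends Your Competitors",
  "author": { "@type": "Person", "name": "Chudi Nnorukam" },
  "datePublished": "2025-01-15",
  "dateModified": "2025-06-01",
  "publisher": { "@type": "Organization", "name": "citability.dev" }
}
</script>
```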

Their content is more recently updated. AI systems, especially those with web retrieval like Perplexity and ChatGPT's Browse mode, weight recent content heavily. A page last updated 14 months ago competes poorly against a competitor who published an updated version last quarter. This is not just about the date stamp. AI systems are increasingly able to detect when only the date changed without the content changing. The fix is substantive updates with new data, new examples, or new findings, not cosmetic date changes.

Does Domain Authority Matter for AI Recommendations?

No. Our benchmark across 6 sites tested domains from DA 28 (chudi.dev) to DA 97 (Reddit). Ahrefs, with a DA of 92 and 100% AI visibility, was cited in only 5% of relevant queries. Reddit, with DA 97, failed basic infrastructure checks and was untestable for recommendation rates.

This is the core structural difference between traditional SEO and AI search optimization. Google's algorithm treats domain authority as a major ranking factor. AI recommendation systems do not have an equivalent mechanism. They evaluate the content in front of them: Is it structured? Is it fresh? Does it answer the question directly? A DA 28 site with clean JSON-LD, an answer-first opening, and original data outperforms a DA 92 site with a buried answer and no structured data.

If you have been waiting to invest in AI optimization until you build more authority, the data says that strategy does not transfer. The levers are different.

What AI Crawlers Need to Recommend You

The infrastructure scan we run at citability.dev checks 10 signals that predict AI crawlability. Failing these does not guarantee you get skipped, and passing them does not guarantee you get recommended. But they are the floor. Without them, you are competing with one hand tied behind your back.

These are the signals that most directly connect to recommendation rates rather than just crawlability:

Answer-first content. The direct answer to your target query must appear in the first 100 words of the page. This is the single highest-leverage change most sites can make. It does not require a redesign or a platform migration. It requires rewriting the opening paragraph.

Per-page structured data. Site-level Organization and WebSite schema helps crawlability. Per-page Article, FAQPage, or HowTo schema helps recommendation. The AI needs to know the content type, author, and freshness of each individual page, not just the site.
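For a page whose core content is Q&A, a minimal FAQPage block might look like this. The question and answer text here are placeholders:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Does domain authority matter for AI recommendations?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Not in our testing. AI systems weight structure, freshness, and direct answers over domain authority."
      }
    }
  ]
}
</script>
```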

Original data. AI systems preferentially cite content that contains information no other source has. Our benchmark data, the AVR baseline numbers in this article, the failure patterns we observed across 50+ scans: these are citable because they cannot be paraphrased from an existing source. Summary content, content that restates what other sites have already said, is harder to cite.

Substantive freshness. Pages updated quarterly with genuinely new content (new statistics, new cases) hold recommendation position better than pages with stale content. The AI is not looking at the date alone. It is evaluating the content's evidential currency.

The Diagnostic Process: How to Find Your Specific Gap

The competitor comparison is the fastest diagnostic tool. Pick the five queries where you most want to appear. Run them on ChatGPT, Perplexity, and Claude. Record who appears. Then do this for each competitor recommendation you find:

  1. Fetch their top-ranking page for that query.
  2. Read the first 100 words. Is the answer there?
  3. View page source. Do they have JSON-LD? What type?
  4. Check the dateModified in their schema.
  5. Look for original data: statistics, benchmarks, proprietary findings.

Compare that to your equivalent page. The gap is your action list. Most of the time, one or two changes close the competitive gap on a specific query.
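Steps 1 through 4 are scriptable. A minimal sketch, assuming Python with the requests and beautifulsoup4 packages installed; the URL is a placeholder:

```python
import json

import requests
from bs4 import BeautifulSoup

def diagnose(url: str) -> None:
    """Print the answer-first text, JSON-LD types, and dateModified for one page."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    # Step 2: the first 100 words of visible text. Is the answer there?
    words = soup.get_text(" ", strip=True).split()
    print("First 100 words:", " ".join(words[:100]), "\n")

    # Steps 3 and 4: each JSON-LD block's @type and dateModified, if present.
    for tag in soup.find_all("script", type="application/ld+json"):
        try:
            data = json.loads(tag.string or "")
        except json.JSONDecodeError:
            continue
        for block in data if isinstance(data, list) else [data]:
            print("Schema type:", block.get("@type"),
                  "| dateModified:", block.get("dateModified"))

diagnose("https://example.com/competitor-page")
```

Step 5 stays manual: whether a page contains original data is a judgment about whether its findings could be paraphrased from another source.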

The full AVR baseline test, running the same queries across all three AI platforms and tracking visibility, recommendability, and citability separately, takes longer but gives you the clearest picture of where in the funnel you are losing buyers. It also gives you a benchmark to measure improvement against after you make changes.

Where to Start

The fastest path from "AI knows I exist" to "AI recommends me" is three changes applied to your highest-value pages: answer-first opening, per-page Article schema with dateModified, and one piece of original data per page.

None of these require a developer. They require rewriting your opening paragraphs, adding a JSON-LD block to your HTML head, and running one test or collecting one data point that you then publish.

The citability gap, where you are recommended but not cited by URL, is the longer project. It requires building the kind of primary source content that AI systems treat as reference material rather than background. That is a content strategy, not a technical fix.

Run the free scan at citability.dev to see your current infrastructure score. It checks all 10 signals and shows you exactly which ones are failing, with documentation on why each one matters for AI recommendation rates. It takes about 30 seconds and gives you a starting point that is more reliable than guessing.

Your competitor is being recommended because they built pages that AI systems know how to use. That is fixable. The data shows the gap is structural, not authoritative. You do not need to outrank them in Google first. You need to make your content easier for AI to extract, trust, and recommend.

Check your AI visibility

Free scan. No account required. Results in 10 seconds.

Start Free Scan