Frequently asked questions

What you need to know about ARA.

Everything from how the score works to how to commission a study — answered plainly.

01 The Basics
What is algorithm readiness?
Algorithm readiness is the degree to which a brand can be found, understood, and recommended by AI systems — ChatGPT, Gemini, Claude, Perplexity, and the next generation of AI agents that will execute purchases on behalf of consumers.

A brand that is algorithm-ready has the right structural infrastructure, a clear and distinctive identity in machine-accessible data, and a positive emotional footprint that AI systems consistently reproduce. A brand that isn't may be well-known to humans and nearly invisible to machines.
For five thousand years, every brand decision had one thing in common: a human was at the end of it. That is no longer guaranteed. Algorithm readiness is what brands need to build for the world where it isn't.
ARA is The ARA Index, an independent audit that measures how visible, legible, and recommendable your brand is to the AI systems your customers are already using: ChatGPT, Gemini, Claude, Perplexity, and others.

We score each brand across five proprietary dimensions and produce a ranked score out of 100, benchmarked against your competitive set. The result is a clear picture of how AI sees your brand today, and a prioritised roadmap for improving it.
AI is now the first point of discovery for a growing share of purchase decisions. When someone asks a language model what to buy, which brand to trust, or which product to choose — they get an answer. That answer is not random. It reflects the structured and unstructured data that AI was trained on and continues to retrieve from.

Brands that have invested in machine-legibility appear in those answers. Brands that haven't are losing recommendations they don't even know they're missing. The stakes rise further as AI agents gain the ability to execute purchases autonomously — the agent picks, the agent buys, with no human at checkout.
In 2024, over 40% of consumers reported using an AI assistant for product discovery at least once a month. That figure is growing. The time to build machine presence is before the recommendation becomes a transaction.
There are now three distinct disciplines, and it matters which one you're doing.

SEO optimises for search crawlers — links, keywords, technical signals. It determines whether machines can find your pages.

GEO (Generative Engine Optimisation) optimises content for AI retrieval — structuring articles, FAQs, and product descriptions so that language models surface them in responses. It's SEO's next layer. JBL's 2,434% LLM referral surge during peak 2025 shopping came from this kind of work.

ARA works upstream of both. It audits and optimises the brand itself — DNA, core architecture, positioning, semantic identity, and emotional footprint — the foundations that determine what any content or page-level optimisation can achieve. A brand with weak architecture will hit a ceiling on GEO no matter how well its content is structured. ARA finds and fixes that ceiling.

SEO asks: can machines find you? GEO asks: can machines retrieve your content? ARA asks: do machines understand, trust, and recommend your brand? These are different questions at different levels of the stack.
Our Synthetic Customer Test (A3) runs live queries across four platforms: ChatGPT, Perplexity, Gemini, and Claude. We run a standardised set of purchase-occasion queries for each category and record which brands are recommended, across which platforms, and in what context.

Our other four dimensions — Structural Readiness, Semantic Clarity, Emotional Residue, and Brand Voice — draw on a broader analysis of how AI systems represent brands across their training data and retrieval behaviour.
02 How It Works
We score each brand across five proprietary dimensions, each worth up to 20 points for a maximum of 100:
· A1 · STR: Structural Readiness. Can machines find and parse the brand? Schema markup, retail distribution, product feed density, and data infrastructure.
· A2 · SEM: Semantic Clarity. Do machines understand the brand as intended? Positioning clarity, differentiation specificity, and identity consistency across training data.
· A3 · SYN: Synthetic Customer Test. Who wins when an AI agent shops? Live query testing across ChatGPT, Perplexity, Gemini, and Claude on real purchase occasions.
· A4 · EMO: Emotional Residue. What did brand building leave in the data? Sentiment, cultural footprint, and trust signals in AI training material.
· A5 · VOI: Voice & Agentic Readiness. What happens in a zero-visual world? Phonetic clarity, voice resolution, and personality consistency across AI outputs.
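
For readers who want the arithmetic made concrete, the sketch below shows one way the five sub-scores could be held and totalled. The class and field names are illustrative only, not ARA's internal tooling.

```python
from dataclasses import dataclass

@dataclass
class AraScore:
    """Illustrative container for the five ARA dimensions, each scored 0-20."""
    structural: float  # A1 · STR
    semantic: float    # A2 · SEM
    synthetic: float   # A3 · SYN
    emotional: float   # A4 · EMO
    voice: float       # A5 · VOI

    def total(self) -> float:
        """Overall ARA score out of 100: the sum of the five dimension scores."""
        parts = (self.structural, self.semantic, self.synthetic,
                 self.emotional, self.voice)
        assert all(0 <= p <= 20 for p in parts), "each dimension is scored out of 20"
        return sum(parts)

# Example: strong structure, weak voice readiness.
print(AraScore(17, 14, 15, 12, 8).total())  # 66
```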
Each dimension uses a distinct technical methodology:

A1 — Structural Readiness is assessed through automated auditing of schema.org markup completeness, product feed density and accuracy across retail platforms, review corpus volume and recency, and the technical infrastructure that makes a brand machine-parseable.
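
As a rough illustration of what the A1 schema audit involves, the sketch below fetches a page, extracts its JSON-LD blocks, and reports which common schema.org Product fields are present or missing. The field list and URL are placeholders, and a real audit covers many more signals than this.

```python
import json
import requests
from bs4 import BeautifulSoup  # pip install beautifulsoup4

EXPECTED_PRODUCT_FIELDS = {"name", "description", "brand", "offers", "aggregateRating"}

def audit_product_schema(url: str) -> dict:
    """Report which expected schema.org Product fields appear in a page's JSON-LD."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    found: set[str] = set()
    for tag in soup.find_all("script", type="application/ld+json"):
        try:
            data = json.loads(tag.string or "")
        except json.JSONDecodeError:
            continue
        blocks = data if isinstance(data, list) else [data]
        for block in blocks:
            if isinstance(block, dict) and block.get("@type") == "Product":
                found |= EXPECTED_PRODUCT_FIELDS & block.keys()
    return {"present": sorted(found),
            "missing": sorted(EXPECTED_PRODUCT_FIELDS - found)}

# e.g. audit_product_schema("https://example.com/product/anc-headphones")
```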

A2 — Semantic Clarity is measured by probing AI models directly with identity and positioning questions — "What is [Brand]?", "What does [Brand] stand for?", "Who is [Brand] for?" — and comparing the outputs against the brand's stated positioning. Specificity, accuracy, and differentiation are all scored.
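
One crude way to approximate that comparison is to embed the brand's stated positioning and the model's answer and measure their similarity. The sketch below assumes the OpenAI Python SDK; the probes, model names, and the use of cosine similarity are illustrative stand-ins for ARA's actual specificity, accuracy, and differentiation scoring.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

IDENTITY_PROBES = [
    "What is {brand}?",
    "What does {brand} stand for?",
    "Who is {brand} for?",
]

def probe_identity(brand: str) -> list[str]:
    """Ask a model the standard identity questions and collect its answers verbatim."""
    answers = []
    for template in IDENTITY_PROBES:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": template.format(brand=brand)}],
        )
        answers.append(response.choices[0].message.content)
    return answers

def similarity(stated_positioning: str, model_answer: str) -> float:
    """Cosine similarity between the brand's stated positioning and a model answer."""
    emb = client.embeddings.create(
        model="text-embedding-3-small",
        input=[stated_positioning, model_answer],
    ).data
    a, b = emb[0].embedding, emb[1].embedding
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(x * x for x in b) ** 0.5)
    return dot / norm
```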

A3 — Synthetic Customer Test runs a defined set of purchase-occasion queries through ChatGPT, Perplexity, Gemini, and Claude under a standardised protocol: fresh session, no system prompt, default model settings, verbatim response recording. Brand appearances are coded against a consistent rubric and mapped to a presence heatmap.

A4 — Emotional Residue analyses the sentiment distribution of brand-adjacent content across editorial, social, and review corpora likely to have influenced AI training. It also probes how models characterise a brand's emotional identity — the vocabulary, associations, and affect they reproduce unprompted.
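
A toy version of the sentiment-distribution step might look like the following, using NLTK's VADER scorer on a handful of brand-adjacent documents. The bucketing thresholds and the choice of scorer are illustrative, not the corpus analysis ARA actually runs.

```python
from collections import Counter
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

def sentiment_distribution(documents: list[str]) -> Counter:
    """Bucket each document as positive, neutral, or negative by VADER compound score."""
    analyzer = SentimentIntensityAnalyzer()
    buckets = Counter()
    for text in documents:
        score = analyzer.polarity_scores(text)["compound"]
        if score >= 0.05:
            buckets["positive"] += 1
        elif score <= -0.05:
            buckets["negative"] += 1
        else:
            buckets["neutral"] += 1
    return buckets

# e.g. sentiment_distribution(["Love these headphones", "Broke after a week", "It arrived"])
```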

A5 — Voice & Agentic Readiness evaluates phonetic clarity and name resolution in voice contexts, consistency of personality descriptors across AI outputs, and how reliably a brand's tone is reproduced when AI systems are asked to write in its voice.
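
Phonetic clarity can be approximated with standard phonetic encodings. The sketch below uses the jellyfish library's Metaphone implementation to flag competitor names that collapse to the same sound key as the brand; it is a stand-in for illustration, not ARA's voice-resolution method.

```python
import jellyfish  # pip install jellyfish

def phonetic_collisions(brand: str, competitors: list[str]) -> list[str]:
    """Return competitor names that encode to the same Metaphone key as the brand."""
    key = jellyfish.metaphone(brand)
    return [name for name in competitors if jellyfish.metaphone(name) == key]

# e.g. phonetic_collisions("Sonos", ["Sonus", "Bose", "JBL"]) -> ["Sonus"]
```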
Tiers describe a brand's overall ARA standing within a scored category. They are relative to the study cohort, not absolute thresholds. The four tiers are:

Awesome — category-leading machine presence. AI consistently finds, understands, and recommends this brand.

Strong — above-average AI readiness with identifiable gaps. Competitive but not dominant.

Average — AI knows the brand but doesn't reliably recommend it. Structural or semantic work needed.

Weak — AI either misrepresents the brand or passes it over. Significant intervention required.
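
Because tiers are cohort-relative, one simple illustration is to split the ranked cohort into quarters, as below. The quartile rule is purely illustrative; ARA's actual tier boundaries are set per study.

```python
def assign_tiers(scores: dict[str, float]) -> dict[str, str]:
    """Map each brand to a tier by its rank within the scored cohort (illustrative only)."""
    tiers = ["Awesome", "Strong", "Average", "Weak"]
    ranked = sorted(scores, key=scores.get, reverse=True)
    out = {}
    for position, brand in enumerate(ranked):
        quartile = min(3, position * 4 // len(ranked))
        out[brand] = tiers[quartile]
    return out

# e.g. assign_tiers({"BrandA": 81, "BrandB": 74, "BrandC": 62, "BrandD": 48})
#      -> {"BrandA": "Awesome", "BrandB": "Strong", "BrandC": "Average", "BrandD": "Weak"}
```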
Category scores are benchmarked within their cohort, not across the full index. A score of 72 in luxury fragrance means something different from 72 in fast food — the competitive dynamics, AI query patterns, and structural data norms differ by category.

Cross-category comparisons should be treated as directional rather than exact. Where we have run multiple waves in the same category, wave-on-wave comparisons are fully valid.
Each A3 query set is designed before testing begins and held constant across all platforms and brands in the study. Queries represent real purchase occasions derived from category-specific consumer research — not prompts engineered to produce particular outputs.

We run each query through a standardised session protocol (fresh context, no system prompt, default model settings) and record responses verbatim. Brand mentions are coded against a consistent rubric: present, absent, or not applicable based on query relevance. Platform totals and query totals are reported transparently in every deliverable.
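
In code, a stripped-down version of that protocol might look like the sketch below, shown for two of the four platforms via the OpenAI and Anthropic SDKs. The query set, brand list, model names, and verbatim-match rubric are placeholders; the production protocol records full responses and codes them against a richer rubric.

```python
from openai import OpenAI
from anthropic import Anthropic

openai_client = OpenAI()        # reads OPENAI_API_KEY
anthropic_client = Anthropic()  # reads ANTHROPIC_API_KEY

QUERIES = ["What are the best noise-cancelling headphones under $300?"]
BRANDS = ["Sony", "Bose", "JBL"]

def ask_chatgpt(query: str) -> str:
    # Fresh, stateless request: no system prompt, default model settings.
    resp = openai_client.chat.completions.create(
        model="gpt-4o", messages=[{"role": "user", "content": query}]
    )
    return resp.choices[0].message.content

def ask_claude(query: str) -> str:
    resp = anthropic_client.messages.create(
        model="claude-sonnet-4-20250514", max_tokens=1024,
        messages=[{"role": "user", "content": query}],
    )
    return resp.content[0].text

def code_presence(response: str) -> dict[str, str]:
    """Crude rubric: a brand counts as 'present' if it is named verbatim in the response."""
    return {b: ("present" if b.lower() in response.lower() else "absent") for b in BRANDS}

# Presence heatmap keyed by (platform, query); verbatim responses would be stored separately.
heatmap = {
    (platform, query): code_presence(ask(query))
    for platform, ask in (("ChatGPT", ask_chatgpt), ("Claude", ask_claude))
    for query in QUERIES
}
```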
Yes. AI models are updated continuously — some through fine-tuning, some through retrieval augmentation, and some through full retraining. This means that A3 results for a given brand can shift between waves without any action on the brand's part, simply because the model has ingested more (or different) training data.

This is precisely why re-auditing matters. A brand that earns strong A3 performance in Q2 cannot assume that result holds in Q4 without re-testing. Sustained machine presence requires ongoing investment in the inputs that inform AI training.
A4 is one of the more nuanced dimensions. It evaluates what brand-building activity has left behind in the data AI systems trained on — not just whether a brand is known, but whether it is known well.

We assess the sentiment distribution of brand-adjacent content across editorial, social, and review corpora. We also probe how AI models characterise the brand when asked open-ended questions: the vocabulary, associations, and affect they reproduce. High A4 scores reflect brands that have generated rich, positive, and specific cultural material — not just reach.
Authentic brand love leaves a measurable linguistic signature. Real advocates write differently about a brand than manufactured marketing does. AI can tell the difference.
Not directly. Paid media does not influence how AI language models represent a brand in conversational contexts. AI systems are not advertising platforms — they do not sell placements, and paid spend has no bearing on whether or how an AI recommends a brand.

That said, paid media can indirectly affect AI readiness over time — it drives traffic, review volume, and earned media coverage that does enter training data. We account for the structural and semantic effects of this activity, not the spend itself.
03 Why Now
Because the data that shapes AI outputs is being written right now — and it reflects what brands have done over the past several years, not what they plan to do next quarter.

AI systems don't discover brands in real time. They synthesise understanding from the cumulative record of what's been published, reviewed, discussed, and linked to. A brand that starts building machine-legible content today will see those signals enter AI training cycles over the coming months. A brand that waits will be catching up to competitors who didn't.

The second driver is the urgency of the underlying shift. AI is already the first point of discovery for a meaningful share of purchase decisions. The next wave — agentic AI that executes transactions autonomously — is moving from experiment to product. In that world, the brands machines don't know, trust, and consistently recommend simply won't be bought.
During Black Friday and Cyber Monday 2025, JBL recorded a 2,434% surge in referrals from large language models, driven by content-level optimisation alone (Marketing Brew, Dec 2025).

ARA doesn't start with content. It starts with brand DNA, core architecture, and positioning — the foundations that determine what any content optimisation can achieve. The window to build that kind of machine presence is open now. It will close incrementally, then suddenly.
It depends on which dimension you're improving. Structural changes (A1) move fastest — corrected schema markup, new retail distribution, and expanded review corpus can show up in AI outputs within weeks of implementation.

Semantic and emotional dimensions (A2, A4) move on a longer cycle. They reflect what AI systems have absorbed across their training data, which updates over months. A sustained content programme will show meaningful improvement over two to three quarters — not two to three weeks.

We map each recommendation to a timeframe in the audit delivery, and we recommend re-auditing every six months for actively managed brands. The score you get today is a baseline, not a ceiling.
Re-audit cadence follows the same pattern: structural (A1) changes can register in AI outputs within weeks of implementation, while semantic and emotional dimensions (A2, A4) shift on months-long training cycles. We recommend re-auditing every six months for actively managed brands, and annually for tracking purposes.
04 The Audit
ARA operates on two levels.

The public index — available at araco.ai — covers broad competitive sets across published categories. It shows each brand's overall ARA score, tier, and per-dimension breakdown. This is independent research published by ARA as market intelligence. If your brand is in a published category, you can already see where you stand.

A commissioned study goes significantly deeper. It is private by default and includes everything in the public index plus:

· Extended A3 testing — more purchase occasions, more query variants, platform-specific analysis
· Full per-brand findings narrative: Strengths, Vulnerabilities, Opportunities, and Watches
· A prioritised recommendations roadmap with implementation timeframes
· Category-level intelligence on AI positioning dynamics
· Wave-on-wave tracking if re-auditing

The public score is real. The commissioned study is everything you need to act on it.
A standard category study takes three to four weeks from scoping to delivery. This includes query design and platform testing (A3), structural and semantic analysis across all brands, findings synthesis, and production of the full report and recommendations deck.

Rush timelines (two weeks) are available for an additional fee. Ongoing monitoring programmes operate on a continuous basis with quarterly snapshots.
Our standard study covers a competitive set of 6–10 brands. The benchmarked context is a core part of the value — knowing your score in isolation tells you less than knowing where you stand against the brands AI is recommending instead of you.

Single-brand audits are available on request. They are best suited to brands seeking a baseline before entering a full category study, or to businesses operating in categories where we have already completed competitive research.
Very little. We need:

· The category you want assessed
· The brands you want in the cohort (or we can recommend a standard competitive set)
· Your commissioning contact and any internal briefing context

We do not require access to your data, internal systems, or agency relationships. Our methodology is entirely external — we assess how AI systems represent your brand based on publicly available information and live query behaviour.
ARA operates on two tiers, and the privacy model is different for each.

The public index is independent research — ARA-initiated category studies published as market intelligence. Any brand in a published category has a public score. This is intentional: the index exists to demonstrate what algorithm readiness looks like in practice, and to give brands a baseline before they decide to go deeper.

Commissioned studies are private by default. The extended findings, recommendations roadmap, and full per-brand narrative are delivered exclusively to the commissioning client and not shared with other brands in the study, the public, or third parties.

The short version: your public score is visible to anyone. Everything you commission beyond it is yours alone.
Three things underpin ARA's credibility:

Live testing, not modelling. The A3 Synthetic Customer Test is conducted live across ChatGPT, Perplexity, Gemini, and Claude using a standardised protocol. We record which brands AI systems actually recommend, on which platforms, for which purchase occasions. This is not estimated or inferred — it is observed.

Transparent methodology. Every dimension is documented, every scoring rubric is explicit, and every A3 query is disclosed in the deliverable. You can see exactly how each number was arrived at. Read the methodology.

Benchmarked context. Your score only means something relative to what's possible and what your competitors are achieving. All ARA studies include full competitive set scoring — you see where you stand, not just what you scored.
05 Working with ARA
If your category is already in the public index, you can start there — your brand's score, tier, and dimension breakdown are already live at araco.ai. That's the baseline.

When you're ready to go deeper, submit an audit request — tell us your brand, category, and any context about your timeline or brief. We'll be in touch within two business days to scope the commissioned study, confirm what's already been covered publicly, and outline what the full private engagement adds.

We don't need access to your internal systems or data. Our methodology is entirely external — we assess how AI systems represent your brand based on publicly available information and live query behaviour.
The public index is free. If your brand is in a published category, your score, tier, and per-dimension breakdown are available at araco.ai at no cost.

Commissioned studies — which include extended A3 testing, the full findings narrative, a prioritised recommendations roadmap, and private delivery — are priced per engagement. Pricing is scoped based on category complexity, the size of the competitive set, and whether the study includes implementation support or ongoing wave tracking.

A standard commissioned study is priced to be comparable to a mid-range research project — meaningful, but well within a marketing or strategy budget. Get in touch and we'll send you a scoping proposal with full pricing within a week.
Studies are typically commissioned by CMOs, brand directors, and their agency partners at challenger and market-leading brands. We work with brands that take the AI readiness question seriously — usually those who have already seen AI-driven discovery erode their share of recommendation, or who want to understand their exposure before it does.
Yes, under a formal partnership agreement. Agencies can integrate ARA methodology into their client offering as a licensed capability. We work with a small number of agency partners to ensure quality is maintained. Get in touch to discuss partnership terms.
ARA's core product is the audit and the roadmap — we identify what to do, prioritise by impact, and explain the mechanics of why each intervention works.

Implementation support is available as a separate engagement for clients who want hands-on assistance executing specific recommendations — particularly around structured data, content architecture, and semantic positioning. Ask about this when you request your audit.
Still have questions?
Request an audit and we'll walk you through exactly what ARA covers, how the study is scoped, and what you'll get at the end of it.
Request an audit →