The challenge with measuring AI visibility
AI responses are not deterministic. Ask ChatGPT the same question twice and you may get different answers. Ask it with slightly different phrasing and the results can vary significantly. This makes AI visibility inherently harder to measure than a Google ranking, which is more stable for a given query, device, and location.
Our approach addresses this by focusing on two things: the structural factors that reliably influence whether a business is cited, and a consistent query methodology that tests representative patterns rather than single data points.
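The repeated-sampling idea behind that query methodology can be sketched as follows. This is a minimal illustration, not the audit's actual tooling: `ask_ai` is a hypothetical stand-in for a real platform call (a production version would query ChatGPT, Perplexity, etc. and parse the cited businesses), and the query variants are invented examples.

```python
import random

def ask_ai(query: str) -> list[str]:
    # Hypothetical stub simulating a non-deterministic AI response.
    # A real implementation would call a platform API and extract citations.
    pool = ["Acme Plumbing", "PipeWorks", "DrainPro"]
    random.shuffle(pool)
    return pool[: random.randint(1, 3)]

def citation_rate(business: str, query_variants: list[str], runs: int = 5) -> float:
    """Fraction of (variant, run) samples in which the business is cited.

    Testing several phrasings, several times each, yields a rate rather
    than a single yes/no data point.
    """
    hits = total = 0
    for query in query_variants:
        for _ in range(runs):
            total += 1
            if business in ask_ai(query):
                hits += 1
    return hits / total

# Invented example variants for one representative query pattern.
variants = [
    "best plumber in Leeds",
    "recommend a plumber near Leeds",
    "who should I call for a burst pipe in Leeds?",
]
rate = citation_rate("Acme Plumbing", variants, runs=10)
print(f"cited in {rate:.0%} of samples")
```

The point of the sketch is the sampling structure: because any single response can differ from the next, visibility is estimated as a frequency across phrasing variants and repeated runs, not read off one answer.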
Platforms we audit
The seven dimensions we score
1. Does AI correctly identify what this business is, what it does, and that it exists as a real, verifiable entity?
2. Does AI correctly state where the business operates and its service area?
3. Can AI accurately describe the services the business offers when asked?
4. Does AI recognise and cite the business's professional credentials and accreditations?
5. How often does the business appear in recommendation queries for its sector and area?
6. Does the business have content that directly answers the questions customers ask AI about it?
7. Are reviews, credentials, and entity data consistent and complete across the web?
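One plausible way to combine seven dimension scores into a single readiness score is a weighted average. The sketch below is illustrative only: the dimension keys, weights, and 0-100 scale are assumptions, not the audit's actual scoring model.

```python
# Assumed dimension names and weights -- illustrative, not the real model.
DIMENSIONS = {
    "entity_recognition": 0.20,
    "location_accuracy": 0.10,
    "service_description": 0.15,
    "credentials": 0.10,
    "recommendation_presence": 0.20,
    "question_coverage": 0.15,
    "cross_web_consistency": 0.10,
}

def overall_score(scores: dict[str, float]) -> float:
    """Weighted average of the seven dimension scores (each 0-100)."""
    missing = set(DIMENSIONS) - set(scores)
    if missing:
        raise ValueError(f"missing dimension scores: {sorted(missing)}")
    return sum(DIMENSIONS[d] * scores[d] for d in DIMENSIONS)

# Invented example: a business strong on entity data, weak on recommendations.
example = {
    "entity_recognition": 80, "location_accuracy": 90,
    "service_description": 70, "credentials": 60,
    "recommendation_presence": 40, "question_coverage": 55,
    "cross_web_consistency": 75,
}
print(overall_score(example))
```

A weighted average keeps the per-dimension scores visible, so a low overall number can be traced back to the specific gap (here, the recommendation-presence dimension) rather than hiding it.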
What we do not measure
We do not claim to predict exactly which queries will surface a business on every AI platform; that is not currently possible with any reliability. AI models are updated frequently, and citation patterns shift with model versions, training data, and query phrasing.
What we do measure is the structural readiness of a business's online presence for AI visibility, and where the specific gaps are. These structural factors are the most actionable things a business can change, and they have a reliable, measurable impact on AI citation likelihood over time.