aiWOW measures how visible your brand, pubs, and beers are to AI. It asks the same questions your customers ask — "where should I eat near Tower Bridge?", "what's a good Kentish ale?" — and measures whether AI recommends you, gets your facts right, and attributes your pubs and beers to your brand. It does this across 4 major AI platforms that together reach over 90% of AI users worldwide.
The Pipeline: Poll → Parse → Score
1. POLL — "We ask the questions"
aiWOW sends real customer-style questions to 4 AI platforms: Google Gemini, ChatGPT, Perplexity, and Claude. Each pub gets questions about its local area. Each beer gets questions about style and recommendations. Each brand page gets questions about the company. The raw responses are stored word for word.
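The poll step can be pictured as a simple fan-out: one question template per entity type, expanded into one prompt per platform. This is an illustrative sketch only; the platform keys, entity schema, and template wording below are assumptions, not aiWOW's real identifiers.

```python
# Hypothetical sketch of the POLL step: one customer-style question per
# entity, fanned out to each of the 4 platforms named above.
PLATFORMS = ["gemini", "chatgpt", "perplexity", "claude"]

QUESTION_TEMPLATES = {
    "pub":   "where should I eat near {area}?",
    "beer":  "what's a good {style} ale?",
    "brand": "what do you know about {name}?",
}

def build_poll(entities):
    """Expand each entity into one prompt per platform."""
    jobs = []
    for e in entities:
        prompt = QUESTION_TEMPLATES[e["type"]].format(**e["fields"])
        for platform in PLATFORMS:
            jobs.append({"platform": platform, "entity": e["id"], "prompt": prompt})
    return jobs

jobs = build_poll([
    {"id": "pub-1", "type": "pub", "fields": {"area": "Tower Bridge"}},
    {"id": "beer-7", "type": "beer", "fields": {"style": "Kentish"}},
])
# 2 entities x 4 platforms = 8 jobs, each stored with its raw response later
```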
2. PARSE — "We read the answers"
Each AI response is analysed by a parser model that extracts structured data: Was your pub mentioned? In what position? Was the brand named? Were the facts accurate? What competitors appeared? What was the sentiment? Three parser options are available: Haiku — Solid Performer (fast, cheap), Sonnet — Super Brain (most accurate), and Gemini Flash — Solidly Average (cheapest).
3. SCORE — "We measure the result"
The extracted data is scored across 5-6 dimensions to produce a Pint & Pie Index score (0-100) for every entity. Scores are compared across your estate to show who's winning and who needs attention.
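One plausible way to combine dimension scores into a 0-100 index is a weighted average. The real Pint & Pie Index formula lives in the Methodology tab; the dimension names and weights below are assumptions for illustration only.

```python
# Sketch of the SCORE step: weighted average of 0-1 dimension scores,
# scaled to 0-100. Weights are hypothetical, not aiWOW's actual values.
DIMENSION_WEIGHTS = {
    "mention_rate": 0.30,
    "recommendation_rate": 0.25,
    "brand_attribution": 0.20,
    "fact_accuracy": 0.15,
    "sentiment": 0.10,
}

def pint_and_pie_index(dimensions: dict) -> float:
    """Combine per-dimension scores (each 0-1) into a 0-100 index."""
    score = sum(DIMENSION_WEIGHTS[d] * dimensions[d] for d in DIMENSION_WEIGHTS)
    return round(score * 100, 1)

pint_and_pie_index({"mention_rate": 0.6, "recommendation_rate": 0.5,
                    "brand_attribution": 0.7, "fact_accuracy": 0.9,
                    "sentiment": 0.8})
```

Because scoring is pure arithmetic over already-parsed data, recalculating it is free: no API calls, as the Score button notes below.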
All Tabs Explained
Insights — Executive summary with brand recognition rate, recommendation rate, sentiment breakdown, platform bar chart, and key findings. Start here.
Scores — The league table. Every pub, beer, and brand page ranked by their Pint & Pie Index score. Click a pub to expand and see Who's Beating Us? — competitor intelligence showing what AI recommends instead of you and what you can learn from it. Use the Standards Dial to adjust scoring strictness and the Compare Models button to see platform divergence. Filter by model for per-platform scoring.
Raw Responses — The actual AI responses, word for word. Click any row to read the full response with token counts and costs.
Parsed Results — The extracted data from each response. Shows whether you were mentioned, recommended, and brand-attributed. Filter by sentiment.
Poll Runs — History of every poll run with per-run and cumulative costs, duration, and status.
Fix It — AI-powered recommendations. Quick wins, strategic insights, and per-entity action items ranked by priority.
Prompts — The questions being asked. View per-entity prompts with mention-rate stats. Add, delete, or use Suggest Prompts to get AI-generated prompt ideas.
Methodology — The full Pint & Pie Index methodology, all scoring dimensions, formulas, and framework documentation.
Controls & Buttons
Run Poll — Sends prompts to AI platforms. Costs money. Pin protected. Choose All, Pubs Only, Beers Only, Brand Only, or a single entity.
Parse — Analyses responses using your selected parser model. Costs money. Pin protected.
Score — Recalculates all index scores from parsed data. Free. No API calls.
Fix It — Generates AI recommendations via Claude Sonnet. Costs money. Pin protected.
Refresh — Reloads all data from the database. Free.
Admin — Opens the settings panel (pin protected). Manage scoring standards, platform weights, PINs, and default parser model.
? Help — Opens this guide.
Standards Dial
The Standards Dial adjusts how strictly the scoring engine interprets parsed data. Same data, different lens. Five positions range from Very Lenient (generous — a passing mention counts as a recommendation) to Very Strict (demanding — only first-position mentions, explicit positive sentiment, full brand attribution). The default is Normal. Scores update instantly with no API calls; persist your preferred setting via the Admin panel.
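One way to picture the dial: each position is a stricter predicate applied to the same parsed record. The two extreme rules below paraphrase the descriptions above; the Normal rule is an assumption, and the two intermediate positions are omitted.

```python
# Sketch of the Standards Dial: same parsed data, different lens.
def counts_as_recommendation(parsed: dict, standard: str) -> bool:
    if standard == "very_lenient":
        return parsed["mentioned"]                       # any passing mention
    if standard == "normal":                             # assumed middle rule
        return parsed["mentioned"] and parsed["sentiment"] != "negative"
    if standard == "very_strict":
        return (parsed["position"] == 1                  # first-position only
                and parsed["sentiment"] == "positive"    # explicit positive
                and parsed["brand_named"])               # full brand attribution
    raise ValueError(f"unknown standard: {standard}")

record = {"mentioned": True, "position": 3,
          "sentiment": "positive", "brand_named": False}
counts_as_recommendation(record, "very_lenient")   # True
counts_as_recommendation(record, "very_strict")    # False
```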
Platform Weighting
Equal Weights — All 4 AI platforms count the same (25% each).
Platform Weighted — Weights by real-world reach: Gemini 45% (2B+ users via Google Search & Siri), ChatGPT 30% (800M users), Perplexity 15% (780M queries/mo), Claude 10% (20M users). Configurable in Admin.
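The two modes reduce to the same calculation with different weight tables. A sketch using the weights listed above; the per-platform scores are made up for illustration.

```python
# Weighted vs equal platform scoring. Weights are from the text above;
# the sample scores are invented.
PLATFORM_WEIGHTED = {"gemini": 0.45, "chatgpt": 0.30,
                     "perplexity": 0.15, "claude": 0.10}
EQUAL = {p: 0.25 for p in PLATFORM_WEIGHTED}

def weighted_score(per_platform: dict, weights: dict) -> float:
    # Weights must sum to 1.0, as the Admin panel enforces.
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1.0"
    return round(sum(weights[p] * per_platform[p] for p in weights), 1)

scores = {"gemini": 70, "chatgpt": 50, "perplexity": 40, "claude": 60}
weighted_score(scores, EQUAL)              # simple mean: 55.0
weighted_score(scores, PLATFORM_WEIGHTED)  # reach-weighted, pulls toward Gemini
```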
Who's Beating Us?
Available for pubs only. Click any pub row in the Scores tab to expand and see which competitors AI recommends instead of you. This isn't competitor analysis — it's a learning tool. It shows what attributes competitors are described with that you're not, and what you can do to close the gap. The Learn From This button generates actionable recommendations about your own content, schema, and Google Business Profile.
What "Good" Looks Like
In a typical local pub search, AI recommends 5-8 venues. Appearing in 38% of conversations means you consistently earn a seat at the table. A score above 60 is strong. Above 40 is competitive. Below 20 needs attention. The goal isn't 100% — it's consistently being in the conversation.
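The bands above can be expressed as a simple lookup. The thresholds (60 / 40 / 20) come from the text; the band labels, and the label for the unstated 20-40 range, are assumptions.

```python
# Score bands from "What Good Looks Like". The "watch" label for the
# 20-40 range is an assumption; the text leaves that band unnamed.
def score_band(index: float) -> str:
    if index > 60:
        return "strong"
    if index > 40:
        return "competitive"
    if index >= 20:
        return "watch"
    return "needs attention"

[score_band(s) for s in (72, 45, 30, 12)]
```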
Admin Settings
Scoring Standards
Adjusts how strictly the scoring engine interprets parsed data. Same data, different lens.
Slider from Low Standards to High Standards. Default: Normal.
Platform Weights
Adjusts how much each AI platform counts in weighted scores. Weights must sum to 1.00; the panel shows the running sum as you edit.
PIN Management
Default Parser Model
Coming Soon
User Management
Methodology Editor
Scheduling
API Key Management