How we score every winery, every month
The Central Coast IQ methodology in full. Eight scoring dimensions, three search channels, one continuous practice. None of this is theory. It is the work that runs across every tracked Central Coast winery, every month.
The eight dimensions we score every winery on
Every winery in our coverage is scored monthly on the same eight dimensions. Five are universal, applying to any local operator-led business. Three are wine-specific. The same framework powers every Discover report, every Diagnose audit, and every Monitor scorecard.
Five Universal Dimensions
Online Visibility
Discoverability across Google, Yelp, AI assistants (ChatGPT, Claude, Perplexity, Gemini), and category directories.
Scored 0-20 points
- Google Business Profile presence and completeness (categories, hours, photos, attributes)
- Google Maps ranking on AVA-specific and winery-name queries
- Yelp listing claimed and complete
- Wine-Searcher merchant listing (single, consolidated)
- Vivino producer listing
- TripAdvisor listing presence
- AI assistant mentions across the 48-query monthly panel (12 prompts × 4 assistants: ChatGPT, Claude, Perplexity, Gemini)
- Wine-industry directory presence (regional wine routes, varietal directories)
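To make the scoring concrete, here is a minimal sketch of how a 0-20 Online Visibility score could be assembled from a checklist like the one above. The item names and weights below are illustrative, not our production weighting.

```python
# Hypothetical weighting of the Online Visibility checklist (sums to 20).
# These weights are illustrative; the production rubric may differ.
CHECKLIST_WEIGHTS = {
    "gbp_complete": 4,          # Google Business Profile presence and completeness
    "maps_ranking": 3,          # Google Maps rank on AVA and brand-name queries
    "yelp_claimed": 2,
    "wine_searcher_single": 2,  # one consolidated Wine-Searcher merchant listing
    "vivino_listed": 1,
    "tripadvisor_listed": 1,
    "directory_presence": 2,    # wine routes, varietal directories
    "ai_mentions": 5,           # scaled by the 48-query panel mention rate
}

def visibility_score(checks: dict, ai_mention_rate: float) -> float:
    """Sum weighted pass/fail checks; scale the AI item by panel hit rate."""
    score = 0.0
    for item, weight in CHECKLIST_WEIGHTS.items():
        if item == "ai_mentions":
            score += weight * ai_mention_rate  # e.g. 24 of 48 queries -> 0.5
        elif checks.get(item, False):
            score += weight
    return round(score, 1)

# Listed everywhere except TripAdvisor, mentioned in 24 of 48 panel queries:
checks = {k: True for k in CHECKLIST_WEIGHTS if k != "ai_mentions"}
checks["tripadvisor_listed"] = False
print(visibility_score(checks, 24 / 48))  # prints 16.5
```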
Review Authority
Volume, recency, ratings, sentiment across primary platforms.
Scored 0-15 points
- Total review volume across Google, Yelp, TripAdvisor
- Star rating consistency across platforms (divergence is a signal)
- Review velocity in the trailing 90 days vs. historical baseline
- Owner-response rate per platform
- Owner-response speed and voice quality
- Review-source diversification (over-concentration on one platform is fragile)
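The review-velocity check above reduces to a simple rate comparison. A sketch: the trailing 90-day window comes from the bullet; the one-year baseline window is an assumption for illustration.

```python
from datetime import date, timedelta

def review_velocity(review_dates: list, today: date,
                    baseline_days: int = 365) -> tuple:
    """Reviews per 30 days: trailing 90 days vs. the prior baseline window."""
    recent_cutoff = today - timedelta(days=90)
    baseline_cutoff = today - timedelta(days=90 + baseline_days)
    recent = sum(1 for d in review_dates if recent_cutoff <= d <= today)
    baseline = sum(1 for d in review_dates
                   if baseline_cutoff <= d < recent_cutoff)
    recent_rate = recent * 30 / 90
    baseline_rate = baseline * 30 / baseline_days
    return recent_rate, baseline_rate
```

A recent rate well below the baseline rate flags a stalled review stream before the star rating itself moves.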
Brand Consistency
Cross-platform brand integrity: name, hours, links, contact info, schema.
Scored 0-10 points
- NAP (Name, Address, Phone) consistency across Google, Yelp, Wine-Searcher, Vivino, your own site
- Social handle consistency (Instagram, Facebook, TikTok use the same handle)
- Hours match across Google, Yelp, your site
- Schema markup name matches the brand name displayed
- Duplicate or fragmented listings flagged (e.g., multiple Wine-Searcher entries that need merging)
- Brand transition handling for rebrands (old name and URLs redirect to the current brand)
- Footer / homepage social links resolve to live, owned accounts
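A NAP consistency check is mostly normalization before comparison. A minimal sketch, with US-only phone handling and a short suffix list as simplifying assumptions:

```python
import re

def normalize_phone(raw: str) -> str:
    """Reduce a US phone number to its 10 digits for comparison."""
    digits = re.sub(r"\D", "", raw)
    return digits[-10:]  # drop a leading country code if present

def normalize_name(raw: str) -> str:
    """Lowercase, strip punctuation and common suffixes before comparing."""
    name = re.sub(r"[^a-z0-9 ]", "", raw.lower())
    name = re.sub(r"\b(llc|inc)\b", "", name)
    return " ".join(name.split())

def nap_consistent(listings: list) -> bool:
    """True when every platform listing agrees on normalized name and phone."""
    names = {normalize_name(l["name"]) for l in listings}
    phones = {normalize_phone(l["phone"]) for l in listings}
    return len(names) == 1 and len(phones) == 1
```

Running this over the Google, Yelp, Wine-Searcher, Vivino, and site listings surfaces the divergent entry, not just the fact of divergence.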
Web Presence
Site quality, technical health, schema, content depth, mobile parity.
Scored 0-10 points
- Core Web Vitals (LCP, INP, CLS)
- Mobile parity with desktop content
- Valid HTTPS certificate with no imminent expiration
- Schema markup depth (LocalBusiness, Winery, Product, FAQPage, Event)
- llms.txt and AI-discoverability files present
- robots.txt configuration (allowing GPTBot, ClaudeBot, PerplexityBot, Google-Extended)
- Key pages present (visit, club, shipping, food, events, story)
- sitemap.xml present, valid, submitted to Google Search Console
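A minimal robots.txt that admits the AI crawlers named above alongside regular search bots could look like this. The domain is a placeholder, and whether to allow training-oriented bots like Google-Extended is a policy choice each winery makes:

```
# Admit AI crawlers alongside regular search bots
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: *
Allow: /

Sitemap: https://example-winery.com/sitemap.xml
```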
Customer Experience
Review response rate, complaint resolution patterns, engagement signals.
Scored 0-10 points
- Owner-response rate to reviews (especially on Google)
- Response speed (7-day SLA on new reviews is the bar)
- Tone and quality of responses (templated vs. personal)
- Complaint resolution patterns (do negative reviews get addressed)
- Hospitality signals in review text (warmth, recall of guests, named staff praise)
- Sentiment themes across the review corpus (fed by the Sentiment Synthesizer module)
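The 7-day response SLA can be measured directly. This sketch treats an unanswered review as a miss, which is our reading of the bar above; the field names are illustrative:

```python
from datetime import date

def sla_compliance(reviews: list, sla_days: int = 7) -> float:
    """Share of reviews answered within the SLA window (unanswered = miss)."""
    if not reviews:
        return 1.0
    hits = 0
    for r in reviews:
        responded = r.get("responded_on")
        if responded is not None and (responded - r["posted_on"]).days <= sla_days:
            hits += 1
    return hits / len(reviews)
```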
Three Wine-Specific Dimensions
Social Signals
Instagram, TikTok, Facebook cadence and engagement.
Scored 0-15 points
- Instagram presence, follower base, posting cadence, engagement rate
- Facebook presence, posting cadence, link integrity from your site
- TikTok presence and engagement (where applicable)
- YouTube presence and content cadence (where applicable)
- Cross-platform handle consistency
- Comment activity and response rate
- Content mix variety (winemaking, tasting room, food, vineyard, customer moments)
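Engagement rate, as scored above, is typically interactions per post relative to follower count. This sketch uses that common definition; counting likes plus comments is an assumption, since each platform exposes different interaction types:

```python
def engagement_rate(posts: list, followers: int) -> float:
    """Average (likes + comments) per post, as a share of followers."""
    if not posts or followers <= 0:
        return 0.0
    total = sum(p["likes"] + p["comments"] for p in posts)
    return total / len(posts) / followers
```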
Conversion Flow
DTC commerce friction (Commerce7, Shopify, Orderport), club signup conversion.
Scored 0-10 points
- Commerce platform health (Commerce7, Shopify, Orderport, WineDirect)
- Product page quality (unique titles, schema, photography, tasting notes)
- Cart-to-checkout friction (steps, required fields, guest checkout)
- Shipping page presence and clarity (states, costs, thresholds)
- Wine club signup form usability and conversion mechanics
- Mobile checkout completion path
- Trust signals near purchase CTAs
- Reservation system (Tock, SevenRooms, Commerce7 native, OpenTable) and booking lead time
Offering Depth
SKU count, wine club tiers, varietal breadth.
Scored 0-10 points
- Wine portfolio breadth (varietals, vintages currently available)
- Library access (older vintages still purchasable)
- Wine club tier count and tier differentiation
- Tasting flight menu structure and variety
- Food program (snacks, full pairing menu, on-site kitchen, food trucks)
- Events calendar (release parties, member events, public tastings, dinners)
- Lodging or destination amenities (where applicable)
- Allocation or scarcity-driven offerings
SEO, GEO, and AEO: what they mean for your winery
The way people find wineries is changing. There are now three distinct channels your winery needs to be visible in, and most wineries are only thinking about one of them.
SEO, Search Engine Optimization
This is the one most people know. SEO is about showing up when someone types a query into Google: "best wineries in Santa Ynez Valley" or "Sta. Rita Hills tasting rooms open Sunday." It's driven by keywords, backlinks, page speed, and Google's ranking algorithm. SEO still matters. Google still sends traffic. But it's no longer the only game.
GEO, Generative Engine Optimization
GEO is about showing up in AI-generated search results: the summaries that Google, Bing, and other search engines now produce at the top of the page before traditional blue links. When someone searches "romantic wine tasting experience Central Coast" and Google shows an AI-generated overview recommending specific wineries, that's GEO. The wineries that appear in those summaries get the click; the ones below the fold don't. GEO requires structured content, clear factual claims, and well-organized information that AI models can confidently quote.
AEO, Answer Engine Optimization
AEO is the newest and fastest-growing channel. It's about showing up when someone asks ChatGPT, Claude, Perplexity, or Gemini a direct question: "What wineries in Paso Robles have the best Rhone blends?" or "Where should I do a wine tasting in Santa Barbara this weekend?" These aren't search engines. They're answer engines. They don't show a list of links; they recommend specific wineries by name. If your winery isn't in their training data, isn't cited by trusted sources, and doesn't have structured, AI-readable content, you won't get recommended. Period.
Why does this matter right now?
AI-referred visitors convert at three times the rate of traditional search visitors. They arrive with higher intent because they've already been told your winery is worth visiting, by a system they trust. The wineries that get visible in AI answers now, while competitors are still focused exclusively on Google, will own the recommendation layer for years. SEO took a decade to become competitive. AEO is in its first year. The window is open.
What Central Coast IQ does about it
We track all three channels for every Central Coast winery in our coverage, every month. Discover reports surface regional patterns. Diagnose audits surface site-specific gaps. Monitor scorecards alert when your visibility shifts. The data is the deliverable; the recommendations follow from it. For a deeper look at how the three layers interact, read the full SEO vs GEO vs AEO breakdown on the MSIQ blog.
Get found in the new world of AI search
Fewer people are typing queries into Google. More are asking ChatGPT, Claude, Perplexity, and Gemini for winery recommendations, tasting room suggestions, and wine club picks. Most wineries are completely invisible to these systems. We track and fix that.
Structured Data and AI-Readable Schema
AI models rely on structured data to understand what your winery is, where it is, what it offers, and why it's worth recommending. We surface the schema.org gaps and the location, product, and review signals that make your winery legible to AI.
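As an illustration, a minimal JSON-LD block using schema.org's Winery type carries the identity and location signals described here. Every value below is a placeholder:

```json
{
  "@context": "https://schema.org",
  "@type": "Winery",
  "name": "Example Cellars",
  "url": "https://example-winery.com",
  "telephone": "+1-805-555-0142",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Vineyard Lane",
    "addressLocality": "Los Olivos",
    "addressRegion": "CA",
    "postalCode": "93441",
    "addressCountry": "US"
  },
  "openingHoursSpecification": [{
    "@type": "OpeningHoursSpecification",
    "dayOfWeek": ["Thursday", "Friday", "Saturday", "Sunday", "Monday"],
    "opens": "11:00",
    "closes": "17:00"
  }],
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.8",
    "reviewCount": "212"
  }
}
```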
Content That AI Can Cite
AI search engines prefer sources with clear, factual, well-organized content. Our reports name where your winery descriptions, tasting notes, varietal profiles, and regional context fall short, and what to do about it.
Citation and Authority Signals
AI recommendations are heavily weighted toward wineries that appear in trusted publications, review sites, and wine industry sources. We track which citations move you from invisible to recommended.
llms.txt and AI Discoverability Files
The emerging standards for telling AI crawlers what your site is about, which content they can use, and how to represent your brand. Most wineries don't have these yet. We flag whether you do, and what should be in them.
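The proposed llms.txt format is a plain markdown file served at the site root: an H1 title, a short blockquote summary, then link sections. A hypothetical example, with all names and URLs as placeholders:

```markdown
# Example Cellars

> Family-owned winery in the Sta. Rita Hills AVA. Tasting room open
> Thursday through Monday; estate Pinot Noir and Chardonnay.

## Visit

- [Tasting room](https://example-winery.com/visit): hours, reservations, fees
- [Food menu](https://example-winery.com/food): on-site pairings

## Wines

- [Current releases](https://example-winery.com/wines): tasting notes and pricing
- [Wine club](https://example-winery.com/club): tiers and benefits
```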
Ongoing AI Search Monitoring
We test what ChatGPT, Claude, Perplexity, and Gemini say when someone asks about Central Coast wineries, and track your winery's presence in those answers over time. The same way you'd track Google rankings, but for AI.
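Mention tracking reduces to a text check over each panel answer. A minimal sketch, assuming simple word-boundary matching on the brand name and any known aliases:

```python
import re

def brand_mentioned(answer: str, brand: str, aliases: tuple = ()) -> bool:
    """Check an assistant's answer for the winery name or a known alias."""
    for name in (brand, *aliases):
        if re.search(rf"\b{re.escape(name)}\b", answer, flags=re.IGNORECASE):
            return True
    return False

def panel_mention_rate(answers: list, brand: str) -> float:
    """Share of panel answers that mention the brand at least once."""
    if not answers:
        return 0.0
    hits = sum(brand_mentioned(a, brand) for a in answers)
    return hits / len(answers)
```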
Why This Matters Now
AI search is still young. The wineries that get visible now, while competitors are still figuring it out, will be the ones AI recommends for the next five years. This is the equivalent of SEO in 2005. Get it right now or play catch-up later.
How we make sure the numbers hold up
AI search is stochastic. The same question, two minutes apart, can produce a different answer. Here is how we keep a single noisy run from defining the visibility number, and how we frame what the number actually represents.
Why your test on ChatGPT might disagree with our report
Our visibility test runs against each platform's API. The consumer chat surfaces (claude.ai, chatgpt.com) include web search, conversation history, and a system prompt that the API does not. A logged-in user asking the same question may see different specific recommendations. Our score reflects baseline model knowledge of your brand, which is what informs every higher-context surface, including the chat product you use yourself.
Variance is real, and we say so
Each prompt runs once against each assistant. AI answers are stochastic; the same question two minutes apart can produce different answers. We do not run repeated samples and average them. We run the 48-query panel once a month at a pinned model snapshot and report the snapshot. The number is a leading indicator, not a precision measurement; the reliability comes from cadence and timestamped repeatability, not from oversampling.
Pinned model snapshots and timestamped runs
Every report names the exact model version used, in UTC, on the exact date the audit ran. Providers occasionally roll new snapshots under the same name. The timestamp pins what was current at audit time, so a re-test next month can be compared apples to apples instead of confused by a silent model upgrade.
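A run record with this pinning can be as simple as the structure below. The field and snapshot names are illustrative, not our internal schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class PanelRun:
    """One monthly panel run, pinned to exact snapshots and a UTC timestamp."""
    run_at: datetime         # UTC timestamp of the run
    model_snapshots: dict    # assistant -> exact model snapshot name
    mention_rate: float      # share of the 48 queries mentioning the brand

# Hypothetical run record for one monthly audit:
run = PanelRun(
    run_at=datetime(2025, 6, 1, 17, 0, tzinfo=timezone.utc),
    model_snapshots={"openai": "gpt-4o-2024-08-06"},  # illustrative snapshot
    mention_rate=12 / 48,
)
```

Because the record is frozen and timestamped, next month's run compares against exactly this snapshot, not against "whatever the model was."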
Why we re-run, not expand
The variance in AI visibility lives in how a model answers a single question, not in how many questions you ask. A focused twelve-prompt panel run monthly is more reliable than a thirty-prompt panel run once. The bigger lever is cadence. We re-run the same panel monthly and report the trend, so a winery's improvement after schema or content fixes is the story, not a single point estimate.
What we do not claim to measure
The score is a measure of mention probability across a defined query set. It is not a measure of the dollar value of those mentions, the conversion rate of AI-driven visits, or the personalized recommendations a logged-in user with browsing history will see. It is a leading indicator of brand presence in generative search, useful for tracking change over time and for identifying where structured-data and content investments are working.
The query panel is published, not hidden
The 12 prompts are project-specific and chosen from real Google autocomplete demand. They are not a secret. Every report includes the verbatim queries, the platforms tested, the model snapshots, and the timestamp. A winery can take any prompt and re-run it themselves to verify what we saw. This transparency is the contract.