AI VISIBILITY / REVIEW · May 15, 2026

Visby.ai, AI Visibility, and the Fake Smart Marketing™ Problem

I wanted answers. I got circles, scores, dashboards, and a strange feeling that I understood less than when I started.

By Ivan Jimenez · 14 min read · ~3,200 words · Category-defining analysis

The Real Story: How I Found Visby.ai On AppSumo

I found Visby.ai the way I find most things that disrupt my workflow: through AppSumo. Lifetime deal. Reasonable price. A promise that felt like exactly what I needed: see where your brand appears in AI-generated answers.

I connected one of my client sites. A local service business with decent SEO authority, good reviews, active GBP. The kind of site that should theoretically be doing something in AI search. I hit the analyze button expecting the tool to tell me four things:

  • Exactly what pages to build or optimize
  • Where my competitors have AI visibility gaps I can exploit
  • What ranking factors AI systems are using that I'm not accounting for
  • Where ChatGPT or Perplexity is citing competitors but not me — and why

Instead, I somehow felt less informed.

That feeling stuck with me. Not frustration at Visby specifically. Something larger. The same feeling I get whenever a new category of software arrives promising to measure something that nobody has measured before — and instead of producing clarity, it produces confidence. Those are not the same thing. In fact, they are often opposites.

"Software promising to measure AI visibility reminds me of when analytics promised to measure everything, and somehow we understood our customers less than before we had the data."

— Ivan Jimenez, Doral SEO

The Most Dangerous Dashboard Is One That Looks Precise

Here is what I saw when I connected my client site. I have anonymized the numbers because the point is the pattern, not the client.

AI VISIBILITY DASHBOARD (for illustration purposes)

Client Site · [Anonymized] · Last 30 days

  • SEO Score: 78 (+3 vs last month)
  • GEO Score: 72 (Needs work)
  • SERP Score: 73 (Position avg)
  • ChatGPT Traffic: 19 (Est. monthly visits)
  • Brand Mentions in AI: 12
  • Competitor Mentions: 47
  • AI Share of Voice: 14%

What even is this?

Not "How do I improve this?" — which would be a reasonable next question.

The question I kept asking was more fundamental: what exactly does a GEO Score of 72 mean? Is it out of 100? Is 72 good? What is the standard? What moved it from whatever it was before to 72? What would make it 80?

And more importantly — ChatGPT Traffic: 19. Nineteen what? Visits? Impressions? Estimated mentions? Over what period? Based on what methodology? Is 19 terrible or typical? My competitor shows 47 brand mentions in AI. Is that measured the same way?

The dashboard looks precise. Four headline scores. Color-coded severity. Comparison percentages. The design communicates confidence, authority, and measurement sophistication. But the moment you ask "what should I do differently on Monday morning?" — the precision evaporates.

Key Observation

Dashboards create emotional certainty. Understanding creates operational certainty. These are not the same thing. A dashboard that looks precise but cannot tell you what to do next has confused the two.

Measurement BS Is Real

We have been here before. The SEO industry lived through it with DA/DR scores — single numbers that carry enormous perceived weight despite being composite metrics based on methodologies each company guards like trade secrets. Two different tools can show wildly different authority scores for the same domain, and both are "correct" by their own definitions.

We lived through it with AdTech attribution. Last-click, first-click, linear, time-decay, data-driven — same user journey, four completely different stories about what "caused" the conversion. Each model created emotional certainty. None created consensus understanding. And yet billions were optimized against these scores.

We are living through it again now with AI visibility metrics. And I want to be precise about what I mean by Measurement BS — because I am not saying the data is fake. The data is real. Visby is measuring something. The problem is the gap between measurement and understanding — and whether the tool makes that gap visible or papers over it with a confident-looking number.

Emotional Certainty

  • Your GEO Score is 72
  • ChatGPT traffic: 19 visits
  • AI Share of Voice: 14%
  • You rank "below average" vs competitors

Operational Certainty

  • Add FAQPage schema to 12 specific pages
  • Create a Wikidata entity for your brand
  • Build 3 comparison pages targeting these queries
  • Internal link from X to Y using this anchor text

The right column is what software should produce. The left column is what most AI visibility software produces. There is a category difference between them — not a quality difference, a purpose difference. One tells you where you stand. The other tells you what to do.

The most experienced SEOs I know do not need to be told where they stand. They need to be told what to do next. And that is the gap that no AI visibility tool has closed yet.
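To make the "operational certainty" column concrete, here is a minimal sketch of what acting on the first item might look like: generating FAQPage JSON-LD with explicit question-answer pairs. The helper name and the sample Q&A are hypothetical, not anything Visby produces.

```python
import json

def faq_jsonld(pairs):
    """Build FAQPage JSON-LD from (question, answer) pairs.

    Explicit Q&A chunks like this are the kind of structure
    AI retrieval systems can extract without inference.
    """
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }

# Hypothetical Q&A pair for a local service business
markup = faq_jsonld([
    ("How much does duct cleaning cost in Doral?",
     "Most single-family homes run $300-$500 depending on system size."),
])
print(json.dumps(markup, indent=2))
```

The output goes into a `<script type="application/ld+json">` tag on each of those 12 pages — a task with a definition of done, which is exactly what a score of 72 lacks.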

AI Visibility Is Real

I want to be unambiguous here before the rest of this article lands the wrong way: AI visibility is real. It is consequential. And ignoring it is a genuine strategic mistake.

People often frame it as: "I want ChatGPT to recommend my brand." That is the right instinct wrapped in the wrong mental model. ChatGPT recommending you is not like Google ranking you. It is like a knowledgeable colleague citing you in a briefing. The system that makes citations happen is not keyword-matching — it is retrieval-augmented generation, entity recognition, and confidence scoring.

The Real Model

AI visibility is SEO for a different type of search engine — except the retrieval systems are fundamentally different. They pull from:

Websites · YouTube · GBP Posts · LinkedIn · Entities · Citations · Structured Context · Training Data

Google still matters. But it is losing monopoly power over discovery. Every AI answer that does not cite you is visibility you are not getting — regardless of your Google rankings.

The reason I spent time with Visby.ai is because I believe the category it is building toward matters enormously. A tool that could genuinely tell you why you are not being cited — which entity signals are missing, which content formats AI systems are ignoring, which competitor pages are being preferred and why — would be one of the most valuable tools in any SEO stack.

That tool does not fully exist yet. That is the problem.

I Wanted A Decision Engine

Let me show you exactly what I was hoping to walk away with — versus what the dashboard actually gave me.

What I Wanted → What I Got

  • Exactly which pages to build or optimize → Scores and circles
  • Explain the gaps vs competitors specifically → A bar chart showing I'm behind
  • Why ChatGPT cites my competitor and not me → A number I can't interpret
  • A prioritized list of 3 things to do this week → A list of tasks with no context
  • What content format AI systems prefer for my niche → A "GEO score" of 72
  • Whether my Schema.org markup is actually working → Dashboard graphs
  • Entity gaps that block my AI citation → A percentage I can't act on

"AI visibility software should act more like a decision engine and less like macaroni art."

— Ivan Jimenez, Doral SEO

I am not saying this to be clever. I am saying it because the design philosophy matters. A decision engine starts with the question "what should you do?" and builds data collection and presentation around answering that. A macaroni art tool starts with the available data and presents it as attractively as possible.

Both can look impressive in screenshots. One of them tells you what to do on Monday.

Software Should Expose Patterns — Then Recommend

There is a clean way to think about what software should and should not do in the AI visibility category. And it starts with being honest about the division of labor between software and the human using it.

Software does this

Exposes patterns. Shows where you appear in AI answers, which competitors are cited, which entity signals are present, which content formats are being retrieved. The raw pattern layer.

You do this

Interpret the patterns. You know your business, your budget, your team, your competitors. You convert raw patterns into strategic priorities that software cannot produce alone.

Then this happens

Recommendations. Based on the patterns software exposed + the context only you have. The decision lives at this intersection — not inside the software alone.

You do not want software to tell you about your own business. Software exposes patterns. You interpret patterns. Then recommendations happen.

The problem with most AI visibility tools — and I am including Visby in this category — is that they collapse the first two steps into a single layer and call it a recommendation. They surface a data point ("your GEO score is 72"), then suggest a priority action ("improve your content for AI systems"), without actually showing the pattern that connects one to the other.

Show me the pattern first. Show me that pages with FAQ schema in my niche are cited 3x more frequently. Show me that my top competitor has 12 sameAs connections and I have 2. Show me which specific queries I am invisible for and which competitor is winning them. Then I will tell you what the recommendation is.
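The sameAs comparison above is not exotic to compute. Here is a rough sketch of that pattern check — counting sameAs connections in the JSON-LD of two pages. The inline HTML snippets are stand-ins for fetched pages, and this is my illustration of the kind of output I wanted, not Visby's method.

```python
import json
import re

def sameas_count(html):
    """Count sameAs links across all JSON-LD blocks in a page.

    A crude entity-corroboration signal: more sameAs connections
    generally means a better-verified brand entity.
    """
    total = 0
    for block in re.findall(
        r'<script type="application/ld\+json">(.*?)</script>',
        html, re.DOTALL,
    ):
        data = json.loads(block)
        links = data.get("sameAs", [])
        total += len(links) if isinstance(links, list) else 1
    return total

# Inline stand-ins for two fetched homepages (hypothetical markup)
mine = ('<script type="application/ld+json">'
        '{"@type": "Organization", "sameAs": '
        '["https://www.linkedin.com/company/me", '
        '"https://www.youtube.com/@me"]}</script>')
theirs = ('<script type="application/ld+json">'
          '{"@type": "Organization", "sameAs": '
          '["a", "b", "c", "d", "e"]}</script>')

print(sameas_count(mine), "vs", sameas_count(theirs))  # 2 vs 5
```

A tool that showed me this gap, per competitor, per entity property, would be exposing a pattern I can act on.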

The Problem Might Not Be Visby

I want to keep this fair — because I genuinely considered that I might be wrong about the tool.

Maybe category maturity is the real issue. Ahrefs once looked primitive. The early Ahrefs dashboard was genuinely confusing to anyone who had not already been doing backlink analysis for years. Google Analytics once produced data that most website owners had no idea how to use. The SEO tool category spent 15 years becoming what it is today. AI visibility as a measurable category is approximately 18 months old.

The signals AI systems use to determine citations are not fully understood even by the researchers who build those systems. The inputs to citation probability are genuinely uncertain in ways that backlinks and keyword density never were. Asking Visby to produce operational clarity from a measurement problem that Google itself has not fully published the rules for may be an unfair demand.

The Early Category Argument

Visby matters because it shows what is possible. A tool that tells you your brand appeared in 12 AI answers this month — even if the methodology is opaque — is the beginning of a measurement infrastructure that will eventually produce operational clarity. The tool is a proof of concept for a category that will exist in a much more powerful form in 3-5 years.

Early Ahrefs users had to figure out what to do with DR scores on their own. Early Google Analytics users had to figure out what bounce rate actually meant for their business. The pattern is consistent: measurement tools arrive before the interpretive frameworks that make them actionable. The frameworks come later, built by practitioners who used the tools.

Maybe I am a practitioner who arrived at the tool too early expecting frameworks that do not exist yet. That would not be Visby's fault. That would be mine.

If Visby Disappeared Tomorrow, What Still Matters?

This is the question I always ask about any tool: if it went away, what would be lost, and what would not? The answer separates the tool from the underlying strategy.

If Visby disappeared tomorrow, AI visibility would still matter. The infrastructure required to be cited by AI systems would still be the same. What still matters — regardless of whether any AI visibility tool exists to measure it — is:

Definition Pages

Clear, citable definitions of core concepts in your niche. AI systems need authoritative definitions. If you have them, you get cited.

Comparison Pages

Head-to-head comparisons between you and alternatives. These are among the highest-citation-probability content formats.

FAQ Chunking

FAQPage schema with explicit question-answer pairs that AI retrieval systems can extract without inference.

Structured Entities

Wikidata entries, Schema.org markup, and sameAs chains that make your brand a verified entity AI systems trust.

Internal Linking

Semantic internal link architecture that creates topical webs retrievable by vector search systems.

Authority Systems

The body of original research, data, and perspective that gives other sources reason to cite you.

Reference Architecture

Pages that are designed to be referenced — comprehensive, accurate, updated — not just read.

Revenue Infrastructure

The conversion architecture that captures value from AI-driven discovery, not just the discovery itself.

Notice what is not on this list: GEO score. AI Share of Voice. ChatGPT traffic estimates.

The metrics are not the strategy. The infrastructure is the strategy. If you build the infrastructure correctly, the metrics will follow — whether or not you are measuring them.
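As one small example of what "Structured Entities" means in practice, here is a minimal Organization node whose sameAs chain ties a brand to corroborating profiles. Every URL below is a placeholder, not a real entity.

```python
import json

# A minimal sketch of the structured-entity layer: an Organization
# node that makes the brand a connected, verifiable entity.
# All URLs are hypothetical placeholders.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Local Service Co",
    "url": "https://example.com",
    "sameAs": [
        "https://www.wikidata.org/wiki/Q00000000",   # hypothetical Wikidata entity
        "https://www.linkedin.com/company/example",  # hypothetical profile
        "https://www.youtube.com/@example",
    ],
}
print(json.dumps(org, indent=2))
```

This markup exists, and keeps working, whether or not any dashboard is scoring it.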

Verdict: Would I Take The AppSumo Refund?

"If AppSumo offered a refund today, I'd take it immediately."

But read the next paragraph before you agree.

Not because AI visibility is fake. It is not. AI visibility is one of the most consequential shifts in search since Google introduced PageRank, and the brands that build AI citation infrastructure now will have meaningful advantages over those who start in 2028.

I would take the refund because after spending significant time with Visby.ai, I still do not know what I should do next. And that is the fundamental failure mode of any software product. Not that it is wrong. Not that it is slow. That it leaves the most important question — what do I do now? — unanswered.

Software should reduce confusion. Not create it. A tool that shows you numbers you cannot act on is not a tool — it is a dashboard. And dashboards have a long history of creating the feeling of understanding without creating understanding itself.

The Honest Summary

  • AI visibility as a category: 🔥 Essential
  • Visby.ai concept and ambition: ✓ Legitimate
  • Current operational clarity: ✗ Missing
  • Decision-making support: ✗ Not yet
  • Worth watching in 12-18 months: ⚡ Possibly
  • Worth buying at AppSumo today? Depends on your tolerance

If you are an early adopter who enjoys exploring category-defining tools and can build your own interpretive frameworks from raw data — the AppSumo deal has value as a learning experience. If you need operational clarity that tells you exactly what to build next — you will be frustrated.


Want An AI Citation Infrastructure — Not Just A Score?

The Revenue Website™ framework builds the structural foundation that makes AI citation possible: entity systems, reference architecture, FAQ infrastructure, and authority loops. It is the alternative to chasing dashboards.