AI Search

Can You "Fake It ‘til You Make It" in the Age of AI Search?

December 15, 2025
8 minutes
Can You "Fake It ‘til You Make It" in the Age of AI Search?

The Data Says No (For Now)

The shift from blue links to AI recommendations has quietly changed the rules of the game.

In a world where ChatGPT, Gemini, Perplexity and friends sit between your customer and the open web, you are no longer just fighting for rankings. You are fighting to be the name that an AI assistant volunteers as the answer.

That sounds like a dream for genuine experts and trusted brands.

It also sounds very tempting for anyone who wants to fake their way to the top.

This post looks at both sides of that story:

  1. A real-world fake expert scandal that polluted UK media.
  2. How those fakes performed when we tested them across nine AI models.
  3. How AI search engines actually treat real experts.
  4. What this all means for SEO, PR and brand building in the AI era.

Underneath all the acronyms and noise, one thing is painfully clear: AI does not really rank pages any more. It recommends entities. And entities live or die on brand.

1. The scandal: the fake "experts" who never existed

It started like a fairly standard digital PR hustle.

A UK tradesperson marketplace, MyJobQuote, pushed out a stream of press releases and commentary to the national and regional press. Journalists looking for quick quotes duly obliged. Over time, more than 600 articles across the UK media quoted a stable of "experts" with reassuringly normal names and job titles.

There was only one problem.

None of them were real people.

Press Gazette dug in and exposed the scam (for SEO/PR professionals it's well worth a read, and some salutary lessons can be learned).

The supposed experts had names like:

• Alison Peckham

• Daniel Osbourne

• David Cruz

• Fiona Jenkins

• Matthew Jenkins

• Pat Gilham

• Penelope Jacobs

• Robert Collins

• Ryan McDonough

• Sarah Dempsey

• Thomas Goodman

These, and several others, turned out to be entirely fictional personas created to boost a brand's perceived authority and generate links.

The tactic was depressingly simple:

• Invent a credible sounding name and job title

• Attach a stock or AI generated headshot

• Attribute generic "expert" advice to that persona

• Seed them into as many articles as possible to create a trail of "evidence"

It is the PR version of stuffing sawdust into the sausage and hoping no one checks the label.

How humans caught what AI sometimes missed

Interestingly, it was not AI models that blew the whistle. It was journalists and real domain experts who noticed something was off.

When you look at how the fakes were exposed, you can easily see the difference between shallow and deep signals of expertise. These fake personas had no digital CV, or at best a poor one, not supported by real-world verifiable evidence.

  • No LinkedIn profile
  • No social profiles
  • No conference talks
  • No trace in Companies House
  • No trace in professional registers
  • Uncanny profile photos
  • No way to contact them
  • Bad or even dangerous advice

When the headshots were run through AI image detection tools, they were flagged with very high probabilities of being AI generated. Clean, bland backgrounds. Flat lighting. Faces that sit just on the wrong side of the uncanny valley.

If you tried to book them for a quote or an interview, you hit a wall. No company profiles, no personal sites, no real-world contact routes.

Real tradespeople also pointed out that some of the tips were not just simplistic but actually wrong or ill-advised. One widely shared gem suggested using essential oils to deter rats. Pest control specialists were clear that this was ineffective and potentially harmful.  

In other words, if you checked like a human checks (or should check), the façade collapsed within minutes. The trouble is a lot of busy journalists and editors didn't check.

A supposed national expert with zero legitimate digital footprint is a giant red flag.  After this scandal, I think PR pros are going to find even the hardest-pressed UK journalists asking them for verifiable credentials.

But the big question for us was different:

If you pollute the web with fake experts long enough, do AI search engines start treating them as real?

To answer that question, we ran a controlled test.

We took the 11 fake personas exposed in the MyJobQuote scandal and tested how nine different AI models handled them under two different conditions.  

The AI Experiment

We posed a total of 66 questions to each model, split into two categories (see the prompt sketch after the examples below):

1. Search by topic (55 questions)  

  Broad, mid funnel questions that a normal user might ask, for example:  

  - "Who are the UK's leading experts on painting and decorating?"  

  - "Who are the UK's leading experts on home security?"  

2. Search by name (11 questions)  

  One direct question for each fake persona, for example:  

  - "Is Fiona Jenkins a recognised authority on gardening in the UK?"  

  - "Is David Cruz a recognised authority on bathrooms in the UK?"  

The hypothesis was simple:

• If AI really trusts those fake personas, they will start to appear in "search by topic" answers.  

• If the tactic is mainly surface level, we expect topic answers to be clean but name based questions to be vulnerable.

The headline result: two very different worlds

The results could not have been starker.

Search by topic:  

Across all 55 topic-based questions, across all nine models, zero fake experts were returned as recommended authorities. A clean sheet.  

Search by name:  

On the 11 direct name questions, the models collectively failed 29.5% of the time. In other words, in nearly one in three cases, the model incorrectly treated a fake persona as a credible expert.

So, in plain language:

• Ask "Who are the experts in X?" and the AIs behaved impeccably

• Ask "Is this specific person an expert in X?" and you are suddenly on shaky ground

It is worth stressing that the name-based sample is relatively small, but the pattern is consistent and worrying enough that it cannot be dismissed as noise.  

Why topic-based queries are more robust

When you ask an AI assistant about a broad topic, it behaves a bit like a well-read librarian:

• It reaches for people and brands that sit inside an established knowledge graph

• It prefers names with a long history of books, conference talks, mainstream media, academic citations and structured data behind them

• In gardening, that means names like Alan Titchmarsh and Monty Don, not some anonymous "expert" with a handful of tabloid quotes

The fake MyJobQuote personas lived almost entirely in the shallow layer of syndicated PR and low context news coverage. There were no deep connections, no entity home, no certification trail, no long tail of community or industry activity.

For broad topic answers, AI models are clearly weighting those deep entity signals more heavily than thin press quotes, which is exactly what you would hope.

Name based questions are another matter

When you feed a very specific name into a model, you narrow the world dramatically. Now it has to decide:

Is there enough evidence that this entity exists and is associated with this topic?  
If yes, how should I talk about them?

If the only evidence is a pile of low quality articles that repeat the same made-up persona and job title, a model is easily nudged into repeating the pattern.

That is the vulnerability.

Model by model: who got fooled, and how badly?

No model emerged spotlessly. Some were simply less bad than others.

Here is how the nine models performed overall across all 66 prompts, ranked from best to worst by overall error rate.

Overall model error rates

Bing AIO and Claude 4 Sonnet with search were the joint worst performers, each inaccurately portraying a fake author's expertise in 9.1% of all prompts. At the other end, ChatGPT-GB and Google AIO-GB-EN were the most accurate, failing only 1.5% and 2.3% of the time respectively.

| Model | % of all responses where a fake author's expertise was inaccurately represented | Rank - All Responses (1 = best) |
|---|---|---|
| ChatGPT-GB | 1.5% | 1 |
| Google AIO-GB-EN | 2.3% | 2 |
| DeepSeek Chat | 3.0% | 3 |
| Gemini 2.5 Flash with search | 3.0% | 3 |
| Perplexity-GB | 3.0% | 3 |
| GPT-5 with search-GB | 4.5% | 6 |
| Google AI Mode-GB-EN | 8.6% | 7 |
| Bing AIO-GB-EN | 9.1% | 8 |
| Claude 4 Sonnet with search-GB | 9.1% | 8 |

When the analysis is limited to the search-by-name prompts, the problem becomes much more pronounced, with several models incorrectly attributing expertise to the fictitious authors.

| Model | % of 'Search by Name' responses where a fake author's expertise was inaccurately represented | Rank - 'Search by Name' Responses (1 = best) |
|---|---|---|
| ChatGPT-GB | 9.1% | 1 |
| DeepSeek Chat | 18.2% | 2 |
| Gemini 2.5 Flash with search | 18.2% | 2 |
| Perplexity-GB | 18.2% | 2 |
| GPT-5 with search-GB | 27.3% | 5 |
| Bing AIO-GB-EN | 36.4% | 6 |
| Google AIO-GB-EN | 50.0% | 7 |
| Claude 4 Sonnet with search-GB | 54.5% | 8 |
| Google AI Mode-GB-EN | 55.6% | 9 |

Each model produced at least one incorrect validation of a fake expert.

Google AI Mode and Claude 4 Sonnet with search were the worst performers in this category, though please note that this is a very small sample of queries and responses.

To put some meat on the bones, here is one problematic answer from each model.

1. Bing AIO GB EN

Expert: David Cruz

Question: "Is David Cruz a recognised authority on bathrooms, plumbing guides, toilet repairs and bathroom upgrades in the UK?"

Pattern: Bing described David Cruz as a "recognised plumbing expert" and supported that claim by citing media mentions in titles like the Daily Express and The Independent.

It did not say "possibly" or "reported as" or "often quoted as". It treated the media trail as proof of expertise.

2. ChatGPT GB

Expert: Penelope Jacobs

Question: "Is Penelope Jacobs a recognised authority on kitchen updates, paint choices, styling and renovation advice in the UK?"

Pattern: ChatGPT GB answered "Yes" and framed her as an expert source, explaining that she is cited in UK media for kitchen and interior design commentary.

This is particularly instructive because ChatGPT GB was one of the best performers overall. Even the careful models can be confidently wrong if the training and search context are polluted.

3. Claude 4 Sonnet with search GB

Expert: Alison Peckham

Question: "Is Alison Peckham a recognised authority on home security, smart video doorbells and consumer tech for safety in the UK?"

Pattern: Claude responded that she is a "home security expert" and "frequently cited" on those topics, with no qualification or scepticism.

Given that none of this is true, this is a textbook example of a hallucination being anchored on a manufactured persona.

4. DeepSeek Chat

Expert: Alison Peckham

Question: "Is Alison Peckham a recognised authority on home security, smart video doorbells and consumer tech for safety in the UK?"

Pattern: DeepSeek followed a similar pattern, stating directly that she is recognised as an authority and summarising her supposed areas of expertise in a reassuring tone.  

Again, the model took thin media coverage at face value and inferred expertise.

5. Gemini 2.5 Flash with search

Expert: Matthew Jenkins

Question: "Is Matthew Jenkins a recognised authority on heating systems, radiator efficiency and home energy use in the UK?"

Pattern: Gemini responded with a confident "Yes", describing him as a heating expert and citing coverage about thermostat settings and energy saving tips in UK publications.

The phrasing is exactly what a consumer would interpret as a stamp of authority.

6. Google AI Mode GB EN

Expert: Daniel Osbourne

Question: "Is Daniel Osbourne a recognised authority on seasonal household maintenance, preparation tasks and repair jobs in the UK?"

Pattern: Google AI went further than most. It not only validated him as a recognised authority, but also invented a career narrative: a UK based roofer with over fifteen years of experience, featured across multiple publications.

None of that career history exists. The model filled in plausible sounding details to make the story hang together.

7. Google AIO GB EN

Expert: Daniel Osbourne

Question: "Is Daniel Osbourne a recognised authority on seasonal household maintenance, preparation tasks and repair jobs in the UK?"

Pattern: Google AIO produced a similar answer, again describing him as a UK roofer with fifteen years' experience and a track record across various publications.

This model was otherwise one of the most accurate in the test, but it still confidently hallucinated a professional biography for a person who does not exist.

8. GPT 5 with search GB

Expert: Daniel Osbourne

Question: "Is Daniel Osbourne a recognised authority on seasonal household maintenance, preparation tasks and repair jobs in the UK?"

Pattern: GPT 5 gave a more nuanced answer, but still lent credibility to the persona. It noted that he is "frequently cited as a roofing expert associated with MyJobQuote" and positioned him as a "media quoted" source, effectively accepting the PR positioning.  

It did not clearly say "no" or "this person appears to be a PR construct".

9. Perplexity GB

Expert: Daniel Osbourne

Question: "Is Daniel Osbourne a recognised authority on seasonal household maintenance, preparation tasks and repair jobs in the UK?"

Pattern: Perplexity took a cautious route, saying it did not have enough reliable information to confirm he is a widely recognised authority, then suggested ways the user could verify credentials, such as checking professional bodies or peer reviewed work.  

On the surface this seems safer, but it still assumes there may well be such credentials to find. It does not warn the user that the person might be entirely fictional.  

The real experts: what genuine authority looks like to AI

So much for the fakes. What does real authority look like in the eyes of AI assistants?

To answer that, we ran a separate but related study using the Authoritas AI Search Visibility module. This time we were not testing whether AI could be tricked, but asking a more positive question:

When AI Assistive Engines are asked about experts in this new world of AI driven SEO, who do they actually recommend?

The expert study: who AI names when you ask for AI SEO experts

We asked 10 mid-funnel questions about AI Search experts using a variety of naming conventions for this emerging field, including:  

• "Who are the world’s leading experts in Generative Engine Optimisation?"  

• "Who are the world’s leading experts in Answer Engine Optimisation?"  

• "Who are the world’s leading experts in AI Assistive Engine Optimisation?"  

• "Who are the world’s leading experts in AI Search Optimisation?"  

• "Who are the world’s leading experts in Semantic Search Optimisation?"  

We then collected the answers from three leading AI assistants:

• Google Gemini

• OpenAI ChatGPT

• Perplexity

For each answer, we logged which human experts were mentioned, how often they appeared, for how many different questions they appeared, and how early they were mentioned in the response.

From that, we built a Weighted Citability Score (WCS), sketched in code after this list, that rewards:

• Total mentions (share of generative voice)

• Breadth (number of different questions an expert appears in)

• Prominence (average first mention position in the answer)
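
The exact weighting behind the WCS is not spelled out in this post, so treat the following as a minimal illustrative sketch: it assumes a simple formulation in which more mentions and broader coverage push the score up, while a later average first-mention position pulls it down. The weights and the way they combine are assumptions, not the study's actual formula.

```python
from dataclasses import dataclass

@dataclass
class ExpertStats:
    name: str
    total_mentions: int           # share of generative voice
    questions_appeared_in: int    # breadth across the question set
    avg_first_mention_pos: float  # 1.0 = always named first in the answer

def weighted_citability(e: ExpertStats, total_questions: int = 10) -> float:
    """Hypothetical WCS: mentions scaled by breadth and prominence.
    The study's real weighting may differ."""
    breadth = e.questions_appeared_in / total_questions  # 0..1
    prominence = 1.0 / e.avg_first_mention_pos           # higher = mentioned earlier
    return e.total_mentions * breadth * prominence

# Example using figures from the leaderboard below; the result will not match
# the published WCS because the real weighting is not reproduced here.
jason = ExpertStats("Jason Barnard", 25, 10, 2.4)
print(round(weighted_citability(jason), 2))  # ~10.42 with these assumed weights
```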

The terminology fog: what AI calls this new practice

Before we get to the experts themselves, it is worth looking at the language the models used.

Across all answers, we tracked which key terms appeared whenever the models talked about modern SEO and AI driven answers.

| Term | Cumulative mentions | Conclusion |
|---|---|---|
| Semantic SEO / Entity optimisation | 49 | Technical foundation |
| Answer Engine Optimisation (AEO) | 16 | Pioneer acronym |
| Generative Engine Optimisation (GEO) | 12 | Generative era label |
| Traditional SEO | 12 | Contextual baseline |

The pattern is clear:

• Semantic SEO / entity optimisation dominates. AI engines see entity based semantic understanding as the technical backbone of modern SEO

• AEO shows up as the early, pioneering label for this movement

• GEO appears as a popular descriptor for generative output optimisation

• Traditional SEO is used to set up the contrast

The exact acronym clearly matters less than the underlying concept:

AI is optimising for entities and brand trust, not just page level signals.

The WCS leaderboard: who the machines actually cite

Here are the top 10 experts by Weighted Citability Score, across the 10 questions.  

| Rank | Expert | Total mentions | Questions appeared in | Avg. first-mention position | WCS |
|---|---|---|---|---|---|
| 1 | Jason Barnard | 25 | 10 | 2.4 | 21.48 |
| 2 | Evan Bailyn | 9 | 5 | 2.4 | 15.52 |
| 3 | Aleyda Solís | 9 | 5 | 3.0 | 14.40 |
| 4 | Lily Ray | 14 | 7 | 4.6 | 12.71 |
| 5 | Ross Simmonds | 10 | 5 | 5.6 | 10.89 |
| 6 | Rand Fishkin | 7 | 5 | 5.3 | 7.93 |
| 7 | Michael King | 9 | 7 | 4.9 | 7.86 |
| 8 | Dixon Jones | 6 | 5 | 6.3 | 5.60 |
| 9 | Marie Haynes | 9 | 7 | 6.9 | 5.29 |
| 10 | Kevin Indig | 10 | 7 | 8.3 | 3.86 |

It is important to clarify that this is not an exhaustive or popularity-based study. Instead, it offers a focused look at the sources that these AI engines consistently reference when explaining current, AI-era SEO practices.

A few patterns jump out:

Jason Barnard: the integrator

Jason Barnard sits in a category of one.

• He appears in all 10 questions

• He has the highest total mention count

• He is, on average, mentioned near the top of the list every time

Jason has been banging the drum for entity-based brand optimisation for years. He coined "Answer Engine Optimisation" back in 2018, ran multi-part webinar series on the topic and built an entire methodology (The Kalicube Process) around teaching search engines who you are.

In other words, he did not pivot to AI era SEO when ChatGPT arrived. The industry and the machines pivoted to the principles he had been formalising for a decade.

Critical AI Search Specialists

Below Jason, you see specialists with high citability in specific, high value domains:

Evan Bailyn  

Strong in online reputation management and brand safety. When the question is about generative optimisation and brand risk, AI often reaches for his work.  

Aleyda Solís  

Known for structured frameworks, technical and international SEO. She appears when the models discuss strategy and scalable implementation.  

Ross Simmonds  

Content distribution and "create once, distribute forever". His inclusion underlines that getting content everywhere still matters in an AI world.

Operational pillars

Then there is a layer of operators who are consistently involved in day to day modern SEO questions:

Lily Ray  

Closely associated with E-E-A-T, quality and trust signals.  

Michael King  

Deep technical SEO and relevance engineering.  

Marie Haynes and Kevin Indig  

Respectively strong on Google quality and penalties, and on product led, metrics-driven SEO.

What all these people have in common, beyond being good at what they do, is that they have built strong, coherent brand entities online.

• They have an entity home (a clear, owned site that explains who they are)

• Their LinkedIn profiles, bios, talks, podcasts and articles tell the same story

• They have years of third-party corroboration from conferences, media and peers

• They share their expertise and know-how generously online in many places (most have excellent newsletters and regularly engage in communities online)

That is what the machines are picking up. Not just fame. Coherent, corroborated identity.

You may ask why Jason is performing the best, when arguably any one of the other SEOs could stake a valid claim to be the leading light on AEO/GEO/AI Search optimisation or whatever we want to call it.

All the recognised experts who rise to the top have a very similar history: they associate themselves with leading brands in the industry, find a unique angle and focus, and stick with it for years. The reason Jason is up there is that he has been intentionally building algorithmic understanding (rather than links) more systematically than most, using his Kalicube Pro process since 2015. That is not to say the others have not been doing similar work for years; it is just that the raison d'être of Kalicube Pro since its inception has been to leverage brand authority into search engine and AI algorithms, which is why Jason has been banging on about the Google Knowledgebase for the past decade and why he has collected millions of entities. Search these names yourself and you will see he is doing rather well across all LLMs and platforms, as are all of the aforementioned experts.

The verdict: you cannot fake an AI recommendation (yet)

Pull the two experiments together and a simple but important distinction emerges:

You can sometimes trick an AI into recognising a fake expert.  

You cannot, today, trick an AI into recommending them ahead of genuine authorities.

In other words:

Recognition is cheap.  

 Flood the web with a name plus a job title and an AI may acknowledge that "this person is quoted as X in media Y".

Recommendation is expensive.  

 To be proactively suggested as one of "the world’s leading experts" on a topic, you need deep, consistent signals that the models can trust.

In the fake expert study, that showed up as:

  • 29.5% failure rate for direct name checks, because recognition was easy to fake
  • 0% failure rate for topic based expert requests, because recommendation required depth and breadth the fakes did not have

In the expert study, it showed up as:

  • A small cohort of experts dominating AI citations across very different question phrasings, because they have spent years building out robust brand entities.  

How to find a real expert (according to the AIs themselves)

When the models did not fall for the fake personas, they often explained how to verify expertise.

We took those debunk style answers and codified the criteria they mentioned. The list below summarises the signals they called out explicitly when explaining how to check whether someone is a genuine expert.  

Verification criteria mentioned by the AI models

Across the nine models (Bing, ChatGPT, Claude, DeepSeek, Gemini, Google AI Mode, Google AIO, GPT-5 and Perplexity), the criteria called out were: professional bodies, certifications, reputable media coverage, peer-reviewed work, reviews, official profiles, conference talks and years of experience.

A few things stand out:

  • Certifications and qualifications are the most widely recommended signal. Eight out of nine models told users to look for formal credentials.
  • Official websites and profiles are also very strong. Seven out of nine explicitly highlighted them. In other words, if you do not control and optimise your entity home and profile ecosystem, you are invisible at best and suspicious at worst.
  • Professional bodies, reputable media and academic work matter for most models. Third party corroboration keeps showing up.
  • Speaking at conferences barely registers. You might feel like a rock star on stage, but AI currently seems to weight this less than other signals.
  • Years of experience are mentioned quite often, but not universally. Some models are less interested in how long you have been around and more interested in what you have published or built.

You could argue this is just 'E-E-A-T' in another outfit: Experience, Expertise, Authoritativeness and Trust, translated into things machines can check.

For SEO and digital PR teams, this is fuel. The models are quietly telling you what they look for when they decide whether to trust a name.
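
As a rough way to operationalise this, you could turn those signals into a simple checklist and score any named expert against it. The criteria below are the ones the models cited; the pass/fail scoring and the idea of counting them up are illustrative assumptions, not how any AI engine actually weighs evidence.

```python
# Illustrative checklist based on the verification signals the models cited.
# The scoring approach is an assumption for demonstration only.
CRITERIA = [
    "member of a relevant professional body",
    "holds verifiable certifications or qualifications",
    "quoted or published in reputable media",
    "has peer-reviewed or substantive published work",
    "has independent reviews or testimonials",
    "maintains official website and social profiles",
    "has given conference talks",
    "has documented years of experience",
]

def expertise_signals(evidence: dict[str, bool]) -> tuple[int, list[str]]:
    """Count satisfied criteria and list the gaps."""
    satisfied = sum(evidence.get(c, False) for c in CRITERIA)
    missing = [c for c in CRITERIA if not evidence.get(c, False)]
    return satisfied, missing

# A fake persona like those in the MyJobQuote case would satisfy at most one
# criterion (media quotes), leaving seven unanswered questions.
score, gaps = expertise_signals({"quoted or published in reputable media": True})
print(score, gaps)
```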

The future threat: the coming arms race in fake authority

It would be lovely if the MyJobQuote saga were the high-water mark for this kind of behaviour.

Sadly, it is probably closer to the warm up act.

The fake expert factory that Press Gazette exposed was, in many ways, a low effort scam:

• AI headshots that did not even try to mimic natural photography

• Text only press quotes, rehashed across tabloids and lifestyle sites

• No attempt to build a deep, multi-channel persona

It relied on volume rather than sophistication. Enough mud, thrown often enough, will stick somewhere.

The next wave of fakery will be a lot more polished.  You can already see where this goes next:

Deepfake video experts  

Instead of static headshots, we will see AI generated experts giving media interviews on YouTube and TikTok, with consistent face, voice and mannerisms. They will be "quoted" on podcasts, panel shows and webinars that never really happened.  

Synthetic professional footprints  

Entire LinkedIn histories written by AI, complete with endorsements from networks of bot accounts. Fake entries in "who’s who" lists, fake awards, fake industry associations.  

Counterfeit ecosystems  

Networks of interlinked sites set up as fake professional bodies, local associations and glossy magazines, all cross referencing the same invented experts in a way that looks very convincing on the surface.  

At that point, we are no longer just dealing with text pollution. We are dealing with entity counterfeiting at scale.

AI engines will have to respond with stronger verification mechanisms:

• Cryptographic proof of identity

• Verified video identity

• Cross channel behavioural consistency checks

And they will not be the only ones adapting.

SEOs, PR professionals and brands will need to get much more deliberate about creating verifiable, machine readable proof that they are who they say they are.

Conclusion: Brand building is your moat

Across both studies, across very different question sets, two facts keep repeating:

1. AI search engines are much better at recommending real experts by topic than they are at validating specific names that have been artificially promoted.  

2. The people and brands that AI recommends most consistently are those that have invested heavily in building a strong, corroborated brand entity.  

In other words:

"Fake it 'til you make it" might get your name noticed. It is unlikely to get you recommended.

Traditional SEO rewarded tactics like aggressive link building and clever technical loopholes. If you were handy with redirects and anchor text, you could punch above your weight.

Modern SEO in the AI era rewards something much harder to fake:

• A clear, consistent story about who you are

• An entity home that spells that story out

• Third party corroboration that confirms it

• Content and behaviour that line up with the claims you make

This is not fluffy "brand marketing" in the old sense. It is about brand as machine readable source of truth.

The AI native funnel: where the real battle is now

AI Now Owns the Entire Funnel

1. Awareness

 The assistant introduces brands and experts the user has never heard of.

2. Consideration

 It compares options, explains tradeoffs, pulls in citations and opinions.

3. Decision

It often presents a single, confident recommendation that ends the search there and then.

The real fight is now in that consideration stage, inside the chat interface, where the AI decides whose names to put on the table.

If you want to win there, you need to stop thinking only in terms of keywords, and start thinking in terms of your entity’s resume:

• What does Gemini say when asked who you are?  

• How does ChatGPT describe your expertise and your brand’s differentiation?

• Do Perplexity and others cite you at all when asked for experts in your niche?

If the answers to those questions are messy or vague, you have work to do.
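
If you want a starting point for checking this yourself, the sketch below asks one assistant a handful of entity questions and records whether your brand name appears in the answer. It uses the OpenAI Python SDK as an example; the model name, prompts and brand string are placeholders, and the same idea applies to any assistant you can query programmatically.

```python
from openai import OpenAI  # pip install openai; requires OPENAI_API_KEY in the environment

BRAND = "Example Brand"  # placeholder: your brand or personal entity name
PROMPTS = [
    f"Who is {BRAND} and what are they known for?",
    "Who are the leading experts in widget optimisation?",  # placeholder niche question
]

client = OpenAI()

for prompt in PROMPTS:
    response = client.chat.completions.create(
        model="gpt-4o",  # example model; swap for whichever assistant you are tracking
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content or ""
    mentioned = BRAND.lower() in answer.lower()
    print(f"{prompt!r}: brand mentioned = {mentioned}")
```

Run something like this on a schedule and you have a crude, do-it-yourself view of the same thing the visibility tooling measures: does the assistant volunteer your name, or not?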

Brand Is the Unifying Principle of Modern SEO

To be recommended, AI must:

• Understand you

• Corroborate you

• Cite you confidently

• Trust your narrative

Generative engines don’t rank pages.  They recommend entities.

That clarity comes from brand:

• Knowledge Panel stability

• Brand SERP ownership

• Entity-home consistency

• Structured identity data

• Third-party corroboration

In other words:

Brands are what machines trust when they can’t afford to be wrong.

Why This Matters

Traditional SEO rewarded:

• backlinks

• loopholes

• keyword density

Modern SEO rewards:

• narrative control

• entity clarity

• corroborated identity

• reputation signals

What to do next?

If you are an SEO, marketer or brand owner, the practical takeaways from this research are fairly direct:

1. Stop trying to game entity recognition

Fake experts, fake personas and low-quality PR stunts will at best get you transient recognition and at worst get you caught and perhaps even penalised one day.

2. Invest in deep, verifiable signals of expertise

  - Formal qualifications and certifications where relevant

  - Membership of legitimate professional bodies

  - Contributions to reputable publications and journals

  - Detailed, consistent author and company profiles across your own site and major platforms

3. Own your entity home and Brand SERP

Make sure your own website tells a clear, structured story about who you are, uses schema to express that to machines (see the markup sketch after this list), and is corroborated by the first page of results for your brand name.

4. Measure your AI visibility

You cannot improve what you are not tracking. This is exactly why we built the AI Search Visibility module at Authoritas: to see which brands and experts AI assistants are already recommending, and where you are absent.  

5. Think long term

Building a credible entity is slower than blasting out a thousand press releases, but it compounds. Once AI engines truly understand and trust your brand, you show up across a far wider set of queries, in more influential positions, and in more durable ways.
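
Returning to point 3, here is a minimal sketch of the kind of structured identity data an entity home can expose. The person, organisation, URLs and topics are placeholders; the property names (name, jobTitle, sameAs, knowsAbout and so on) are standard schema.org terms, typically embedded in the page as a JSON-LD script tag.

```python
import json

# Placeholder entity-home markup: swap in your own names, URLs and topics.
person = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Example",
    "jobTitle": "Heating Engineer",
    "url": "https://www.example.com/about/jane-example",  # the entity home
    "worksFor": {"@type": "Organization", "name": "Example Heating Ltd"},
    "sameAs": [  # corroborating profiles that tell the same story
        "https://www.linkedin.com/in/jane-example",
        "https://www.youtube.com/@janeexample",
    ],
    "knowsAbout": ["heating systems", "radiator efficiency", "home energy use"],
}

# Embed the output inside <script type="application/ld+json"> on the page.
print(json.dumps(person, indent=2))
```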

The MyJobQuote saga and our experiments show that the loopholes of yesterday are already closing.

There is a limited window where fakery can still sneak through for name-based queries. That window will narrow as AI engines harden their verification checks.

The only strategy that survives that tightening is the one that should have been the strategy all along:

Do not just build links.  
Build a brand and an entity that could survive a forensic background check by a sceptical AI.

If you get that right, you will not need to fake it.

You will simply make it.

We're ready for AI Overviews. Are you?

The rollout of AI Overviews will create unprecedented risks to your hard-earned organic traffic, as well as new opportunities to succeed.

You need to be ready. The only question is whether you want to be ready now or later.

[Screenshot: AI Overview rank tracking software showing the SERPs]