Optimizing for LLMs: What Actually Matters and What's Just Hype

May 21, 2025

Nik Vujic

Founder & CEO

As LLMs become a go-to source for information, getting your brand, products, or services mentioned in their answers is quickly becoming the new frontier of digital visibility.

Over the past three months, our team at Get Stuff Digital conducted a focused, hands-on analysis across multiple large language models (LLMs) to uncover what impacts content visibility in AI-generated answers. Using common-sense logic, real-world testing, and manual LLM audits, we tested various sites, datasets, and tactics.

We're sharing this research publicly, not as a final answer, but as a baseline. No one knows for sure how to "optimize" for LLMs. Not us, not Google, not OpenAI. We only know what we've observed working across multiple real examples.

LLMs Are Not the Same Thing as AI

LLMs are language models trained to predict and generate human-like text based on patterns learned from vast amounts of textual data.

AI is a broader field of technology focused on creating systems capable of performing tasks that typically require human intelligence, such as reasoning, decision-making, perception, and learning.

The distinction matters, especially for SEO, because it's specifically how LLMs gather, interpret, and generate information from their training data that shapes what's considered relevant and determines what content gets surfaced in their responses.

Everyone needs to stop calling this "AI optimization." This isn't artificial intelligence. It's pattern prediction at scale. What we're dealing with are LLMs, models trained on specific datasets that pull from platforms like:

  • Grok = X (Twitter)
  • GPT = Bing
  • Gemini = Google Search and YouTube
  • Meta AI = Facebook + Instagram

But there's one source they almost all share: Reddit.

Reddit shows up across nearly every LLM as a primary input source because it isn't just user-generated content: it offers opinions, experience, and nuance. These are exactly the things LLMs favor when constructing semi-coherent answers to real queries.

The Content Formats That LLMs Actually Use

Let's skip theories. Based on our research and tracking citations across outputs, here's what LLMs are referencing most:

  • Listicles (e.g., "Best Vinyl Flooring Brands")
  • "Best of" roundups
  • Side-by-side comparisons
  • User-generated content (Reddit, forums, reviews)
  • Research-backed or dataset-driven content
  • Informational/educational content (still referenced, but used less often than you'd think)

Here's how this breaks down across sectors:

B2B: comparisons, market insights, detailed product overviews
B2C: reviews, "best of" rankings, and Reddit-heavy UGC
Services: client-centric listicles and direct comparisons

So what do you do with this? You stop writing generic blog posts. You stop thinking you can keyword your way into AI search. You build the kind of content LLMs already pull from, which is always user-oriented: specific, practical, comparative, and trusted.


What We Tested (and What Showed Results)

We ran structured tests using multiple LLMs and types of sites, manually analyzing what kinds of content got cited and surfaced in responses. We prioritized:

  • Authority-driven websites
  • Pages placed on heavily cited lists
  • Integration of Reddit mentions

Result? Increased visibility in LLM responses. Not because we gamed a system. Because we aligned with what LLMs already favored: trusted domains, specific formats, and Reddit linkage.

We didn't change the structure. We didn't mess with Schema. We didn't apply some magical prompt-hacking nonsense. We published content that helped users, answered real queries, and did so in a way that matched what LLMs already use.
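
If you want to run the same kind of audit yourself, it's easy to script. The sketch below (Python, using the OpenAI SDK) asks an LLM a buyer-style question and checks whether a brand or domain shows up in the answer. The model name, questions, and brand list are placeholders, not the ones from our tests, and our own audits were done manually across several models.

```python
# Minimal sketch: ask an LLM a buyer-style question and check whether
# a given brand or domain shows up in the answer. Model name, questions,
# and brands below are placeholders, not the ones used in our audits.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

QUESTIONS = [
    "What are the best vinyl flooring brands?",
    "Which CRM is best for a small B2B SaaS team?",
]
BRANDS = ["examplebrand.com", "Example Brand"]  # hypothetical

def audit(question: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": question}],
    )
    answer = response.choices[0].message.content or ""
    return {
        "question": question,
        "mentions": [b for b in BRANDS if b.lower() in answer.lower()],
    }

if __name__ == "__main__":
    for q in QUESTIONS:
        result = audit(q)
        print(f"{result['question']} -> {result['mentions'] or 'no mention'}")
```

Run something like this on a schedule and you get a rough, repeatable read on whether your visibility in a model's answers is moving at all.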

Let's Kill the Myth: No One Can Optimize for LLMs (Yet)

If someone tells you they're an expert in LLM optimization, run. There's no such thing. This field is raw. The rules aren't set. No model is transparent about how it chooses citations. Anyone promising guaranteed exposure is selling vapor.

This is SEO all over again, except it's 2003 and everyone's pretending it's 2015.

LLMs are built on historical data. They don't invent knowledge. They remix what they've been trained on. So you're invisible if your content isn't already part of the high-authority, high-discussion dataset.

There is no proven tactic, no technical setup, no exact formula. Schema doesn't magically push you into an LLM response. Internal linking doesn't force a citation. This is a guessing game, and most of what's being sold as LLM optimization today is recycled SEO hype with new packaging.

What You Should Actually Do (If You Care About Visibility)

Here's what we recommend for generative engine optimization based on testable, repeatable results:

  1. Invest in content formats that LLMs favor.
    • Make listicles. Not fluff lists, but useful ones.
    • Do side-by-side product comparisons.
    • Publish client-focused reviews.
  2. Add Reddit to your strategy.
    • Get mentioned. Create threads. Add value.
    • Reddit is quietly shaping AI visibility. Don't ignore it (see the monitoring sketch after this list).
  3. Build on what already works in SEO.
    • Content that's helpful, fast, focused, and trustworthy.
    • Strong headlines, logical structure, and clean site UX still matter.
  4. Treat LLMs like emerging search engines.
    • Same user intent game. Same need for relevance.
    • But zero control over how it's presented.
  5. Avoid gimmicks.
    • Don't chase AI-hype tools or plugins.
    • Nothing replaces authority and clarity.
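
As a starting point for the Reddit piece, here's a small sketch that polls Reddit's public search endpoint for fresh threads mentioning a brand. The brand name and User-Agent string are placeholders; the public JSON endpoint is rate-limited, so anything beyond light monitoring should go through Reddit's official API with proper authentication.

```python
# Minimal sketch: find recent Reddit threads that mention a brand,
# using Reddit's public search JSON endpoint. The brand name and
# User-Agent string are placeholders.
import requests

BRAND = "Example Brand"  # hypothetical brand to monitor

def recent_mentions(brand: str, limit: int = 10) -> list[dict]:
    resp = requests.get(
        "https://www.reddit.com/search.json",
        params={"q": f'"{brand}"', "sort": "new", "limit": limit},
        headers={"User-Agent": "mention-monitor/0.1 (research script)"},
        timeout=10,
    )
    resp.raise_for_status()
    posts = resp.json()["data"]["children"]
    return [
        {"title": p["data"]["title"],
         "subreddit": p["data"]["subreddit"],
         "url": "https://www.reddit.com" + p["data"]["permalink"]}
        for p in posts
    ]

if __name__ == "__main__":
    for post in recent_mentions(BRAND):
        print(f"r/{post['subreddit']}: {post['title']}\n  {post['url']}")
```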

What the Google I/O 2025 Event Just Told Us About Search

Let’s not ignore what just happened at Google I/O 2025. The entire event confirmed that search is evolving fast, and it’s not slowing down. Key announcements included deeper integration of Gemini into Google Search, AI Overviews going fully global, expanded use of multi-step reasoning in answers, and the introduction of AI-organized search results tailored by personal context. 

Google doubled down on SGE (Search Generative Experience), showcasing plans to reduce clicks to websites and deliver answers directly. For SEOs and businesses, this means adapting fast or losing ground.

Google, AI Overviews, and the Future of Discoverability

Let's address the elephant in the room: Google's AI Overviews.

As seen at the May 2024 Google I/O event, AI Overviews are already being tested at scale. And they're messy. Hallucinations are common. Dangerous misinformation is slipping through. User queries are being intercepted before they ever reach the actual content.

While the hallucination problem has been partly addressed, Gemini 2.0 still occasionally presents mistakes as confident knowledge.

This is a problem.

If Google continues down this path, it should give users a choice: personalized, AI-driven results OR traditional 10-blue-link search. The current hybrid model is chaotic. AI Overviews interrupt discovery and dilute site traffic without improving user clarity.

Search should remain user-first, not AI-first. Until the hallucination problem is solved, AI Overviews are hurting more than they help.

The Tracking Gap No One Talks About

One major problem in this whole conversation is that there’s no consistent or accurate way to track LLM citations or AI-driven visibility. GA4 is useless here because traffic from AI surfaces is either misattributed or not tracked at all. 

Ahrefs struggles to reflect real-time keyword shifts, and when it comes to generative search, it’s mostly blind. Tools that claim to track AI mentions, like AthenaHQ and others we’ve tested, are either glitchy or simply unreliable. 

The data can’t be verified. Until that changes, we’re all flying partially blind. Testing is key, but without real tracking, no one can definitively say what works and what doesn’t.
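
One partial workaround while the tooling catches up: segment AI-surface referrals yourself. The sketch below buckets referrer strings against a short list of AI domains; that list is our assumption based on referrers seen in the wild (chatgpt.com, perplexity.ai, and so on), and it will always undercount because many AI answers send no referrer at all.

```python
# Minimal sketch: bucket incoming referrers into "AI surface" vs. other
# channels so AI-driven visits can at least be segmented in your own logs.
# The domain list is an assumption, not an official registry, and visits
# with no referrer at all (common for AI answers) will still be missed.
from urllib.parse import urlparse

AI_REFERRER_DOMAINS = {
    "chatgpt.com",
    "chat.openai.com",
    "perplexity.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def classify_referrer(referrer: str | None) -> str:
    if not referrer:
        return "direct_or_unknown"
    host = urlparse(referrer).netloc.lower().removeprefix("www.")
    if any(host == d or host.endswith("." + d) for d in AI_REFERRER_DOMAINS):
        return "ai_surface"
    return "other"

if __name__ == "__main__":
    samples = [
        "https://chatgpt.com/",              # ChatGPT link click
        "https://www.perplexity.ai/",        # Perplexity citation
        "https://www.google.com/",           # classic organic
        None,                                # no referrer at all
    ]
    for s in samples:
        print(s, "->", classify_referrer(s))
```

This isn't citation tracking, but it at least separates "came from an AI answer" from everything else in your own data while the dedicated tools mature.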

The Bottom Line (And a Reality Check)

There is no official method to optimize for LLMs. What exists today is foundational logic, some smart testing, and a focus on aligning with the most popular formats.

Yes, SEO still matters. More than ever. Because LLMs depend on existing web content. Someone has to produce that. Someone has to earn that SEO.

Will it be called SEO in five years? Maybe not. But the function will remain: making your content visible, trustworthy, and aligned with how search (traditional or AI-driven) works.

We call our evolving approach GEO (Generative Engine Optimization), but even that comes with a warning. It’s volatile. It’s 90% the same principles SEO has always relied on.

So be smart. Avoid the hype. Focus on authority, intent, and clarity. And understand that what you publish today might be what a model references tomorrow, if you've earned that place.

This piece is based on original research conducted by Get Stuff Digital across multiple LLMs and content platforms. It’s not speculative; it’s not regurgitated advice. It results from deep, practical testing rooted in experience and common sense.

For those serious about preparing their content for the next era of search, this is the signal through the noise.

AUTHOR

Nik Vujic

Founder & CEO

Nik Vujic is the founder of Get Stuff Digital, the agency brands call when they need growth that sticks. His mission? Build, optimize, and push clients beyond good enough.
