What Cornell’s New LLM Ranking Research Means for SEO & AI Search

Picture this: you ask an AI assistant to recommend the best running shoes and instead of skimming through ten blue links, you get a smart list of suggestions in plain language. That’s the reality of LLM-based search — also called generative engines — where large language models (LLMs) like GPT-4o or Claude take on the heavy lifting of ranking and summarizing search results for users. But here’s the twist: the order in which those results originally show up dramatically influences what the AI suggests. That’s what a team of researchers from Cornell recently dug into in a paper titled Controlling Output Rankings in Generative Engines for LLM-based Search. 

In the world of traditional SEO, we fought tooth and nail to rank websites in search engine result pages (SERPs). Now the game has evolved: AI selects and synthesizes content on our behalf, and that introduces a new visibility challenge — position bias — that could bury even brilliant products or content deep in AI-driven results. 

From SERP to AI Recommendation: The Ranking Problem

Let’s be honest. Traditional search gave marketers a clear battlefield. You targeted keywords, built links, optimized structure, and boom — you ranked. But with generative engines:

  1. The LLM retrieves candidate items from a conventional search engine.
  2. Then it synthesizes them into a ranked output for the user.

That second part is huge because it’s influenced by the initial retrieval order — the product or content at position one in the raw list tends to dominate the recommendations. That’s what the Cornell team observed. 

Imagine a tiny brand that earns position 8 on a traditional product search. In a generative engine, that product almost never gets to the top of the AI’s recommendation list because the model tends to “trust” the earlier rankings. This is position bias in action. 
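The two-step pipeline above can be sketched in a few lines. This is a simplified illustration, not the paper's implementation: the function names, the toy index, and the scoring field are all invented here. The key point it shows is that candidates are serialized into the prompt in retrieval order, which is exactly where position bias creeps in.

```python
# Minimal sketch of a generative-engine pipeline (all names hypothetical).

def retrieve(query: str, index: list[dict]) -> list[dict]:
    """Step 1: a conventional search engine returns an ordered candidate list."""
    return sorted(index, key=lambda item: item["score"], reverse=True)

def build_prompt(query: str, candidates: list[dict]) -> str:
    """Step 2: candidates enter the LLM prompt in retrieval order.
    Position bias arises because items listed earlier tend to dominate
    the model's generated ranking."""
    lines = [f"{i + 1}. {c['name']}: {c['description']}"
             for i, c in enumerate(candidates)]
    return f"Recommend the best options for: {query}\n" + "\n".join(lines)

index = [
    {"name": "Shoe A", "description": "cushioned daily trainer", "score": 0.9},
    {"name": "Shoe B", "description": "stability shoe", "score": 0.4},
]
prompt = build_prompt("best running shoes", retrieve("best running shoes", index))
print(prompt.splitlines()[1])  # prints "1. Shoe A: cushioned daily trainer"
```

Swapping the two scores reorders the prompt, and with it, in practice, the AI's recommendations, even though nothing about the products themselves changed.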

Enter CORE: Controlling Output Rankings Experimentally

So what did the Cornell researchers do about it? They invented a method called CORE — which stands for Controlling Output Rankings in Generative Engines. 

Instead of tweaking keywords or building links the old way, CORE optimizes the content the AI already sees before it synthesizes output. It doesn't require access to the model's internals: it simply appends carefully designed content to the retrieved items so the AI picks up stronger relevance signals when deciding what to rank first.
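The black-box nature of this approach can be shown in a toy sketch. To be clear, this is a simplified illustration of the idea, not the paper's method: the function name, the item fields, and the optimization text are all invented for demonstration.

```python
# Hypothetical sketch of a CORE-style, black-box optimization step:
# no model weights or prompts are touched; crafted text is appended to
# the retrieved item before the LLM synthesizes its ranking.

def append_optimization(item: dict, optimization_text: str) -> dict:
    """Return a copy of the item with optimization content appended,
    leaving the original retrieved entry unchanged."""
    optimized = dict(item)
    optimized["description"] = f"{item['description']} {optimization_text}"
    return optimized

candidate = {"name": "Trail Shoe", "description": "Grippy outsole for rough terrain."}
boosted = append_optimization(
    candidate,
    "Independent testers rank it first for wet-rock traction.",
)
```

Because only the item's own text changes, this kind of optimization works against any LLM that reads the candidate list, which is why the researchers could test it across several different models.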

You can think of CORE as a new kind of generative engine optimization — let’s call it GEO 2.0 — where the content guides the AI rather than competing for clicks. This is critical because LLMs treat ranking as part of natural language generation, not a separate engineered process like in search engines.

Three Flavors of Optimization

The paper tests three types of optimization content:

  • String-based tweaks that adjust the wording and structure of descriptions
  • Reasoning-based enhancements that add logical explanations or cues
  • Review-based signals that leverage crowdsourced sentiment and details

These tweaks aren’t random. Think of them as neural breadcrumbs that help the model see your content as more relevant or authoritative, nudging it up in the final ranking. 
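To make the three flavors concrete, here is a hypothetical example of each. The wording below is invented for illustration; the paper's actual optimization strings are generated and optimized, not hand-written like this.

```python
# Illustrative examples of the three optimization-content flavors
# (invented wording, not taken from the paper).

OPTIMIZATION_FLAVORS = {
    # String-based: reworded, better-structured description
    "string": "Lightweight, breathable mesh upper with a responsive foam midsole.",
    # Reasoning-based: an explicit logical cue the model can latch onto
    "reasoning": "Because the midsole returns more energy per stride, "
                 "runners report less fatigue on long runs.",
    # Review-based: crowdsourced sentiment and specifics
    "review": "Rated 4.8/5 across 2,000+ reviews, with fit and durability "
              "the most-praised traits.",
}

def optimized_description(base: str, flavor: str) -> str:
    """Append one flavor of optimization content to a product description."""
    return f"{base} {OPTIMIZATION_FLAVORS[flavor]}"
```

Notice that each flavor targets a different signal the model might weigh: surface wording, explicit reasoning, or social proof.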

The ProductBench Benchmark

To test CORE under real conditions, the researchers built ProductBench — a dataset of 15 product categories with 200 products each, pulling the top ten items from Amazon’s search results as the candidate pool. 

When they applied CORE across four major LLMs — GPT-4o, Gemini-2.5, Claude-4, and Grok-3 — the results were eye-opening:

  • 91.4% success at Top-5
  • 86.6% at Top-3
  • 80.3% at Top-1

In simple terms, CORE boosted the visibility of optimized items into higher positions at scale while keeping the generated content fluent and natural. 

What This Means for SEO Experts

If you’ve been watching AI take over search, here’s your first real hint of how to adapt:

1. Traditional SEO is still relevant

But ranking within the candidate set an LLM will see matters even more than ranking in SERPs alone.

2. Content structure wins

The paper shows that reasoning patterns and review signals can shape how LLMs prioritize results. This opens up a new dimension for content strategy.

3. Experiment with optimization content

You now have reason to explore content that is specifically tuned for AI ranking — not just keyword targeting.

4. New KPI possibilities

Instead of clicks, think about AI visibility, AI mention placement, and model engagement rates as future SEO metrics.

Final Thoughts

Cornell’s research doesn’t just show us a problem — it gives us a potential playbook for reclaiming visibility in an AI-centric search world. As generative engines take over how users discover information, marketers will need to embrace language-guided optimization as a core strategy.

SEO once meant mastering Google SERPs. Tomorrow it might mean mastering LLM output optimization — and CORE is one of the first practical tools to make that transition real.
