LLMs vs Traditional NLP

A Practical Experiment in Contextual Understanding for Advertising

Brian Bouquet

Contextual advertising sounds simple on paper. Show relevant ads next to relevant content.

In practice, it has always been harder than it looks.

Understanding what a piece of content is actually about is not the same as pulling keywords out of it. Meaning lives in relationships, tone, implication, and emphasis. For years, most contextual systems have approximated this by looking for words and patterns and hoping that was close enough.

This project started with a simple question:

Could an LLM's contextual processing outperform ad-tech's traditional NLP approach?

A Quick Detour: Why Context Is Hard

Let’s start with a sentence:

The trophy did not fit in the suitcase because it was too small.
(A classic Winograd schema, a standard test of pronoun ambiguity for AI systems)

You immediately know what “it” refers to. Not because the sentence forces a single answer, but because one explanation makes the most sense. Your brain chooses it and moves on.

This is how humans process language all the time. We do not decode meaning word by word. We infer it based on context.

Traditional NLP systems struggle here.

They tend to:

  • Break text into pieces

  • Extract features

  • Apply rules or classifiers

  • Lock in meaning early

This works well when meaning is obvious and local. It works poorly when meaning depends on how ideas relate across a paragraph, or an entire article.
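The pipeline above can be sketched in a few lines. This is a toy keyword classifier for illustration, not any specific ad-tech system: it tokenizes, counts features, applies a rule, and commits to a label without ever looking at how the words relate.

```python
from collections import Counter
import re

def keyword_classify(text, topic_keywords):
    """The traditional pipeline: tokenize, count, rule, commit."""
    tokens = re.findall(r"[a-z']+", text.lower())           # break text into pieces
    counts = Counter(tokens)                                # extract features
    scores = {topic: sum(counts[w] for w in words)          # apply a simple rule
              for topic, words in topic_keywords.items()}
    return max(scores, key=scores.get)                      # lock in meaning early

topics = {"finance": ["stocks", "market", "rates"],
          "travel": ["flight", "hotel", "suitcase"]}

print(keyword_classify("The market rallied as rates fell.", topics))  # finance
```

Note that nothing here models relationships between ideas: one incidental keyword can flip the label, which is exactly the failure mode described above.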

Why This Matters for Advertising

Advertisers do not want keywords. They want signals.

They want to know:

  • What is the topic, really

  • What intent is implied

  • What themes dominate the content

  • What mindset the reader is likely in

Keyword-based systems often miss this. They over-weight surface terms and under-weight nuance. Two articles can use the same words and convey very different meanings. One might be informational. Another might be opinionated.
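The "same words, different meaning" problem is easy to demonstrate: a bag-of-words representation assigns identical features to sentences that say opposite things. A minimal sketch:

```python
from collections import Counter

a = "the fund beat the market"
b = "the market beat the fund"

# Identical word counts, opposite meaning: a purely
# keyword-based system literally cannot tell these apart.
bow_a = Counter(a.split())
bow_b = Counter(b.split())

print(bow_a == bow_b)  # True
```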

For advertisers, this leads to:

  • Misaligned ads

  • Brand safety concerns

  • Weaker campaign performance

So the hypothesis for this project was straightforward.

The Hypothesis

By using attention-based language models to process publisher content, I can extract richer, more accurate contextual signals than traditional NLP approaches, and those signals will be more useful for advertising.

Not because LLMs are magical. But because they are built to keep context alive instead of freezing meaning too early.

What I Built

LLM-based contextual processing

  • Content represented as vectors

  • Meaning captured as relationships in a semantic space

  • Attention used to determine what matters most in context

  • Signals extracted based on themes, intent, and emphasis

Instead of asking “what words appear,” I asked “what ideas dominate, and why.”

The output was not just labels. It was contextual signals that could be mapped to advertising use cases.
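The mapping from content vectors to advertising signals can be sketched with cosine similarity. Everything here is a stand-in: the 3-dimensional vectors, the theme names, and `article_vec` are toy values chosen for illustration (real embeddings come from a language model and have hundreds of dimensions), not the project's actual data.

```python
import numpy as np

def cosine(u, v):
    """Similarity as angle in the semantic space: 1.0 means same direction."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy theme vectors standing in for real embeddings (illustration only).
themes = {
    "personal finance": np.array([0.9, 0.1, 0.0]),
    "luxury travel":    np.array([0.1, 0.9, 0.1]),
}
article_vec = np.array([0.8, 0.3, 0.1])  # pretend this came from embedding the article

# Rank themes by proximity in the semantic space: no hard-coded rules involved.
signals = sorted(((cosine(article_vec, v), t) for t, v in themes.items()), reverse=True)
for score, theme in signals:
    print(f"{theme}: {score:.2f}")
```

The ranked scores, not a single hard label, are what gets passed downstream as contextual signals.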

[Figure: Using an LLM to process the context of a web page. The LLM is very effective at paying attention to the signal in the noise when processing the context of an article.]

What I Observed

LLM-based processing:

  • Identified primary vs. secondary themes

  • Distinguished informational content from commercial or opinionated content

  • Surfaced intent signals that never appeared as explicit keywords

  • Stayed consistent across longer, more complex articles

In short, the LLM approach behaved more like a reader and less like a scanner.

Why This Works Technically

Two concepts matter here: vectors and attention.

Vectors allow meaning to be represented as position and distance, not labels. Related ideas cluster naturally. Unrelated ideas drift apart. This lets the system reason about similarity and relevance without hard-coded rules.

Attention allows the model to decide, moment by moment, which parts of the content matter most. A headline may matter more than a sidebar mention. A conclusion may reframe the entire article. A passing reference may be safely ignored.
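At its core, attention is a weighted average where the weights are computed from the content itself. A minimal scaled dot-product attention in numpy, with toy 2-d "token" vectors standing in for a headline, a sidebar mention, and a conclusion (dimensions and values are invented for illustration):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # relevance of each token to the query
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: weights sum to 1
    return weights @ V, weights

# Three "tokens" as toy embeddings.
K = V = np.array([[1.0, 0.0],    # headline
                  [0.0, 0.2],    # sidebar mention
                  [0.9, 0.4]])   # conclusion
query = np.array([[1.0, 0.3]])   # "what is this article about?"

out, w = attention(query, K, V)
print(np.round(w, 2))  # sidebar mention gets the least weight
```

The headline and the conclusion dominate the weighted average, while the sidebar mention contributes the least, which is the "decide what matters most" behavior described above.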

Together, this allows the system to extract signals that reflect what the content is actually about, not just what words it contains.

This is much closer to how humans interpret content, and that turns out to matter a lot for advertising.

What This Means

For advertisers, better contextual understanding means:

  • Higher-quality signals passed into the ad ecosystem (plus deeper specificity)

  • Better alignment between the content and ads

  • Fewer brand safety violations

  • Improved return on ad spend

This is not about surveillance or user tracking. It is about making better use of the content publishers already create.

Conclusion

Traditional NLP is not wrong. It is just limited by design. It was built for extraction and classification, not interpretation.

Attention-based language models offer a different foundation. They treat language as connected, evolving, and context-dependent. That turns out to be exactly what advertisers need.

The result of this project supports the original hypothesis: LLMs produce richer contextual signals for advertising because they understand content more like humans do.

And sometimes, that difference starts with something as small as understanding what “it” really means.

