Context Without Cookies: Targeting Ads in AI Conversations

Third-party cookies are dying. Safari and Firefox already block them by default, and the broader ecosystem is moving the same way. For most programmatic advertising, this has been a painful transition. Publishers and advertisers relied on cookies to identify users across the web and target ads based on their browsing history. Without cookies, targeting becomes harder.

But conversational AI is different. There are no cookies in an AI app. There is no browser, no DOM, no third-party pixel tracking. The entire cookie-based ad tech stack is irrelevant. This is not a constraint. It is a feature. We can build a targeting system from scratch, optimized for the conversational channel, without legacy baggage.

At PromptBid, we do not use cookies. We never will. Instead, we target based on conversation context. This post explains how we extract targeting signals from conversation data and use them to help advertisers reach relevant audiences while respecting privacy.

The Impossible Problem with Cookies in AI Apps

Imagine a user in ChatGPT. They ask a question about React best practices, and an ad exchange receives a signal about this intent. How does the exchange know who the user is? It cannot check a cookie jar; there is no cookie to check. There is no first-party cookie, because the AI app is a native application or a single-page app in which the exchange has no presence. There is definitely no third-party cookie.

Even if the user is logged in and has a first-party identifier, that identifier belongs to the AI app, not the ad exchange. The exchange has no way to correlate it with advertiser data, because no cookies connect that user to the advertiser's website.

The entire model breaks down. Cookies assume the user is browsing the open web, jumping between sites. In conversational AI, the user is in a closed application having a conversation. They are not being tracked across properties. The contextual signals are not their browsing history. The signals are what they are asking about, right now, in this conversation.

The Context Targeting Model

Instead of targeting based on who the user is, we target based on what they are talking about. This is more like contextual advertising on the web, but much richer because we have direct access to the conversation transcript.

When a user sends a message to the AI, we do three things:

  • Topic Classification: Extract the topic(s) being discussed.
  • Intent Extraction: Understand what the user is trying to accomplish.
  • Sentiment & Engagement Estimation: Gauge how engaged the user is and their sentiment.

These signals are fed into the ad auction, allowing advertisers to bid for impressions where their products are relevant.

Example: The React Question

User: "What are the best practices for useCallback in React? I'm optimizing a component that renders a large list."

input: "What are the best practices for useCallback in React? I'm optimizing a component that renders a large list."

topic_classification = nlp_model.classify(input)
// Output: [("software_development", 0.92), ("javascript", 0.89),
//          ("performance_optimization", 0.76)]

intent_extraction = intent_model.extract(input)
// Output: [("best_practices", 0.94), ("optimization", 0.87), ("learning", 0.72)]

sentiment = sentiment_model.score(input)
// Output: {"polarity": 0.6, "engagement": 0.8, "frustration": 0.1}

These signals get enriched with user data (subscription tier, region, language) and sent to bidders. An advertiser selling a React performance profiling tool can bid high on this impression because the topic and intent match their product perfectly. A video streaming service would not bid.
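To make the enrichment step concrete, here is a minimal Python sketch. The function name `build_bid_signals` and the payload field names are illustrative, not our production API; the point is that only coarse, non-identifying user properties ride along with the classifier outputs.

```python
# Hypothetical sketch of the enrichment step: classifier outputs are merged
# with coarse, non-identifying user properties before the auction.

def build_bid_signals(topics, intents, sentiment, user_props):
    """Combine classifier outputs and coarse user properties into one payload."""
    return {
        "topics": [{"name": t, "score": s} for t, s in topics],
        "intents": [{"name": i, "score": s} for i, s in intents],
        "sentiment": sentiment,
        # Only an allowlist of coarse properties is attached; anything else
        # (emails, account IDs, etc.) never enters the payload.
        "user": {k: user_props[k] for k in ("tier", "region", "language")
                 if k in user_props},
    }

signals = build_bid_signals(
    topics=[("software_development", 0.92), ("javascript", 0.89)],
    intents=[("best_practices", 0.94), ("optimization", 0.87)],
    sentiment={"polarity": 0.6, "engagement": 0.8, "frustration": 0.1},
    user_props={"tier": "pro", "region": "EU", "language": "en",
                "email": "x@y.z"},  # deliberately present, deliberately dropped
)
```

The allowlist approach matters: fields are included because they are known-safe, not excluded because they are known-risky.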

The Signal Taxonomy

A key insight from building this system is that you need structure. Bidders need to understand what signals mean. Rather than exposing raw classifier outputs, we've found it useful to organize targeting signals into a curated taxonomy. This ensures consistency across demand partners and prevents gaming.

The taxonomy covers broad domain areas—software development, business, finance, creative work, health, and others—with nuance at multiple levels. Each category can represent different aspects of what a user is working on. Some focus on the user's immediate intent: are they learning something new, solving a problem, comparing options, or implementing a solution? Others focus on engagement depth: are they asking a single question or exploring something across multiple turns?

The specific taxonomy we've built is less important than the principle. You need enough granularity to be useful for targeting, but not so much that you dilute the signal. Our approach has been to stay conservative and let demand partners guide expansion over time, rather than starting with an exhaustive taxonomy that becomes hard to maintain.
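As an illustration of the principle, a taxonomy fragment might be structured like this. The categories and facet names below are hypothetical examples, not our actual taxonomy; the useful part is validating signals against a curated structure rather than passing raw classifier labels through.

```python
# Illustrative taxonomy fragment (hypothetical categories): broad domains
# at the top level, with intent and engagement facets underneath.

TAXONOMY = {
    "software_development": {
        "intents": ["learning", "solving_a_problem", "comparing_options",
                    "implementing"],
        "engagement": ["single_question", "multi_turn_exploration"],
    },
    "finance": {
        "intents": ["learning", "comparing_options"],
        "engagement": ["single_question", "multi_turn_exploration"],
    },
}

def is_valid_signal(domain, intent):
    """Reject signals outside the curated taxonomy (prevents drift and gaming)."""
    return domain in TAXONOMY and intent in TAXONOMY[domain]["intents"]
```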

The Context Pipeline Architecture

The technical flow from a user message to actionable targeting signals looks something like this:

User Message
    ↓
NLP Classifiers (parallel inference)
  ├─ Topic Classification
  ├─ Intent Extraction
  └─ Engagement & Sentiment Analysis
    ↓
Signal Combination & Confidence Filtering
  - Combine and rank scores
  - Apply confidence thresholds
  - Surface conflicts for debugging
    ↓
Context Enrichment
  - Aggregate session-level patterns
  - Incorporate user properties
  - Cross-reference with taxonomy
    ↓
Bid Request Generation
  - Serialize signals in standard format
  - Send to demand partners
    ↓
Impression Event (asynchronous)

The architecture we've explored runs NLP inference in parallel across multiple processors to keep latency manageable. The key trade-off is between classification depth and speed. You want rich, accurate signals, but they must arrive in time for the auction to complete.
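A simplified version of the parallel-inference step might look like the following, with stub functions standing in for the real models and an assumed latency budget. Any classifier that misses the budget contributes no signal rather than delaying the auction.

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FuturesTimeout

# Stub classifiers standing in for real model calls (illustrative only).
def classify_topic(text):
    return [("software_development", 0.92)]

def extract_intent(text):
    return [("best_practices", 0.94)]

def score_sentiment(text):
    return {"polarity": 0.6, "engagement": 0.8}

def run_classifiers(text, budget_s=0.05):
    """Run all classifiers concurrently; drop any that miss the latency budget."""
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = {
            "topics": pool.submit(classify_topic, text),
            "intents": pool.submit(extract_intent, text),
            "sentiment": pool.submit(score_sentiment, text),
        }
        results = {}
        for name, fut in futures.items():
            try:
                results[name] = fut.result(timeout=budget_s)
            except FuturesTimeout:
                results[name] = None  # absent signal is a valid state
        return results
```

The budget value here is a placeholder; in practice it would be derived from the auction deadline minus serialization and network overhead.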

Our classifiers are trained on millions of conversation examples and continuously updated. We've learned that keeping them tightly scoped to the taxonomy prevents drift and hallucination. Generic classifiers tend to capture too much noise.

Privacy Architecture

No cookies does not mean no accountability. We have strong privacy guarantees built into the system.

PII Filtering

We automatically detect and strip personally identifiable information from bid requests before sending them to advertisers. Names, email addresses, phone numbers, account credentials—all removed. The targeting signals go to advertisers, not personal data.

// Before bidder sees it:
input: "I need help setting up AWS for my company, contact: john@acme.com"

bid_request signals: [topic: aws, intent: implementation]
// john@acme.com is stripped, not in the bid request
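A runnable sketch of the pattern-matching half of PII stripping follows. Real systems combine patterns like these with NER models to catch names and other free-form identifiers; the regexes here are deliberately simplified.

```python
import re

# Simplified PII patterns (illustrative; production systems also use NER).
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def strip_pii(text):
    """Replace emails and phone numbers with placeholder tokens."""
    text = EMAIL_RE.sub("[email]", text)
    text = PHONE_RE.sub("[phone]", text)
    return text

msg = "I need help setting up AWS for my company, contact: john@acme.com"
print(strip_pii(msg))
# "I need help setting up AWS for my company, contact: [email]"
```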

No Cross-Session Tracking

We do not persist user identity across conversations. Each conversation is independent. If the same user comes back tomorrow and asks about Python, we do not know it is the same person. We do not track them. We do not build a history of their interests over time.

This is a feature, not a limitation. It means users get served relevant ads without being profiled. It also simplifies our compliance obligations.

Differential Privacy on Aggregates

When we report aggregate statistics to advertisers (e.g., "your ads generated 1,000 impressions on Python-related queries"), we apply differential privacy. We add noise to the counts to prevent inference attacks.

An advertiser cannot use aggregate statistics to reverse-engineer individual user behavior. The noise ensures that whether any particular user was in the dataset or not remains unknowable.
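The mechanism can be sketched in a few lines: sample Laplace noise scaled to sensitivity/epsilon and add it to the true count before reporting. The epsilon value below is an arbitrary placeholder, and the sensitivity is 1 because any single user changes an impression count by at most one.

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) via inverse-CDF sampling."""
    u = random.uniform(-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_count(true_count, epsilon=1.0, sensitivity=1.0):
    """Noisy count satisfying epsilon-differential privacy.

    sensitivity=1: one user changes the count by at most 1.
    """
    noisy = true_count + laplace_noise(sensitivity / epsilon)
    return max(0, round(noisy))  # counts are reported as non-negative integers
```

Smaller epsilon means more noise and stronger privacy; the right setting is a policy decision, not a code constant.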

Comparison to Web Contextual Advertising

Contextual advertising on the web has made a comeback as cookies disappear. Publishers show ads based on page content. A Python tutorial page shows ads for programming courses and tools.

Our approach is similar in principle but more sophisticated in practice. We have richer context. We know not just what the page is about, but what the user is trying to accomplish and how they feel about it. We know the exact query, not just a page topic. We know sentiment. We can target on intent, not just topic.

This makes our ads more relevant to users and more valuable to advertisers. Conversion rates on conversational ads are 3-5x higher than on display ads because the targeting is so precise.

Handling Edge Cases

Ambiguous queries: Sometimes a query is ambiguous. "How do I optimize my workflow?" could be software engineering or personal productivity. Our classifiers output confidence scores. If confidence is low, we mark the signal as uncertain. Advertisers can choose to bid on uncertain signals or pass.
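A minimal sketch of how uncertain signals might be flagged rather than dropped (the 0.5 threshold is an assumed value, not our production setting):

```python
# Low-confidence classifications are flagged, not forced or discarded,
# so bidders can decide how to treat them. Threshold is illustrative.
CONFIDENCE_THRESHOLD = 0.5

def label_signals(raw_scores):
    """Attach an 'uncertain' flag instead of forcing a single winning topic."""
    return [
        {"topic": t, "score": s, "uncertain": s < CONFIDENCE_THRESHOLD}
        for t, s in raw_scores
    ]

# "How do I optimize my workflow?" might score ambiguously across domains:
signals = label_signals([("software_development", 0.41), ("productivity", 0.39)])
```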

Off-topic conversations: Some users ask random questions. "What is the meaning of life?" generates low-confidence signals across all classifiers. We do not force a signal. We allow the absence of a signal to be a valid state. Advertisers can bid on low-signal conversations if they want broad reach.

Non-English input: We support 15 major languages. Classifiers are trained multilingually. A user in Spanish asking about React gets the same signal taxonomy applied. Signal names are language-independent.

The Auction with Contextual Signals

When it comes time to run the auction, contextual signals get passed to bidders in a standard format. The approach we've explored uses OpenRTB's extension mechanism to add a context field containing extracted topics, intents, engagement indicators, and sentiment. Bidders receive ranked lists of signals with confidence scores so they can decide how conservative or aggressive to be with targeting.
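Sketching what such a bid request might look like: the top-level `id`, `imp`, and `ext` fields follow OpenRTB conventions, but the field names under `ext.context` are our own assumption for this example, not part of the OpenRTB spec.

```python
import json

def make_bid_request(request_id, signals):
    """Build a minimal OpenRTB-style request carrying context in ext.context.

    The ext.context shape is a hypothetical convention for illustration.
    """
    return {
        "id": request_id,
        "imp": [{"id": "1"}],  # single ad slot in the conversation
        "ext": {
            "context": {
                "topics": signals["topics"],
                "intents": signals["intents"],
                "sentiment": signals["sentiment"],
            }
        },
    }

req = make_bid_request("abc-123", {
    "topics": [{"name": "software_development", "score": 0.92}],
    "intents": [{"name": "best_practices", "score": 0.94}],
    "sentiment": {"polarity": 0.6, "engagement": 0.8},
})
payload = json.dumps(req)  # serialized form sent to demand partners
```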

On the advertiser side, the model is refreshingly simple compared to cookie-based systems. An advertiser selling developer tools doesn't need a massive ML pipeline to score users. They just declare: "I want to bid on conversations about software development and best practices." They can be even more specific: "Bid higher on conversations where the user seems to be optimizing existing code." No user profiles, no tracking pixels, no historical browsing data needed.
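That declarative model can be captured in a few lines. The rule shape, category names, and bid values below are invented for illustration; a real bidder would layer budget pacing and frequency controls on top.

```python
# Hypothetical declarative targeting rule for a developer-tools advertiser:
# bid when topic and intent match, bid higher on optimization intent.
RULE = {
    "topics": {"software_development"},
    "intents": {"best_practices", "optimization"},
    "base_bid": 2.0,                    # illustrative CPM
    "boost": {"optimization": 1.5},     # pay more for high-fit intent
}

def evaluate_bid(rule, topics, intents):
    """Return a bid price, or None to pass on the impression."""
    if not rule["topics"] & topics:
        return None
    matched = rule["intents"] & intents
    if not matched:
        return None
    boost = max((rule["boost"].get(i, 1.0) for i in matched), default=1.0)
    return rule["base_bid"] * boost

print(evaluate_bid(RULE, {"software_development"}, {"optimization"}))  # 3.0
```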

This simplicity is also a strength. Advertisers understand contextual targeting intuitively. It mirrors what they already do with sponsored search, where the query itself is the targeting signal.

Looking Forward

We are exploring richer intent signals. Can we detect when a user is researching a product they might buy? Can we recognize when they are frustrated and most receptive to a solution? We are being careful here. Better targeting is valuable, but it must remain privacy-preserving and respect user autonomy.

We are also working on advertiser feedback loops. If an advertiser tells us their conversions came from a particular type of signal pattern, we can improve our classifiers to optimize for that pattern. This closes the loop between relevance and outcomes without building user profiles.

The future of advertising is not better tracking. It is better context and faster auctions. Conversational AI is built on that foundation from day one.