Monetizing the Non-Human Impression: A Sell-Side Framework for AI Agent Traffic
AI agents are graduating from novelty to daily infrastructure. They crawl, retrieve, summarize, plan, and occasionally transact. This creates an awkward reality for sell-side teams: a growing share of server calls and pageviews are initiated by non-human actors that are not malicious, yet also do not fit legacy impression and viewability economics. Instead of treating all non-human traffic as waste, the sell side can define, package, and monetize legitimate agent activity. The goal is not to erode human advertising value, but to create parallel, fit-for-purpose products that align with agent use cases, compliance expectations, and publisher economics. This thought piece outlines a practical framework for SSPs, intermediaries, and publishers to capture value from AI agent traffic. It leans into standards like ads.txt, sellers.json, and OpenRTB, respects MRC IVT guidance, and suggests concrete product, pricing, and measurement tactics that can ship in quarters, not years.
What changed: agents, not just bots
Legacy bot management drew a bright line between Good Bots (indexing, uptime checks) and Bad Bots (fraud, scrapers). AI agents blur that line. They retrieve specific content snippets, follow links, write summaries, and may trigger downstream actions like price checks or affiliate lookups. That behavior is neither a conventional impression nor an obvious security threat. It is machine consumption of publisher inventory with real utility for end users elsewhere. Treating it as pure IVT leaves money on the table. Treating it as a human impression risks miscounting and mistrust. A third path is needed: a “non-human impression” class with its own signals, policies, pricing, and reporting.
Definitions and taxonomy
To design products, we need shared definitions. Below is a pragmatic taxonomy for sell-side teams.
- GIVT: General Invalid Traffic that can be filtered reliably via lists and rules. Typical examples are known data-center crawlers and monitoring bots. Referenced by MRC IVT guidance.
- SIVT: Sophisticated Invalid Traffic that requires advanced detection. Examples include hijacked devices and obfuscated automation. Also defined by MRC IVT guidance.
- Good Bots: Declarative bots that identify themselves, respect robots directives, and do not impersonate users. Search indexing, uptime checks, and accessibility tools.
- AI Agents: Programmatic actors that fetch, transform, and sometimes act on content for an end user session elsewhere. They may identify themselves via user-agent or token, may request structured responses, and may follow workflows.
- Non-Human Impression (NHI): A server-verified rendering or content retrieval event initiated by a non-human actor that meets a publisher and SSP policy for monetizable machine use. Not counted toward human impression currency, but eligible for agent monetization units.
This taxonomy keeps MRC’s IVT discipline intact while carving out a legitimate, bounded space for monetization that is not a backdoor to inflate human impression counts.
Why monetize agent traffic now
- Volume and trajectory: Agent traffic is growing with the proliferation of AI assistants, retrieval augmentation, and automation tools. Early volumes may be small but compounding.
- Publisher economics: Machine consumption has a cost. Serving requests at scale without revenue creates a hidden tax on infrastructure and editorial investment.
- Market need: Agents require high quality, fresh, and structured data. Publishers can package this value directly, and SSPs can intermediate at scale.
- Trust and control: Clear monetization channels reduce incentive for gray scraping and encourage agents to identify themselves and comply with policies.
North star: parallel rails, not a shortcut
Human ad impressions and agent events should run on parallel rails. Keep counting systems, currencies, and SLAs distinct, then build bridges only where it makes sense. That preserves trust with buyers and regulators while unlocking new revenue.
A sell-side framework in seven pillars
1) Identification and classification
The foundation is reliable identification. You cannot price, package, or report what you cannot classify.
- Signals to collect: user-agent, reverse DNS, ASN, TLS fingerprints, referer, method, IP reputation, token headers from major AI providers, and frequency patterns.
- Positive identification: Encourage agents to self-identify via a registry and token mechanism. Offer rate and access benefits for verified agents.
- Policy tiers: Tier 0 block (bad behavior), Tier 1 free access with strict rate limits, Tier 2 paid access with SLAs, Tier 3 enterprise agreements.
- Event labeling: Attach a boolean is_ai_agent plus categorical agent_class (retriever, assistant, tool), and agent_source (declared vs inferred).
Keep classification in infrastructure, not business logic, so product teams can evolve monetization without refactoring routing.
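The labeling step above can be sketched in a few lines of Python. The UA patterns and the `AgentLabel` fields here are illustrative assumptions; a production classifier would combine user-agent matching with reverse DNS, ASN, and registry-token checks rather than relying on any one signal.

```python
import re
from dataclasses import dataclass
from typing import Optional

# Hypothetical UA patterns for illustration only; production systems should
# pair UA matching with reverse DNS, ASN, and registry-token verification.
DECLARED_AGENT_PATTERNS = {
    "retriever": re.compile(r"GPTBot|CCBot|PerplexityBot", re.I),
    "assistant": re.compile(r"Claude-Web|Google-Extended", re.I),
}

@dataclass
class AgentLabel:
    is_ai_agent: bool
    agent_class: Optional[str]  # retriever, assistant, tool
    agent_source: str           # "declared" or "inferred"

def classify_request(user_agent: str, has_registry_token: bool = False) -> AgentLabel:
    """Attach the event labels described above to a single request."""
    for agent_class, pattern in DECLARED_AGENT_PATTERNS.items():
        if pattern.search(user_agent):
            return AgentLabel(True, agent_class, "declared")
    if has_registry_token:
        # Valid token but unrecognized UA: treat as a declared generic tool.
        return AgentLabel(True, "tool", "declared")
    return AgentLabel(False, None, "inferred")
```

Because the classifier is a pure function over request signals, it can live at the edge and feed the same labels to routing, billing, and reporting without duplicating logic.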
2) Policy and compliance
Agent monetization must be privacy-safe and transparent.
- Consent boundaries: Treat agent requests as contextual by default. Do not attach user-level identifiers unless the agent provides a lawful basis and token that scopes processing under GPP/TCF policies.
- Robots and no-AI directives: Honor robots.txt and meta signals for AI use. Where gray areas exist, err on the side of transparency and provide opt-out channels for specific content types.
- Jurisdiction awareness: If the agent presents a token asserting user consent in a regulated region, validate and scope usage strictly to the declared purpose.
- Contracts over headers: For paid access and SLAs, rely on contracts and keys, not only user-agent self-declaration.
3) Packaging and productization
Create distinct SKUs and deal constructs tailored to agent use.
- Agent Access API: A structured endpoint that returns clean text, metadata, and policy-controlled snippets. Optimized for RAG and summarization agents.
- Agent Impression Unit: A standardized JSON payload that includes sponsor metadata, source attribution, and link-back requirements. No human viewability promises.
- Contextual Agent Deals: PMPs or programmatic deals labeled as agent inventory with topical taxonomies and freshness guarantees.
- Data Licensing: Bulk or stream licenses for archives and feeds with clear scope and watermarking where feasible.
4) Pricing and yield management
Price based on utility and cost, not legacy CPMs alone.
- Metered pricing: Per request, per token, or per kilobyte served. This aligns closely with API economics.
- Outcome-aligned: For commerce or lead-gen domains, use CPA or rev-share tied to affiliate attribution or tracked actions.
- Tiered SLAs: Higher prices for freshness, low latency, and premium access windows such as pre-index or embargoed content syndication.
- Hybrid CPM bridge: Where programmatic rails are convenient, set a clear NHI CPM that reflects compute cost and editorial value, kept separate from human CPM reporting.
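A metered model with an NHI CPM bridge can be sketched as follows. Every number in this rate card is an assumption for illustration, not a benchmark; the point is the shape of the calculation and the strict separation of the effective CPM from human reporting.

```python
# Illustrative rate card; all figures are assumptions, not market benchmarks.
TIER_RATES = {
    "t1": {"per_request": 0.0,    "per_kb": 0.0},      # free, rate-limited
    "t2": {"per_request": 0.0005, "per_kb": 0.00002},  # paid, SLA-backed
    "t3": {"per_request": 0.0003, "per_kb": 0.00001},  # enterprise volume
}

def price_usage(tier: str, requests: int, kb_served: float):
    """Return (total_charge, effective_nhi_cpm) for one billing period."""
    rates = TIER_RATES[tier]
    total = requests * rates["per_request"] + kb_served * rates["per_kb"]
    # The effective NHI "CPM" exists only to reconcile on programmatic rails;
    # it must never be mixed into human CPM reporting.
    effective_cpm = (total / requests) * 1000 if requests else 0.0
    return round(total, 4), round(effective_cpm, 4)
```

For example, a Tier 2 agent making 10,000 requests for 500,000 KB would owe 15.00 in this sketch, an effective NHI CPM of 1.50 for settlement purposes only.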
5) Delivery and creative formats
Agents do not need pixels or JS. They need structured payloads and provenance.
- Machine-friendly markup: JSON or lightweight HTML with semantic tags for title, author, date, canonical URL, and sponsor disclosure.
- Sponsor metadata: Embed sponsorship fields that agents can render in their UI or summary output. Require attribution and link-back.
- Contextual safety: Provide brand-safety labels and content topics to guide agent-safe sponsor matching.
- Cache hints: Allow agents to cache for defined intervals to reduce load and stabilize pricing.
6) Measurement and reporting
Create a reporting vocabulary that suits machines.
- Core metrics: NHI requests, successful responses, deduped sessions, agent_ids, topics, bytes served, latency percentiles, and sponsor inclusion rate.
- Attribution: For commerce agents, track affiliate tags and post-click outcomes where permitted. For assistants, track link-back CTR and referral traffic.
- Quality controls: Distinguish declared vs inferred agents. Track policy violations and downgrade or block as needed.
- Currency separation: Maintain totally separate ledgers for human impression currencies and NHI metrics.
7) Governance and stewardship
Monetization should increase the supply of high quality content, not crowd it out.
- Editorial guardrails: Preserve human-first experiences. Agent monetization should not degrade page speed or UX.
- Transparency: Communicate policies publicly. Provide a self-serve portal for agent registration, keys, and rate cards.
- Industry cooperation: Support a standard list for AI agents similar to the IAB bots and spiders list. Advocate for OpenRTB and ads.txt extensions.
- Review cadence: Quarterly audits of agent partners, abuse patterns, and pricing to keep incentives healthy.
Technical blueprint
Below are concrete proposals that sell-side teams can implement incrementally.
A) OpenRTB extensions for agent inventory
When you choose to transact agent access over programmatic rails, add explicit flags. This keeps buyers and analytics tools honest.
{
  "id": "req-123",
  "at": 2,
  "tmax": 120,
  "imp": [{
    "id": "1",
    "secure": 1,
    "displaymanager": "rv-agent",
    "displaymanagerver": "1.0",
    "banner": { "w": 1, "h": 1 },
    "ext": {
      "ai_agent": {
        "is_agent": true,
        "class": "retriever",
        "declared": true,
        "source": "openai",
        "token_scope": "contextual",
        "purpose": ["summarization", "snippet"],
        "freshness_sla_ms": 2000
      }
    }
  }],
  "site": {
    "domain": "examplepublisher.com",
    "page": "https://examplepublisher.com/article/abc",
    "cat": ["IAB1-6"],
    "ext": {
      "inventory_class": "agent_nhi"
    }
  },
  "device": {
    "ua": "Agent-Retriever/1.2",
    "ip": "2001:db8::1"
  },
  "user": {
    "ext": { "gpp": "", "gpp_sid": [] }
  },
  "ext": {
    "prebid": { "channel": { "name": "server", "version": "0.100" } }
  }
}
Notes:
- Use ext.ai_agent.is_agent as the canonical flag.
- Avoid viewability metrics in settlement. Report response integrity and SLA adherence instead.
- Keep GPP or TCF fields empty unless the agent presents a verified user consent token that lawfully scopes processing.
B) ads.txt and sellers.json hints
Do not overload standards with non-standard fields in production, but you can experiment with discoverability hints. Longer term, advocate for IAB Tech Lab-led extensions. Prototyping idea for a sidecar file:
# agents.txt (publisher root)
# Discovery hints for verified AI agents and policy endpoints
contact=monetization@examplepublisher.com
policy=https://examplepublisher.com/ai-agent-policy
access_api=https://api.examplepublisher.com/agent/v1
ratecards=https://examplepublisher.com/ai-agent-ratecards
preferred_id=examplepublisher-ai
sellers.json can include a business_type hint in ext fields only when supported by your SSP. Coordinate with partners before shipping.
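A consuming agent or SSP crawler could parse this prototype sidecar with a few lines of Python. The key=value format is our assumption for the experiment, not an IAB standard, so keep the parser forgiving.

```python
def parse_agents_txt(text: str) -> dict:
    """Parse the prototype agents.txt sidecar sketched above.

    Assumed format: one key=value pair per line, '#' starting a comment.
    This is a prototyping convention, not a published specification.
    """
    hints = {}
    for line in text.splitlines():
        # Strip comments and whitespace; skip lines without a key=value pair.
        line = line.split("#", 1)[0].strip()
        if not line or "=" not in line:
            continue
        key, value = line.split("=", 1)
        hints[key.strip()] = value.strip()
    return hints
```

Unknown keys are preserved rather than rejected, so the file can grow new hints without breaking older consumers.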
C) Routing layer: segment and serve
Use your edge or origin to identify agents and route them to structured endpoints or to block as appropriate. Nginx example:
map $http_user_agent $is_ai_agent {
  default 0;
  ~*(GPTBot|CCBot|Claude-Web|Google-Extended|PerplexityBot) 1;
}

server {
  listen 443 ssl;
  server_name examplepublisher.com;

  location / {
    # nginx rejects a URI-bearing proxy_pass inside "if", so route
    # agents through an internal redirect instead.
    if ($is_ai_agent) {
      rewrite ^ /agent-route last;
    }
    try_files $uri $uri/ /index.html;
  }

  location /agent-route {
    internal;
    add_header X-Agent-Policy "refer-https://examplepublisher.com/ai-agent-policy";
    # Variables in proxy_pass require a resolver for runtime DNS.
    resolver 1.1.1.1;
    proxy_pass https://api.examplepublisher.com/agent/v1/content?uri=$request_uri;
  }
}
Notes:
- Supplement user-agent matching with reverse DNS, ASN, and token validation.
- Use keys for paid tiers and rate-limiting for free access.
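The first note above, supplementing user-agent matching with reverse DNS, is commonly implemented as forward-confirmed reverse DNS: look up the PTR record for the claimed crawler IP, check the hostname against the provider's published suffixes, then confirm the hostname resolves back to the same IP. The trusted suffixes below are placeholders; take the real lists from each provider's verification documentation. The resolvers are injectable so the logic can be tested without network access.

```python
import socket

# Placeholder suffixes; use each provider's documented verification domains.
TRUSTED_SUFFIXES = ("googlebot.com", "openai.com")

def _reverse(ip: str) -> str:
    return socket.gethostbyaddr(ip)[0]

def _forward(hostname: str) -> list:
    return socket.gethostbyname_ex(hostname)[2]

def verify_crawler_ip(ip: str, reverse=_reverse, forward=_forward) -> bool:
    """Forward-confirmed reverse DNS check for a self-declared crawler IP."""
    try:
        hostname = reverse(ip)
    except OSError:
        return False
    # The PTR hostname must sit under a trusted suffix.
    if not any(hostname == s or hostname.endswith("." + s) for s in TRUSTED_SUFFIXES):
        return False
    try:
        # Forward-confirm: the hostname must resolve back to the same IP.
        return ip in forward(hostname)
    except OSError:
        return False
```

A spoofed user-agent fails this check because the attacker does not control the PTR records for its source IPs.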
D) Prebid hook for labeling and analytics
If your monetization stack uses Prebid Server or Prebid.js on the human rail, keep the agent rail separate. Still, analytics adapters can help centralize reporting.
pbjs.que.push(function () {
  if (window.navigator.userAgent.includes('Agent-Retriever')) {
    pbjs.setConfig({
      ortb2: {
        site: { ext: { inventory_class: 'agent_nhi' } },
        ext: { ai_agent: { is_agent: true, declared: true } }
      }
    });
  }
});
For server-side, apply an analytics pipeline that writes to a separate table partitioned by inventory_class.
E) Measurement schema
Define a minimal schema for agent events to drive billing and quality.
CREATE TABLE agent_events (
  event_time TIMESTAMP,
  publisher_id STRING,
  property STRING,
  url STRING,
  agent_id STRING,
  agent_source STRING,   -- openai, anthropic, in-house, unknown
  declared BOOL,
  purpose STRING,        -- summarization, commerce, indexing
  bytes_served INT64,
  response_ms INT64,
  status INT64,
  sponsor_included BOOL,
  policy_tier STRING,    -- t0, t1, t2, t3
  contract_id STRING,
  ip_hash STRING
) PARTITION BY DATE(event_time);
Keep PII out of the schema. Hash IPs or discard entirely once session controls are in place.
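Rows in this table roll up naturally into per-contract billing lines. A minimal aggregation sketch, using the schema's own column names on plain dicts (a warehouse query would do the same grouping in SQL):

```python
from collections import defaultdict

def build_usage_report(events: list) -> dict:
    """Roll agent_events rows up into per-contract billing lines.

    Each event is a dict keyed by the schema's column names; only the
    columns needed for billing and quality are read here.
    """
    report = defaultdict(lambda: {"requests": 0, "bytes_served": 0, "errors": 0})
    for e in events:
        line = report[e["contract_id"]]
        line["requests"] += 1
        line["bytes_served"] += e["bytes_served"]
        # 4xx/5xx responses count against SLA and quality metrics.
        if e["status"] >= 400:
            line["errors"] += 1
    return dict(report)
```

The same grouping keyed by policy_tier or agent_source feeds the quality controls described in the measurement pillar.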
Business models that work for agents
1) Metered API access
Charge per request or per kilobyte. Provide discounts for higher volumes and higher cache hit rates. Offer a free tier with low rate limits to encourage registration and compliance.
2) Sponsorship and attribution
Return sponsor metadata that assistants can display in summaries. Require link-back and attribution. Measure referral traffic and pay on a hybrid CPM plus CPC or CPA basis when applicable.
3) Topic bundles and freshness windows
For news and financial data, offer premium pricing for fast lanes with 5 to 15 minute freshness windows. For evergreen content, offer lower price tiers and caching rights.
4) Commerce and affiliate alignment
For product review and deals content, embed affiliate parameters in canonical links. Agents that surface deals can drive measurable downstream revenue. Share value via rev-share or fixed fees plus performance bonuses.
5) Enterprise licenses
For large AI platforms or search assistants, negotiate flat-fee or minimum-commit agreements with SLAs, usage caps, and indemnities. Bundle archives and priority access.
Packaging for programmatic
There are valid reasons to use programmatic rails for agent monetization: vendor neutrality, existing billing and invoice flows, and ease of buying. If you go this route:
- Use PMPs: Keep agent supply private and well-labeled. Expose clear seat IDs.
- Create an inventory_class: agent_nhi is a simple, explicit label in site.ext or imp.ext.
- Communicate currency: Do not count toward human impression goals. Settlement is per-response, not viewability-adjusted.
- Bundle by topic: Use taxonomies buyers know such as IAB content categories.
Measurement and currency: what to report
Avoid bending human metrics to fit a non-human world.
- Serve quality: response code, latency, completeness, content freshness.
- Attribution: link-back rate, referral CTR, affiliate conversion where applicable.
- Integrity: declared vs inferred agent mix, policy compliance incidents, token usage rates.
- Value density: average bytes per request, topics accessed, and sponsor inclusion rate.
Publishers and SSPs can expose dashboards that resemble API monitoring more than traditional ad analytics. This is fine. The buyer is a platform team, not a media planner.
Avoiding pitfalls
The temptation is to route all non-human traffic into a quick CPM model. Resist that.
- Do not contaminate human ledgers: Keep agent events out of viewability and reach calculations.
- Do not quietly inject sponsors into human UX: Sponsorship metadata is for agents to display in their environments with disclosure.
- Do not skip contracts: Keys and self-declared headers are not substitutes for commercial agreements when scale grows.
- Do not harvest personal data: Treat agent requests as contextual unless accompanied by valid consent artifacts.
SSP role: orchestrator and standard-setter
SSPs can create durable value and margins by productizing agent rails for the supply side.
- Agent registry: A shared directory of verified agents with tokens, capabilities, and rate classes.
- Normalization: Map agent signals to standard fields and provide an OpenRTB extension spec with open-source adapters.
- Deal packaging: Curate agent PMPs by vertical, freshness, and policy tiers. Provide marketplace discovery to AI platforms.
- Billing and settlement: Consolidate usage across publishers, apply SLAs, and issue a single invoice to agent buyers.
With a critical mass of publishers and consistent labeling, SSPs can make agent monetization easy to buy and safe to sell.
Publisher playbook: a 60 to 90 day plan
Here is a pragmatic plan to move from exploration to revenue.
Days 0 to 30: classify and control
- Inventory: Instrument edge logs to capture signals. Create a dashboard of agent candidates by UA, ASN, and path.
- Policy: Publish an AI Agent Policy and contact email. Offer a basic registration form.
- Routing: Route declared agents to a structured endpoint. Rate-limit others. Block clear abusers.
- Schema: Implement the agent_events table with privacy-safe fields.
Days 31 to 60: package and price
- Access tiers: Define free, paid, and enterprise tiers with rate cards and SLAs.
- Sponsor experiment: Add sponsorship metadata to summaries for 1 or 2 sections. Keep disclosures clear.
- PMP pilot: With your SSP, set up a small agent_nhi PMP for a topically narrow bundle. Invite 1 or 2 agent buyers.
- Reporting: Launch a simple partner portal for usage and invoices.
Days 61 to 90: scale and refine
- Contracts: Close one enterprise agent agreement with a minimum commit and SLA.
- Coverage: Expand structured endpoints to top sections. Add cache hints to stabilize cost.
- Fraud defense: Add machine learning to detect agent impersonation. Rotate keys. Monitor anomalies.
- Governance: Establish a quarterly review and publish transparency reports.
CTV and mobile: what changes
AI agent patterns differ by surface.
CTV
CTV has minimal autonomous agent consumption today. Edge cases include voice assistant queries on TV OS, program guides, and smart home hubs. If and when assistants fetch show metadata or highlights, apply the same rails.
- Product: Agent metadata feeds for program guides and highlight summaries.
- Pricing: Enterprise license only, no open metered endpoints until there is a clear ecosystem need.
- Signals: Device class labeling and OS-level agent tokens.
Mobile apps
On-device agents may prefetch or summarize content in-app.
- SDK signals: Use SDK-level flags to indicate agent sessions. Avoid device identifiers unless lawful consent is presented.
- Offline caching: Meter payloads and set cache TTLs to control costs.
- Commerce use cases: Mobile assistants for shopping and travel are high-value for affiliate attribution.
Risk management
Every new monetization rail brings risks. Mitigate them upfront.
- Privacy and compliance: Treat agent requests as contextual. Reject or quarantine tokens that claim user consent unless you can verify scope and provenance.
- Data leakage: Watermark or seed structured payloads to detect unauthorized redistribution.
- Competitive fast-follow: Expect rapid imitation. Build moats through publisher coverage, normalized schemas, and buyer integrations.
- Cost overrun: Cap free tiers. Use cache hints and rate shaping to keep egress bills predictable.
- Mislabeling: Maintain a human-only ledger separate from NHI. Audit both quarterly.
How Red Volcano can help
Red Volcano specializes in web, app, and CTV intelligence for the supply side. The platform is well-positioned to accelerate agent monetization for SSPs and publishers.
- Discovery and classification: Use Magma Web and tech stack tracking to map which properties already see material agent activity and which technologies are present across publishers.
- Standards telemetry: Combine ads.txt and sellers.json monitoring to ensure agent PMPs resolve cleanly, with correct seat mappings and no hidden intermediaries.
- Deal packaging intelligence: Leverage taxonomy coverage and freshness signals across properties to assemble topic bundles suitable for agent buyers.
- SDK and CTV signals: Identify SDK footprints that help label agent sessions in mobile and emerging CTV contexts.
- Sales outreach: Target AI platforms and assistants with curated lists of eligible publishers and clear policy documentation, accelerating pilot adoption.
Red Volcano can also steward a reference OpenRTB extension and a lightweight open-source adapter so SSPs and publishers can harmonize implementations quickly.
Putting it together: a reference flow
Below is an end-to-end flow that shows how a summarized article request becomes a monetized, measured agent event.
- Step 1: An AI assistant receives a user prompt to summarize a breaking news article. It calls the publisher’s Agent Access API with an auth token and purpose=summarization.
- Step 2: The publisher edge verifies the token, checks rate limits, and routes to the structured endpoint. If the assistant is registered via the SSP-managed registry, the call is eligible for a premium SLA and sponsor metadata.
- Step 3: The backend returns JSON with title, clean body text, canonical URL, sponsor metadata for the News category, and a cache TTL of 5 minutes.
- Step 4: The agent renders the summary in its UI with attribution and a link-back, including sponsor disclosure. If the user clicks through, referral analytics attribute the traffic to the agent.
- Step 5: The agent event is logged to agent_events, billed per request, and included in a weekly usage report to the agent buyer. For sponsored inclusions, a CPM line item is settled via a PMP deal tagged as agent_nhi.
- Step 6: The publisher and SSP review integrity metrics monthly, adjust rate cards, and decide whether to move the assistant to a higher tier or expand coverage.
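Step 3 in this flow, assembling the structured response, can be sketched as a pure function. Field names mirror the sponsor-aware example payload in the next section; the schema itself is an assumption of this piece, not a published standard, and the disclosure check reflects the policy that sponsorship without disclosure must never ship.

```python
def build_agent_response(article: dict, sponsor: dict = None,
                         cache_ttl_seconds: int = 300) -> dict:
    """Assemble the structured payload returned to a verified agent.

    `article` and `sponsor` are plain dicts supplied by upstream systems;
    the output schema is a sketch aligned with the example payload.
    """
    payload = {
        "version": "1.0",
        "canonical_url": article["url"],
        "title": article["title"],
        "topics": article.get("topics", []),
        "body": article["body"],
        "policy": {
            "cache_ttl_seconds": cache_ttl_seconds,
            "usage": ["summarization", "snippet"],
            "redistribution": "no",
            "attribution_required": True,
        },
    }
    if sponsor is not None:
        # Disclosure is mandatory: refuse sponsored inclusion without it.
        if not sponsor.get("disclosure"):
            raise ValueError("sponsor metadata requires a disclosure string")
        payload["sponsor"] = dict(sponsor)
    return payload
```

Keeping assembly pure makes the sponsor-inclusion rate and policy fields easy to log into agent_events at the point the response is built.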
Example sponsor-aware agent payload
Keep payloads simple and honest. Include disclosure fields to promote compliant rendering.
{
  "version": "1.0",
  "publisher": "Example Publisher",
  "canonical_url": "https://examplepublisher.com/article/abc",
  "title": "Central Bank Signals New Rate Path",
  "author": "A. Reporter",
  "published_at": "2025-11-06T08:00:00Z",
  "topics": ["Finance", "Monetary Policy"],
  "body": "Clean text body truncated to policy-compliant length...",
  "sponsor": {
    "name": "Trusted Brokerage",
    "disclosure": "Sponsored inclusion for Finance summaries",
    "click_url": "https://broker.example.com/offer?affid=pub123",
    "logo_url": "https://cdn.examplepublisher.com/logos/broker.png"
  },
  "policy": {
    "cache_ttl_seconds": 300,
    "usage": ["summarization", "snippet"],
    "redistribution": "no",
    "attribution_required": true,
    "link_back_text": "Read full article"
  },
  "agent": {
    "id": "openai",
    "declared": true,
    "purpose": "summarization",
    "token_scope": "contextual"
  }
}
Standards and ecosystem alignment
Agent monetization will work better if it borrows from familiar standards and institutions.
- MRC IVT: Preserve GIVT and SIVT rigor. Non-human impressions are not a loophole. They are a separately counted, policy-compliant class.
- IAB Tech Lab specs: Extend OpenRTB judiciously via ext fields. Coordinate toward a formal proposal once patterns stabilize.
- ads.txt and sellers.json: Continue to anchor supply path transparency. Use sidecar discovery files for agent endpoints until standards emerge.
- TAG programs: Align with TAG anti-fraud best practices to reduce impersonation and spoofing risks.
- Prebid community: Contribute an analytics adapter or reference module for agent_nhi labeling and reporting.
- Privacy frameworks: Treat agent requests as contextual unless consent artifacts are verified under frameworks like GPP and TCF. Document how tokens are validated and scoped.
Editorial and user trust
Monetizing agent traffic should increase the supply of quality journalism, entertainment, and utility content. That means:
- Attribution first: Require clear attribution and link-back. This sustains the open web and gives users a path to source context.
- Disclosure: Sponsors in agent summaries must be disclosed. No covert injection.
- User choice: Provide a path for creators and sections to opt out of agent monetization entirely, or to set stricter policies.
What success looks like in 12 months
If the ecosystem leans in, a realistic 12-month outcome is:
- Coverage: Top 500 publishers have agent policies and structured endpoints. 10 to 20 percent of their non-human traffic is registered and monetized.
- Standards: A draft OpenRTB extension and an agent registry convention exist with open-source reference code.
- SSP products: Agent PMPs by vertical and freshness, with clear pricing and SLAs. Settlement via existing billing rails.
- Trust: Buyers report clean reconciliation and clear separation of human vs agent currencies.
- Publisher economics: Agent revenue offsets infrastructure costs and creates a new editorial budget line for structured content.
Conclusion
AI agents are a new kind of audience: not human, but human-adjacent. They consume and propagate publisher value. The sell side can either treat them as background noise or build a parallel rail that sustains content economics and gives agents what they actually need. The framework in this piece is deliberately pragmatic. Identify and classify reliably. Codify policy. Package SKUs that make sense for agents. Price for utility and freshness. Measure like an API, not a human impression. Keep governance tight and transparent. If SSPs and publishers execute together, non-human impressions can become a durable, trusted revenue stream that protects human experiences and funds the next wave of content innovation.
References and further reading
- MRC Invalid Traffic Detection and Filtration Guidelines
- IAB Tech Lab OpenRTB 2.6 Specification
- IAB Tech Lab ads.txt and sellers.json Specifications
- TAG Certified Against Fraud Program Guidelines
- Prebid.org Documentation on Ext Fields and Analytics
- Privacy frameworks: IAB Tech Lab Global Privacy Platform and TCF
Note: Validate specific implementation details and evolving agent identifiers via current documentation from the respective standards bodies and platform providers before deploying at scale.