How parallel auctions replaced sequential waterfalls -- and why it matters for publisher revenue. Updated 2026.
| Feature | Header Bidding | Waterfall |
|---|---|---|
| Auction Type | Parallel (all bidders compete simultaneously) | Sequential (one bidder at a time, in priority order) |
| Revenue Impact | Typically 20-50% higher CPMs than waterfall | Lower CPMs due to limited competition |
| Latency | Higher (multiple simultaneous calls, but timeout-controlled) | Lower per-call, but total latency can add up across fallbacks |
| Transparency | High -- publishers see all bids | Low -- passbacks hide actual demand |
| Setup Complexity | More complex (Prebid.js configuration, key-value targeting) | Simpler (daisy-chain line items in ad server) |
| Fill Rate | High (all bidders compete for each impression) | Variable (depends on passback chain depth) |
| Market Adoption | ~70% of top publishers use header bidding | Legacy; declining but still used in some setups |
| Bias | Unbiased -- highest bid wins regardless of source | Biased toward top-priority demand sources |
| Technology | Prebid.js, Amazon TAM/UAM, Open Bidding | Ad server daisy-chaining, passback tags |
| Future | Industry standard; evolving toward server-side | Largely replaced; some remnants in mobile/app |
In the traditional waterfall (or daisy-chain) model, a publisher's ad server calls demand sources one at a time in a fixed priority order. If the first source (usually the one with the historically highest CPM) cannot fill the impression, the request passes to the second source, then the third, and so on.
The fundamental problem with waterfalls is that priority is based on historical averages, not real-time value. A demand source ranked third in the waterfall might be willing to pay more for a specific impression, but it never gets the chance to compete because sources one and two are called first.
This sequential approach also creates latency: if several sources pass before one fills, the user may have already scrolled past the ad placement or left the page entirely.
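The waterfall's sequential logic can be sketched in a few lines of JavaScript. This is an illustrative simulation, not real ad-server code; the source names and CPM values are invented to show the core problem: a lower-priority source that would pay more never gets asked.

```javascript
// Hypothetical waterfall: demand sources tried one at a time in fixed
// priority order until one fills. Names and CPMs are illustrative only.
const waterfall = [
  { name: "sspA", bid: () => null },  // top priority passes on this impression
  { name: "sspB", bid: () => 2.10 }, // second priority fills at $2.10 CPM
  { name: "sspC", bid: () => 3.40 }, // would pay more, but is never called
];

function runWaterfall(sources) {
  for (const source of sources) {
    const cpm = source.bid(); // one call at a time, in priority order
    if (cpm !== null) {
      return { winner: source.name, cpm }; // first fill stops the chain
    }
  }
  return null; // no source filled the impression
}

console.log(runWaterfall(waterfall)); // sspB wins at $2.10; sspC never competes
```

Note that sspC's $3.40 bid is invisible to the publisher: the chain stops at the first fill, which is exactly the lost revenue header bidding recovers.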
Header bidding flips the waterfall on its side. Instead of calling demand sources sequentially, the publisher's page calls all configured SSPs and exchanges simultaneously via JavaScript (typically Prebid.js). All bidders submit their bids within a timeout window, and the highest bid wins.
This parallel auction ensures every demand source has an equal opportunity to win every impression, creating true price competition. The winning bid is then passed to the publisher's ad server (usually Google Ad Manager), where it competes against the ad server's own demand.
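The parallel auction described above can be sketched with promises: every bidder is called at once, each bid races against a shared timeout, and the highest bid received in time wins. This is a simplified illustration, not Prebid.js internals; bidder names, latencies, and CPMs are invented.

```javascript
// Minimal sketch of a parallel header-bidding auction. All bidders are
// called simultaneously; bids arriving within the timeout window compete,
// and the highest CPM wins. All values are illustrative.
function bidder(name, cpm, latencyMs) {
  return () =>
    new Promise((resolve) => setTimeout(() => resolve({ name, cpm }), latencyMs));
}

async function runAuction(bidders, timeoutMs) {
  const timeout = new Promise((resolve) =>
    setTimeout(() => resolve(null), timeoutMs));
  // Race each bid against the shared timeout; late bids resolve to null.
  const bids = await Promise.all(
    bidders.map((bid) => Promise.race([bid(), timeout])));
  const valid = bids.filter(Boolean);
  // Highest CPM wins (assumes at least one bidder beat the timeout).
  return valid.reduce((best, b) => (b.cpm > best.cpm ? b : best));
}

const bidders = [
  bidder("sspA", 2.10, 120),
  bidder("sspB", 3.40, 200),
  bidder("sspC", 4.00, 1500), // too slow: misses the 1s timeout window
];

runAuction(bidders, 1000).then((winner) => console.log(winner));
// sspB wins at $3.40; sspC's higher bid arrives after the timeout
```

The timeout is the key control lever: it bounds total auction latency regardless of how many bidders are configured, at the cost of dropping slow bids like sspC's.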
The result: publishers typically see 20-50% higher CPMs after implementing header bidding, because every impression is sold through genuine competition rather than a biased sequential process.
Red Volcano tracks adoption of both technologies across 32M+ publishers.
The revenue advantage of header bidding over waterfalls is well-documented.
Header bidding is not without tradeoffs. Calling multiple SSPs simultaneously from the browser adds page latency, typically 500ms to 3 seconds depending on the number of bidders and timeout settings. Publishers must balance the number of demand partners (more = more revenue) against user experience (more = more latency).
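In Prebid.js, this latency/revenue balance is tuned primarily through the auction timeout. A minimal configuration sketch (the timeout value is an example, not a recommendation; tune it against your own analytics):

```javascript
// Illustrative Prebid.js timeout configuration. A shorter bidderTimeout
// caps the page latency added by the parallel auction, at the cost of
// dropping bids from slower demand partners.
pbjs.que.push(function () {
  pbjs.setConfig({
    bidderTimeout: 1000,      // example value: drop bids arriving after 1s
    enableSendAllBids: false, // send only the winning bid to the ad server
  });
});
```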
Server-side header bidding (e.g., Prebid Server, Amazon TAM) addresses this by moving the auction to a server, reducing browser-side latency. However, server-side solutions introduce cookie-matching challenges that can reduce bid rates.
The waterfall has lower per-call latency, but if multiple sources pass before one fills, the cumulative latency can actually exceed a well-configured header bidding setup.
Publishers typically see 20-50% higher CPMs after implementing header bidding compared to a waterfall setup. The exact increase depends on factors like traffic volume, geographic mix, number of demand partners, and inventory type.
Has the waterfall disappeared entirely? Not quite. While header bidding has replaced waterfalls for most web publishers, waterfall-style mediation is still used in some mobile app environments. However, even app mediation platforms are increasingly adopting in-app bidding (the app equivalent of header bidding).
Header bidding does add some page latency (typically 500ms-3s). This is managed through timeout settings, limiting the number of bidders, and implementing lazy loading. Server-side header bidding reduces client-side impact significantly.
Most publishers see optimal results with 5-10 partners. Beyond that, incremental revenue gains diminish while latency increases. The ideal number depends on your traffic volume and geographic distribution.
Server-side header bidding moves the auction from the user's browser to a server (e.g., Prebid Server, Amazon TAM). This reduces page latency but can reduce match rates since server-side auctions have less access to browser cookies. Many publishers use a hybrid approach.
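A hybrid setup can be expressed in Prebid.js via `s2sConfig`: bidders listed there are auctioned through Prebid Server, while the rest keep bidding from the browser. The sketch below is illustrative only; the account ID and bidder names are placeholders, and `defaultVendor` presets the server endpoint.

```javascript
// Illustrative hybrid configuration: bidderA and bidderB move server-side
// (lower page latency, possibly lower match rates); any other configured
// bidders continue to bid client-side. Values are placeholders.
pbjs.que.push(function () {
  pbjs.setConfig({
    s2sConfig: [{
      accountId: "example-account",     // placeholder Prebid Server account
      bidders: ["bidderA", "bidderB"],  // these two run through Prebid Server
      defaultVendor: "appnexus",        // preset endpoint and sync URLs
      timeout: 500,                     // server-side auction timeout (ms)
    }],
  });
});
```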
Red Volcano monitors adoption of every major ad technology, SSP, and header bidding solution. See market share trends, publisher-level data, and competitive intelligence.