Viewability measures whether an ad could have been seen. It tells you nothing about whether it was. The advertisers outperforming in 2026 have moved beyond viewability to measure what actually drives brand outcomes: attention time, active dwell, and interaction probability. Here is how to measure them and what to do with the data.
The MRC viewability standard (50 percent of pixels in view for 1 second for display, 2 seconds for video) was designed as a fraud detection mechanism, not a quality metric. It answers the binary question "could this ad theoretically have been seen?" A 50 percent in-view ad held for exactly one second while the user's eyes were on a second screen passes the viewability standard. An ad that captures 8 seconds of active attention from a highly engaged user passes the same standard. Viewability cannot distinguish between them.
Despite this, viewability remains the dominant quality metric in most media plans. Buyers negotiate "100 percent viewable" inventory guarantees and treat hitting those guarantees as evidence of ad quality. But the correlation between viewability scores and brand outcomes is weak, which is why so many buyers report strong viewability numbers sitting alongside flat brand metrics.
Viewability tells you whether your ad was on screen. Attention metrics tell you whether anyone was looking. These are different questions, and the second one is the one that drives brand outcomes.
Active attention time measures the duration during which a user is actively engaging with the content surrounding an ad (or the ad itself), defined as eyes-on-screen with no multitasking, page visible, and content interaction signals present. This metric is measured through a combination of computer vision (where panel data is available), engagement proxies (scroll velocity, interaction events, audio state), and predictive models trained on eye-tracking panel data.
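To make the proxy approach concrete, here is a minimal sketch of how engagement signals might be rolled up into an estimated active attention time. The signal names, weights, and thresholds are illustrative assumptions, not any vendor's methodology; production systems train these rules against eye-tracking panel data rather than hand-setting them.

```python
from dataclasses import dataclass

@dataclass
class EngagementSample:
    """One sampled second of exposure, built from client-side proxy signals."""
    page_visible: bool      # Page Visibility API: tab in the foreground
    scroll_velocity: float  # px/s; fast scrolling suggests the user is skimming past
    interacted: bool        # click, touch, hover, or keypress in the sample window
    audio_on: bool          # relevant for video formats

def estimate_active_attention(samples: list[EngagementSample],
                              max_scroll_velocity: float = 800.0) -> float:
    """Estimate active attention time (seconds) from 1-second proxy samples.

    A second counts as 'active' only if the page is visible and the user is
    interacting, listening, or reading at a non-skimming scroll speed.
    The 800 px/s cut-off is an illustrative assumption.
    """
    active_seconds = 0
    for s in samples:
        if not s.page_visible:
            continue
        if s.interacted or s.audio_on or s.scroll_velocity < max_scroll_velocity:
            active_seconds += 1
    return float(active_seconds)
```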
The benchmark for meaningful brand impact starts at 2.5 seconds of active attention. Under that threshold, memory encoding for brand elements is significantly impaired. Above 7 seconds, recall and brand association rise sharply. Formats like rewarded video (25 to 30 seconds), high-impact display (3 to 6 seconds), and native content (8 to 15 seconds) consistently outperform standard display on this metric.
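Assuming you log an estimated attention time per impression (as sketched above), checking delivery against these benchmarks is straightforward. The band labels below are ours; the cut-offs simply restate the 2.5-second and 7-second thresholds.

```python
def attention_band(attention_seconds: float) -> str:
    """Classify an impression against the 2.5s / 7s benchmarks."""
    if attention_seconds < 2.5:
        return "below_encoding_threshold"  # memory encoding likely impaired
    if attention_seconds < 7.0:
        return "meaningful"                # enough for basic brand impact
    return "high_recall"                   # range where recall rises sharply

def share_meeting_threshold(attention_log: list[float], threshold: float = 2.5) -> float:
    """Share of impressions that cleared a given attention threshold."""
    if not attention_log:
        return 0.0
    return sum(t >= threshold for t in attention_log) / len(attention_log)

# Example: 40% of these impressions cleared the 2.5-second benchmark.
print(share_meeting_threshold([1.2, 0.8, 3.1, 9.4, 2.0]))  # 0.4
```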
Active dwell is the proportion of an ad's viewable exposure during which the user is actively engaged, not just present on the page. It normalises attention time against total viewable time to produce a ratio (0 to 100 percent) that reflects engagement quality independent of ad format duration. A 30-second video ad with 25 seconds of active engagement has an active dwell of 83 percent. A 5-second display ad with 3 seconds of active engagement has an active dwell of 60 percent.
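As a sanity check on the ratio, here is the calculation behind those two examples (the function and variable names are ours):

```python
def active_dwell(active_seconds: float, viewable_seconds: float) -> float:
    """Active dwell: share of viewable exposure spent actively engaged (0-100%)."""
    if viewable_seconds <= 0:
        return 0.0
    return 100.0 * min(active_seconds, viewable_seconds) / viewable_seconds

print(active_dwell(25, 30))  # ~83.3% -> the 30-second video example
print(active_dwell(3, 5))    # 60.0%  -> the 5-second display example
```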
Active dwell is useful for comparing across formats with different inherent durations. It also surfaces inventory quality: high active dwell publishers are those whose editorial environment is holding audience attention. These are the publishers whose adjacency has the most value for brand advertisers.
Interaction probability is a predictive score derived from historical engagement data: the probability that a given impression in a given context will result in some form of deliberate user engagement (not accidental clicking). This metric is more predictive of downstream brand actions than either viewability or raw attention time, because it captures the combination of audience intent and contextual relevance that produces genuine engagement.
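A minimal version of such a score can be built with a standard classifier over historical impression logs. The sketch below uses scikit-learn and made-up column names; it is one plausible implementation, not how any specific vendor derives its score.

```python
import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

# Hypothetical historical impression log: context features plus a label for
# deliberate engagement (accidental clicks assumed filtered out upstream).
log = pd.DataFrame({
    "format":          ["native", "display", "rewarded_video", "display", "native", "display"],
    "publisher_dwell": [72.0, 35.0, 88.0, 40.0, 65.0, 58.0],  # active dwell of the placement
    "attention_secs":  [9.0, 1.4, 27.0, 2.1, 7.5, 8.2],
    "engaged":         [1, 0, 1, 0, 1, 0],
})

features = ["format", "publisher_dwell", "attention_secs"]
model = make_pipeline(
    make_column_transformer(
        (OneHotEncoder(handle_unknown="ignore"), ["format"]),
        remainder="passthrough",
    ),
    LogisticRegression(max_iter=1000),
)
model.fit(log[features], log["engaged"])

# Interaction probability for a new impression in a given context.
new_impression = pd.DataFrame([{"format": "native", "publisher_dwell": 70.0, "attention_secs": 8.0}])
print(model.predict_proba(new_impression[features])[0, 1])
```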
The practical challenge with attention metrics is measurement infrastructure. Unlike viewability, which relies on standardised MRC-compliant measurement tags, attention measurement currently requires working with third-party attention vendors (Adelaide, Playground XYZ, Lumen Research, DoubleVerify's attention product) who each use different methodologies.
The most pragmatic starting point is to run attention measurement in parallel with your existing viewability measurement on a portion of your budget, specifically designed to calibrate the relationship between your attention scores and your business outcomes. Once you have that calibration, you can set attention-based KPIs and negotiate against them.
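If the parallel test logs an attention score and a brand outcome per test cell, the calibration itself can be as simple as a regression. The numbers and column meanings below are illustrative assumptions about your own reporting, not any vendor's schema.

```python
import numpy as np

# Hypothetical parallel-test results: one row per test cell, with the vendor's
# attention score and the measured brand outcome (lift in aided recall, in points).
attention_score = np.array([22.0, 31.0, 45.0, 58.0, 64.0, 80.0])
brand_lift_pp   = np.array([0.4,  0.9,  1.6,  2.3,  2.4,  3.1])

# Least-squares fit: expected brand lift per unit of attention score.
slope, intercept = np.polyfit(attention_score, brand_lift_pp, 1)
print(f"expected lift ~ {slope:.3f} * attention_score + {intercept:.3f}")

# Once calibrated, an attention-based KPI can be stated in outcome terms,
# e.g. the attention score at which expected lift crosses 2 points.
target_lift = 2.0
kpi_score = (target_lift - intercept) / slope
print(f"attention KPI: score >= {kpi_score:.0f}")
```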
Different attention vendors have different measurement approaches and inventory strengths. Panel-based eye-tracking (Lumen, the Attention Council) is the gold standard for accuracy but has limited scale. Algorithmic attention scoring (Adelaide AU metric, DV's attention) has broad scale but relies on proxy signals rather than direct eye-tracking data. For initial testing, we recommend using a panel-based vendor for calibration and an algorithmic vendor for scale, then using the calibration data to weight the algorithmic scores.
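One simple way to apply that weighting, assuming you have a subset of placements measured by both vendors: fit a linear mapping from the algorithmic score to the panel score on the overlap, then apply it to the algorithmic scores at scale. The numbers below are placeholders.

```python
import numpy as np

# Hypothetical overlap set: placements scored by both the panel-based vendor
# and the algorithmic vendor.
algo_score  = np.array([30.0, 42.0, 55.0, 61.0, 78.0])
panel_score = np.array([24.0, 39.0, 47.0, 60.0, 70.0])

# Fit the mapping algorithmic -> panel on the overlap ...
a, b = np.polyfit(algo_score, panel_score, 1)

# ... then rescale the full-scale algorithmic scores onto the panel's scale.
def calibrated(score: float) -> float:
    return a * score + b

full_scale_scores = [35.0, 50.0, 72.0]
print([round(calibrated(s), 1) for s in full_scale_scores])
```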
Once you have attention data, three optimisation levers are most impactful. First, shift budget toward publishers with consistently high active dwell scores, even if their viewability scores are similar to lower-dwell alternatives. Second, adjust creative pacing: brands consistently get more attention from ads where the brand is established in the first 3 seconds and the key message is reinforced in the last 5 seconds. Third, reduce frequency to the attention-optimal window (typically 3 to 5 exposures per week) rather than optimising for reach at the expense of engagement quality.
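The first lever, budget reallocation toward high-dwell publishers, can be scripted directly from the dwell data. The policy below (weight the budget by active dwell among publishers that clear a floor) is one plausible rule under assumed numbers, not a prescribed method.

```python
# Hypothetical publisher-level report: current spend and measured active dwell.
publishers = {
    "pub_a": {"spend": 40_000, "active_dwell": 74.0},
    "pub_b": {"spend": 35_000, "active_dwell": 41.0},
    "pub_c": {"spend": 25_000, "active_dwell": 62.0},
}

DWELL_FLOOR = 50.0  # drop placements whose environment isn't holding attention
total_budget = sum(p["spend"] for p in publishers.values())

eligible = {name: p for name, p in publishers.items() if p["active_dwell"] >= DWELL_FLOOR}
dwell_sum = sum(p["active_dwell"] for p in eligible.values())

# Reweight the full budget in proportion to active dwell among eligible publishers.
reallocation = {
    name: round(total_budget * p["active_dwell"] / dwell_sum)
    for name, p in eligible.items()
}
print(reallocation)  # pub_b falls below the floor; pub_a and pub_c absorb its budget
```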