Reporting Discrepancies

Reporting discrepancies are common and expected when multiple systems are used to measure line item delivery. These include differences between the user interface and offline reports.

When an ad server delivers line items hosted by a third party, reporting discrepancies between the two systems are expected, and campaign variances of up to 20% are common.

Discrepancies may result from:

  • Latency: Lag between an initial line item request and the appearance of the creative can lead to differences in counts. For instance:
    • A user will often navigate away after the browser receives the Display & Video 360 line item request but before the third party responds with the requested creative.
    • A user may click on a link but navigate elsewhere before the landing page has loaded.
  • Network connection and server reliability: A third-party ad server may fail briefly or encounter an issue that prevents it from logging an impression.
  • Ad blockers: Ad-blocking software can prevent the line item from being delivered by the third party after Display & Video 360 has already counted an impression.
  • Low impression goals: A small numerical discrepancy can cause a high percentage discrepancy if the line item delivered few total impressions.
    • For example, if a campaign delivers 100 impressions per day, a discrepancy of 30 impressions on a single day is a 30% discrepancy for that day, even though the actual number of missed impressions is low.
  • Filtering: Ad servers have different methods for filtering impressions from spammers, bots, spiders, back-to-back clicks, link analyzers, and other automated or non-representative web traffic.
  • Different measurement providers and methodologies: Discrepancies can arise when measurement providers use different methodologies to count the same activity.
  • Attribution models: Attribution models define how conversions are counted and which impressions or clicks get credit for those conversions. Because each model counts conversions differently, totals will differ between models. When comparing two data sources (including the user interface vs. downloaded reports), ensure the attribution models are identical.
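The low-impression-goals point above can be sketched numerically. This is a minimal illustration (the function name and counts are hypothetical, not part of any reporting API) showing how the same 30-impression gap yields very different percentage discrepancies at different delivery volumes:

```python
def discrepancy_pct(ad_server_count: int, third_party_count: int) -> float:
    """Percentage discrepancy relative to the ad server's count."""
    return abs(ad_server_count - third_party_count) * 100 / ad_server_count

# 30 missing impressions out of 100 delivered in a day: a 30% discrepancy.
print(discrepancy_pct(100, 70))        # prints 30.0
# The same 30-impression gap on a 10,000-impression day: only 0.3%.
print(discrepancy_pct(10_000, 9_970))  # prints 0.3
```

The absolute gap is identical in both cases; only the denominator changes, which is why small line items show alarming percentages.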
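To see why attribution models alone produce discrepancies, consider one hypothetical conversion path counted under two common models. This sketch is illustrative only; the model names and event labels are assumptions, not Display & Video 360 settings:

```python
from collections import Counter

# One user's path to a single conversion: two impressions, then a click.
path = ["A_impression", "B_impression", "C_click"]

def last_touch(path):
    """All conversion credit goes to the final interaction."""
    return Counter({path[-1]: 1.0})

def linear(path):
    """Conversion credit is split evenly across every interaction."""
    share = 1.0 / len(path)
    return Counter({event: share for event in path})

print(last_touch(path))  # C_click receives the full 1.0 credit
print(linear(path))      # each of the three events receives ~0.33
```

Both models attribute exactly one conversion in total, but per-line-item conversion counts differ, so reports built on different models will not reconcile.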
