Guide: How to read your SmartAds lift report

Chartable’s incremental lift report offers insight into how an ad campaign performs against a control group built specifically for that campaign.

Incremental lift is especially useful for brands that advertise across a wide variety of media. Pixel-based attribution will pick up conversions attributed to podcast listeners, but without incremental lift you can't determine whether exposure to a podcast ad campaign specifically influenced a response. Because this methodology compares your campaign against a control group running on the same infrastructure but without podcast advertising, incremental lift allows advertisers to precisely isolate the effect of a podcast campaign. More detail is available at this link.

About the control group

We create a control group for your campaign using geographic, temporal, and IP-based filters (a simplified sketch of this selection logic follows the list):

  • Geographically, the impressions from the control group are selected based on the geographic distribution of your ad-exposed audience (down to the city level).
  • Temporally, the system ensures the control impressions come from the same period of time as your ad-exposed audience, which is particularly visible in your report’s “by-date” file.
  • We also filter based on IP address to ensure zero IP address overlap between your control group and ad-exposed group audiences for 2x your lookback window.
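
Chartable's exact selection logic is internal, but the three filters above can be sketched roughly as follows. This is a minimal illustration, not the production implementation; the Impression record, the is_control_candidate function, and all parameter names are hypothetical.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Impression:  # hypothetical impression record, for illustration only
        city: str
        ip: str
        timestamp: datetime

    def is_control_candidate(imp, exposed_cities, exposed_ips, window_start, window_end):
        """Return True if an impression is eligible for the control group.

        exposed_cities: cities in the ad-exposed audience's geo distribution
        exposed_ips: IPs seen in the ad-exposed group within 2x the lookback window
        """
        if imp.city not in exposed_cities:                     # geographic filter (city level)
            return False
        if not (window_start <= imp.timestamp <= window_end):  # temporal filter
            return False
        if imp.ip in exposed_ips:                              # zero IP-overlap rule
            return False
        return True

    imp = Impression("Chicago", "203.0.113.7", datetime(2023, 5, 1, 12, 0))
    print(is_control_candidate(imp, {"Chicago", "Austin"}, set(),
                               datetime(2023, 4, 1), datetime(2023, 5, 31)))  # True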

We do not filter based on show genre, because our system matches at the impression level rather than the show level: each of your ad campaign impressions was served at a particular place and time, and our goal is to reconstruct that place and time as closely as possible when we generate the control group.

This may mean that your control group’s impressions come from many more individual podcasts than your ad-exposed group’s, because again we are seeking relevance based on the impression, not the podcast.

Report files structure

The lift report will output two types of files:

  1. Lift Report: The main lift file, which provides an overall picture of your exposed vs. control group performance, segmented by day.
  2. Lift Report by Placement: A breakdown of your exposed vs. control group performance by campaign line-item and the placements (baked-in spots or dynamic-insertion tags) within that campaign.


In format, the primary difference between the two files is that the Lift Report by Placement is subdivided by campaign and by placement, making it a closer comparison against the Timeseries report.

So, why two files instead of just one? The main reason (as detailed in "About the control group" above) is that the control group audience is based on all impressions served across all of your campaigns, not on a per-campaign subdivision. Because of that, you can think of the Lift Report as illustrating the true performance of exposed vs. control, and the Lift Report by Placement as scaling that overall data to the size of each of your placements, which gives an indication of how each placement performed against the expectations set by the overall control group.
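
As a concrete illustration of that scaling, here is a small worked example. The numbers are invented, and the proportional impression-ratio scaling is our reading of the report's "Scaled" columns rather than a published Chartable formula:

    # Hypothetical overall totals from the Lift Report:
    exposed_impressions = 1_000_000
    control_impressions = 800_000
    control_visitors = 400  # overall Baseline Unique Visitors

    # Scale the control baseline up to the size of the exposed group:
    scaled_baseline = control_visitors * (exposed_impressions / control_impressions)
    print(scaled_baseline)  # 500.0

    # A placement that served 10% of all exposed impressions would then be
    # judged against 10% of that scaled baseline:
    placement_impressions = 100_000
    placement_expected = scaled_baseline * (placement_impressions / exposed_impressions)
    print(placement_expected)  # 50.0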

A note on Integrations

If you’ve integrated Chartable measurement into multiple sources, you’ll receive these reports for each.

For example, if you’re running our JavaScript SDK on your website (often referred to as the "web pixel") and have Chartable integrated with your mobile app’s MMP, you’ll receive one set of these reports for your website and a second set for your mobile app.


Examining the Lift Report file

Example file for download here: Lift Report example

You'll find that the layout of the overall Lift Report closely aligns with the SmartAds Timeseries report: at a high level, the report breaks out performance by day, with columns for all of your metrics. All advertisers will see the following metric columns:

  • Date: Reporting date (note: Chartable reporting is set to the UTC time zone).
  • Impressions: Impressions from your ad-exposed group. For more on how Chartable defines impressions, see this article.
  • Reach: Number of unique devices that downloaded your ad.
  • Baseline Impressions: Number of impressions (downloads) seen by the control group.
  • Baseline Reach: Number of unique devices associated with control group impressions.
  • Confirmed Unique Visitors: Number of unique page visits to the advertiser's website, de-duped down to the individual level. "Confirmed" is a subset of the expected total activity.
  • Estimated Unique Visitors: Number of unique page visits to the advertiser's website, de-duped down to the individual level. "Estimated" represents the expected total activity based on advertiser-specific ad delivery and performance. For more on "Confirmed" vs. "Estimated", see this article.
  • Baseline Unique Visitors: Number of Estimated Unique Visitors seen by the control group.
  • Baseline Unique Visitors (Scaled): Number of Estimated Unique Visitors seen by the control group, after adjusting for any variance between control group and exposed group impressions.
  • Incremental Unique Visitors: Number of incremental unique visitors driven, calculated by subtracting "Baseline Unique Visitors (Scaled)" from "Estimated Unique Visitors".
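
Reading the last three columns together: assuming the "(Scaled)" adjustment is a simple exposed-to-control impression ratio (our inference from the definitions above, not a documented formula), a day's row can be reconciled like this:

    def reconcile_row(row):
        """Re-derive the calculated columns from one Lift Report row (a dict)."""
        # Scale the control baseline by the exposed/control impression ratio.
        scaled = row["Baseline Unique Visitors"] * (
            row["Impressions"] / row["Baseline Impressions"]
        )
        # Incremental = Estimated minus the scaled baseline, per the definition above.
        incremental = row["Estimated Unique Visitors"] - scaled
        return scaled, incremental

    row = {
        "Impressions": 50_000,
        "Baseline Impressions": 40_000,
        "Baseline Unique Visitors": 120,
        "Estimated Unique Visitors": 180,
    }
    print(reconcile_row(row))  # (150.0, 30.0)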

The format seen for "Unique Visitors" will repeat for any other category of metric you're sending to Chartable. For example, advertisers who have installed the "Purchase" Advanced Endpoint will also see the following columns (the naming pattern is sketched in code after this list):

  • Confirmed Purchase
  • Estimated Purchase
  • Baseline Purchase
  • Baseline Purchase (Scaled)
  • Incremental Purchase
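
In other words, every metric category expands into the same five columns. A small helper makes the pattern explicit; "Purchase" comes from the example above, and any other Advanced Endpoint metric substitutes in the same way:

    def lift_columns(metric):
        """Column names the lift report produces for one metric category."""
        return [
            f"Confirmed {metric}",
            f"Estimated {metric}",
            f"Baseline {metric}",
            f"Baseline {metric} (Scaled)",
            f"Incremental {metric}",
        ]

    print(lift_columns("Purchase"))
    print(lift_columns("Unique Visitors"))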

Examining the Lift Report by Placement file 

Example file for download here: Lift Report by Placement example

As detailed above in the "Report files structure" section, the Lift Report by Placement file is one step closer to the structure of the normal Timeseries report. All advertisers will see the following columns:

  • Ad Campaign Name: Name of the overall campaign, based on the name you entered when creating the campaign.
  • Ad Campaign Id: A Chartable-generated ID for this campaign.
  • Ad Campaign Placement Name: Name of the placement. For Baked-in campaigns, this will be the name of the episode where an ad aired. For Dynamic campaigns, this will be the name of the pixel, which you can specify.
  • Ad Campaign Placement Id: A Chartable-generated ID for this placement.
  • Date: Reporting date (note: Chartable reporting is set to the UTC time zone).
  • Impressions: Impressions from your ad-exposed group. For more on how Chartable defines impressions, see this article.
  • Reach: Number of unique devices that downloaded your ad.
  • Confirmed Unique Visitors: Number of unique page visits to the advertiser's website, de-duped down to the individual level. "Confirmed" is a subset of the expected total activity.
  • Estimated Unique Visitors: Number of unique page visits to the advertiser's website, de-duped down to the individual level. "Estimated" represents the expected total activity based on advertiser-specific ad delivery and performance. For more on "Confirmed" vs. "Estimated", see this article.
  • Incremental Unique Visitors: Number of incremental unique visitors driven, following the same methodology as in the Lift Report file.

As in the Lift Report file, the format shown for "Unique Visitors" will continue for any other attributable metrics you're sending to Chartable.

Note that the "Baseline" columns are not present in this report, due to all by-placement calculations being based on the scaled control group. We do not re-create control groups for each campaign, rather the control group as a whole is mimicking the impression distribution of your overall exposed group audience.

It's very common to see mixed results depending on how many campaigns you have run, with some campaigns and placements mirroring the overall trend and others showing very different positive or negative incrementality. This variance is entirely normal, and we recommend focusing your evaluation on the items that show particularly consistent trends, meaning positive or negative results throughout most or all of your funnel events.

Looking at these higher-signal results can be a launching point for planning conversations: are you over-extended on a particular audience demographic? Would it be valuable to capitalize on a particular show’s performance by creating a show-specific promotional offer? Could you adjust your funding priorities in upcoming campaigns, or opt to test a handful of new shows in the next quarter to evaluate your performance against a new audience?

Understanding positive vs. negative incrementality


Positive incremental results indicate that your campaign had a positive effect on a given behavior relative to a control group that didn’t receive your ads.

Negative results usually indicate that your audience may be over-saturated with advertising, which often happens when you’re aggressively messaging a similar audience across podcast, social, and other marketing channels.

Larger results on a percentage basis (in either the positive or negative direction) are more reliable than smaller results and should be evaluated accordingly. Single-digit percentage differences are common and should be treated as effectively flat, indicating no major effect.
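
One way to apply that rule of thumb programmatically; the 10% "effectively flat" band below is our illustrative cutoff, not a Chartable-defined threshold:

    def lift_percent(estimated, scaled_baseline):
        """Incremental lift as a percentage of the scaled control baseline."""
        return 100 * (estimated - scaled_baseline) / scaled_baseline

    def classify(pct, flat_band=10.0):
        # Treat single-digit differences as noise, per the guidance above.
        if abs(pct) < flat_band:
            return "effectively flat"
        return "positive lift" if pct > 0 else "negative lift"

    pct = lift_percent(estimated=180, scaled_baseline=150)
    print(f"{pct:.1f}% -> {classify(pct)}")  # 20.0% -> positive lift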

It’s common to see a mix of incremental results. A particularly common pattern is mild or even slightly negative incrementality at the top of the funnel, with stronger positive results in lower-funnel events. The cross-advertiser trend we’ve seen across lift reports suggests that podcast advertising’s most valuable effect is a qualifying influence on your audience. To put it another way, podcast ads aren’t necessarily the strongest drivers of website visitors, but they can be quite useful in ensuring the visitors who do show up are engaged and more likely to convert.
