How do I interpret my incremental lift files?

Chartable’s incremental lift report offers insight into how an ad performs against a control group customized to the advertiser’s campaign.

Incremental lift is especially useful for brands that advertise across a wide variety of media. Pixel-based attribution will pick up conversions attributed to podcast listeners, but without incremental lift you can’t determine whether exposure to a podcast ad campaign specifically influenced a response. Because this methodology creates a comparison against a control group running on the same infrastructure but without podcast advertising, incremental lift allows advertisers to precisely isolate the effect of a podcast campaign.

About the control group

We create a control group for your campaign via geographic and temporal filters:

  • Geographically, the impressions from the control group are selected based on the geographic distribution of your ad-exposed audience (down to the city level).
  • Temporally, the system also ensures the impressions come from the same period of time as your ad-exposed audience; this is easiest to see in your report’s “by-date” file.
  • We also filter based on IP address to ensure zero IP address overlap between your control group and ad-exposed group audiences for 2x your lookback window.

We do not filter based on show genre, because our system matches at the impression level rather than the show level: each of your ad campaign impressions was served at a particular place and time, and our goal is to reconstruct that place and time as closely as possible when we generate the control group.

This may mean that your control group’s impressions come from many more individual podcasts than your ad-exposed group, because again we are seeking relevance based on the impression, not the podcast.

Examining your report

The lift report will output three types of files:

  1. Overall report: An overall snapshot of your exposed vs. control group performance.
  2. By-date report: A breakdown of your exposed vs. control group performance by date.
  3. By-campaign report: A breakdown of your exposed vs. control group performance by campaign.

A note on integrations

If you’ve integrated Chartable measurement into multiple sources, you’ll receive these reports for each.

For example, if you’re running our JavaScript SDK on your website and have Chartable integrated with your mobile app’s MMP (mobile measurement partner), you’ll receive one set of these reports for your website and a second set for your mobile app.

Examining the Overall report

The simplest place to start with your lift data is the Overall report. This report shows the total results from both your control group and exposed group for the full period of the lift study, so you’ll see only one line of data, broken into columns such as the following:

  • Control Group Impressions
  • Exposed Group Impressions
  • Control Group Conversions
  • Exposed Group Conversions
  • Other columns, depending on which advanced endpoints you’ve opted to send to Chartable (such as “Add to Cart” or “Checkout” events)
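If you prefer to work with the export programmatically, here’s a minimal sketch in Python for loading those totals. The file name and exact column headers are assumptions based on the columns listed above, so adjust them to match your actual export.

    import csv

    # Hypothetical file name; substitute your actual Overall report export.
    with open("lift_overall_report.csv", newline="") as f:
        overall = next(csv.DictReader(f))  # the Overall report has a single data row

    # Column names assumed from the list above; adjust to match your file.
    control_impressions = int(overall["Control Group Impressions"])
    exposed_impressions = int(overall["Exposed Group Impressions"])
    control_conversions = int(overall["Control Group Conversions"])
    exposed_conversions = int(overall["Exposed Group Conversions"])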

Calculating rates and incremental lift based on the Overall report’s data

To calculate rates:

You can use these columns to calculate the response (conversion) rate for each of your attributed events. For example:

  1. Control Group Conversion Rate = Control Group Conversions / Control Group Impressions
  2. Exposed Group Conversion Rate = Exposed Group Conversions / Exposed Group Impressions
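Continuing the loading sketch above (same assumed variable names), these two rates translate directly:

    # Conversion rate = conversions / impressions, for each group.
    control_rate = control_conversions / control_impressions
    exposed_rate = exposed_conversions / exposed_impressions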

To calculate incremental lift:

Based on those rates, you can also calculate the percentage lift on those metrics:

Conversion Rate Lift = (Exposed Group Conversion Rate - Control Group Conversion Rate) / Control Group Conversion Rate

You can repeat this process for all of your attributed metrics: calculate the “rate” of each against impressions, then use the Exposed and Control Group results to calculate incrementality.
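As a sketch of that repetition, a small helper can compute the rate and lift for any pair of event columns. The “Add to Cart” column names in the commented example are hypothetical:

    def rate_and_lift(exposed_events, exposed_impressions,
                      control_events, control_impressions):
        # Rate of each group, then relative (percentage) lift vs. control.
        exposed_rate = exposed_events / exposed_impressions
        control_rate = control_events / control_impressions
        lift = (exposed_rate - control_rate) / control_rate
        return exposed_rate, control_rate, lift

    # Conversions first; repeat for each attributed metric, e.g. (hypothetical columns):
    # rate_and_lift(int(overall["Exposed Group Add to Cart"]), exposed_impressions,
    #               int(overall["Control Group Add to Cart"]), control_impressions)
    _, _, conversion_lift = rate_and_lift(exposed_conversions, exposed_impressions,
                                          control_conversions, control_impressions)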

Calculating expected performance vs. actual to understand incremental results

One of the most useful aspects of the lift report’s data is that it gives you the ability to play out the hypothetical scenario of “What would have happened if I didn’t run these podcast campaigns?” in order to better understand the actual incremental effect of your campaign.

To calculate expected results:

To calculate the expected result, simply apply the Control Group’s “rate” to the Exposed Group’s “impressions”, like so:

Expected Conversions = Exposed Group Impressions x Control Group Conversion Rate

To calculate incremental events:

Once that’s done, it’s straightforward to figure out how many incremental events your campaign drove:

Incremental Conversions = Exposed Group Conversions - Expected Conversions
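Continuing the same sketch, both formulas translate to one line each:

    # What the exposed audience would have produced at the control group's rate.
    expected_conversions = exposed_impressions * control_rate

    # Events above (or below) that baseline are the campaign's incremental effect.
    incremental_conversions = exposed_conversions - expected_conversions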

As before, you can calculate this incrementality for all of your attributed metrics.

Understanding positive vs. negative incrementality

Positive incremental results indicate that your campaign had a positive effect on this behavior against a Control group that didn’t receive your ads.

Negative results usually indicate that your audience may be over-saturated with advertising, which often happens when you’re aggressively messaging a similar audience across podcast, social, and other marketing channels.

Larger results on a percentage basis (in either the positive or negative direction) are more reliable than smaller results and should be weighted accordingly. Single-digit percentage differences are common and should be read as effectively flat, i.e. no major effect.

It’s common to see a mix of incremental results. A particularly common pattern is mild or even slightly negative incrementality at the top of the funnel, with stronger positive results on lower-funnel events. The cross-advertiser trend we’ve seen across lift reports suggests that podcast advertising’s most valuable effect is a qualifying influence on your audience: podcast ads aren’t necessarily the strongest drivers of website visitors, but they can be quite useful in ensuring that the visitors who do show up are engaged and more likely to convert.

Reviewing the By-Campaign and By-Date reports

Once you have a solid grounding in how the lift report works from your review of the Overall report, the By-Campaign and By-Date files are straightforward: they’re simply a more granular segmentation of the same data.

In the By-Date report, you can follow the control vs. exposed data to see how your podcast audience’s behavior differs from control around events such as sales, product launches, holidays, or even simply the day of the week, across the days within your lift study window.

In the By-Campaign report, expect a mixed bag of results depending on how many campaigns you have run: some campaigns will mirror the overall trend while others show very different positive or negative incrementality. This variance is entirely normal, and we recommend focusing your evaluation on the campaigns that show particularly consistent trends, meaning positive or negative results throughout most or all of your funnel events.

Looking at these higher-signal results can be a launching point for planning conversations: are you over-extended on a particular audience demographic? Would it be valuable to capitalize on a particular show’s performance by creating a show-specific promotional offer? Could you adjust your funding priorities in upcoming campaigns, or opt to test a handful of new shows in the next quarter to evaluate your performance against a new audience?

It’s important to note that your by-campaign exposed group performance is calculated against the overall control group’s performance, scaled to the number of impressions on each campaign. We do not re-create control groups for each campaign; rather, the control group as a whole mimics the impression distribution of your overall exposed group audience.
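As a minimal sketch of that scaling (assuming a hypothetical By-Campaign file name, a “Campaign” label column, and the overall control_rate computed earlier):

    import csv

    # Hypothetical file and column names; adjust to your actual export.
    with open("lift_by_campaign_report.csv", newline="") as f:
        for row in csv.DictReader(f):
            campaign_impressions = int(row["Exposed Group Impressions"])
            campaign_conversions = int(row["Exposed Group Conversions"])
            # There is no per-campaign control group: the overall control
            # rate is scaled to this campaign's exposed impressions.
            expected = campaign_impressions * control_rate
            incremental = campaign_conversions - expected
            print(row["Campaign"], round(incremental, 1))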
