Interstitial ad frequency is the single most-requested test from mobile gaming teams — and for good reason. Show ads too often and you earn more per session but drive players away; show them too rarely and you leave revenue on the table. There is no theoretical right answer. Fleack lets you run a controlled experiment on the actual backend config your app already fetches, with no SDK changes, no binary update, and no risk of breaking existing sessions.

What you’ll measure

| Variant | Interstitial frequency | Hypothesis |
| --- | --- | --- |
| Control | Whatever your backend currently returns | Baseline. |
| Variant A | +30% (more frequent) | Higher ad revenue per session; may hurt D7 retention. |
| Variant B | −30% (less frequent) | Lower revenue per session; may improve D7 retention and lift long-term ARPDAU. |
Primary metric: D7 retention (binary)
Secondary metric: ARPDAU delta at D7 (continuous scalar)

Pre-flight check

Before creating any levers, confirm that your backend exposes interstitial frequency as a discrete field in a config-candidate endpoint. Open the Fleack backoffice, navigate to Endpoints, and look for your monetization config endpoint. Click into its body sample and scan for paths like:
  • data.ads.interstitial_freq
  • data.monetization.interstitial.frequency
  • ads.interstitial.cooldown_seconds
If the frequency value is computed client-side from a level number or hardcoded in the binary, you cannot test it with Fleack yet. Move the value to a backend config endpoint first, then return here.
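The pre-flight check boils down to: can you walk a dotted path through the endpoint's JSON body and land on a discrete value? A minimal sketch of that check — the sample payload and path are illustrative, not your real config:

```python
def get_path(payload: dict, dotted_path: str):
    """Walk a dotted path like 'data.ads.interstitial_freq' through nested JSON."""
    node = payload
    for key in dotted_path.split("."):
        if not isinstance(node, dict) or key not in node:
            return None
        node = node[key]
    return node

# Sample body for illustration -- in practice, fetch your monetization config
# endpoint and load its JSON response instead.
payload = {"data": {"ads": {"interstitial_freq": 4}}}

value = get_path(payload, "data.ads.interstitial_freq")
# If this returns None, the value isn't a discrete backend field and can't be
# a Fleack lever yet.
print(value)
```

If the lookup returns `None` for every candidate path, that is the signal the value is computed client-side and needs to move to the backend first.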
The endpoint’s classification must read config-candidate — the same response goes to every user, which is what makes a lever testable. If it reads user-data or mixed, check whether you’re looking at the right endpoint.

Main workflow

1

Declare or confirm the lever

Open the Levers page. If Fleack’s AI enrichment already detected an interstitial frequency lever, it will appear under a label like “Frequency” or “Interstitial cadence”. Click it, verify the path and current value match your reality, and edit the label if needed.

If no lever exists yet, click + New lever:
  1. Pick the endpoint that returns your monetization config.
  2. Use the path picker to search for interstitial and click the matching path.
  3. Fill in the lever details:
    • Label: Interstitial frequency
    • Type: frequency
    • Description: e.g. Number of rounds between interstitial ads.
    • Test suggestions: values reasonable for your genre, such as 3, 4, 5, 6, 8
The lever now appears in the catalog with the live backend value shown alongside it.
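Conceptually, the form above just fills in a small record. As a sketch (these field names are illustrative, not Fleack's API; the path is the hypothetical one from the pre-flight check):

```python
# Hypothetical lever record mirroring the form fields above.
lever = {
    "endpoint": "GET /api/get_appsettings",      # endpoint returning monetization config
    "path": "data.ads.interstitial_freq",        # illustrative path from the pre-flight check
    "label": "Interstitial frequency",
    "type": "frequency",
    "description": "Number of rounds between interstitial ads.",
    "test_suggestions": [3, 4, 5, 6, 8],         # values reasonable for your genre
}
print(lever["label"])
```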
2

Set up the test

From the lever detail page, click Test.

Configure the test as follows:
  • Variant A value: current value × 1.3, rounded to the nearest integer.
  • Variant B value: current value × 0.7, rounded to the nearest integer.
  • Allocation: 33% / 33% / 34% (control gets the remainder automatically).
  • Segment: leave at All users for your first run, or scope to Tier-1 countries if you want a cleaner monetization read.
  • Primary metric: Retention day 7 — select the endpoint your app calls on session start (e.g. GET /api/get_appsettings or GET /api/session/start).
  • Secondary metric: Scalar delta on arpu, observation window 7 days. This requires profile attributes from your user-data endpoints — see Segments for how Fleack builds those.
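The ±30% arithmetic is worth making explicit, including the rounding. A sketch:

```python
def variant_values(current: int) -> tuple[int, int]:
    """Variant values from the live backend value, rounded to the nearest integer."""
    variant_a = round(current * 1.3)  # Variant A: current value x 1.3
    variant_b = round(current * 0.7)  # Variant B: current value x 0.7
    return variant_a, variant_b

print(variant_values(6))  # (8, 4)
```

For small current values the rounded variants can collapse toward the control (e.g. a current value of 3 gives 4 and 2), so sanity-check that all three arms actually differ before launching.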
Click Launch. The test goes live immediately on every shipped app version.
3

Watch the results

The test detail page updates in real time. For a mobile game with 50K+ DAU you can expect:
  • First exposure rows within minutes of launch
  • 1,000+ exposures per variant within an hour
  • An early D2–D3 retention signal within 2–3 days
  • A statistically meaningful D7 read between days 7 and 14
Per-variant panels show exposure counts, the running D7 retention rate (marked not yet eligible for users exposed fewer than 7 days ago), and Bayesian win probability vs control once each variant accumulates 30+ exposures.
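Fleack's exact win-probability computation isn't spelled out here, but the standard Bayesian treatment of a binary metric like D7 retention is easy to sketch: put a Beta posterior on each arm's retention rate and estimate P(variant beats control) by Monte Carlo. The counts below are hypothetical:

```python
import random

def win_probability(var_retained, var_exposed, ctl_retained, ctl_exposed,
                    draws=100_000, seed=0):
    """P(variant retention > control retention) under Beta(1, 1) priors."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        # Posterior draw for each arm: Beta(1 + successes, 1 + failures).
        p_var = rng.betavariate(1 + var_retained, 1 + var_exposed - var_retained)
        p_ctl = rng.betavariate(1 + ctl_retained, 1 + ctl_exposed - ctl_retained)
        wins += p_var > p_ctl
    return wins / draws

# Hypothetical: 1,200 eligible exposures per arm, D7 retention 21% vs 18%.
print(win_probability(252, 1200, 216, 1200))  # comfortably above 0.9
```

This also shows why the 30-exposure minimum matters: with tiny counts the posteriors overlap heavily and the win probability hovers near 50% regardless of the true effect.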
Early D2 reads are useful for spotting a disaster (e.g. Variant A tanking retention by 15+ points) but are not grounds for promotion. Wait for the D7 window.
4

Make the call

Use these thresholds to decide when to act:
| Verdict | Condition | Action |
| --- | --- | --- |
| Promote | Win probability vs control ≥ 90% AND ≥ 14 days of data | Click Promote |
| Stop | Win probability falls below 10% | Click Stop variant |
| Wait | No clear difference yet | Give it more exposures |
D7 retention requires the full 7-day window per user, so the earliest you can have a clean read is day 8 of the test — and you need enough cohorts to stabilise variance, hence the 14-day minimum.
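The decision table collapses to a tiny rule; both conditions on the Promote row must hold. A sketch:

```python
def verdict(win_prob: float, days_of_data: int) -> str:
    """Map win probability vs control and test age onto the decision table."""
    if win_prob >= 0.90 and days_of_data >= 14:
        return "promote"
    if win_prob < 0.10:
        return "stop"
    return "wait"

print(verdict(0.93, 15))  # promote
print(verdict(0.05, 9))   # stop
print(verdict(0.93, 10))  # wait: win probability is there, but not 14 days of data
```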
A common outcome: Variant A (more ads) wins on per-session revenue but loses on D7 retention; Variant B (fewer ads) is flat on retention but up on long-term ARPDAU. Which variant is the “real winner” depends on your LTV model — that’s a business decision, not a statistics question.
5

Promote the winner

From the test detail page, click Promote on the winning variant. Fleack immediately routes 100% of traffic to that value, moves the test to completed, and updates the lever’s effective value in the catalog.

If you prefer a staged rollout — say 50% first, then 100% — pause the test and adjust the allocation manually instead of using one-click promote.

Common pitfalls

  • Don’t test before D2 retention is healthy. If your game has a 30% D2 drop-off baseline, ad frequency tests will produce noisy, uninterpretable swings. Fix the funnel first; test the cadence second.
  • Watch for genre confound. Hypercasual players tolerate more interstitials than mid-core players. A cadence that wins on a hypercasual title does not automatically win on a mid-core one — run the test on each title separately.
  • Account for ad fill rate, not just frequency. Increasing the number of interstitial slots doesn’t automatically increase revenue if your mediation partner’s fill rate at higher slot counts drops below 70%. Check the fill report before declaring victory.
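The fill-rate pitfall is just arithmetic: expected interstitial revenue per session scales with slots × fill rate × eCPM, so adding slots can lose to fewer slots once fill degrades. A sketch with hypothetical numbers:

```python
def revenue_per_session(slots: int, fill_rate: float, ecpm: float) -> float:
    """Expected interstitial revenue per session.

    Filled impressions = slots * fill_rate; eCPM is revenue per 1,000 impressions.
    """
    return slots * fill_rate * ecpm / 1000

# Hypothetical: 4 slots at 92% fill vs 6 slots where fill drops to 60%.
few_slots = revenue_per_session(4, 0.92, 12.0)
many_slots = revenue_per_session(6, 0.60, 12.0)
print(few_slots > many_slots)  # True: more slots, lower fill, less revenue
```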

A/B test in-app pricing

Test bundle composition and bonus content for a fixed-price IAP without touching the SKU tier.

A/B test an onboarding flow

Reorder and adjust onboarding steps to lift D1 retention on new installs.