## What you’ll measure
| Variant | Interstitial frequency | Hypothesis |
|---|---|---|
| Control | Whatever your backend currently returns | Baseline. |
| Variant A | +30% (more frequent) | Higher ad revenue per session; may hurt D7 retention. |
| Variant B | −30% (less frequent) | Lower revenue per session; may improve D7 retention and lift long-term ARPDAU. |
## Pre-flight check
Before creating any levers, confirm that your backend exposes interstitial frequency as a discrete field in a `config-candidate` endpoint. Open the Fleack backoffice, navigate to Endpoints, and look for your monetization config endpoint. Click into its body sample and scan for paths like:
- `data.ads.interstitial_freq`
- `data.monetization.interstitial.frequency`
- `ads.interstitial.cooldown_seconds`
The endpoint’s classification should read `config-candidate` — the same response goes to every user, which is what makes a lever testable. If it reads `user-data` or `mixed`, check whether you’re looking at the right endpoint.
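If you want to script this check against a saved response body, a minimal sketch in Python (the body and path below are hypothetical; substitute the path you actually found in the backoffice):

```python
import json

def get_path(obj, dotted_path):
    """Walk a nested dict by a dotted path, e.g. 'data.ads.interstitial_freq'."""
    for key in dotted_path.split("."):
        if not isinstance(obj, dict) or key not in obj:
            return None
        obj = obj[key]
    return obj

# Hypothetical config-candidate response body.
body = json.loads('{"data": {"ads": {"interstitial_freq": 4}}}')

value = get_path(body, "data.ads.interstitial_freq")
# A lever needs a discrete scalar here, not an object or a list.
assert isinstance(value, int), "expected a discrete numeric field"
print(value)  # 4
```

If the lookup returns a dict or `None`, the field is not addressable as a lever and you should keep scanning for a better path.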
## Main workflow
### Declare or confirm the lever
Open the Levers page. If Fleack’s AI enrichment already detected an interstitial frequency lever, it will appear under a label like “Frequency” or “Interstitial cadence”. Click it, verify the path and current value match your reality, and edit the label if needed.

If no lever exists yet, click + New lever:
- Pick the endpoint that returns your monetization config.
- Use the path picker to search for `interstitial` and click the matching path.
- Fill in the lever details:
  - Label: `Interstitial frequency`
  - Type: `frequency`
  - Description: e.g. `Number of rounds between interstitial ads.`
  - Test suggestions: values reasonable for your genre, such as `3, 4, 5, 6, 8`
### Set up the test
From the lever detail page, click Test. Configure the test as follows:
- Variant A value: current value × 1.3, rounded to the nearest integer.
- Variant B value: current value × 0.7, rounded to the nearest integer.
- Allocation: 33% / 33% / 34% (control gets the remainder automatically).
- Segment: leave at All users for your first run, or scope to Tier-1 countries if you want a cleaner monetization read.
- Primary metric: Retention day 7 — select the endpoint your app calls on session start (e.g. `GET /api/get_appsettings` or `GET /api/session/start`).
- Secondary metric: Scalar delta on `arpu`, observation window 7 days. This requires profile attributes from your `user-data` endpoints — see Segments for how Fleack builds those.
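The two variant values above can be computed mechanically; a small sketch, assuming a current frequency of 4 rounds between ads:

```python
def variant_values(current: int) -> dict:
    """±30% around the current value, rounded to the nearest integer."""
    return {
        "control": current,
        "variant_a": round(current * 1.3),  # more frequent interstitials
        "variant_b": round(current * 0.7),  # less frequent interstitials
    }

print(variant_values(4))  # {'control': 4, 'variant_a': 5, 'variant_b': 3}
```

At low base values the two variants can collapse onto the control (e.g. a current value of 2 gives 3 and 1); pick wider multipliers in that case so the arms actually differ.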
### Watch the results
The test detail page updates in real time. For a mobile game with 50K+ DAU you can expect:
- First exposure rows within minutes of launch
- 1,000+ exposures per variant within an hour
- An early D2–D3 retention signal within 2–3 days
- A statistically meaningful D7 read between days 7 and 14
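A back-of-envelope check on those exposure numbers, with assumed inputs (3 sessions per user per day is a guess for a casual title, not a Fleack figure):

```python
dau = 50_000
sessions_per_user = 3   # assumed average sessions per user per day
allocation_pct = 33     # per-variant traffic share from the test setup

exposures_per_variant_per_day = dau * sessions_per_user * allocation_pct // 100
exposures_per_variant_per_hour = exposures_per_variant_per_day // 24

print(exposures_per_variant_per_day)   # 49500
print(exposures_per_variant_per_hour)  # 2062, so 1,000+ in an hour is plausible on average
```

Real traffic is not uniform across the day, so treat the hourly figure as an order-of-magnitude estimate, not a guarantee for the launch hour.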
Early D2 reads are useful for spotting a disaster (e.g. Variant A tanking retention by 15+ points) but are not grounds for promotion. Wait for the D7 window.
### Make the call
A common outcome: Variant A (more ads) wins on per-session revenue but loses on D7 retention; Variant B (fewer ads) is flat on retention but up on long-term ARPDAU. Which variant is the “real winner” depends on your LTV model — that’s a business decision, not a statistics question.

Use these thresholds to decide when to act:
| Verdict | Condition | Action |
|---|---|---|
| Promote | Win probability vs control ≥ 90% AND ≥ 14 days of data | Click Promote |
| Stop | Win probability falls below 10% | Click Stop variant |
| Wait | No clear difference yet | Give it more exposures |
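The verdict table maps onto a small decision helper; a sketch that mirrors the thresholds above (the win-probability input comes from the test detail page — this is illustrative, not a Fleack API):

```python
def verdict(win_probability: float, days_of_data: int) -> str:
    """Promote / stop / wait thresholds from the table above."""
    if win_probability >= 0.90 and days_of_data >= 14:
        return "promote"
    if win_probability < 0.10:
        return "stop"
    return "wait"

print(verdict(0.93, 16))  # promote
print(verdict(0.93, 10))  # wait: strong signal, but not enough cohorts yet
print(verdict(0.06, 5))   # stop
```

Note that the two conditions are joined with AND for promotion: a high win probability at day 10 is still a “wait”, for the cohort-stability reason explained below.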
D7 retention requires the full 7-day window per user, so the earliest you can have a clean read is day 8 of the test — and you need enough cohorts to stabilise variance, hence the 14-day minimum.
### Promote the winner
From the test detail page, click Promote on the winning variant. Fleack immediately routes 100% of traffic to that value, moves the test to `completed`, and updates the lever’s effective value in the catalog.

If you prefer a staged rollout — say 50% first, then 100% — pause the test and adjust the allocation manually instead of using one-click promote.

## Common pitfalls
- Don’t test before D2 retention is healthy. If your game has a 30% D2 drop-off baseline, ad frequency tests will produce noisy, uninterpretable swings. Fix the funnel first; test the cadence second.
- Watch for genre confound. Hypercasual players tolerate more interstitials than mid-core players. A cadence that wins on a hypercasual title does not automatically win on a mid-core one — run the test on each title separately.
- Account for ad fill rate, not just frequency. Increasing the number of interstitial slots doesn’t automatically increase revenue if your mediation partner’s fill rate at higher slot counts drops below 70%. Check the fill report before declaring victory.
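To see why fill rate can erase a frequency win, a sketch with made-up mediation numbers (slot counts, fill rates, and eCPM here are illustrative assumptions, not Fleack data):

```python
def revenue_per_1k_sessions(slots_per_session: float, fill_rate: float, ecpm: float) -> float:
    """Expected interstitial revenue per 1,000 sessions.

    eCPM is revenue per 1,000 filled impressions.
    """
    filled_impressions = 1000 * slots_per_session * fill_rate
    return filled_impressions * ecpm / 1000

# More slots at a degraded fill rate can earn less than fewer slots at full fill.
before = revenue_per_1k_sessions(slots_per_session=3, fill_rate=0.95, ecpm=12.0)
after = revenue_per_1k_sessions(slots_per_session=4, fill_rate=0.65, ecpm=12.0)
print(f"{before:.2f} vs {after:.2f}")  # 34.20 vs 31.20
```

In this made-up scenario the extra slot loses money once fill drops to 65%, which is exactly the failure mode the fill report is there to catch.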
- **A/B test in-app pricing**: Test bundle composition and bonus content for a fixed-price IAP without touching the SKU tier.
- **A/B test an onboarding flow**: Reorder and adjust onboarding steps to lift D1 retention on new installs.