Before you start
You need three things:

- An app whose backend is reachable at a public hostname you control — for example, api.your-game.com.
- DNS access to that hostname so you can add a CNAME record.
- A Fleack account. If you don’t have one yet, request access.
If your backend is bundled into your app binary, or hosted on a third-party domain you don’t control (such as firebaseio.com), Fleack can’t intercept it. That’s a deliberate limit, not a bug — see When Fleack is not the right tool.

Setup
Create your organisation
After signing in to the backoffice, your organisation is created automatically with a default tenant_id. You’ll see it on the dashboard — it’s the identifier Fleack uses to route your traffic and store your data.

If you have multiple apps (different brands or bundle IDs), create one tenant per app using the org switcher. Each tenant gets its own DNS endpoint and its own isolated set of tests.

Point your DNS at Fleack
This is the only infrastructure change required. Add a CNAME record on the hostname your app already calls, pointing to your Fleack endpoint. For example, if your app calls api.your-game.com, update that hostname’s CNAME target to api-acme.fleack.io (using your actual tenant ID).

Once DNS propagates, Fleack receives your app’s traffic, forwards every request unchanged to your real backend, and returns the response byte-for-byte. Until you launch a test, nothing is modified.

Verify the proxy is working with a quick curl. The response should be identical to what your backend returns directly. Check the headers — Fleack adds an x-fleack-tenant header so you can confirm the request flowed through.

Send some traffic
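As a sketch (the hostnames, TTL, and /v1/config path below are placeholders; substitute your real hostname, tenant ID, and an endpoint your app actually calls), the record and the verification step look like this:

```
# Zone-file form of the CNAME record (placeholder names):
#   api.your-game.com.  300  IN  CNAME  api-acme.fleack.io.

# After propagation, fetch any endpoint through the proxy and
# look for the x-fleack-tenant header Fleack adds:
curl -sI https://api.your-game.com/v1/config | grep -i x-fleack-tenant
```

If the header is present, traffic is flowing through Fleack; the response body should still match what your backend returns directly.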
Open your app and navigate through the screens that make API calls to your backend — config screens, home screens, any view that fetches settings or game parameters. Within a few seconds those requests appear in the backoffice under Endpoints.

Fleack classifies each endpoint automatically based on observed responses:
| Classification | What it means | Testable? |
|---|---|---|
config-candidate | Same response across all users | Yes |
user-data | Response varies per user | No — used for segmentation only |
mixed | Some shared, some per-user fields | Case by case |
unknown | Not enough traffic to classify yet | Send more traffic and wait |
Only config-candidate endpoints can carry tests. This is a safety guard — rewriting per-user transactional data would be a product bug, not an experiment.

If an endpoint you expect to be testable shows up as unknown or mixed, trigger it from a few more devices or user accounts. Classification improves as Fleack sees more response variety.

Pick a lever
Open the Levers page. For each config-candidate endpoint, Fleack either auto-detects testable parameters (when AI enrichment is enabled) or lets you create them manually.

To create a lever manually:

- Click + New lever.
- Select the endpoint that returns the parameter you want to test.
- Click the parameter in the response tree — for example, data.gems_reward = 30.
- Give it a label (e.g. “Gems reward amount”), choose a type (number, price, frequency, color, toggle, or text), and optionally set starter variant values.
- Click Save.
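For orientation, a response carrying the lever above might look like this (a hypothetical shape; your backend’s actual JSON will differ):

```json
{
  "data": {
    "gems_reward": 30
  }
}
```

Clicking gems_reward in the response tree selects the path data.gems_reward, which is the value your test variants will rewrite.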
Launch your first test
From the lever detail page, click Test. The test creation dialog opens with the lever pre-selected.

Configure the test:

- Variants — Fleack proposes a control (your current value) and one or two alternatives. Adjust the values to whatever you want to test, for example gems_reward = 30 (control) vs gems_reward = 50 (variant A).
- Allocation — split traffic between variants. Defaults to an even split.
- Segment — leave this as “All users” for your first test. You can target by platform, country, or user-level attributes once you have a baseline.
- Metric — define what success looks like: conversion on a specific endpoint (e.g. the user hits the purchase endpoint), day-N retention, or a revenue attribute.
- Click Launch.
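Allocation splits are typically made sticky, so a given user sees the same variant on every request. As a minimal sketch of that idea (this is an illustration of deterministic bucketing in general, not Fleack’s actual algorithm; the function and IDs are hypothetical), a user and test ID can be hashed into a stable bucket:

```python
import hashlib

def assign_variant(user_id: str, test_id: str, split=(0.5, 0.5)) -> int:
    """Deterministic bucketing sketch: hash user + test IDs into [0, 1)
    and walk the cumulative allocation split. Illustrative only."""
    digest = hashlib.sha256(f"{test_id}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # stable value in [0, 1]
    cumulative = 0.0
    for index, share in enumerate(split):
        cumulative += share
        if bucket < cumulative:
            return index
    return len(split) - 1

# The same user always lands in the same variant for a given test:
first = assign_variant("user-42", "gems-test")
again = assign_variant("user-42", "gems-test")
```

Because assignment depends only on the IDs, no per-user state has to be stored to keep exposures consistent across sessions.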
Read the results
Open the test detail page. Within minutes you’ll see exposures counted per variant. As conversions accumulate, Fleack displays a Bayesian win probability for each variant:

- 87% likely to beat control — keep running, build more confidence.
- 96% likely to beat control — ready to promote.
- No clear winner — wait for more data, or stop and revisit your hypothesis.

The test resolves to one of four verdicts:

- Winner — the variant reaches ≥ 90% win probability vs the control.
- Control wins — the control reaches ≥ 90% probability of beating the variant.
- No difference — neither side has hit the threshold yet.
- Not enough data — fewer than 30 total exposures; Fleack declines to give a verdict.
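To make “win probability” concrete, here is a minimal sketch of one standard way such a number can be computed: a Beta-Bernoulli model with Monte Carlo sampling. This illustrates the concept only; it is not Fleack’s actual results engine, and the conversion counts are made up.

```python
import random

def p_variant_beats_control(control, variant, draws=20000, seed=0):
    """Monte Carlo estimate of P(variant rate > control rate) under
    Beta(1, 1) priors. An illustrative sketch, not Fleack's engine.
    control and variant are (conversions, exposures) pairs."""
    (c_conv, c_exp), (v_conv, v_exp) = control, variant
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        c = rng.betavariate(1 + c_conv, 1 + c_exp - c_conv)
        v = rng.betavariate(1 + v_conv, 1 + v_exp - v_conv)
        if v > c:
            wins += 1
    return wins / draws

# Hypothetical data: 120/1000 conversions on control vs 150/1000 on the variant.
p = p_variant_beats_control((120, 1000), (150, 1000))
```

With these made-up counts the estimate lands well above the 90% promotion threshold; with identical counts on both sides it hovers around 50%, which is why low-traffic tests report no clear winner.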
What you just did
You added one DNS record. You picked a parameter your backend already returns. You shipped a test that reaches every user on every app version — instantly, without a release. That’s the entire integration.

If anything in this guide didn’t work end-to-end in five minutes, that’s on us. Email contact@fleack.io and describe where you got stuck.
Next steps
Core concepts
Understand what endpoints, levers, segments, exposures, and the results engine actually do under the hood.
How integration works
DNS details, fail-open behaviour, latency budget, and how Fleack handles your traffic at the edge.