Fleack is built on five primitives. Once you understand how each one fits into the flow — from captured traffic to a statistically validated winner — you have a complete mental model of the platform. This page walks through each primitive at a high level and shows how they connect; the linked detail pages go deeper on each one.

How the primitives connect

Every A/B test in Fleack starts with traffic your app already sends to your backend. Fleack captures that traffic, classifies what it sees, and lets you declare parameters worth testing. From there, tests run automatically against the users you choose, and the results engine tells you when you have a winner.
                                ┌─────────────┐
   captured traffic  ─────────► │  endpoints  │
                                └──────┬──────┘
                                       │ classified
                                       ▼
                                ┌─────────────┐
                                │   levers    │ ← AI + heuristic + manual
                                └──────┬──────┘
                                       │
                                       ▼
                  ┌────────────┐ ┌─────────────┐ ┌────────────┐
                  │  segments  │►│    tests    │ │  metrics   │
                  └────────────┘ └──────┬──────┘ └────────────┘
                                       │ live
                                       ▼
                                ┌─────────────┐
                                │  exposures  │ → results engine
                                └─────────────┘
Read the diagram top to bottom and you have the entire platform in one view.

The five primitives

Endpoints are the API URL patterns your app calls. Fleack observes them and classifies each one — only endpoints with consistent, non-user-specific responses are eligible for testing. Learn more →

Levers are individual parameters inside an endpoint’s response that you’ve declared as testable — identified by a JSON path like data.gems_reward or ads.interstitial_frequency. Learn more →

Tests bind a lever to a set of variant values and a target audience. Each test runs with sticky per-user assignment so every user consistently sees the same variant. Learn more →

Segments define which users are eligible for a test, using built-in attributes like platform and country, plus profile attributes drawn from your own user-data endpoints. Learn more →

Results are computed by the Bayesian engine from exposure records — giving you a win probability per variant so you know exactly when to promote a winner. Learn more →
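To make the lever idea concrete, here is a minimal sketch of how a dot-separated JSON path such as data.gems_reward addresses one value inside an endpoint's response. The response shape and the get_by_path helper are illustrative assumptions, not Fleack's actual API:

```python
import json


def get_by_path(payload: dict, path: str):
    """Resolve a dot-separated JSON path like 'data.gems_reward'."""
    node = payload
    for key in path.split("."):
        node = node[key]
    return node


# A hypothetical endpoint response containing one testable parameter.
response = json.loads('{"data": {"gems_reward": 100, "level_cap": 50}}')

print(get_by_path(response, "data.gems_reward"))  # → 100
```

A lever is just such a path: it pins down exactly which value in the response the test is allowed to vary, leaving the rest of the payload untouched.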
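Sticky per-user assignment is commonly implemented by hashing a stable user identifier together with the test's identifier. The following is a sketch of that general technique under assumed names (assign_variant, user_id, test_id), not Fleack's internal implementation:

```python
import hashlib


def assign_variant(user_id: str, test_id: str, variants: list[str]) -> str:
    """Deterministically map a user to a variant.

    Hashing user_id together with test_id yields a stable bucket, so the
    same user always sees the same variant for a given test, while
    assignments remain independent across different tests.
    """
    digest = hashlib.sha256(f"{test_id}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]


# The same user and test always produce the same variant.
print(assign_variant("user-123", "gems-reward-test", ["control", "150_gems"]))
```

Because the mapping is a pure function of the identifiers, no per-user state needs to be stored to keep assignments consistent.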
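The win probability that the results engine reports can be understood through a standard Beta-Bernoulli model: give each variant a Beta posterior over its conversion rate, then estimate how often one variant's sampled rate beats the other's. This is a generic Monte Carlo sketch of that idea, not the engine's actual computation:

```python
import random


def win_probability(conv_a: int, users_a: int,
                    conv_b: int, users_b: int,
                    draws: int = 100_000) -> float:
    """Estimate P(variant B beats variant A) under Beta(1, 1) priors.

    Each draw samples a plausible conversion rate for both variants
    from their posteriors and counts how often B comes out ahead.
    """
    wins = 0
    for _ in range(draws):
        rate_a = random.betavariate(1 + conv_a, 1 + users_a - conv_a)
        rate_b = random.betavariate(1 + conv_b, 1 + users_b - conv_b)
        if rate_b > rate_a:
            wins += 1
    return wins / draws


# 30/100 conversions vs 10/100: B wins with very high probability.
print(win_probability(10, 100, 30, 100))
```

A common promotion rule is to wait until one variant's win probability crosses a threshold such as 95%, at which point the evidence for it is strong enough to ship.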

Explore each primitive

Endpoints

How Fleack classifies your API traffic and which endpoints are testable.

Levers

Declaring individual response parameters as testable targets.

Tests

Building variant experiments and managing their lifecycle.

Segments

Targeting specific audiences with built-in and profile attributes.

Results

Bayesian win probability and when to promote a winning variant.

Quickstart

Go from zero to a live test in under five minutes.