What is A/B Testing?

A/B testing lets you route a percentage of live traffic to a test version while the rest continues to the current live version. Both versions produce execution logs, so you can compare their performance with real data before fully switching.

Prerequisites

Before starting an A/B test, you need:
  • A policy group with a currently deployed (live) version
  • A test version that has been published (ACTIVE status)
  • Both versions belonging to the same policy group

Starting an A/B Test

Console

Navigate to the policy group → click Start A/B Test → select the test version → set the traffic percentage.

CLI

lexq groups ab-test start \
  --group-id <groupId> \
  --version-id <testVersionId> \
  --traffic-rate 10

The --traffic-rate value is the percentage (1–99) of traffic routed to the test version. The remaining traffic stays on the live version.
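Because the CLI only accepts rates from 1 to 99, a small wrapper can validate the rate before invoking the command. This is a minimal sketch: `start_ab_test`, `grp-123`, and `ver-456` are illustrative names, and the command is echoed rather than executed so the sketch runs without the lexq CLI installed.

```shell
#!/bin/sh
# Sketch: validate the traffic rate before starting an A/B test.
# The 1-99 guard mirrors the documented --traffic-rate range.
start_ab_test() {
  group_id=$1; version_id=$2; rate=$3
  if [ "$rate" -lt 1 ] || [ "$rate" -gt 99 ]; then
    echo "traffic rate must be between 1 and 99" >&2
    return 1
  fi
  # Echoed for illustration; drop the echo to run the real command.
  echo lexq groups ab-test start \
    --group-id "$group_id" \
    --version-id "$version_id" \
    --traffic-rate "$rate"
}

start_ab_test grp-123 ver-456 10
```

Dropping the `echo` turns the wrapper into a real invocation once the IDs are filled in.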

Monitoring Results

While the A/B test is running, both versions generate execution logs. Compare them using:

lexq history stats --group-id <groupId>

Check success rates, match rates, latency, and output variable distributions across both versions.
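Once you have extracted a success rate for each version from the stats output, a simple comparison can gate the decision to ramp up. This sketch assumes the rates have already been pulled out of the stats output; the hard-coded values and the 1-point tolerance are illustrative, not product recommendations.

```shell
#!/bin/sh
# Sketch: gate a ramp-up decision on comparative success rates.
# LIVE_RATE and TEST_RATE stand in for values taken from
# `lexq history stats`; TOLERANCE is an illustrative threshold.
LIVE_RATE=97.8
TEST_RATE=97.5
TOLERANCE=1.0

# Ramp up only if the test version is within TOLERANCE
# percentage points of the live version's success rate.
ok=$(awk -v l="$LIVE_RATE" -v t="$TEST_RATE" -v tol="$TOLERANCE" \
  'BEGIN { if (l - t <= tol) print "yes"; else print "no" }')

if [ "$ok" = "yes" ]; then
  echo "test version within tolerance: safe to increase traffic"
else
  echo "test version underperforming: hold or stop the test"
fi
```

The same comparison works for match rates or latency; only the extraction step and the direction of the inequality change.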

Adjusting Traffic

Gradually increase the test version’s traffic as you gain confidence:
lexq groups ab-test adjust --group-id <groupId> --traffic-rate 30
lexq groups ab-test adjust --group-id <groupId> --traffic-rate 50
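The two adjustments above can be scripted as a staged ramp. This is a sketch: `grp-123` is a placeholder, the stages follow the 30% → 50% progression shown above, and the command is echoed so the loop runs without the CLI.

```shell
#!/bin/sh
# Sketch: ramp the test version's traffic in stages.
GROUP_ID=grp-123   # placeholder group ID

for rate in 30 50; do
  # Echoed for illustration; drop the echo to run for real.
  echo lexq groups ab-test adjust --group-id "$GROUP_ID" --traffic-rate "$rate"
  # In practice, pause here and review `lexq history stats`
  # before committing to the next stage.
done
```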

Stopping an A/B Test

lexq groups ab-test stop --group-id <groupId>
lexq groups ab-test stop --group-id <groupId> --force

Stopping reverts all traffic to the live version. The test version remains ACTIVE but no longer receives traffic.

Typical Workflow

1. Create a new DRAFT version with rule changes
2. Dry-run to validate the changes
3. Publish the new version (DRAFT → ACTIVE)
4. Start A/B test at 10% traffic
5. Monitor execution logs and compare metrics
6. Gradually increase: 10% → 30% → 50%
7. If satisfied, deploy the test version as the new live version
8. Stop the A/B test

A/B tests affect production traffic. Start with a low traffic percentage (5–10%) and monitor closely before scaling up.
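Steps 4 through 8 can be outlined with only the commands documented on this page (steps 1–3 use the versioning commands covered elsewhere in the docs). The IDs are placeholders and each command is echoed rather than executed, so this is an outline of the sequence, not a turnkey script.

```shell
#!/bin/sh
# Sketch of workflow steps 4-8 using only commands from this page.
GROUP_ID=grp-123      # placeholder
TEST_VERSION=ver-456  # placeholder

# Step 4: start conservatively at 10% traffic.
echo lexq groups ab-test start --group-id "$GROUP_ID" \
  --version-id "$TEST_VERSION" --traffic-rate 10

# Step 5: compare execution logs for both versions.
echo lexq history stats --group-id "$GROUP_ID"

# Step 6: ramp gradually, monitoring between stages.
for rate in 30 50; do
  echo lexq groups ab-test adjust --group-id "$GROUP_ID" --traffic-rate "$rate"
done

# Steps 7-8: deploy the winner (see Deployments), then stop the test.
echo lexq groups ab-test stop --group-id "$GROUP_ID"
```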

Next Steps

Batch Simulation

Test against historical data before running an A/B test.

Deployments

Deploy the winning version to production.