
What is Batch Simulation?

A Batch Simulation runs a policy version against a large set of historical execution data, then compares the results against a baseline version. Think of it as a regression test suite — before deploying a new version, you can see exactly how it would have performed on past traffic.
Batch Simulation vs Dry Run: A Dry Run tests one input at a time for quick iteration. A batch simulation tests hundreds or thousands of inputs at once and produces aggregate metrics (match rate, metric deltas, per-rule statistics). Use dry runs during development; use batch simulation before deployment.

What You Get

Match Rate: Percentage of inputs where at least one rule matched
Per-Rule Stats: How many times each rule matched and its metric contribution
Metric Delta: Difference in aggregate output between the baseline and the candidate
Policy Impact: Matched count delta, match rate delta, metric value delta
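For example, if the baseline matched 4,200 of 10,000 records (42%) and the candidate matched 4,600 (46%), the policy impact shows a matched count delta of +400 and a match rate delta of +4 percentage points; the metric delta compares the aggregated target variable (such as total discount_amount) across the same records. The numbers here are purely illustrative.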

Running a Simulation

Console

Navigate to a policy group → Simulation tab → select candidate version, baseline version, date range → click Start.

CLI

lexq analytics simulation start --json '{
  "policyVersionId": "<candidate-version-id>",
  "dataset": {
    "type": "HISTORICAL",
    "source": "EXECUTION_LOGS",
    "from": "2025-01-01",
    "to": "2025-01-31"
  },
  "options": {
    "baselinePolicyVersionId": "<baseline-version-id>",
    "includeRuleStats": true,
    "maxRecords": 10000,
    "metricConfig": {
      "targetVariable": "discount_amount",
      "aggregationType": "SUM"
    }
  }
}'
Simulations run asynchronously. Poll until status is COMPLETED or FAILED:
lexq analytics simulation status --id <simulationId>
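If you script this, a simple shell loop works. The sketch below assumes the status command prints the status value (COMPLETED, FAILED, etc.) somewhere in its output, which may differ from the actual output format:

#!/usr/bin/env bash
# Poll until the simulation finishes. Output parsing here is an assumption;
# adjust the grep to match the actual status output format.
SIMULATION_ID="<simulationId>"
while true; do
  STATUS_OUTPUT=$(lexq analytics simulation status --id "$SIMULATION_ID")
  echo "$STATUS_OUTPUT"
  echo "$STATUS_OUTPUT" | grep -qE 'COMPLETED|FAILED' && break
  sleep 30
done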

Dataset Types

HISTORICAL (source EXECUTION_LOGS): Re-run past execution inputs from the specified date range
MANUAL (source REQUEST_BODY): Provide inputs directly in the request body
UPLOADED (source S3_BUCKET): Use a pre-uploaded CSV or JSON dataset from S3
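For a handful of hand-written records, a MANUAL dataset keeps the inputs in the request itself. The sketch below reuses the start command shown above; the records field name and its shape are assumptions, so check the request schema before relying on it:

# Sketch only: the "records" field name and shape are assumptions.
lexq analytics simulation start --json '{
  "policyVersionId": "<candidate-version-id>",
  "dataset": {
    "type": "MANUAL",
    "source": "REQUEST_BODY",
    "records": [
      { "payment_amount": 150000, "customer_tier": "VIP", "is_first_purchase": true },
      { "payment_amount": 50000, "customer_tier": "REGULAR", "is_first_purchase": false }
    ]
  },
  "options": {
    "baselinePolicyVersionId": "<baseline-version-id>"
  }
}'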

File Upload Dataset

Upload a CSV or JSON file to use as simulation input. This is useful when you have custom test data that doesn’t come from execution history.

Supported Formats

CSV: The first row must be a header containing the fact keys. Each subsequent row is one test record.
payment_amount,customer_tier,is_first_purchase
150000,VIP,true
50000,REGULAR,false
80000,VIP,false
Type inference: Numbers, booleans (true/false), and strings are automatically detected. Empty values become null. Quoted fields with commas are supported.
Max file size: 10 MB
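A JSON upload with the same records would most naturally be an array of flat objects, one per test record. Treat the shape below as a sketch, since the exact accepted JSON layout isn't specified here:

[
  { "payment_amount": 150000, "customer_tier": "VIP", "is_first_purchase": true },
  { "payment_amount": 50000, "customer_tier": "REGULAR", "is_first_purchase": false },
  { "payment_amount": 80000, "customer_tier": "VIP", "is_first_purchase": false }
]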

Download a Template

Don’t know the expected columns? Download a template pre-filled with example values based on the version’s required facts:
In the New Simulation dialog, select File Upload (CSV / JSON) as dataset type. Click CSV or JSON under “Download template.”

Upload & Run Simulation

  1. Go to Batch Simulations → New Simulation
  2. Select policy group and target version
  3. Set Dataset Type to File Upload (CSV / JSON)
  4. Drag & drop your file or click to select
  5. Wait for the Uploaded badge
  6. Click Start Simulation

Managing Simulations

lexq analytics simulation list
lexq analytics simulation list --status COMPLETED
lexq analytics simulation cancel --id <simulationId>
lexq analytics simulation export --id <simulationId> --format json
lexq analytics simulation export --id <simulationId> --format csv
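A typical post-run flow is to find the finished simulation and pull its results for offline review, for example:

# Find completed runs, then export one for analysis.
lexq analytics simulation list --status COMPLETED
lexq analytics simulation export --id <simulationId> --format csv > simulation-results.csv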

Best Practices

  1. Always set a baseline. Without a baseline version, you only see absolute numbers — no deltas.
  2. Use a meaningful date range. Aim for at least 7 days of data so results reflect typical traffic rather than a single day's anomalies.
  3. Check maxRecords. Start with 5,000–10,000 for quick validation.
  4. Run simulation before every production deployment.
Batch simulation re-executes rules against historical input facts — it does not replay external side effects (webhooks, notifications). Integration calls are always mocked during simulation.

Next Steps

Dry Run

Quick single-input validation during development.

A/B Testing

Compare versions with live production traffic.