What is a Dry Run?
A Dry Run tests how your rules evaluate a single set of input facts against a specific policy version — without any side effects. No execution logs are written, no integrations are called, and no quotas are consumed. Think of it as a unit test for your rules.

Dry Run vs Batch Simulation: A dry run tests one input at a time for quick validation. Batch Simulation tests a version against hundreds or thousands of historical inputs and compares the results against a baseline version. Use dry runs for iterative development; use batch simulation for regression testing before deployment.
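The core idea — evaluate rules in memory, collect outputs, write nothing — can be sketched as follows. This is an illustrative model, not the engine's actual implementation; the Rule shape and field names are assumptions.

```python
from dataclasses import dataclass
from typing import Callable

Facts = dict[str, object]

@dataclass
class Rule:
    name: str
    condition: Callable[[Facts], bool]  # predicate over the input facts
    action: Callable[[Facts], Facts]    # produces output variables when matched

def dry_run(rules: list[Rule], facts: Facts) -> dict:
    """Evaluate rules against one set of input facts with no side effects."""
    output_variables: Facts = {}
    traces = []
    for rule in rules:
        matched = rule.condition(facts)
        traces.append({"rule": rule.name, "matched": matched})
        if matched:
            # Results accumulate in memory only: no logs, integrations, or quotas.
            output_variables.update(rule.action(facts))
    return {"outputVariables": output_variables, "executionTraces": traces}

result = dry_run(
    [Rule("vip-discount",
          lambda f: f.get("tier") == "VIP",
          lambda f: {"discountPct": 15})],
    {"tier": "VIP", "cartTotal": 120},
)
```

Because nothing outside the function is touched, the same input always produces the same result — which is what makes dry runs safe to repeat during development.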
When to Use Dry Run
- After creating or modifying rules — verify they match expected inputs
- Before publishing a DRAFT version — your safety net
- When debugging unexpected execution results — reproduce the scenario
- When comparing two versions with the same input (Dry Run Compare)
Running a Dry Run
Console
Navigate to a policy version → Dry Run tab → enter input facts → click Run.

CLI
| Flag | Description |
|---|---|
| --debug | Include execution traces and decision traces in the response |
| --mock | Mock external integration calls (webhooks, notifications, etc.) |
API
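The original request example is not reproduced here; the sketch below shows how a dry-run request body might be assembled. The endpoint path, version ID, and option names are assumptions — consult your API reference for the exact schema.

```python
import json

# Hypothetical endpoint and payload shape (illustrative only).
version_id = "ver_123"
url = f"/v1/policy-versions/{version_id}/dry-run"

payload = {
    "inputFacts": {"tier": "VIP", "cartTotal": 120},
    # Mirrors the CLI's --debug and --mock flags; field names are assumed.
    "options": {"debug": True, "mockIntegrations": True},
}

body = json.dumps(payload)
```

Send the payload with any HTTP client; since a dry run has no side effects, the request is safe to retry.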
Response
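A plausible response shape, built from the fields described below (outputVariables, execution traces, decision traces), might look like this. The exact schema is an assumption; only the three top-level field names come from this page.

```python
import json

# Illustrative dry-run response; field names beyond outputVariables,
# executionTraces, and decisionTraces are assumptions.
raw = """
{
  "outputVariables": {"discountPct": 15, "tags": ["vip"]},
  "executionTraces": [
    {"rule": "vip-discount", "conditionMatched": true}
  ],
  "decisionTraces": [
    {"rule": "vip-discount", "selected": true, "reason": "CONDITION_MATCHED"}
  ]
}
"""
response = json.loads(raw)
discount = response["outputVariables"]["discountPct"]
```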
Output Variables
The outputVariables map contains all values produced by the matched rules’ actions — discounts calculated, facts set, tags added, etc.
Execution Traces
Each rule produces an execution trace showing whether its condition matched.

Decision Traces
Decision traces show the final outcome for each rule — whether it was selected, skipped, or blocked. Possible reasons: CONDITION_MATCHED, CONDITION_NOT_MATCHED, BLOCKED_BY_MUTEX, DISABLED.
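When debugging, it is often useful to group the non-selected rules by reason. A small sketch, assuming decision-trace records shaped like the examples below (the field names are assumptions; only the reason codes come from this page):

```python
# Illustrative decision-trace records using the documented reason codes.
decision_traces = [
    {"rule": "vip-discount",   "selected": True,  "reason": "CONDITION_MATCHED"},
    {"rule": "bulk-discount",  "selected": False, "reason": "CONDITION_NOT_MATCHED"},
    {"rule": "flash-sale",     "selected": False, "reason": "BLOCKED_BY_MUTEX"},
    {"rule": "legacy-pricing", "selected": False, "reason": "DISABLED"},
]

# Group the rules that were skipped or blocked by their reason code.
skipped: dict[str, list[str]] = {}
for trace in decision_traces:
    if not trace["selected"]:
        skipped.setdefault(trace["reason"], []).append(trace["rule"])
```

A rule blocked by BLOCKED_BY_MUTEX had a matching condition but lost to a higher-priority rule in the same mutex group, so it shows up here rather than under CONDITION_NOT_MATCHED.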
Checking Requirements
Before running a dry run, check which facts the version expects.

Dry Run Compare

Compare how two versions evaluate the same input facts, side by side.

Next Steps
Batch Simulation
Test against hundreds of historical inputs at once.
Deployments
Deploy your validated version to production.

