📘 Simulator & Testing Guide
Audience: Control authors, compliance officers, security engineers
Time: ~10 min to run your first simulation
The Control Simulator lets you validate a control against realistic inputs before it goes anywhere near production traffic. It uses semantic metadata from your connected data sources to automatically generate test payloads — no manual JSON required.
📌 Prerequisites
- At least one data source configured in Settings → Data Sources (recommended for semantic test generation)
- Semantic metadata refreshed at least once via Settings → Semantic Explorer
- The target control saved in draft or sandbox state
Troubleshooting: "Generate Test Case" is greyed out?
- Confirm the control has at least one valid condition referencing an input attribute
- Confirm at least one data source has synced (Settings → Data Sources → last sync timestamp)
- If still failing, check the browser network console for `POST /v1/pis/policies/{id}/generate-test-case`
🏗️ Workflow
Step 1 — Generate a semantic test payload
- Open a control → click Test (or go to Controls → Simulator)
- Click Generate Test Case
The simulator maps the control's attribute conditions to semantic-tagged columns and pre-fills realistic values — for example, PII emails, financial amounts, identity roles. You get a meaningful input payload without writing JSON.
- Review the generated payload in the Input panel
- Adjust any values you want to test specifically (e.g. change `user.department` to test a different team)
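As a sketch of what Step 1 produces, here is a hypothetical generated payload and a targeted tweak. The field names and values below are illustrative assumptions, not the simulator's actual output; your control's attribute conditions determine the real shape.

```python
# Hypothetical semantically generated payload (illustrative field names).
generated_payload = {
    "subject": {
        "email": "jane.doe@example.com",   # tagged as PII: email
        "department": "Engineering",       # identity/role attribute
    },
    "resource": {"amount": 1250.00},       # tagged as financial amount
}

# Adjust one value to exercise a specific branch, e.g. a different team,
# without mutating the original generated payload:
test_payload = {**generated_payload}
test_payload["subject"] = {**generated_payload["subject"], "department": "Finance"}
```

Copying before editing keeps the generated baseline intact so you can rerun it unchanged alongside your variant.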
Step 2 — Run the simulation
Click Run to evaluate the control against the input payload.
The result panel shows:
- Decision: `Allow` or `Deny`
- Matched condition path: which conditions evaluated to true
- Explainability output: plain-language summary (see below)
Troubleshooting: Decision is `undefined` or No Match? The control evaluates to undefined when none of its conditions match the input. This is treated as an implicit deny by the enforcement engine. Check:
- Attribute names in the control exactly match field paths in the input (e.g. `input.subject.department`)
- Add a `print()` statement in the Rego editor to trace evaluation
- Open Semantic Explorer to confirm field names from the data source
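The implicit-deny behaviour described above can be sketched as follows. This is a minimal model, not the enforcement engine's actual code; the condition and payloads are made up to show how an attribute-path mismatch yields an undefined decision.

```python
def evaluate(conditions, payload):
    """If any condition matches, the control decides; if none match,
    the result is undefined (None), which enforcement treats as deny."""
    for cond in conditions:
        if cond(payload):
            return "Allow"
    return None  # undefined -> implicit deny downstream

# Hypothetical condition: subject must belong to Finance.
conditions = [lambda p: p.get("subject", {}).get("department") == "Finance"]

# Attribute path mismatch ("dept" instead of "department") -> no match:
decision = evaluate(conditions, {"subject": {"dept": "Finance"}})
effective = decision if decision is not None else "Deny"
```

This is why an input whose field names differ from the control's attribute paths "fails silently": nothing matched, so the engine falls through to deny.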
Step 3 — Replay historical denials
Click Fetch Last 100 Denials to load real denied requests from your audit history.
Run these inputs against your new or modified control. The simulator shows a traffic-light result for each:
| Indicator | Meaning |
|---|---|
| 🟢 Would Allow | This control would permit this request |
| 🔴 Would Deny | This control would block this request |
| ⚫ No Match | This control does not apply to this request |
Use this to:
- Confirm your control catches the right denials
- Detect regressions (a modified control that now allows previously-denied requests)
- Calibrate conditions before promoting
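The traffic-light replay can be modelled like this. The `candidate` control and the replayed denials are hypothetical; the point is the three-way classification from the table above and how a 🟢 on a previously denied request flags a possible regression.

```python
def classify(control, request):
    """Map one replayed denial to a traffic-light indicator.
    `control` returns "Allow", "Deny", or None when it does not apply."""
    decision = control(request)
    if decision == "Allow":
        return "🟢 Would Allow"
    if decision == "Deny":
        return "🔴 Would Deny"
    return "⚫ No Match"

# Hypothetical candidate control: applies only to PII-tagged requests,
# allowing Finance and denying everyone else.
def candidate(req):
    if "pii" not in req.get("tags", []):
        return None  # control does not apply
    return "Allow" if req["subject"]["department"] == "Finance" else "Deny"

replayed_denials = [
    {"tags": ["pii"], "subject": {"department": "Engineering"}},
    {"tags": ["pii"], "subject": {"department": "Finance"}},
    {"tags": [],      "subject": {"department": "Sales"}},
]
results = [classify(candidate, d) for d in replayed_denials]
```

In this sketch the second request would now be allowed even though it was historically denied; that is exactly the regression signal to review before promoting.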
Troubleshooting: "Fetch Last 100 Denials" returns empty?
- Confirm the Bouncer is generating audit events (Settings → PEPs → Bouncer status)
- Send a few test requests through the Bouncer in sandbox, then retry
- Check that the Bouncer's `ENVIRONMENT` matches the selected environment in the Simulator
Step 4 — Policy comparison (baseline vs candidate)
Use Policy Comparison to run a side-by-side evaluation of two control versions:
- Select the Baseline version (current production or last promoted)
- Select the Candidate version (your modified draft)
- Run comparison — outputs a per-input diff
This surfaces any change in enforcement behaviour between the two versions.
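A per-input diff as described in Step 4 might look like the sketch below. The two control versions are invented for illustration; only inputs whose decision changed between baseline and candidate appear in the output.

```python
def compare(baseline, candidate, inputs):
    """Side-by-side evaluation: report each input whose decision
    differs between the baseline and candidate versions."""
    diff = []
    for i, payload in enumerate(inputs):
        before, after = baseline(payload), candidate(payload)
        if before != after:
            diff.append({"input": i, "baseline": before, "candidate": after})
    return diff

# Hypothetical versions: the candidate widens the allowed departments.
baseline = lambda p: "Allow" if p["department"] == "Finance" else "Deny"
candidate = lambda p: "Allow" if p["department"] in ("Finance", "Audit") else "Deny"

inputs = [{"department": d} for d in ("Finance", "Audit", "Engineering")]
changes = compare(baseline, candidate, inputs)
```

An empty diff means the candidate is behaviourally identical to the baseline on the tested inputs; any entry is a deliberate change to confirm or a regression to fix.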
Step 5 — Review the deny explanation
For any Deny result, click Explain to open the Explainability output.
Example:
"Request denied because
user.departmentwas'Engineering'but the control requires'Finance'in condition group 1."
The explanation identifies:
- The first failed condition
- The observed input value
- The expected value or operator
- Which control group contains the failing condition
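Assembling a message from those four fields can be sketched as below. The field names (`attribute`, `observed`, `expected`, `group`) are assumptions for illustration, not the product's actual data model; the output format follows the example above.

```python
def explain_deny(failed):
    """Build a plain-language explanation from the first failed
    condition: attribute, observed value, expected value, group."""
    return (
        f'Request denied because {failed["attribute"]} was '
        f'\'{failed["observed"]}\' but the control requires '
        f'\'{failed["expected"]}\' in condition group {failed["group"]}.'
    )

message = explain_deny({
    "attribute": "user.department",
    "observed": "Engineering",
    "expected": "Finance",
    "group": 1,
})
```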
🔌 SCCA integration
The SCCA copilot (AI-assisted authoring) can drive the simulator directly:
- "Generate a semantic test case for this control."
- "Replay the last 100 denials and run a what-if comparison."
- "Explain why this deny occurred in plain language."
- "Before promoting this control, compare sandbox vs production metadata and list parity gaps."
SCCA can also open Semantic Explorer to surface missing PII or attribute mappings.
📌 Pre-promotion checklist
Before promoting a control from sandbox to production:
- Generated test case runs and produces expected decision
- Denial replay shows no unexpected regressions
- Explain output for any deny result reads correctly
- If control uses metadata attributes: SCCA verified sandbox vs production metadata parity
  - Key names are identical across environments (e.g. `region`, `risk_level`, `kyc_status`)
  - Value enums match (e.g. if sandbox uses `"Finance"`, the production PIP also returns `"Finance"`)
- Post-decision actions (notifications, SIEM) tested in sandbox
Common promotion failure: Controls that depend on PIP attributes can fail in production when the production data source returns different field names or value formats than sandbox. Use the SCCA prompt: "Compare sandbox vs production metadata values and list parity gaps" before promoting.
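The parity check described above can be sketched as a comparison of observed attribute keys and value enums per environment. The attribute names and values below are illustrative assumptions; SCCA's actual check may differ.

```python
def parity_gaps(sandbox, production):
    """Flag keys missing in either environment and value enums that
    differ. Inputs map attribute key -> set of observed values."""
    gaps = []
    for key in sandbox.keys() | production.keys():
        if key not in production:
            gaps.append(f"key '{key}' missing in production")
        elif key not in sandbox:
            gaps.append(f"key '{key}' missing in sandbox")
        elif sandbox[key] != production[key]:
            gaps.append(f"value enums differ for '{key}'")
    return sorted(gaps)

# Hypothetical metadata snapshots per environment:
sandbox = {
    "region": {"EU", "US"},
    "risk_level": {"low", "high"},
    "kyc_status": {"verified", "pending"},
}
production = {
    "region": {"EU", "US"},
    "risk_level": {"low", "medium", "high"},
}
gaps = parity_gaps(sandbox, production)
```

Any non-empty result here is exactly the class of mismatch that makes a PIP-dependent control behave differently after promotion.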
🤖 Operational guardrails
- Keep data source credentials least-privilege and read-only where possible
- Validate TLS for each connector before activating in production
- Review high-sensitivity tag detections (PII/financial/auth-critical) in Semantic Explorer before promoting
- Preserve manual tag overrides with a rationale note for audit traceability