📌 Network Bouncer Configuration
Control Core bouncers enforce authorization at the network layer. They act as Policy Enforcement Points that support any resource type (APIs, databases, web applications, microservices, etc.).
📌 Overview
Bouncers provide:
- Network-layer interception - Traffic passes through the bouncer before reaching your application
- Policy evaluation - Authorization decisions in real time
- No application changes - Works with any application without code modifications
- Automatic policy sync - Policies are distributed from the Control Plane via the Policy Bridge automatically
For deployment patterns and pairing, see the Bouncer Deployment Guide.
Compatibility with existing deployments
In documentation and the Control Plane UI, this bouncer is referred to as Control Core AI Pilot (or AI Pilot). All technical identifiers (bouncer_type, image names, this guide's URL path, Helm/Compose service names, and config keys) are unchanged so existing deployments keep working. No changes are required to existing bouncers or integrations.
Policy sync (Policy Bridge)
The Control Plane distributes policies to bouncers through the Policy Bridge. The bouncer syncs with the Control Plane on a configurable interval (default 30 seconds) so policy changes are picked up without restarting the bouncer. You can also trigger a sync on demand from the Control Plane UI (see Verification).
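The sync behavior described above can be sketched as a simple polling loop. This is an illustration only, not the bouncer's actual implementation; `fetch_policy_bundle` and `apply_bundle` are hypothetical stand-ins for the Policy Bridge fetch and the bouncer's hot-reload step:

```python
import time

def run_policy_sync(fetch_policy_bundle, apply_bundle,
                    interval_seconds=30, max_iterations=None):
    """Poll the Control Plane on a fixed interval and apply new bundles.

    Hot-reloads only when the bundle version changed, so policy updates
    are picked up without restarting the bouncer.
    """
    last_version = None
    iterations = 0
    while max_iterations is None or iterations < max_iterations:
        bundle = fetch_policy_bundle()
        if bundle["version"] != last_version:
            apply_bundle(bundle)
            last_version = bundle["version"]
        iterations += 1
        if max_iterations is None or iterations < max_iterations:
            time.sleep(interval_seconds)
    return last_version
```

A manual sync from the Control Plane UI is simply an extra, immediate iteration of the same loop.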
🏗️ Request Flow
The following diagram shows how each request is evaluated and forwarded or denied.
Steps:
- Client sends request to the bouncer
- Bouncer evaluates policy based on user, resource, and action
- If allowed, request is forwarded to the target application
- If denied, 403 Forbidden is returned
- Response may be modified (e.g., data masking) based on policy
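The steps above can be sketched as a minimal decision function. This is a simplification for illustration; real policies are authored in the Control Plane and evaluated by the bouncer's policy engine, and the allow-list shape here is hypothetical:

```python
def handle_request(policies, user, resource, action):
    """Evaluate a request against a simple allow-list of policy tuples.

    Returns a (status, body) pair: forward on allow, 403 on deny.
    Production policies may also rewrite the response (e.g. data masking).
    """
    if (user, resource, action) not in policies:
        return (403, "Forbidden")
    # Allowed: the bouncer would now forward to the target application.
    return (200, f"forwarded {action} {resource} for {user}")
```

For example, with `policies = {("alice", "/orders", "GET")}`, a request from `alice` is forwarded while the same request from `bob` gets a 403.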
📌 Bypass Prevention Checklist
Use this checklist whenever you deploy a bouncer as sidecar or reverse proxy.
| Check | Sidecar | Reverse proxy |
|---|---|---|
| Frontend URL points to bouncer | Required (bouncer port, e.g. :8080) | Required (public gateway URL) |
| Server-side API URL points to bouncer | Required | Required |
| Direct app upstream port exposed publicly | Not allowed | Not allowed |
| Ingress/Service/LB target points to app direct port | Not allowed | Not allowed |
Step-by-step validation
- Confirm browser-facing API URL is bouncer-routed.
- Confirm server-side API URL is bouncer-routed.
- Confirm app upstream port is private-only.
- Confirm ingress/LB routes to bouncer, not app.
- Run synthetic checks: bouncer endpoint passes, direct upstream endpoint blocked.
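The last check can be expressed as a small helper. This is a sketch: `probe` stands in for whatever HTTP client your monitoring uses, and the URLs in the usage note are hypothetical:

```python
def verify_no_bypass(probe, bouncer_url, direct_url):
    """Return True when the bouncer path answers and the direct path is blocked.

    `probe(url)` should return an HTTP status code, or raise on a
    connection failure (which also counts as "blocked" for the direct path).
    """
    bouncer_ok = probe(bouncer_url) < 500
    try:
        direct_blocked = probe(direct_url) in (403, 404, 502)
    except OSError:
        direct_blocked = True  # connection refused: upstream port is private
    return bouncer_ok and direct_blocked
```

For example, `verify_no_bypass(probe, "https://gateway.example/health", "http://app.internal:3000/health")` should return True in a correctly locked-down deployment.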
📌 Configuration
Configuration is provided via your deployment (Docker, Kubernetes, Helm). Key settings:
Required
| Setting | Description |
|---|---|
| Control Plane URL | API endpoint for registration and policy sync |
| API Key | Credentials from Settings → Environments |
| Bouncer ID | Unique identifier |
| Environment | sandbox or production |
| Target host | Protected application hostname |
| Target port | Protected application port |
Optional
| Setting | Description |
|---|---|
| Bouncer name | Display name in Control Plane |
| Resource name | For auto-discovery and pairing |
| Health check URL | For immediate connection verification |
| Policy sync interval | How often the bouncer checks for policy updates (seconds). Default: 30 |
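The required and optional settings above typically arrive as environment variables in your deployment. The following is a hedged sketch of how a deployment might validate them before starting the bouncer; the variable names are illustrative, not necessarily the bouncer's actual names:

```python
import os

# Illustrative names for the required settings in the table above.
REQUIRED = ["CONTROL_PLANE_URL", "API_KEY", "BOUNCER_ID",
            "ENVIRONMENT", "TARGET_HOST", "TARGET_PORT"]

def load_settings(env=os.environ):
    """Collect bouncer settings, failing fast on missing required values."""
    missing = [k for k in REQUIRED if not env.get(k)]
    if missing:
        raise ValueError(f"missing required settings: {', '.join(missing)}")
    return {
        **{k: env[k] for k in REQUIRED},
        # Optional setting with the documented default sync interval.
        "POLICY_SYNC_INTERVAL": int(env.get("POLICY_SYNC_INTERVAL", "30")),
    }
```

Failing fast on missing required settings surfaces misconfiguration at startup rather than as a silently unregistered bouncer.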
AI Gateway / LLM routes (optional)
When the bouncer image is built with Control Core AI Pilot support and LLM routes are enabled, the bouncer can proxy GenAI traffic (e.g. OpenAI-compatible APIs) in addition to standard API traffic. Configure via environment variables on the bouncer:
| Setting | Description |
|---|---|
| USE_LLM_ROUTES | Set to true to enable routes for /v1/chat/completions, /v1/completions, /v1/embeddings |
| OPENAI_BASE_URL | Base URL for OpenAI (defaults to https://api.openai.com) |
| OPENAI_API_KEY | API key for OpenAI (set securely; not in plain config) |
| AZURE_OPENAI_ENDPOINT | Azure OpenAI endpoint URL when routing to Azure |
| REDIS_URL | Optional. Required for global token rate limiting across replicas; also used for policy bundle cache when set |
| CACHE_POLICY_TTL_SECONDS | Optional. TTL for policy bundle cache when using file or Redis cache |
| CACHE_BUNDLE_PATH | Optional. Fallback file path for policy bundle when Redis is not set |
Paths such as /v1/chat/completions are then proxied to the configured LLM provider(s). OPA still enforces who can call which model; the bouncer can apply request-body PII redaction, prompt guard (jailbreak protection), and token rate limits. See AI Governance for details.
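How the variables in the table above might be interpreted at startup can be sketched as follows. The parsing and fallback logic here is illustrative; consult your deployment package for the authoritative behavior:

```python
def load_llm_config(env):
    """Interpret the AI-gateway variables from the table above."""
    return {
        # Routes are off unless USE_LLM_ROUTES is explicitly "true".
        "llm_routes_enabled": env.get("USE_LLM_ROUTES", "").lower() == "true",
        # Documented default base URL for OpenAI.
        "openai_base_url": env.get("OPENAI_BASE_URL", "https://api.openai.com"),
        "azure_endpoint": env.get("AZURE_OPENAI_ENDPOINT"),
        # Redis is preferred for the policy bundle cache; otherwise
        # fall back to the file path, if configured.
        "bundle_cache": env.get("REDIS_URL") or env.get("CACHE_BUNDLE_PATH"),
    }
```

Note that `OPENAI_API_KEY` is deliberately excluded from this sketch: it should be injected as a secret by your orchestrator, not read into general-purpose config.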
🚀 Deployment
Download the bouncer package from Settings → Bouncers → Download Center in the Control Plane. Choose your deployment format (Docker Compose, Helm, or Kubernetes manifest) and configure the required settings.
Registration happens automatically when the bouncer starts with the required settings. No separate setup script is needed.
📌 Verification
- Check bouncer logs for successful registration and policy sync.
- Verify in Settings → Bouncers — status should show Connected.
- If you set a health check URL, use Confirm Connection for immediate verification.
- Trigger a manual sync (optional): In Settings → Bouncers, open the bouncer and use Sync policies (or equivalent) to pull the latest policies immediately instead of waiting for the next interval.
- Send a test request through the bouncer.
🛠️ Troubleshooting
| Issue | What to check |
|---|---|
| Bouncer not connecting | Control Plane URL, API key, and network connectivity from bouncer to Control Plane. Ensure firewall allows outbound HTTPS. |
| 403 on all requests | Policies are created, enabled, and assigned to the correct resource and environment. Check Settings → Resources and policy status. |
| Policies not updating | Policy sync interval and Control Plane connectivity. Use Sync policies from Settings → Bouncers for an immediate sync. Check bouncer logs for sync errors. |
| No activity in bouncer logs | Ensure API key and Control Plane URL are set in the bouncer deployment. Without them, the bouncer cannot register or receive policy updates. |
| High latency | Enable decision caching and tune cache TTL. Deploy bouncer closer to your application or scale storage/Postgres if the Control Plane is slow. |
For more scenarios, see the main Troubleshooting guide.
📌 Related
- AI Governance - LLM routes, PII redaction, prompt guard, token limits, and AI Pilot settings
- Bouncer Deployment - Full deployment guide
- Multiple Bouncers - Scaling and configuration
- Troubleshooting - Common issues