📌 Network Bouncer Configuration

Control Core bouncers enforce authorization at the network layer. They act as Policy Enforcement Points that support any resource type (APIs, databases, web applications, microservices, etc.).

📌 Overview

Bouncers provide:

  • Network-layer interception - Traffic passes through the bouncer before reaching your application
  • Policy evaluation - Authorization decisions in real time
  • No application changes - Works with any application without code modifications
  • Automatic policy sync - Policies are distributed from the Control Plane via the Policy Bridge automatically

For deployment patterns and pairing, see the Bouncer Deployment Guide.

Compatibility with existing deployments

In documentation and the Control Plane UI, this bouncer is referred to as Control Core AI Pilot (or AI Pilot). All technical identifiers (bouncer_type, image names, this guide's URL path, Helm/Compose service names, and config keys) are unchanged so existing deployments keep working. No changes are required to existing bouncers or integrations.

Policy sync (Policy Bridge)

The Control Plane distributes policies to bouncers through the Policy Bridge. The bouncer syncs with the Control Plane on a configurable interval (default 30 seconds) so policy changes are picked up without restarting the bouncer. You can also trigger a sync on demand from the Control Plane UI (see Verification).
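The sync loop can be pictured as a small sketch. This is illustrative only, assuming a `fetch_bundle` callable that pulls the latest bundle from the Policy Bridge and an `apply_bundle` callable that swaps it in; neither name comes from the actual bouncer code.

```python
import time

def run_policy_sync(fetch_bundle, apply_bundle, interval_seconds=30,
                    iterations=None, sleep=time.sleep):
    """Periodically pull the latest policy bundle and apply it, so policy
    changes take effect without restarting the bouncer."""
    count = 0
    while iterations is None or count < iterations:
        try:
            bundle = fetch_bundle()   # e.g. a request to the Policy Bridge
            apply_bundle(bundle)      # swap in the new policies
        except ConnectionError:
            pass                      # keep serving with the last known-good policies
        count += 1
        if iterations is None or count < iterations:
            sleep(interval_seconds)
```

The `iterations` and `sleep` parameters exist only to make the sketch testable; a real bouncer would loop indefinitely on the configured interval.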


🏗️ Request Flow

The following diagram shows how each request is evaluated and forwarded or denied.


Steps:

  1. Client sends request to the bouncer
  2. Bouncer evaluates policy based on user, resource, and action
  3. If allowed, request is forwarded to the target application
  4. If denied, 403 Forbidden is returned
  5. Response may be modified (e.g., data masking) based on policy
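The steps above can be sketched in a few lines. This is a simplified model, not the bouncer's actual implementation: `evaluate_policy` and `forward` are stand-ins for the policy engine and the upstream proxy, and the `mask_fields` decision attribute is a hypothetical illustration of response masking.

```python
def handle_request(request, evaluate_policy, forward):
    """Sketch of the per-request flow: evaluate, then forward or deny."""
    decision = evaluate_policy(
        user=request["user"], resource=request["resource"], action=request["action"]
    )
    if not decision.get("allow"):
        # Denied before the request ever reaches the application
        return {"status": 403, "body": "Forbidden"}
    response = forward(request)  # proxied to the target application
    # Optional response modification (e.g. data masking) driven by the policy
    for field in decision.get("mask_fields", []):
        if field in response["body"]:
            response["body"][field] = "***"
    return response
```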

📌 Bypass Prevention Checklist

Use this checklist whenever you deploy a bouncer as a sidecar or reverse proxy.

| Check | Sidecar | Reverse proxy |
|---|---|---|
| Frontend URL points to bouncer | Required (:8080 path) | Required (public gateway URL) |
| Server-side API URL points to bouncer | Required | Required |
| Direct app upstream port exposed publicly | Not allowed | Not allowed |
| Ingress/Service/LB target points to app direct port | Not allowed | Not allowed |

Step-by-step validation

  1. Confirm browser-facing API URL is bouncer-routed.
  2. Confirm server-side API URL is bouncer-routed.
  3. Confirm app upstream port is private-only.
  4. Confirm ingress/LB routes to bouncer, not app.
  5. Run synthetic checks: bouncer endpoint passes, direct upstream endpoint blocked.
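The synthetic checks in step 5 boil down to interpreting two probes: one request through the bouncer, one straight at the app's upstream port. A minimal sketch of that interpretation, assuming you have already collected the probe results (the function and its parameters are illustrative, not part of the bouncer package):

```python
def bypass_check(bouncer_status, upstream_reachable):
    """Interpret a pair of synthetic probes. The deployment passes only when
    the bouncer endpoint answers successfully and the direct upstream path is
    unreachable (or rejected)."""
    failures = []
    if bouncer_status is None or bouncer_status >= 500:
        failures.append("bouncer endpoint did not answer successfully")
    if upstream_reachable:
        failures.append("app upstream port is reachable directly (bypass risk)")
    return failures
```

An empty result means both checks passed; anything else should block the rollout.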

📌 Configuration

Configuration is provided via your deployment (Docker, Kubernetes, Helm). Key settings:

Required

| Setting | Description |
|---|---|
| Control Plane URL | API endpoint for registration and policy sync |
| API Key | Credentials from Settings → Environments |
| Bouncer ID | Unique identifier |
| Environment | sandbox or production |
| Target host | Protected application hostname |
| Target port | Protected application port |

Optional

| Setting | Description |
|---|---|
| Bouncer name | Display name in Control Plane |
| Resource name | For auto-discovery and pairing |
| Health check URL | For immediate connection verification |
| Policy sync interval | How often the bouncer checks for policy updates (seconds). Default: 30 |
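Putting the settings together, a Docker Compose service might look roughly like the fragment below. This is an illustrative sketch only: the image name and environment variable names are placeholders, since the exact identifiers depend on the package you download from the Control Plane.

```yaml
# Hypothetical Compose fragment — variable and image names are placeholders.
services:
  bouncer:
    image: controlcore/bouncer:latest      # placeholder image name
    ports:
      - "8080:8080"                        # publish only the bouncer port, never the app's
    environment:
      CONTROL_PLANE_URL: "https://cp.example.com"  # registration and policy sync
      API_KEY: "${BOUNCER_API_KEY}"                # from Settings → Environments
      BOUNCER_ID: "orders-api-sidecar"             # unique identifier
      ENVIRONMENT: "sandbox"                       # sandbox or production
      TARGET_HOST: "orders-api"                    # protected application hostname
      TARGET_PORT: "3000"                          # protected application port
      POLICY_SYNC_INTERVAL: "30"                   # optional; seconds
```

Note that the app container's own port is intentionally not published, in line with the bypass prevention checklist above.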

AI Gateway / LLM routes (optional)

When the bouncer image is built with Control Core AI Pilot support and LLM routes are enabled, the bouncer can proxy GenAI traffic (e.g. OpenAI-compatible APIs) in addition to standard API traffic. Configure via environment variables on the bouncer:

| Setting | Description |
|---|---|
| USE_LLM_ROUTES | Set to true to enable routes for /v1/chat/completions, /v1/completions, /v1/embeddings |
| OPENAI_BASE_URL | Base URL for OpenAI (defaults to https://api.openai.com) |
| OPENAI_API_KEY | API key for OpenAI (set securely; not in plain config) |
| AZURE_OPENAI_ENDPOINT | Azure OpenAI endpoint URL when routing to Azure |
| REDIS_URL | Optional. Required for global token rate limiting across replicas; also used for policy bundle cache when set |
| CACHE_POLICY_TTL_SECONDS | Optional. TTL for policy bundle cache when using file or Redis cache |
| CACHE_BUNDLE_PATH | Optional. Fallback file path for policy bundle when Redis is not set |

Paths such as /v1/chat/completions are then proxied to the configured LLM provider(s). OPA still enforces who can call which model; the bouncer can apply request-body PII redaction, prompt guard (jailbreak protection), and token rate limits. See AI Governance for details.
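As a concrete example, the environment block for an AI Pilot-enabled bouncer could look like this. The variable names come from the table above; all values are examples only.

```yaml
# Example environment for LLM routes; values are illustrative.
environment:
  USE_LLM_ROUTES: "true"
  OPENAI_BASE_URL: "https://api.openai.com"
  OPENAI_API_KEY: "${OPENAI_API_KEY}"     # inject via a secret, not plain config
  REDIS_URL: "redis://redis:6379/0"       # enables global token rate limiting across replicas
  CACHE_POLICY_TTL_SECONDS: "60"
  CACHE_BUNDLE_PATH: "/var/cache/bouncer/bundle.json"  # fallback when Redis is not set
```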

🚀 Deployment

Download the bouncer package from Settings → Bouncers → Download Center in the Control Plane. Choose your deployment format (Docker Compose, Helm, or Kubernetes manifest) and configure the required settings.

Registration happens automatically when the bouncer starts with the required settings. No separate setup script is needed.

📌 Verification

  1. Check bouncer logs for successful registration and policy sync.
  2. Verify in Settings → Bouncers — status should show Connected.
  3. If you set a health check URL, use Confirm Connection for immediate verification.
  4. Trigger a manual sync (optional): In Settings → Bouncers, open the bouncer and use Sync policies (or equivalent) to pull the latest policies immediately instead of waiting for the next interval.
  5. Send a test request through the bouncer.

🛠️ Troubleshooting

| Issue | What to check |
|---|---|
| Bouncer not connecting | Control Plane URL, API key, and network connectivity from bouncer to Control Plane. Ensure firewall allows outbound HTTPS. |
| 403 on all requests | Policies are created, enabled, and assigned to the correct resource and environment. Check Settings → Resources and policy status. |
| Policies not updating | Policy sync interval and Control Plane connectivity. Use Sync policies from Settings → Bouncers for an immediate sync. Check bouncer logs for sync errors. |
| No activity in bouncer logs | Ensure API key and Control Plane URL are set in the bouncer deployment. Without them, the bouncer cannot register or receive policy updates. |
| High latency | Enable decision caching and tune cache TTL. Deploy bouncer closer to your application or scale storage/Postgres if the Control Plane is slow. |
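To illustrate the decision-caching idea mentioned under "High latency", here is a minimal TTL cache sketch. It assumes decisions can safely be reused for an identical (user, resource, action) tuple within the TTL; the class and its interface are illustrative, not the bouncer's actual cache.

```python
import time

class DecisionCache:
    """Minimal TTL cache for authorization decisions."""

    def __init__(self, ttl_seconds=5, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock       # injectable clock, which also makes testing easy
        self._entries = {}

    def get(self, key):
        hit = self._entries.get(key)
        if hit is None:
            return None
        decision, stored_at = hit
        if self.clock() - stored_at > self.ttl:
            del self._entries[key]   # expired: force a fresh policy evaluation
            return None
        return decision

    def put(self, key, decision):
        self._entries[key] = (decision, self.clock())
```

A longer TTL reduces policy-engine load but delays how quickly a revoked permission takes effect, so tune it against your revocation requirements.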

For more scenarios, see the main Troubleshooting guide.