# Sidecar Deployment Pattern
The sidecar pattern deploys a Policy Enforcement Point (Bouncer) as a co-located container alongside your application pod in Kubernetes. This approach provides zero-trust security with minimal latency and simplified network configuration.
## What is the Sidecar Pattern?

### Traditional Standalone Bouncer

```text
User Request
      │
      ▼
┌─────────────────────┐
│    Load Balancer    │
└──────────┬──────────┘
           │
           ▼
┌─────────────────────┐
│    Bouncer (PEP)    │  ← Separate pods
│   (Multiple pods)   │
└──────────┬──────────┘
           │
           ▼
┌─────────────────────┐
│  Your Application   │
│   (Multiple pods)   │
└─────────────────────┘
```
### Sidecar Pattern

```text
User Request
      │
      ▼
┌─────────────────────┐
│    Load Balancer    │
└──────────┬──────────┘
           │
           ▼
┌─────────────────────────────────┐
│         Kubernetes Pod          │
│                                 │
│  ┌──────────┐     ┌──────────┐  │
│  │ Bouncer  │◄───►│   Your   │  │
│  │ Sidecar  │     │   App    │  │
│  └──────────┘     └──────────┘  │
│                                 │
│    (localhost communication)    │
└─────────────────────────────────┘
```
## Benefits of the Sidecar Pattern

### Security Benefits
- Zero Trust: Every application instance has its own enforcement point
- No direct app bypass: Service routing can be configured so requests must pass through the Bouncer path
- Reduced attack surface: Communication over localhost only
- Isolation: Policy violations in one pod don't affect others
- Defense in depth: Even if network security fails, pod-level protection remains
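One way to make "no direct app bypass" concrete is to bind the application to the loopback interface only, so the sole port reachable on the pod IP is the Bouncer's. A minimal sketch, assuming the app honors `HOST`/`PORT` environment variables (those names are illustrative, not a Bouncer requirement):

```yaml
# Sketch: bind the app to loopback so it is unreachable from outside
# the pod; only the Bouncer sidecar (same network namespace) can call it.
# HOST/PORT are assumed conventions of the example app.
containers:
  - name: app
    env:
      - name: HOST
        value: "127.0.0.1"   # loopback only, not exposed on the pod IP
      - name: PORT
        value: "3000"
  - name: bouncer-sidecar
    ports:
      - containerPort: 8080  # the only port exposed on the pod IP
```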
### Performance Benefits
- Ultra-low latency: localhost communication (< 1ms)
- No network hops: Application and Bouncer in same pod
- Reduced bandwidth: No cross-pod traffic for authorization
- Better caching: Each sidecar caches decisions for its specific app instance
### Operational Benefits
- Automatic scaling: Bouncer scales with application automatically
- Simplified deployment: Single Kubernetes deployment manifest
- No separate service: No need to manage Bouncer service separately
- Consistent versioning: Application and Bouncer always deployed together
- Easy rollback: Both components roll back together
### Network Benefits
- No service mesh required: Built-in traffic management
- Simplified DNS: No separate Bouncer service to resolve
- Works with any CNI: Compatible with all Kubernetes networking
- No network policies needed for authorization traffic: app-to-Bouncer calls never leave the pod
## When to Use Sidecar vs. Standalone

### Use Sidecar When
- Maximum security required (zero trust, compliance)
- Application-specific policies (each app has unique requirements)
- Microservices architecture (many small services)
- High performance critical (ultra-low latency needed)
- Dynamic scaling (pods scale up/down frequently)
- Cloud-native applications (Kubernetes-first)
- Regulatory compliance (FINTRAC, OSFI, HIPAA, PCI-DSS)
### Use Standalone When
- Shared policies (multiple apps use the same policies)
- Gateway pattern (API gateway, reverse proxy)
- Legacy applications (cannot modify deployment)
- Resource constraints (limited CPU/memory)
- Centralized management (single point of control)
- Simple architecture (few applications)
## Production Availability Caveat
Sidecar is strong for isolation and latency, but enterprise teams must validate runtime failure behavior.
- If traffic is pinned to a sidecar listener and that sidecar listener is unavailable, requests can fail even while the app process is healthy.
- For strict uptime SLOs, use resilient routing and failover design, or place a reverse-proxy bouncer fleet behind a load balancer.
- Define policy-driven degraded behavior for approved low-risk routes; keep sensitive routes fail-closed.
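On Kubernetes 1.29+, the native sidecar feature mitigates part of this failure mode: declaring the Bouncer as an init container with `restartPolicy: Always` makes the kubelet start it before the app container and restart it independently if it crashes. A sketch under that assumption, reusing the Bouncer image from this guide:

```yaml
# Sketch: run the Bouncer as a Kubernetes-native sidecar (1.29+).
# An init container with restartPolicy: Always starts before the app
# container and is restarted on its own if it fails.
spec:
  initContainers:
    - name: bouncer-sidecar
      image: controlcore/bouncer:latest
      restartPolicy: Always     # marks this init container as a sidecar
      ports:
        - containerPort: 8080
      startupProbe:             # gate app startup on the Bouncer being ready
        httpGet:
          path: /ready
          port: 8080
  containers:
    - name: app
      image: myregistry/my-app:v1.0.0
```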
## Sidecar Deployment Architecture

### Pod Architecture

```text
┌────────────────────────────────────────────────────────┐
│                     Kubernetes Pod                     │
│                                                        │
│  ┌──────────────────────────────────────────────────┐  │
│  │            Bouncer Sidecar Container             │  │
│  │                                                  │  │
│  │   ┌───────────────┐      ┌───────────────┐       │  │
│  │   │ Policy Engine │      │  Local Cache  │       │  │
│  │   │     (OPA)     │      │  (Decisions)  │       │  │
│  │   └───────────────┘      └───────────────┘       │  │
│  │                                                  │  │
│  │   Listens on: localhost:8080                     │  │
│  └───────────────────────┬──────────────────────────┘  │
│                          │ localhost                   │
│                          ▼                             │
│  ┌──────────────────────────────────────────────────┐  │
│  │              Application Container               │  │
│  │                                                  │  │
│  │   Your application code                          │  │
│  │   Listens on: localhost:3000                     │  │
│  └──────────────────────────────────────────────────┘  │
│                                                        │
│  Shared:                                               │
│  - Network namespace (localhost)                       │
│  - Volumes (optional)                                  │
│  - Environment variables                               │
└───────────────────────────┬────────────────────────────┘
                            │
                            │ WSS (Policy Sync)
                            ▼
                   ┌──────────────────┐
                   │  Policy Bridge   │
                   │   (External)     │
                   └──────────────────┘
```
## Deployment Guide

### Prerequisites
- Kubernetes cluster (1.19+)
- kubectl configured
- Control Core account (for policy bridge connection)
- Container registry access
- Helm (optional, but recommended)
### Step 1: Prepare Your Application
Ensure your application is containerized and has a Kubernetes deployment.
Example application deployment (app-only.yaml):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: myregistry/my-app:v1.0.0
          ports:
            - containerPort: 3000
              name: http
          resources:
            requests:
              cpu: 100m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
```
### Step 2: Add Bouncer Sidecar
Modify your deployment to include the Bouncer sidecar.
With Bouncer sidecar (app-with-sidecar.yaml):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        # Optional: metrics scraping
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
        prometheus.io/path: "/metrics"
    spec:
      containers:
        # Application container (unchanged)
        - name: app
          image: myregistry/my-app:v1.0.0
          ports:
            - containerPort: 3000
              name: http
          env:
            - name: PORT
              value: "3000"
          resources:
            requests:
              cpu: 100m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
          # Important: external traffic now reaches the app via the sidecar
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 10
        # Bouncer sidecar container (NEW)
        - name: bouncer-sidecar
          image: controlcore/bouncer:latest
          ports:
            - containerPort: 8080
              name: proxy
              protocol: TCP
          env:
            # Policy bridge connection
            - name: POLICY_SYNC_URL
              value: "wss://policy-bridge.your-controlcore.com"
            - name: POLICY_SYNC_CLIENT_TOKEN
              valueFrom:
                secretKeyRef:
                  name: controlcore-secrets
                  key: policy-bridge-client-token
            # Target application (localhost)
            - name: TARGET_URL
              value: "http://localhost:3000"
            # Bouncer configuration
            - name: POLICY_STORE
              value: "opa"
            - name: CACHE_TTL
              value: "300"  # 5 minutes
            - name: LOG_LEVEL
              value: "info"
            # Security
            - name: ENABLE_AUDIT_LOG
              value: "true"
            - name: AUDIT_LOG_DESTINATION
              value: "stdout"
          resources:
            requests:
              cpu: 50m
              memory: 128Mi
            limits:
              cpu: 200m
              memory: 256Mi
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 5
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 3
```
### Step 3: Create Configuration Secret
Create a Kubernetes secret with policy bridge credentials:
```bash
kubectl create secret generic controlcore-secrets \
  --from-literal=policy-bridge-client-token='YOUR_POLICY_SYNC_CLIENT_TOKEN' \
  --namespace=production
```

The key name `policy-bridge-client-token` must match the `secretKeyRef` used by the sidecar.
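Step 5 below applies a `controlcore-secrets.yaml` manifest; an equivalent declarative version of this secret could look like the following (the token value is a placeholder to replace before applying):

```yaml
# controlcore-secrets.yaml: declarative equivalent of the
# `kubectl create secret` command above. Prefer an external
# secret manager in production (see Security Best Practices).
apiVersion: v1
kind: Secret
metadata:
  name: controlcore-secrets
  namespace: production
type: Opaque
stringData:
  policy-bridge-client-token: "YOUR_POLICY_SYNC_CLIENT_TOKEN"
```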
### Step 4: Update Service Configuration
Important: The Service should point to the Bouncer sidecar port (8080), not the application port (3000).
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  namespace: production
spec:
  selector:
    app: my-app
  ports:
    - name: http
      port: 80
      targetPort: 8080  # ← Bouncer sidecar, not the app
      protocol: TCP
  type: ClusterIP
```
### Step 5: Deploy
```bash
# Create namespace
kubectl create namespace production

# Deploy secret
kubectl apply -f controlcore-secrets.yaml

# Deploy application with sidecar
kubectl apply -f app-with-sidecar.yaml

# Deploy service
kubectl apply -f service.yaml

# Verify deployment
kubectl get pods -n production
kubectl logs -n production <pod-name> -c bouncer-sidecar
kubectl logs -n production <pod-name> -c app
```
### Step 6: Configure Ingress
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: production
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com
      secretName: my-app-tls
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80  # Service port (forwards to 8080)
```
## Cloud Provider Examples

### AWS EKS
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: production
spec:
  template:
    spec:
      # For IRSA, the eks.amazonaws.com/role-arn annotation belongs on
      # the my-app-sa ServiceAccount, not on the pod template.
      serviceAccountName: my-app-sa
      containers:
        - name: app
          # ... app config
        - name: bouncer-sidecar
          image: controlcore/bouncer:latest
          env:
            # AWS-specific configuration
            - name: AWS_REGION
              value: "us-east-1"
            - name: POLICY_SYNC_URL
              value: "wss://policy-bridge.us-east-1.your-domain.com"
            # ... other bouncer config
```
### Google Cloud GKE
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: production
spec:
  template:
    spec:
      # For GKE Workload Identity, the iam.gke.io/gcp-service-account
      # annotation belongs on the my-app-ksa ServiceAccount,
      # not on the pod template.
      serviceAccountName: my-app-ksa
      containers:
        - name: app
          # ... app config
        - name: bouncer-sidecar
          image: controlcore/bouncer:latest
          env:
            # GCP-specific configuration
            - name: GCP_PROJECT
              value: "my-project"
            - name: POLICY_SYNC_URL
              value: "wss://policy-bridge.us-central1.your-domain.com"
            # ... other bouncer config
```
### Azure AKS
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: production
spec:
  template:
    metadata:
      labels:
        # Note: AAD Pod Identity is deprecated; prefer Azure Workload
        # Identity for new clusters.
        aadpodidbinding: my-app-identity
    spec:
      containers:
        - name: app
          # ... app config
        - name: bouncer-sidecar
          image: controlcore/bouncer:latest
          env:
            # Azure-specific configuration
            - name: AZURE_TENANT_ID
              value: "your-tenant-id"
            - name: POLICY_SYNC_URL
              value: "wss://policy-bridge.eastus.your-domain.com"
            # ... other bouncer config
```
## Service Mesh Integration

### Istio Integration
When using Istio, the sidecar pattern provides additional security with mTLS between services.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: production
spec:
  template:
    metadata:
      annotations:
        # Istio sidecar injection
        sidecar.istio.io/inject: "true"
        # Traffic routing
        traffic.sidecar.istio.io/includeInboundPorts: "8080"
        traffic.sidecar.istio.io/excludeInboundPorts: "3000"
    spec:
      containers:
        - name: app
          ports:
            - containerPort: 3000  # Only accessible via Bouncer
        - name: bouncer-sidecar
          ports:
            - containerPort: 8080  # Exposed to Istio
```
Architecture with Istio:
```text
External Traffic
      │
      ▼
┌───────────────────────────────────┐
│      Istio Ingress Gateway        │
└────────────────┬──────────────────┘
                 │ mTLS
                 ▼
┌───────────────────────────────────┐
│                Pod                │
│  ┌─────────────────────────────┐  │
│  │ Envoy Proxy (Istio Sidecar) │  │
│  └─────────────┬───────────────┘  │
│                │ mTLS             │
│                ▼                  │
│  ┌─────────────────────────────┐  │
│  │       Bouncer Sidecar       │  │
│  └─────────────┬───────────────┘  │
│                │ localhost        │
│                ▼                  │
│  ┌─────────────────────────────┐  │
│  │         Application         │  │
│  └─────────────────────────────┘  │
└───────────────────────────────────┘
```
### Linkerd Integration
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: production
spec:
  template:
    metadata:
      annotations:
        # Injection is triggered by annotations on the pod template
        # (or namespace), not on the Deployment itself
        linkerd.io/inject: enabled
        config.linkerd.io/skip-inbound-ports: "3000"   # App port not exposed
        config.linkerd.io/skip-outbound-ports: "7000"  # Policy bridge connection
    spec:
      containers:
        - name: app
          ports:
            - containerPort: 3000
        - name: bouncer-sidecar
          ports:
            - containerPort: 8080
```
## Advanced Configuration

### Init Container for Policy Pre-loading
Pre-load policies before the application starts:
```yaml
spec:
  initContainers:
    - name: policy-loader
      image: controlcore/policy-loader:latest
      env:
        - name: POLICY_SYNC_URL
          value: "https://policy-bridge.your-domain.com"
        - name: POLICY_BUNDLE_URL
          value: "https://policies.your-domain.com/bundle.tar.gz"
      volumeMounts:
        - name: policy-cache
          mountPath: /policies
  containers:
    - name: bouncer-sidecar
      volumeMounts:
        - name: policy-cache
          mountPath: /policies
          readOnly: true
  volumes:
    - name: policy-cache
      emptyDir: {}
```
### Shared Volume for Audit Logs
Share audit logs between Bouncer and a log collector:
```yaml
spec:
  containers:
    - name: bouncer-sidecar
      env:
        - name: AUDIT_LOG_FILE
          value: "/var/log/audit/decisions.log"
      volumeMounts:
        - name: audit-logs
          mountPath: /var/log/audit
    - name: log-collector
      image: fluent/fluent-bit:latest
      volumeMounts:
        - name: audit-logs
          mountPath: /var/log/audit
          readOnly: true
        - name: fluent-bit-config
          mountPath: /fluent-bit/etc
  volumes:
    - name: audit-logs
      emptyDir: {}
    - name: fluent-bit-config
      configMap:
        name: fluent-bit-config
```
### Resource Requests and Limits
Recommended resource allocation:
| Deployment Size | Bouncer CPU | Bouncer Memory | Total Pod Overhead |
|---|---|---|---|
| Small (< 100 req/s) | 50m-100m | 64Mi-128Mi | +10-15% |
| Medium (100-500 req/s) | 100m-200m | 128Mi-256Mi | +15-20% |
| Large (500-1000 req/s) | 200m-500m | 256Mi-512Mi | +20-25% |
| XLarge (> 1000 req/s) | 500m-1000m | 512Mi-1Gi | +25-30% |
```yaml
containers:
  - name: bouncer-sidecar
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 512Mi
```
## Monitoring and Observability

### Prometheus Metrics
Expose metrics from the Bouncer sidecar:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-metrics
  namespace: production
  labels:
    app: my-app
spec:
  selector:
    app: my-app
  ports:
    - name: metrics
      port: 9090
      targetPort: 9090  # Bouncer metrics port
  clusterIP: None  # Headless for direct pod access
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  namespace: production
spec:
  selector:
    matchLabels:
      app: my-app
  endpoints:
    - port: metrics
      interval: 30s
      path: /metrics
```
### Log Aggregation
Forward logs to centralized logging:
```yaml
containers:
  - name: bouncer-sidecar
    env:
      - name: LOG_FORMAT
        value: "json"
      - name: LOG_DESTINATION
        value: "stdout"  # Captured by cluster logging
```
### Distributed Tracing
Enable OpenTelemetry tracing:
```yaml
containers:
  - name: bouncer-sidecar
    env:
      - name: OTEL_ENABLED
        value: "true"
      - name: OTEL_EXPORTER_OTLP_ENDPOINT
        # OTLP/HTTP endpoint; Jaeger's Thrift port 14268 does not speak OTLP
        value: "http://jaeger-collector:4318"
      - name: OTEL_SERVICE_NAME
        value: "bouncer-sidecar-my-app"
```
## Troubleshooting

### Common Issues
#### Issue 1: Sidecar Cannot Connect to Policy Bridge

```bash
# Check sidecar logs
kubectl logs -n production <pod-name> -c bouncer-sidecar

# Common causes:
# - Incorrect POLICY_SYNC_URL
# - Invalid POLICY_SYNC_CLIENT_TOKEN
# - Network policy blocking egress
# - Firewall rules

# Test policy bridge connectivity (curl cannot speak wss://;
# probe the HTTPS endpoint on the same host instead)
kubectl exec -n production <pod-name> -c bouncer-sidecar -- \
  curl -v https://policy-bridge.your-domain.com
```
#### Issue 2: Application Cannot Reach Sidecar

```bash
# Verify localhost communication
kubectl exec -n production <pod-name> -c app -- \
  curl http://localhost:8080/health

# Check if the Bouncer is listening
kubectl exec -n production <pod-name> -c bouncer-sidecar -- \
  netstat -tuln | grep 8080

# Verify the Service points to the correct port
kubectl get svc my-app -n production -o yaml | grep targetPort
```
#### Issue 3: High Latency

```bash
# Check resource utilization
kubectl top pod <pod-name> -n production --containers

# Increase resource limits
kubectl set resources deployment my-app -n production \
  -c=bouncer-sidecar --limits=cpu=500m,memory=512Mi

# Adjust cache TTL (10 minutes)
kubectl set env deployment my-app -n production \
  -c=bouncer-sidecar CACHE_TTL=600
```
#### Issue 4: Policies Not Updating

```bash
# Check the policy bridge connection
kubectl logs -n production <pod-name> -c bouncer-sidecar | grep "policy bridge"

# Force a policy refresh
kubectl exec -n production <pod-name> -c bouncer-sidecar -- \
  curl -X POST http://localhost:8080/v1/policy/refresh

# Restart the pod to re-sync
kubectl delete pod <pod-name> -n production
```
## Security Best Practices

### 1. Network Policies
Restrict sidecar network access:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: my-app-netpol
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: my-app
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: ingress-nginx
      ports:
        - port: 8080  # Bouncer port
  egress:
    - to:
        - namespaceSelector: {}
      ports:
        - port: 7000  # Policy Bridge
        - port: 443   # HTTPS
    - to:  # DNS
        - namespaceSelector:
            matchLabels:
              name: kube-system
      ports:
        - port: 53
          protocol: UDP
```
### 2. Pod Security
Apply pod security standards:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 1000
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: bouncer-sidecar
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
        readOnlyRootFilesystem: true
```
### 3. Secret Management
Use external secret managers:
```yaml
# AWS Secrets Manager
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: controlcore-secrets
  namespace: production
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager
    kind: SecretStore
  target:
    name: controlcore-secrets
  data:
    - secretKey: policy-bridge-client-token
      remoteRef:
        key: controlcore/policy-bridge-token
```
## Migration from Standalone to Sidecar

### Step-by-Step Migration
Current Architecture (Standalone):
- Standalone Bouncer deployment
- Applications connect via service
Target Architecture (Sidecar):
- Bouncer deployed as sidecar
- Direct localhost communication
Migration Steps:
1. Deploy the sidecar alongside the standalone Bouncer (zero downtime):

   ```yaml
   # Keep the existing standalone Bouncer running
   # Add the sidecar to new pods
   spec:
     replicas: 6  # 3 old + 3 new
     template:
       metadata:
         labels:
           version: v2-sidecar  # New version with sidecar
   ```

2. Gradually shift traffic:

   ```bash
   # Monitor both versions
   kubectl get pods -l app=my-app --show-labels

   # If successful, scale down the standalone Bouncer
   kubectl scale deployment bouncer-standalone --replicas=0
   ```

3. Remove the standalone deployment:

   ```bash
   kubectl delete deployment bouncer-standalone
   kubectl delete service bouncer-standalone
   ```
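If you are using ingress-nginx, the traffic shift in step 2 can be made weight-based rather than replica-based via canary annotations. A hedged sketch, assuming a `my-app-v2` Service has been created that selects only the `version: v2-sidecar` pods:

```yaml
# Sketch: ingress-nginx canary sends a fixed percentage of traffic to
# the sidecar version while the rest still flows through the standalone
# path. my-app-v2 is an assumed Service selecting only v2-sidecar pods.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-canary
  namespace: production
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "20"  # 20% to sidecar pods
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-v2
                port:
                  number: 80
```

Raise `canary-weight` incrementally, then delete the canary Ingress once all traffic is on the sidecar version.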
## Performance Benchmarks

### Latency Comparison
| Pattern | P50 | P95 | P99 |
|---|---|---|---|
| Standalone Bouncer | 15ms | 45ms | 80ms |
| Sidecar (localhost) | 2ms | 8ms | 15ms |
| Improvement | -87% | -82% | -81% |
### Resource Utilization
| Metric | Standalone (Shared) | Sidecar (Per Pod) |
|---|---|---|
| Bouncer Memory | 512Mi × 3 = 1.5Gi | 128Mi × 10 = 1.28Gi |
| Bouncer CPU | 1 CPU × 3 = 3 CPU | 100m × 10 = 1 CPU |
| Network Traffic | High (cross-pod) | Low (localhost) |
## Next Steps
- Enterprise Configuration: Configure your sidecar deployment
- Security Best Practices: Harden your sidecar
- Monitoring: Monitor sidecar performance
- Enterprise Architecture: Advanced patterns
The sidecar pattern provides the strongest security posture for cloud-native applications with minimal performance overhead.