
Kubernetes 1.32 Sidecar Containers Production Implementation Guide

By Pavan Rangani · May 7, 2026 · DevOps & Cloud


Kubernetes 1.32 Sidecar Containers: Native Support Finally Graduates to Stable

The Kubernetes 1.32 sidecar containers feature graduated to stable in late 2024, and after a year of production use across multiple clusters I can report it has delivered on its promise. Specifically, the long-standing pain of coordinating sidecar startup, shutdown, and Job completion is finally solved at the API level rather than through fragile shell scripts and emptyDir-based ready signals.

The migration story is also straightforward for teams already running service meshes or log shippers. This guide covers the lifecycle semantics that actually matter, real migration patterns from the legacy approach, and the operational gotchas — particularly around PreStop hooks and Job pods — that you only learn by breaking them in production.

The Old Way and Why It Hurt

Before native sidecars, every team rolled their own coordination. Specifically, you would launch your Istio proxy, Fluent Bit shipper, or Cloud SQL Auth Proxy as a regular container in spec.containers, then wrestle with three problems: ensuring the sidecar is ready before the main app starts, preventing the sidecar from being killed before the app finishes draining, and handling Job pods that never terminated because the sidecar kept running.

Common workarounds included emptyDir flag files, preStop sleep loops, and watcher containers that polled the main process. As a result, every sidecar had its own quirky bootstrap protocol, and every postmortem mentioned race conditions during pod shutdown.

Kubernetes pod with multiple containers in production
Native sidecars eliminate the bootstrap race conditions that plagued legacy multi-container pods.

How Native Sidecars Work

The mechanism is elegantly minimal. You declare a sidecar inside initContainers with restartPolicy: Always. Kubernetes then treats it specially: it starts before regular containers (in init order), continues running alongside them, and crucially terminates after the main containers exit. The sidecar is also restarted if it crashes, unlike a standard init container, which must exit successfully once and is never run again for the life of the pod.

The startup sequence walks through initContainers in declared order: a classic init container must run to completion before the next entry starts, while a sidecar blocks only until it has started (or until its startup probe passes, if one is defined) and then keeps running. Regular containers start once every entry has cleared. On termination the order reverses: regular containers receive SIGTERM first and drain, and only after they exit do sidecars receive their termination signal, in reverse declaration order.
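
As a minimal sketch of that ordering, stripped of everything else (names and images here are illustrative, not from a real workload):

apiVersion: v1
kind: Pod
metadata:
  name: ordering-demo
spec:
  initContainers:
    - name: setup                # 1. classic init: runs to completion first
      image: busybox:1.36
      command: ["sh", "-c", "echo setup done"]
    - name: sidecar              # 2. starts next and keeps running
      image: busybox:1.36
      restartPolicy: Always      #    this one line makes it a sidecar
      command: ["sh", "-c", "while true; do sleep 3600; done"]
  containers:
    - name: app                  # 3. starts last; on shutdown it exits first,
      image: nginx:1.27          #    and only then is the sidecar terminated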

Kubernetes 1.32 Sidecar Containers: Migration Pattern for Fluent Bit

Below is the manifest pattern I use for log shipping. Notably, the same structure works for Istio proxies, Cloud SQL Auth Proxy, and any sidecar that previously fought the lifecycle:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-api
  namespace: production
spec:
  replicas: 6
  selector:
    matchLabels:
      app: orders-api
  template:
    metadata:
      labels:
        app: orders-api
    spec:
      terminationGracePeriodSeconds: 60
      initContainers:
        # Native sidecar: log shipper
        - name: fluent-bit
          image: fluent/fluent-bit:3.2.4
          restartPolicy: Always   # this makes it a sidecar
          imagePullPolicy: IfNotPresent
          resources:
            requests:
              cpu: 50m
              memory: 64Mi
            limits:
              memory: 128Mi
          volumeMounts:
            - name: app-logs
              mountPath: /var/log/app
              readOnly: true
            - name: fluent-bit-config
              mountPath: /fluent-bit/etc
          startupProbe:
            httpGet:
              # Fluent Bit's built-in health endpoint; requires HTTP_Server
              # and Health_Check to be enabled in the [SERVICE] section
              path: /api/v1/health
              port: 2020
            failureThreshold: 30
            periodSeconds: 1

        # Classic init container: schema migrations
        - name: db-migrate
          image: registry.acme.io/orders-migrator:1.4.7
          command: ["/app/migrate", "up"]
          envFrom:
            - secretRef:
                name: orders-db-credentials

      containers:
        - name: orders-api
          image: registry.acme.io/orders-api:2026.05.07
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: app-logs
              mountPath: /var/log/app
          lifecycle:
            preStop:
              exec:
                # hold the pod in Terminating so load balancers stop routing
                # to it before the kubelet sends SIGTERM to the app
                command: ["/bin/sh", "-c", "sleep 30"]
          readinessProbe:
            httpGet:
              path: /actuator/health/readiness
              port: 8080
            periodSeconds: 5
          resources:
            requests:
              cpu: 250m
              memory: 512Mi
            limits:
              memory: 1Gi

      volumes:
        - name: app-logs
          emptyDir: {}
        - name: fluent-bit-config
          configMap:
            name: fluent-bit-config

The restartPolicy: Always on the init container is the entire trick. Notice also that the migration container remains a classic init container — it runs once, exits, and the sidecar continues. You get clean separation of one-shot setup work from long-running auxiliary processes.
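
For completeness, here is a minimal sketch of the fluent-bit-config ConfigMap the manifest mounts. The tail path matches the shared emptyDir above; the output host is a placeholder, not something from a real environment:

apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  namespace: production
data:
  fluent-bit.conf: |
    [SERVICE]
        HTTP_Server   On          # serves the /api/v1/health startup probe
        HTTP_Port     2020
        Health_Check  On
    [INPUT]
        Name   tail
        Path   /var/log/app/*.log
        Tag    orders.*
    [OUTPUT]
        Name   es
        Match  orders.*
        Host   logging.internal.example   # placeholder backend
        Port   9200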

Service Mesh Integration

Istio 1.24 and Linkerd 2.18 both support native sidecar injection. Specifically, the mesh control plane now writes the proxy as an init container with restartPolicy: Always rather than appending to spec.containers. Consequently, your application container reliably has a working proxy on its first packet, eliminating the classic Istio race where early egress traffic bypassed mTLS.

Moreover, on shutdown the proxy receives SIGTERM only after the main app drains, which means in-flight requests complete with mTLS intact. In contrast, the old approach often killed the proxy mid-flight, leading to TLS handshake errors during rolling deploys. For deeper context on cluster security boundaries, see my Kubernetes network policies guide.
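
Enablement is an opt-in on both meshes. As a rough sketch — the flag names below come from each project's documentation rather than this article, so verify them against your versions — Istio exposes it as an istiod environment variable, settable through an IstioOperator overlay, and Linkerd exposes an analogous Helm value (proxy.nativeSidecar):

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    pilot:
      env:
        ENABLE_NATIVE_SIDECARS: "true"   # inject istio-proxy as a native sidecar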

Service mesh sidecar architecture diagram
Service mesh proxies as native sidecars eliminate startup and shutdown races.

Job Pods and the Termination Problem

This is the killer feature for batch workloads. Previously, Job pods with sidecars stayed in Running forever because the sidecar never exited. Therefore, teams resorted to creative shell scripts that watched for the main container’s exit and signaled the sidecar.

With native sidecars, Job termination works correctly out of the box: when the main container exits, Kubernetes sends SIGTERM to all sidecars in reverse declaration order, waits for them to exit cleanly, and marks the Job as Complete. Your CronJob-based ETL pipelines finally stop accumulating zombie pods.
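
A sketch of what this looks like for a batch pod fronted by a Cloud SQL Auth Proxy sidecar — the Job name, images, and instance connection string are illustrative:

apiVersion: batch/v1
kind: Job
metadata:
  name: nightly-export
  namespace: production
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      initContainers:
        - name: cloud-sql-proxy
          image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.14.0
          restartPolicy: Always   # sidecar: terminated after the main container exits
          args: ["--port=5432", "my-project:us-central1:orders-db"]
      containers:
        - name: export
          image: registry.acme.io/orders-export:2026.05.07
          command: ["/app/export", "--all"]

When export exits 0, the kubelet terminates cloud-sql-proxy and the Job goes Complete — no watcher scripts required. Note that the container-level restartPolicy: Always is valid here even though the pod-level policy is Never; that exception is exactly what the feature adds.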

PreStop Hook Gotchas

One subtle behavior bit me twice in the first month. The terminationGracePeriodSeconds applies to the entire pod, not per container. Consequently, if your main container’s preStop hook sleeps for 45 seconds and your sidecar needs 20 seconds to flush logs, you must set the pod-level grace period to at least 65 seconds.

Furthermore, sidecars do not have an independent grace period — they share the pod’s. As a result, if the main container exhausts the pod’s grace period, sidecars get SIGKILL with no time to clean up. The official Kubernetes sidecar containers documentation covers the precise ordering rules.
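
A pod-spec fragment sketching the budgeting for the scenario above, using the numbers from this section:

spec:
  template:
    spec:
      # budget: 45s app preStop + 20s sidecar log flush = 65s minimum
      terminationGracePeriodSeconds: 70   # 65 plus a little slack
      containers:
        - name: app
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sh", "-c", "sleep 45"]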

Resource Requests and Cost Implications

Sidecars count toward your pod’s resource requests. Specifically, the scheduler sums requests across init containers (taking the max for non-sidecar inits) and adds sidecar requests on top. Therefore, a deployment with a 100m CPU sidecar across 200 replicas reserves 20 CPU cores cluster-wide just for log shipping.
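
Applied to the orders-api manifest above, the arithmetic is a plain sum, because the sidecar runs for the pod's entire life (the one-shot db-migrate init declares no requests, so the init-phase max adds nothing here):

# effective per-pod CPU request:
#   fluent-bit sidecar    50m
# + orders-api           250m
# = per pod              300m
# × 6 replicas          1800m (1.8 cores) reserved for this Deployment alone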

Moreover, this is not new with native sidecars, but the explicit lifecycle makes capacity planning more honest. As a result, several teams I work with discovered they were over-provisioning sidecar resources by 3x. For multi-cluster fleet patterns, see my multi-cluster management guide.

In conclusion, Kubernetes 1.32 sidecar containers represent a rare upgrade where the API change is small but the operational benefit is enormous. The lifecycle guarantees eliminate an entire category of bugs, Job pods finally terminate correctly, and service mesh integrations become race-free. Migrate your sidecar workloads as soon as your control plane reaches 1.32, and update your Helm charts and Kustomize bases to make the new pattern the default for new services.
