# TLS Configuration

## Overview
When your S3 endpoint uses TLS with certificates signed by a private or internal CA, the CSI driver needs access to the CA certificate to validate the connection. The Scality CSI Driver supports injecting custom CA certificates via Kubernetes ConfigMaps.
This is required when:
- Your RING S3 endpoint uses HTTPS with a self-signed or internally-signed certificate
- Your organization uses a private CA for internal services
- The S3 endpoint's certificate chain is not in the default system trust store
## Prerequisites
- A PEM-encoded CA certificate file (the root or intermediate CA that signed your S3 server certificate)
- The CSI driver Helm chart installed or ready to install
## Configuration (Helm-Managed)

The recommended approach uses `--set-file` to pass the CA certificate content directly to Helm. Helm creates the ConfigMap in both required namespaces automatically.
### Step 1: Install or Upgrade with CA Certificate Data
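The command for this step was lost from the source; a plausible reconstruction follows. The release name, chart reference, and PEM file name are placeholders, not taken from the source — only the `tls.*` values and the `s3-ca-cert` ConfigMap name come from the surrounding text:

```shell
# Release name, chart reference, and PEM file name are placeholders.
helm upgrade --install scality-s3-csi scality/mountpoint-s3-csi-driver \
  --namespace kube-system \
  --set tls.caCertConfigMap=s3-ca-cert \
  --set-file tls.caCertData=./internal-ca.pem
```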
This single command:
- Creates a ConfigMap named `s3-ca-cert` in the controller namespace (`kube-system`)
- Creates the same ConfigMap in the mounter pod namespace (`mount-s3`)
- Configures the controller and mounter pods to use the CA certificate
> **Key Name:** The ConfigMap key is automatically set to `ca-bundle.crt`, which is the key the driver expects.
### Step 2: Verify
Check that the controller pod has the CA certificate mounted:
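The original snippet is not preserved; a check along these lines lists the mounted key. The deployment name is an assumption inferred from the `s3-csi-controller` container name mentioned under "How It Works":

```shell
# Deployment name is an assumption; the /etc/ssl/custom-ca/ mount path is
# the one described in "How It Works".
kubectl exec -n kube-system deploy/s3-csi-controller -- ls /etc/ssl/custom-ca/
```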
Expected output: `ca-bundle.crt`
Verify the ConfigMap exists in the mounter pod namespace:
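The verification command (reconstructed; it matches the one shown under Troubleshooting) is:

```shell
kubectl get configmap s3-ca-cert -n mount-s3
```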
## Certificate Rotation
To rotate the CA certificate, update the Helm release with the new certificate file:
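A reconstruction of the rotation command, with the same placeholder release and chart names as the install example:

```shell
# Placeholder release/chart names; --reuse-values keeps the rest of the
# configuration and replaces only the certificate content.
helm upgrade scality-s3-csi scality/mountpoint-s3-csi-driver \
  --namespace kube-system \
  --reuse-values \
  --set-file tls.caCertData=./new-ca.pem
```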
Helm updates the ConfigMap in both namespaces. Existing pods will pick up the change on their next restart.
## Manual Mode

If you cannot pass the certificate data via Helm values (e.g., policy restrictions), you can create the ConfigMaps manually. In this mode, set only `tls.caCertConfigMap` without `tls.caCertData`.
### Why Two Namespaces?
The CA certificate ConfigMap must exist in two namespaces because the controller and mounter pods run in separate namespaces:
- **Controller namespace** (e.g., `kube-system`) — mounted by the `s3-csi-controller` for AWS SDK S3 API calls (bucket creation/deletion during dynamic provisioning).
- **Mounter pod namespace** (e.g., `mount-s3`) — mounted by mounter pod init containers that inject the CA into the `mount-s3` trust store.
### Step 1: Create the CA Certificate ConfigMap in the Controller Namespace
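The lost snippet likely resembled the following. The source PEM file name is a placeholder, while the ConfigMap name and key come from the surrounding text:

```shell
# PEM file name is a placeholder; the key must be ca-bundle.crt.
kubectl create configmap s3-ca-cert \
  --namespace kube-system \
  --from-file=ca-bundle.crt=./internal-ca.pem
```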
> **Key Name:** The ConfigMap key must be `ca-bundle.crt`. This is the key the driver expects.
### Step 2: Install or Upgrade the Helm Chart
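A plausible reconstruction (placeholder release and chart names again); note that only `tls.caCertConfigMap` is set in manual mode:

```shell
# Manual mode: reference the pre-created ConfigMap; no tls.caCertData.
helm upgrade --install scality-s3-csi scality/mountpoint-s3-csi-driver \
  --namespace kube-system \
  --set tls.caCertConfigMap=s3-ca-cert
```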
### Step 3: Create the CA Certificate ConfigMap in the Mounter Namespace

After Helm creates the `mount-s3` namespace, create the same ConfigMap there:
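As in Step 1, but targeting the mounter namespace (the PEM file name is a placeholder):

```shell
kubectl create configmap s3-ca-cert \
  --namespace mount-s3 \
  --from-file=ca-bundle.crt=./internal-ca.pem
```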
> **Namespace Ordering:** Do not attempt to create the ConfigMap in the `mount-s3` namespace before the Helm install — the namespace does not exist yet. If a ConfigMap is missing from either namespace, the respective pod will be stuck in `ContainerCreating` with a `configmap not found` event.
## Switching from Manual to Helm-Managed Mode
If you previously created ConfigMaps manually and want to switch to Helm-managed mode, delete the manually created ConfigMaps first — Helm cannot adopt resources it did not create:
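A sketch of the switch (release and chart names are placeholders, as in the install example):

```shell
# 1. Delete the manually created ConfigMaps so Helm can create its own.
kubectl delete configmap s3-ca-cert -n kube-system
kubectl delete configmap s3-ca-cert -n mount-s3

# 2. Upgrade the release with the certificate data (placeholder names).
helm upgrade scality-s3-csi scality/mountpoint-s3-csi-driver \
  --namespace kube-system \
  --reuse-values \
  --set-file tls.caCertData=./internal-ca.pem
```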
## How It Works
The TLS configuration operates at two levels:
### Controller Pod (Dynamic Provisioning)
The controller pod uses the CA certificate for S3 API calls (bucket creation/deletion) during dynamic provisioning:
- The ConfigMap is mounted at `/etc/ssl/custom-ca/` in the `s3-csi-controller` container
- The `AWS_CA_BUNDLE` environment variable is set to `/etc/ssl/custom-ca/ca-bundle.crt`
- AWS SDK Go v2 reads this variable and uses the CA certificate for TLS validation
### Mounter Pods (Volume Mounting)

Mounter pods use `mount-s3` (which uses s2n-tls) to mount S3 buckets. s2n-tls reads CA certificates from the system trust store (`/etc/ssl/certs/`), so a simple volume mount is not sufficient. Instead:
- An initContainer (`install-ca-cert`) runs before the main `mountpoint` container
- The initContainer copies the system CA bundle from the Alpine image to a shared emptyDir volume
- It appends the custom CA certificate from the ConfigMap to the combined bundle
- The main container mounts the shared volume at `/etc/ssl/certs/` (read-only)
- `mount-s3` reads the combined trust store and validates the S3 endpoint certificate
The initContainer runs as non-root and complies with the PodSecurity restricted policy
enforced on the mounter pod namespace.
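The merge performed by the initContainer can be sketched as a small shell function. This is an illustration of the steps above, not the chart's actual script; the paths in the trailing comment are the ones described in this section:

```shell
#!/bin/sh
# Illustrative sketch of the initContainer's CA-merge step.
# Usage: merge_ca_bundle SYSTEM_BUNDLE CUSTOM_CA OUT_FILE
merge_ca_bundle() {
  # Start from the base system bundle shipped in the init image...
  cp "$1" "$3"
  # ...then append the custom CA taken from the ConfigMap mount.
  cat "$2" >> "$3"
}

# In the real pod the paths would be roughly:
#   merge_ca_bundle /etc/ssl/certs/ca-certificates.crt \
#                   /etc/ssl/custom-ca/ca-bundle.crt \
#                   <emptyDir>/ca-certificates.crt
```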
## Helm Values Reference

| Parameter | Description | Default |
|---|---|---|
| `tls.caCertConfigMap` | Name of the ConfigMap containing the CA certificate | `""` (disabled) |
| `tls.caCertData` | PEM-encoded CA certificate content (enables Helm-managed mode) | `""` |
| `tls.initImage.repository` | Image repository for the CA cert init container | `alpine` |
| `tls.initImage.tag` | Image tag for the CA cert init container | `3.21` |
| `tls.initImage.pullPolicy` | Pull policy for the init image | `IfNotPresent` |
| `tls.initResources.requests.cpu` | CPU request for the init container | `10m` |
| `tls.initResources.requests.memory` | Memory request for the init container | `16Mi` |
| `tls.initResources.limits.memory` | Memory limit for the init container | `64Mi` |
## Why ConfigMap Instead of Secret
CA certificates are public configuration data, not confidential information. Using ConfigMaps instead of Secrets:
- Follows the Kubernetes convention of using ConfigMaps for non-sensitive configuration
- Avoids unnecessary RBAC complexity for managing Secrets
- Makes the certificates easier to inspect and manage
## Troubleshooting

### Pod Stuck in ContainerCreating

If a controller or mounter pod is stuck in `ContainerCreating` after enabling TLS, the CA certificate ConfigMap is likely missing from that pod's namespace. Check the pod events:
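The events command was lost; it was presumably along these lines:

```shell
# Substitute the stuck pod's name and namespace.
kubectl describe pod <pod-name> -n mount-s3
```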
Look for an event like: `configmap "s3-ca-cert" not found`.
To fix, either switch to Helm-managed mode (`--set-file tls.caCertData=...`) or create the ConfigMap manually in the correct namespace:
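A reconstruction of the fix (the source PEM file name and the pod name are placeholders):

```shell
# Create the ConfigMap in whichever namespace reported the missing-configmap
# event (both shown here).
kubectl create configmap s3-ca-cert \
  --from-file=ca-bundle.crt=./internal-ca.pem \
  --namespace kube-system
kubectl create configmap s3-ca-cert \
  --from-file=ca-bundle.crt=./internal-ca.pem \
  --namespace mount-s3

# Delete the stuck pod so it is recreated and can mount the ConfigMap.
kubectl delete pod <pod-name> -n mount-s3
```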
### Certificate Not Found

If mounter pods fail with TLS errors, verify the ConfigMap exists in both namespaces and has the correct key:

- Controller namespace (default: `kube-system`):

    ```shell
    kubectl get configmap s3-ca-cert -n kube-system
    ```

- Mounter pod namespace (default: `mount-s3`):

    ```shell
    kubectl get configmap s3-ca-cert -n mount-s3
    ```

- Verify the ConfigMap has the correct key:

    ```shell
    kubectl get configmap s3-ca-cert -n mount-s3 -o jsonpath='{.data}' | head -c 100
    ```
### Certificate Chain Issues

If you see certificate verification errors despite having the CA cert configured:

- Ensure you are providing the root CA certificate, not the server certificate
- If using an intermediate CA, include the full chain in the `ca-bundle.crt` file
- Verify the certificate is in PEM format (starts with `-----BEGIN CERTIFICATE-----`)
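A quick local sanity check for the last two points can be written as a small helper (not part of the driver; the bundle file name in the usage note is an example):

```shell
# Count certificate blocks in a PEM bundle. A root + intermediate chain
# should count 2 or more; a count of 0 means the file is not PEM.
count_pem_certs() {
  grep -c -- '-----BEGIN CERTIFICATE-----' "$1"
}
```

For example, `count_pem_certs ca-bundle.crt` should print at least `1` for a valid PEM file, and `2` or more when the full chain is included.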
### Init Container Failures
If the init container fails, check its logs:
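The logs command was presumably of this shape (the pod name is a placeholder; the init container name `install-ca-cert` is the one described under "How It Works"):

```shell
kubectl logs <mounter-pod-name> -n mount-s3 -c install-ca-cert
```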
Common issues:
- The init image must include a system CA bundle at `/etc/ssl/certs/ca-certificates.crt` (Alpine includes this by default via the `ca-certificates` package)
- The ConfigMap may not be mounted correctly