# Multiple Buckets in One Pod
This example shows how to mount multiple S3 buckets in a single pod.
## Features
- Two separate S3 buckets mounted in one pod
- Unique volume handles for each bucket
- Different mount paths (`/data` and `/data2`)
## Deploy
```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: s3-pv
spec:
  capacity:
    storage: 1200Gi # ignored, required
  accessModes:
    - ReadWriteMany # supported options: ReadWriteMany
  storageClassName: "" # Required for static provisioning
  claimRef: # To ensure no other PVCs can claim this PV
    namespace: default # Namespace is required even though it's in "default" namespace.
    name: s3-pvc # Name of your PVC
  mountOptions:
    - allow-delete
    - region us-west-2
  csi:
    driver: s3.csi.scality.com # required
    volumeHandle: s3-csi-multi-bucket-1-volume # Must be unique across all PVs
    volumeAttributes:
      bucketName: s3-csi-driver
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: s3-pvc
spec:
  accessModes:
    - ReadWriteMany # Supported options: ReadWriteMany
  storageClassName: "" # Required for static provisioning
  resources:
    requests:
      storage: 1200Gi # Ignored, required
  volumeName: s3-pv # Name of your PV
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: s3-pv-2
spec:
  capacity:
    storage: 1200Gi # ignored, required
  accessModes:
    - ReadWriteMany # supported options: ReadWriteMany
  storageClassName: "" # Required for static provisioning
  claimRef: # To ensure no other PVCs can claim this PV
    namespace: default # Namespace is required even though it's in "default" namespace.
    name: s3-pvc-2 # Name of your PVC
  mountOptions:
    - allow-delete
    - region us-west-2
  csi:
    driver: s3.csi.scality.com # required
    volumeHandle: s3-csi-multi-bucket-2-volume # Must be unique across all PVs
    volumeAttributes:
      bucketName: s3-csi-driver-2
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: s3-pvc-2
spec:
  accessModes:
    - ReadWriteMany # supported options: ReadWriteMany
  storageClassName: "" # required for static provisioning
  resources:
    requests:
      storage: 1200Gi # ignored, required
  volumeName: s3-pv-2 # Name of your PV
---
apiVersion: v1
kind: Pod
metadata:
  name: s3-app
spec:
  containers:
    - name: app
      image: ubuntu
      command: ["/bin/sh"]
      args: ["-c", "echo 'Hello from the container!' >> /data/$(date -u).txt; echo 'Hello from the container!' >> /data2/$(date -u).txt; tail -f /dev/null"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
        - name: persistent-storage-2
          mountPath: /data2
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: s3-pvc
    - name: persistent-storage-2
      persistentVolumeClaim:
        claimName: s3-pvc-2
EOF
```
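After applying the manifest, it is worth confirming that both claims have bound and that the pod is ready before checking the mounts. A minimal check, using the resource names from the example above:

```shell
# Both PVs should report STATUS "Bound" once their claims attach.
kubectl get pv s3-pv s3-pv-2
kubectl get pvc s3-pvc s3-pvc-2

# Wait for the pod to become Ready before exec-ing into it.
kubectl wait --for=condition=Ready pod/s3-app --timeout=120s
```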
## Key Points
- Each PV must have a unique `volumeHandle`
- Each bucket requires separate PV and PVC resources
- The pod mounts both volumes at different paths
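Because reusing a `volumeHandle` is an easy mistake when copying a PV definition for a second bucket, a quick lint of the manifest before applying it can catch duplicates. This is an illustrative sketch, not part of the driver; the helper name `duplicate_volume_handles` is hypothetical and only the Python standard library is used:

```python
import re
from collections import Counter

def duplicate_volume_handles(manifest_text: str) -> list[str]:
    """Return volumeHandle values that appear more than once in a manifest."""
    handles = re.findall(r"^\s*volumeHandle:\s*(\S+)", manifest_text, re.MULTILINE)
    return [h for h, n in Counter(handles).items() if n > 1]

# Example: the second handle was accidentally left identical to the first.
manifest = """
    volumeHandle: s3-csi-multi-bucket-1-volume
    volumeHandle: s3-csi-multi-bucket-1-volume
"""
print(duplicate_volume_handles(manifest))  # -> ['s3-csi-multi-bucket-1-volume']
```

An empty result means every handle in the file is unique, which is what the CSI driver requires across all PVs.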
## Check Pod-Level Access to the Mounted S3 Volumes
```bash
kubectl get pod s3-app
kubectl exec s3-app -- ls -la /data
kubectl exec s3-app -- ls -la /data2
```
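Since the two paths are backed by different buckets, an object written under one mount should not appear under the other. One way to spot-check that separation (the file name `probe.txt` is just an example):

```shell
kubectl exec s3-app -- sh -c "echo from-bucket-1 > /data/probe.txt"
kubectl exec s3-app -- ls /data    # probe.txt should be listed here
kubectl exec s3-app -- ls /data2   # but not here
```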
## Cleanup
```bash
kubectl delete pod s3-app
kubectl delete pvc s3-pvc s3-pvc-2
kubectl delete pv s3-pv s3-pv-2
```
## Download YAML
📁 multiple_buckets_one_pod.yaml