Upgrade Guide
This guide provides instructions for upgrading the Scality CSI Driver for S3 from version 1.2.0 to 2.0.0.
Version Compatibility
This guide covers upgrading from v1.2.0 to v2.0.0 only. Installations running a version earlier than v1.2.0 must first be upgraded to v1.2.0 using the standard upgrade procedure, then upgraded to v2.0.0 with this guide.
Prerequisites
Before upgrading, ensure all requirements outlined in the Prerequisites guide are met for the target version.
Pre-Upgrade Steps
Step 1. Set namespace variable:
Set the namespace where the driver is currently installed:
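A minimal example, assuming the driver was installed in the kube-system namespace (substitute the actual namespace):

```bash
# Namespace of the existing driver installation; kube-system is an
# assumption, so adjust to match the actual deployment
export NAMESPACE="kube-system"
```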
Step 2. Check current installation:
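For example, using the namespace variable set above:

```bash
# Show the installed Helm release and its current chart version
helm list -n "$NAMESPACE"

# Confirm the driver pods are healthy before proceeding
kubectl get pods -n "$NAMESPACE"
```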
Step 3. Review changes:
Check the Release Notes for version-specific changes and breaking changes.
Step 4. Install/Update CRDs (Required for v2.0.0):
CRD Installation Required
Version 2.0.0 introduces the MountpointS3PodAttachment CRD for tracking volume attachments.
Helm v3 does not automatically update CRDs on upgrades, so you must install/update CRDs manually before upgrading.
Install CRDs using kustomize (recommended):
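A sketch using kubectl's built-in kustomize support; the repository path and ref shown are assumptions, so substitute the CRD location documented for the release:

```bash
# Apply the CRD manifests directly from the repository at the v2.0.0 tag
# (URL, path, and ref are assumptions)
kubectl apply -k "github.com/scality/mountpoint-s3-csi-driver/charts/crds?ref=v2.0.0"
```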
Or, if the repository has been cloned locally:
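For instance, assuming the CRD manifests sit under charts/crds in the clone (a hypothetical path):

```bash
# Run from the parent directory of the cloned repository
cd mountpoint-s3-csi-driver
kubectl apply -k charts/crds
```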
Verify CRD installation:
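One way to verify, without assuming the CRD's full group name:

```bash
# The MountpointS3PodAttachment CRD should appear in the output
kubectl get crd | grep -i mountpoints3podattachment
```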
Upgrade Path
Step 1: Ensure Running v1.2.0
Prerequisite Version Required
Before upgrading to v2.0.0, the driver must be running version 1.2.0. If already on v1.2.0, skip to Upgrading to v2.0.0.
If running a version earlier than v1.2.0, upgrade to v1.2.0 first:
Version Specification Required
Once v2.0.0 is released, the Helm chart repository will default to v2.0.0. Version 1.2.0 must be explicitly specified in the upgrade command.
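A sketch, assuming the release name scality-mountpoint-s3-csi-driver and the chart reference scality/mountpoint-s3-csi-driver (both assumptions; adjust to match the installation):

```bash
# Pin --version explicitly so Helm does not pull the v2.0.0 default
helm upgrade scality-mountpoint-s3-csi-driver \
  scality/mountpoint-s3-csi-driver \
  --version 1.2.0 \
  --namespace "$NAMESPACE"
```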
Verify the upgrade to v1.2.0:
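For example:

```bash
# The chart version in the listing should now show 1.2.0
helm list -n "$NAMESPACE"
kubectl get pods -n "$NAMESPACE"
```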
Step 2: Dry Run Upgrade to v2.0.0 (Recommended)
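A sketch of the dry run, reusing the release and chart names assumed above:

```bash
# --dry-run renders and validates the upgrade without applying it
helm upgrade scality-mountpoint-s3-csi-driver \
  scality/mountpoint-s3-csi-driver \
  --version 2.0.0 \
  --namespace "$NAMESPACE" \
  --dry-run
```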
Upgrading to v2.0.0
Important Notes for v2.0.0 Upgrade
- Pod Restart Impact: If any application pods using the S3 buckets as filesystems are restarted during the upgrade, they will lose access to the buckets. Once the upgrade is complete, the application pods will automatically regain access.
- Mounter Strategy Change: Version 2.0.0 changes the default mounter from systemd to a pod-based mounter. Existing systemd mounts continue working until their pods restart.
- Automatic Transition: When application pods restart after the upgrade, mounts will automatically transition to the new pod-based mounter with zero downtime.
- Mount-s3 Namespace: The new pod mounter creates pods in the mount-s3 namespace. This namespace is automatically created on first mount.
Choose one of the following upgrade options:
Option A: Upgrade with Default Values
For installations that use the chart's default values:
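For example, with the release and chart names assumed above:

```bash
# Upgrade to v2.0.0 using the chart's default values
helm upgrade scality-mountpoint-s3-csi-driver \
  scality/mountpoint-s3-csi-driver \
  --version 2.0.0 \
  --namespace "$NAMESPACE"
```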
Option B: Upgrade with Custom Values
For installations that use a custom values file:
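For example, where custom-values.yaml is a placeholder for the site-specific values file:

```bash
# Pass the existing custom values file so site settings are preserved
helm upgrade scality-mountpoint-s3-csi-driver \
  scality/mountpoint-s3-csi-driver \
  --version 2.0.0 \
  --namespace "$NAMESPACE" \
  --values custom-values.yaml
```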
Post-Upgrade Verification
Step 1. Check upgrade status:
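For example, with the assumed release name:

```bash
# The release should report a "deployed" status with the new chart version
helm status scality-mountpoint-s3-csi-driver -n "$NAMESPACE"
```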
Step 2. Verify pods are running:
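For example:

```bash
# All driver pods should be Running and fully ready
kubectl get pods -n "$NAMESPACE"
```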
Step 3. Check driver version:
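One way to check, by listing each pod's container images (the image tag should reflect the v2.0.0 release):

```bash
# Print pod names alongside their container image tags
kubectl get pods -n "$NAMESPACE" \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'
```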
Step 4. Verify v2.0.0 specific components:
Check CRD installation:
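For example:

```bash
# The CRD installed in the pre-upgrade steps should still be present
kubectl get crd | grep -i mountpoints3podattachment
```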
Check mount-s3 namespace (created on first mount):
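For example:

```bash
# This namespace only exists once the first volume has been mounted
kubectl get namespace mount-s3
```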
If volumes are currently mounted, verify mounter pods:
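For example:

```bash
# Mounter pods appear here while volumes are mounted
kubectl get pods -n mount-s3
```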
Check MountpointS3PodAttachment resources (if volumes are mounted):
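A sketch; the plural resource name is an assumption, so confirm it with api-resources first:

```bash
# Discover the exact resource name registered by the CRD
kubectl api-resources | grep -i mountpoints3podattachment

# List the attachment resources (plural name assumed from the CRD kind)
kubectl get mountpoints3podattachments
```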
Rollback (If Needed)
Warning
If any application pods using the S3 buckets as filesystems are restarted during the rollback, they will lose access to the buckets. Once the rollback is complete, the application pods will automatically regain access.
If issues occur after the upgrade, roll back to the previous version using the following steps:
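A sketch, with the release name assumed as above:

```bash
# Identify the revision to return to
helm history scality-mountpoint-s3-csi-driver -n "$NAMESPACE"

# Roll back to the previous revision (append a revision number to target
# a specific one)
helm rollback scality-mountpoint-s3-csi-driver -n "$NAMESPACE"
```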
Troubleshooting
These are quick checks to verify the upgrade was successful. For detailed troubleshooting, refer to the troubleshooting guide.
Check pod status:
The driver pod should be in a Running state.
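For example:

```bash
# Pods stuck in Pending or CrashLoopBackOff indicate a failed upgrade
kubectl get pods -n "$NAMESPACE"
```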
Check CSI driver registration:
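For example:

```bash
# The Scality S3 driver should appear among the registered CSI drivers
kubectl get csidrivers
```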