Mv pvc k8s
Scripts
Alternative tool: https://github.com/utkuozdemir/pv-migrate
The script below basically creates a temporary PVC, copies the data from the original PVC into it, deletes the original PVC, recreates a PVC with the original name and the new parameters, and copies the data back onto it. This is useful for changing disk size, storage tier, zone, region, etc. It is still a little sloppy; refine it in the future.
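A hedged usage example, assuming the combined script below is saved as mv-pvc-combined.sh (hypothetical filename); the PVC, storage class, and namespace names are illustrative:
# Hypothetical invocation: move PVC "data-myapp-0" onto a 20Gi volume in the
# "managed-csi-premium" storage class, namespace "myapp".
./mv-pvc-combined.sh data-myapp-0 20 managed-csi-premium myapp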
Combined
#!/bin/bash
set -eux
usage() {
  echo "Usage: $0 <old_pvc_name> <new_pvc_size_Gi> <new_storage_class> <namespace>"
  exit 1
}

if [ "$#" -ne 4 ]; then
  usage
fi

OLD_PVC_NAME="$1"
TMP_PVC_NAME="${OLD_PVC_NAME}-tmp"
NEW_PVC_SIZE_GI="$2"
NEW_STORAGE_CLASS="$3"
NAMESPACE="$4"
DATA_COPY_POD_NAME="data-copy-pod"

# Validate the PVC size input before touching the workload
if [[ ! "$NEW_PVC_SIZE_GI" =~ ^[0-9]+$ ]]; then
  echo "Error: New PVC size must be a positive integer (Gi)"
  usage
fi

# Detect the workload using the old PVC and scale it down
POD_NAME=$(kubectl get pods -n "$NAMESPACE" --field-selector=status.phase=Running \
  -o jsonpath="{.items[?(@.spec.volumes[*].persistentVolumeClaim.claimName=='$OLD_PVC_NAME')].metadata.name}" | awk '{print $1}')
if [ -z "$POD_NAME" ]; then
  echo "No running pod found using PVC: $OLD_PVC_NAME"
  exit 1
fi

OWNER_KIND=$(kubectl get pod "$POD_NAME" -n "$NAMESPACE" -o jsonpath="{.metadata.ownerReferences[0].kind}")
OWNER_NAME=$(kubectl get pod "$POD_NAME" -n "$NAMESPACE" -o jsonpath="{.metadata.ownerReferences[0].name}")

# Pods created by a Deployment are owned by a ReplicaSet; resolve the Deployment behind it.
if [ "$OWNER_KIND" = "ReplicaSet" ]; then
  OWNER_NAME=$(kubectl get replicaset "$OWNER_NAME" -n "$NAMESPACE" -o jsonpath="{.metadata.ownerReferences[0].name}")
  OWNER_KIND="Deployment"
fi

case "$OWNER_KIND" in
  "Deployment") kubectl scale deployment "$OWNER_NAME" --replicas=0 -n "$NAMESPACE" ;;
  "StatefulSet") kubectl scale statefulset "$OWNER_NAME" --replicas=0 -n "$NAMESPACE" ;;
  "DaemonSet") kubectl delete daemonset "$OWNER_NAME" -n "$NAMESPACE" ;;
  *) echo "Unsupported workload type: $OWNER_KIND"; exit 1 ;;
esac
echo "Scaled $OWNER_KIND $OWNER_NAME to 0."
# Create temporary PVC
kubectl create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: $TMP_PVC_NAME
  namespace: $NAMESPACE
spec:
  accessModes: [ReadWriteOnce] # Or ReadWriteMany if needed
  resources:
    requests:
      storage: ${NEW_PVC_SIZE_GI}Gi
  storageClassName: $NEW_STORAGE_CLASS
EOF
# Create data copy pod
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: $DATA_COPY_POD_NAME
  namespace: $NAMESPACE
spec:
  restartPolicy: Never
  containers:
    - name: data-copy
      image: alpine:latest
      # command: ["sh", "-c", "apk add --no-cache rsync && rsync --bwlimit=10240 -aHAXv --progress /mnt/old/ /mnt/new/ && sleep infinity"]
      command: ["sh", "-c", "apk add --no-cache rsync && rsync --bwlimit=10240 -aHAXv --progress /mnt/old/ /mnt/new/"]
      volumeMounts:
        - name: old-volume
          mountPath: /mnt/old
        - name: new-volume
          mountPath: /mnt/new
  volumes:
    - name: old-volume
      persistentVolumeClaim:
        claimName: $OLD_PVC_NAME
    - name: new-volume
      persistentVolumeClaim:
        claimName: $TMP_PVC_NAME
EOF
# Wait for copy to complete
while true; do
  PHASE=$(kubectl get pod "$DATA_COPY_POD_NAME" -n "$NAMESPACE" -o jsonpath='{.status.phase}')
  if [ "$PHASE" = "Succeeded" ]; then
    echo "Data copy to temporary PVC completed."
    break
  elif [ "$PHASE" = "Failed" ]; then
    echo "Data copy pod $DATA_COPY_POD_NAME failed; aborting."
    kubectl logs "$DATA_COPY_POD_NAME" -n "$NAMESPACE"
    exit 1
  else
    echo "Pod $DATA_COPY_POD_NAME is in phase $PHASE. Waiting 20 seconds..."
    sleep 20
  fi
done
kubectl logs "$DATA_COPY_POD_NAME" -n "$NAMESPACE"
kubectl delete pod "$DATA_COPY_POD_NAME" -n "$NAMESPACE"
kubectl delete pvc "$OLD_PVC_NAME" -n "$NAMESPACE"
# Create new PVC with original name
kubectl create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: $OLD_PVC_NAME
  namespace: $NAMESPACE
spec:
  accessModes: [ReadWriteOnce] # Or ReadWriteMany if needed
  resources:
    requests:
      storage: ${NEW_PVC_SIZE_GI}Gi
  storageClassName: $NEW_STORAGE_CLASS
EOF
# Create data copy pod for final migration
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: $DATA_COPY_POD_NAME
  namespace: $NAMESPACE
spec:
  restartPolicy: Never
  containers:
    - name: data-copy
      image: alpine:latest
      # command: ["sh", "-c", "apk add --no-cache rsync && rsync --bwlimit=10240 -aHAXv --progress /mnt/old/ /mnt/new/ && sleep infinity"]
      command: ["sh", "-c", "apk add --no-cache rsync && rsync --bwlimit=10240 -aHAXv --progress /mnt/old/ /mnt/new/"]
      volumeMounts:
        - name: old-volume
          mountPath: /mnt/old
        - name: new-volume
          mountPath: /mnt/new
  volumes:
    - name: old-volume
      persistentVolumeClaim:
        claimName: $TMP_PVC_NAME
    - name: new-volume
      persistentVolumeClaim:
        claimName: $OLD_PVC_NAME
EOF
# Wait for final copy
while true; do
  PHASE=$(kubectl get pod "$DATA_COPY_POD_NAME" -n "$NAMESPACE" -o jsonpath='{.status.phase}')
  if [ "$PHASE" = "Succeeded" ]; then
    echo "Data copy to new PVC completed."
    break
  elif [ "$PHASE" = "Failed" ]; then
    echo "Data copy pod $DATA_COPY_POD_NAME failed; aborting."
    kubectl logs "$DATA_COPY_POD_NAME" -n "$NAMESPACE"
    exit 1
  else
    echo "Pod $DATA_COPY_POD_NAME is in phase $PHASE. Waiting 20 seconds..."
    sleep 20
  fi
done
kubectl logs "$DATA_COPY_POD_NAME" -n "$NAMESPACE"
kubectl delete pod "$DATA_COPY_POD_NAME" -n "$NAMESPACE"
kubectl delete pvc "$TMP_PVC_NAME" -n "$NAMESPACE"
sleep 20
# Scale workload back up
case "$OWNER_KIND" in
"Deployment") kubectl scale deployment "$OWNER_NAME" --replicas=1 -n "$NAMESPACE" ;;
"StatefulSet") kubectl scale statefulset "$OWNER_NAME" --replicas=1 -n "$NAMESPACE" ;;
*) echo "Skipping scaling for $OWNER_KIND";;
esac
echo "PVC migration completed successfully."
Go code combined
Note: when the rsync bandwidth was not throttled, the copy pod was observed to finish prematurely with files missing from the copy; hence the --bwlimit in the rsync command.
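To double-check that a copy actually completed, one option (a sketch, not part of the original scripts) is to re-run rsync in dry-run mode inside the copy pod; an empty change list means the two volumes already match:
# Hypothetical verification step: list anything that still differs between the two mounts.
rsync -aHAX --checksum --dry-run --itemize-changes /mnt/old/ /mnt/new/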
update-size-tier-pvc.go
package main

import (
    "context"
    "fmt"
    "log"
    "os"
    "os/exec"
    "strconv"
    "strings"
    "time"

    corev1 "k8s.io/api/core/v1"
    // appsv1 "k8s.io/api/apps/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)
func main() {
    if len(os.Args) != 5 {
        fmt.Println("Usage: <old_pvc_name> <new_pvc_size_Gi> <new_storage_class> <namespace>")
        os.Exit(1)
    }

    oldPVC := os.Args[1]
    tmpPVC := oldPVC + "-tmp"
    newSizeGi := os.Args[2]
    newStorageClass := os.Args[3]
    namespace := os.Args[4]
    dataCopyPod := "data-copy-pod"

    config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    if err != nil {
        log.Fatalf("Error building kubeconfig: %v", err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        log.Fatalf("Error creating Kubernetes client: %v", err)
    }

    // Detect workload using old PVC and scale it down
    podList, err := clientset.CoreV1().Pods(namespace).List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        log.Fatalf("Error getting pods: %v", err)
    }
    var podName, ownerKind, ownerName string
    for _, pod := range podList.Items {
        for _, vol := range pod.Spec.Volumes {
            if vol.PersistentVolumeClaim != nil && vol.PersistentVolumeClaim.ClaimName == oldPVC {
                podName = pod.Name
                if len(pod.OwnerReferences) > 0 {
                    ownerKind = pod.OwnerReferences[0].Kind
                    ownerName = pod.OwnerReferences[0].Name
                }
                break
            }
        }
    }
    if podName == "" {
        log.Fatalf("No running pod found using PVC: %s", oldPVC)
    }
    scaleWorkload(clientset, ownerKind, ownerName, namespace, 0)

    // Validate new PVC size
    sizeInt, err := strconv.Atoi(newSizeGi)
    if err != nil {
        log.Fatalf("Invalid PVC size: %s", newSizeGi)
    }

    createPVC(clientset, tmpPVC, int64(sizeInt), newStorageClass, namespace)
    copyData(oldPVC, tmpPVC, namespace, dataCopyPod)
    deletePVC(clientset, oldPVC, namespace)
    createPVC(clientset, oldPVC, int64(sizeInt), newStorageClass, namespace)
    copyData(tmpPVC, oldPVC, namespace, dataCopyPod)
    deletePVC(clientset, tmpPVC, namespace)
    scaleWorkload(clientset, ownerKind, ownerName, namespace, 1)
    log.Println("PVC migration completed successfully.")
}
func scaleWorkload(clientset *kubernetes.Clientset, kind, name, namespace string, replicas int32) {
    switch kind {
    case "Deployment":
        _, err := clientset.AppsV1().Deployments(namespace).Patch(context.TODO(), name, types.StrategicMergePatchType,
            []byte(fmt.Sprintf(`{"spec":{"replicas":%d}}`, replicas)), metav1.PatchOptions{})
        if err != nil {
            log.Fatalf("Failed to scale Deployment %s: %v", name, err)
        }
        log.Printf("Scaled Deployment %s to %d replicas", name, replicas)
    case "StatefulSet":
        _, err := clientset.AppsV1().StatefulSets(namespace).Patch(context.TODO(), name, types.StrategicMergePatchType,
            []byte(fmt.Sprintf(`{"spec":{"replicas":%d}}`, replicas)), metav1.PatchOptions{})
        if err != nil {
            log.Fatalf("Failed to scale StatefulSet %s: %v", name, err)
        }
        log.Printf("Scaled StatefulSet %s to %d replicas", name, replicas)
    case "DaemonSet":
        err := clientset.AppsV1().DaemonSets(namespace).Delete(context.TODO(), name, metav1.DeleteOptions{})
        if err != nil {
            log.Fatalf("Failed to delete DaemonSet %s: %v", name, err)
        }
        log.Printf("Deleted DaemonSet %s", name)
    default:
        log.Printf("Unsupported workload type: %s", kind)
    }
}
func createPVC(clientset *kubernetes.Clientset, name string, size int64, storageClass, namespace string) {
    // Size is given in Gi, so build the quantity with a binary (power-of-two) suffix.
    storageQuantity := resource.NewQuantity(size*1024*1024*1024, resource.BinarySI)
    pvc := &corev1.PersistentVolumeClaim{
        ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
        Spec: corev1.PersistentVolumeClaimSpec{
            AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
            Resources: corev1.VolumeResourceRequirements{
                Requests: corev1.ResourceList{"storage": *storageQuantity},
            },
            StorageClassName: &storageClass,
        },
    }
    _, err := clientset.CoreV1().PersistentVolumeClaims(namespace).Create(context.TODO(), pvc, metav1.CreateOptions{})
    if err != nil {
        log.Fatalf("Error creating PVC %s: %v", name, err)
    }
    log.Printf("Created PVC %s", name)
}
func deletePVC(clientset *kubernetes.Clientset, name, namespace string) {
    err := clientset.CoreV1().PersistentVolumeClaims(namespace).Delete(context.TODO(), name, metav1.DeleteOptions{})
    if err != nil {
        log.Fatalf("Error deleting PVC %s: %v", name, err)
    }
    log.Printf("Deleted PVC %s", name)
}
func copyData(srcPVC, dstPVC, namespace, podName string) {
    log.Printf("Creating data copy pod %s to copy data from %s to %s", podName, srcPVC, dstPVC)

    // Delete any existing pod with the same name
    exec.Command("kubectl", "delete", "pod", podName, "-n", namespace, "--ignore-not-found=true").Run()

    // Define the pod YAML
    podYAML := fmt.Sprintf(`
apiVersion: v1
kind: Pod
metadata:
  name: %s
  namespace: %s
spec:
  containers:
    - name: rsync
      image: alpine
      command: ["sh", "-c", "apk add --no-cache rsync && rsync -aHAXv --bwlimit=10240 --progress /mnt/old/ /mnt/new/"]
      # command: ["sh", "-c", "apk add --no-cache rsync && rsync -aHAXv --progress /mnt/old/ /mnt/new/"]
      volumeMounts:
        - name: src-volume
          mountPath: /mnt/old
        - name: dst-volume
          mountPath: /mnt/new
  restartPolicy: Never
  volumes:
    - name: src-volume
      persistentVolumeClaim:
        claimName: %s
    - name: dst-volume
      persistentVolumeClaim:
        claimName: %s
`, podName, namespace, srcPVC, dstPVC)

    // Apply the pod YAML
    cmd := exec.Command("kubectl", "apply", "-f", "-")
    cmd.Stdin = strings.NewReader(podYAML)
    err := cmd.Run()
    if err != nil {
        log.Fatalf("Error creating data copy pod: %v", err)
    }

    // Wait for the pod to complete
    for {
        out, err := exec.Command("kubectl", "get", "pod", podName, "-n", namespace, "-o", "jsonpath={.status.phase}").Output()
        if err != nil {
            log.Fatalf("Error checking pod status: %v", err)
        }
        status := strings.TrimSpace(string(out))
        if status == "Succeeded" {
            log.Println("Data copy completed successfully.")
            break
        } else if status == "Failed" {
            log.Fatalf("Data copy pod %s failed.", podName)
        }
        log.Println("Waiting for data copy to complete...")
        time.Sleep(10 * time.Second)
    }

    // Cleanup after copy
    exec.Command("kubectl", "delete", "pod", podName, "-n", namespace).Run()
    log.Printf("Deleted data copy pod %s", podName)
}
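A hedged build-and-run example for the Go version (it assumes the file sits in a Go module with the client-go dependencies declared; the program reads the kubeconfig path from the KUBECONFIG environment variable, and the PVC, storage class, and namespace names are illustrative):
# Hypothetical usage of the Go migrator.
go build -o update-size-tier-pvc update-size-tier-pvc.go
KUBECONFIG="$HOME/.kube/config" ./update-size-tier-pvc data-myapp-0 20 managed-csi-premium myapp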
Split
main.sh
#!/bin/bash
set -eu
# kubectl delete ns zabbix
# velero restore create --from-backup zabbix-tobucket --include-namespaces zabbix
# sleep 120
kubectl scale sts/zabbix-postgresql --replicas=0
./mv-pvc.sh postgresql-data-zabbix-postgresql-0 postgresql-data-zabbix-postgresql-0-tmp 7 default zabbix
kubectl delete pvc postgresql-data-zabbix-postgresql-0
./mv-pvc.sh postgresql-data-zabbix-postgresql-0-tmp postgresql-data-zabbix-postgresql-0 7 default zabbix
kubectl scale sts/zabbix-postgresql --replicas=1
recreate-pvc-with-original-data.sh
#!/bin/bash
set -eux
# Function to display usage instructions
usage() {
  echo "Usage: $0 <old_pvc_name> <new_pvc_name> <new_pvc_size_Gi> <new_storage_class> <namespace>"
  exit 1
}
# Check for correct number of arguments
if [ "$#" -ne 5 ]; then
  usage
fi
check_pod_status(){
  POD_NAME=$1
  while true; do
    PHASE=$(kubectl get pod "$POD_NAME" -n "$NAMESPACE" -o jsonpath='{.status.phase}')
    if [ "$PHASE" = "Succeeded" ]; then
      echo "Pod $POD_NAME has succeeded."
      break
    elif [ "$PHASE" = "Failed" ]; then
      echo "Pod $POD_NAME failed; aborting."
      kubectl logs "$POD_NAME" -n "$NAMESPACE"
      exit 1
    else
      echo "Pod $POD_NAME is in phase $PHASE. Waiting 5 seconds..."
      sleep 5
    fi
  done
}
# Assign arguments to variables
OLD_PVC_NAME="$1"
NEW_PVC_NAME="$2"
NEW_PVC_SIZE_GI="$3"
NEW_STORAGE_CLASS="$4"
NAMESPACE="$5"
# Validate PVC size
if [[ ! "$NEW_PVC_SIZE_GI" =~ ^[0-9]+$ ]]; then
echo "Error: New PVC size must be a positive integer (Gi)"
usage
fi
DATA_COPY_POD_NAME="data-copy-pod"
# 1. Create the new PVC
kubectl create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: $NEW_PVC_NAME
  namespace: $NAMESPACE
spec:
  accessModes: [ReadWriteOnce] # Or ReadWriteMany if needed
  resources:
    requests:
      storage: ${NEW_PVC_SIZE_GI}Gi
  storageClassName: $NEW_STORAGE_CLASS
EOF
# 2. Create the data copy pod
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: $DATA_COPY_POD_NAME
  namespace: $NAMESPACE
spec:
  restartPolicy: Never # Important: Only run once
  containers:
    - name: data-copy
      image: alpine:latest
      # command: ["sh", "-c", "apk add --no-cache rsync && rsync -av /mnt/old/ /mnt/new/"]
      command: ["sh", "-c", "apk add --no-cache rsync && rsync -aHAXv --progress /mnt/old/ /mnt/new/"]
      volumeMounts:
        - name: old-volume
          mountPath: /mnt/old
        - name: new-volume
          mountPath: /mnt/new
  volumes:
    - name: old-volume
      persistentVolumeClaim:
        claimName: $OLD_PVC_NAME
    - name: new-volume
      persistentVolumeClaim:
        claimName: $NEW_PVC_NAME
EOF
# 3. Wait for the data copy to finish (adjust timeout as needed)
# kubectl wait --for=condition=Succeeded --timeout=5m pod/$DATA_COPY_POD_NAME -n $NAMESPACE
# kubectl wait --for=condition=Completed --timeout=5m pod/$DATA_COPY_POD_NAME -n $NAMESPACE
check_pod_status ${DATA_COPY_POD_NAME}
# 4. Check the status of the copy pod
kubectl logs $DATA_COPY_POD_NAME -n $NAMESPACE
kubectl get pod $DATA_COPY_POD_NAME -n $NAMESPACE
# 5. (Optional) Update your application deployment to use the new PVC (see the patch example after this script)
# 6. (After verifying everything is working) Delete the data copy pod
kubectl delete pod $DATA_COPY_POD_NAME -n $NAMESPACE
# 7. Delete the old PVC (after you're *absolutely sure* you don't need it)
# kubectl delete pvc $OLD_PVC_NAME -n $NAMESPACE
# 8. Delete the old PV (if reclaimPolicy was Retain)
# kubectl get pv # Find the PV associated with the old PVC
# kubectl delete pv <pv-name>
# command: ["/bin/bash", "-c", "sleep infinity"]
PVC release
#!/bin/bash
set -eux
if [ "$#" -ne 3 ]; then
echo "Usage: $0 <old-storage-class> <new-storage-class> <namespace|'all'>"
exit 1
fi
OLD_SC=$1
NEW_SC=$2
NS_FILTER=$3
if [ "$NS_FILTER" == "all" ]; then
  PVC_LIST=$(kubectl get pvc -A -o jsonpath="{range .items[?(@.spec.storageClassName=='$OLD_SC')]}{.metadata.namespace} {.metadata.name}{'\n'}{end}")
else
  PVC_LIST=$(kubectl get pvc -n "$NS_FILTER" -o jsonpath="{range .items[?(@.spec.storageClassName=='$OLD_SC')]}{.metadata.namespace} {.metadata.name}{'\n'}{end}")
fi
echo "$PVC_LIST" | while read -r NS PVC; do
PV=$(kubectl get pvc "$PVC" -n "$NS" -o jsonpath="{.spec.volumeName}")
STORAGE_SIZE=$(kubectl get pvc "$PVC" -n "$NS" -o jsonpath="{.spec.resources.requests.storage}")
kubectl patch pv "$PV" --type='json' -p='[{"op": "replace", "path": "/spec/persistentVolumeReclaimPolicy", "value": "Retain"}]'
kubectl patch pvc "$PVC" --type=merge -p '{"metadata":{"finalizers": []}}' || true
kubectl delete pvc "$PVC" -n "$NS" --wait=true
kubectl patch pv "$PV" --type='json' -p="[{'op': 'replace', 'path': '/spec/storageClassName', 'value':'$NEW_SC'}]"
kubectl patch pv "$PV" --type=merge -p '{"spec":{"claimRef": null}}'
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: $PVC
  namespace: $NS
spec:
  storageClassName: $NEW_SC
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: $STORAGE_SIZE
  volumeName: $PV
EOF
  # Restore original reclaim policy (optional)
  kubectl patch pv "$PV" --type='json' -p='[{"op": "replace", "path": "/spec/persistentVolumeReclaimPolicy", "value": "Delete"}]'
done
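A hedged invocation example for the script above (the pvc-release.sh filename and the storage class names are made up): rebind every PVC currently on managed-premium to managed-csi across all namespaces:
# Hypothetical usage of the PVC release script.
./pvc-release.sh managed-premium managed-csi all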
WIP
#!/bin/bash
# Script to take a snapshot of an AKS disk and create a new disk from it in a different zone, then rename it back to the original name.
# Usage: ./snapshot_and_clone_rename.sh <resource_group> <disk_name> <source_zone> <target_zone>
# Check for required arguments
if [ $# -ne 4 ]; then
echo "Usage: $0 <resource_group> <disk_name> <source_zone> <target_zone>"
exit 1
fi
RESOURCE_GROUP="$1"
DISK_NAME="$2"
SOURCE_ZONE="$3"
TARGET_ZONE="$4"
SNAPSHOT_NAME="${DISK_NAME}-snapshot-$(date +%Y%m%d%H%M%S)"
NEW_DISK_NAME="${DISK_NAME}-temp-cloned-${TARGET_ZONE}" # Temporary name for the cloned disk
# Check if Azure CLI is installed
if ! command -v az &> /dev/null; then
  echo "Azure CLI is not installed. Please install it."
  exit 1
fi
# Check if logged in
if ! az account show &> /dev/null; then
  echo "Please log in to Azure CLI using 'az login'."
  exit 1
fi
# Check if the disk exists
if ! az disk show --resource-group "$RESOURCE_GROUP" --name "$DISK_NAME" &> /dev/null; then
  echo "Disk '$DISK_NAME' not found in resource group '$RESOURCE_GROUP'."
  exit 1
fi
# Check that the disk is zonal
if [[ -z "$(az disk show --resource-group "$RESOURCE_GROUP" --name "$DISK_NAME" --query "zones[0]" -o tsv)" ]]; then
  echo "Disk '$DISK_NAME' does not appear to be zonal. Please ensure that the disk is in a specific zone."
  exit 1
fi
# Check that the target zone differs from the source zone
if [[ "$SOURCE_ZONE" == "$TARGET_ZONE" ]]; then
  echo "Source and target zones cannot be the same."
  exit 1
fi
# Get the disk ID
DISK_ID=$(az disk show --resource-group "$RESOURCE_GROUP" --name "$DISK_NAME" --query id -o tsv)
# Create the snapshot
echo "Creating snapshot '$SNAPSHOT_NAME'..."
az snapshot create \
  --resource-group "$RESOURCE_GROUP" \
  --name "$SNAPSHOT_NAME" \
  --source "$DISK_ID" \
  --location "$(az disk show --resource-group "$RESOURCE_GROUP" --name "$DISK_NAME" --query location -o tsv)" \
  --zone "$SOURCE_ZONE"
if [ $? -ne 0 ]; then
  echo "Failed to create snapshot."
  exit 1
fi
# Get the snapshot ID
SNAPSHOT_ID=$(az snapshot show --resource-group "$RESOURCE_GROUP" --name "$SNAPSHOT_NAME" --query id -o tsv)
# Create the new disk from the snapshot in the target zone with a temporary name
echo "Creating new disk '$NEW_DISK_NAME' in zone '$TARGET_ZONE'..."
az disk create \
  --resource-group "$RESOURCE_GROUP" \
  --name "$NEW_DISK_NAME" \
  --source "$SNAPSHOT_ID" \
  --location "$(az disk show --resource-group "$RESOURCE_GROUP" --name "$DISK_NAME" --query location -o tsv)" \
  --zone "$TARGET_ZONE"
if [ $? -ne 0 ]; then
  echo "Failed to create new disk."
  # Optionally delete the snapshot if disk creation fails
  az snapshot delete --resource-group "$RESOURCE_GROUP" --name "$SNAPSHOT_NAME"
  exit 1
fi
# Delete the original disk
echo "Deleting original disk '$DISK_NAME'..."
az disk delete --resource-group "$RESOURCE_GROUP" --name "$DISK_NAME" --yes
if [ $? -ne 0 ]; then
  echo "Failed to delete original disk."
  # Optionally delete the snapshot and new disk if original disk deletion fails
  az snapshot delete --resource-group "$RESOURCE_GROUP" --name "$SNAPSHOT_NAME"
  az disk delete --resource-group "$RESOURCE_GROUP" --name "$NEW_DISK_NAME" --yes
  exit 1
fi
# Rename the new disk to the original name
echo "Renaming new disk '$NEW_DISK_NAME' to '$DISK_NAME'..."
az disk update --resource-group "$RESOURCE_GROUP" --name "$NEW_DISK_NAME" --set name="$DISK_NAME"
if [ $? -ne 0 ]; then
  echo "Failed to rename new disk."
  # Optionally delete the snapshot and new disk if rename fails
  az snapshot delete --resource-group "$RESOURCE_GROUP" --name "$SNAPSHOT_NAME"
  az disk delete --resource-group "$RESOURCE_GROUP" --name "$DISK_NAME" --yes
  exit 1
fi
# Clean up the snapshot (optional)
echo "Deleting snapshot '$SNAPSHOT_NAME'..."
az snapshot delete --resource-group "$RESOURCE_GROUP" --name "$SNAPSHOT_NAME"
echo "Disk '$DISK_NAME' created successfully in zone '$TARGET_ZONE'."
exit 0
Key Changes:
* Temporary Disk Name: the cloned disk is created with a temporary name (e.g., myAKSDisk-temp-cloned-westus2), which avoids a name collision with the original disk.
* Original Disk Deletion: the script deletes the original disk after the cloned disk is successfully created, using the --yes flag to avoid the prompt.
* Renaming the Cloned Disk: the cloned disk is then renamed to the original disk's name with az disk update --set name.
* Enhanced Error Handling: error checking covers the original disk deletion and the rename operation, with cleanup routines in case of failure.
Important Considerations:
* Data Loss Risk: Be extremely careful when using this script, as it involves deleting the original disk. Make sure you have backups or understand the implications before running it.
* Downtime: This process will cause downtime for any services that rely on the disk.
* Testing: Thoroughly test this script in a non-production environment before using it in production.
* AKS Integration: If the disk is used by an AKS node pool, you will likely need to perform additional steps to update the node pool configuration after the disk is moved. Consider using node pool scaling to replace the nodes using the old disks with the new disks.
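For the last point, a hedged sketch of replacing a pool's nodes by scaling it down and back up (resource group, cluster, pool name, and node count are made up; pick the values your environment needs):
# Hypothetical example: recreate the nodes of a pool by scaling it to zero and back.
az aks nodepool scale --resource-group my-rg --cluster-name my-aks --name nodepool1 --node-count 0
az aks nodepool scale --resource-group my-rg --cluster-name my-aks --name nodepool1 --node-count 3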