Azure Disk
SAMSUNG 980 PRO 1TB PCIe 4.0 NVMe Gen 4 M.2 internal SSD (MZ-V8P1T0B): https://www.amazon.com/dp/B08GLX7TNT/ref=cm_sw_r_apan_i_KKPHVBMZ36XY6RMTP9W5
The 980 PRO is backward compatible with PCIe 3.0. Sequential performance (up to): 3,500 MB/s reads, 3,450 MB/s writes (1TB). Random performance (up to): 690K IOPS reads (500GB/1TB), 660K IOPS writes (1TB).
- https://link.medium.com/yOpivdQralb
- https://link.medium.com/A0eLq1ssalb
- https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/deploy/example/e2e_usage.md
- https://github.com/kubernetes-sigs/azuredisk-csi-driver/blob/master/docs/read-from-secret.md
sudo vim /etc/kubernetes/azure.json   # present on every k8s master/controller node
# - https://github.com/kubernetes-sigs/azuredisk-csi-driver/blob/master/deploy/example/azure.json
sudo snap install helm --classic
helm repo add azuredisk-csi-driver https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/charts
helm install azuredisk-csi-driver azuredisk-csi-driver/azuredisk-csi-driver --namespace kube-system --version v1.8.0
sudo ls /var/lib/kubelet/plugins/disk.csi.azure.com
kubectl get pods -n kube-system -o wide
kubectl get events
kubectl create -f https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/deploy/example/storageclass-azuredisk-csi.yaml
kubectl create -f https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/deploy/example/statefulset.yaml
kubectl exec -it statefulset-azuredisk-0 -- df -h
kubectl get pvc
kubectl get pv
kubectl get events
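A quick sanity check after the install (a sketch; the app labels below are an assumption based on the chart's default names, adjust if your release differs):
kubectl get pods -n kube-system -l app=csi-azuredisk-controller
kubectl get pods -n kube-system -l app=csi-azuredisk-node -o wide
kubectl get storageclass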
Clean up the example: delete the statefulset (and its statefulset-azuredisk-0 pod) and the PVC
kubectl delete -f https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/deploy/example/statefulset.yaml
kubectl delete pvc persistent-storage-statefulset-azuredisk-0
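To exercise the driver outside the example StatefulSet, a standalone PVC can be created against the example StorageClass (a sketch; the class name managed-csi is an assumption taken from the example manifest, confirm with kubectl get sc):
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azuredisk-test-pvc   # hypothetical test claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-csi   # assumed name of the example StorageClass
  resources:
    requests:
      storage: 10Gi
EOF
# If the StorageClass uses WaitForFirstConsumer binding, the claim stays Pending until a pod mounts it
kubectl get pvc azuredisk-test-pvc
kubectl delete pvc azuredisk-test-pvc   # remove the test claim when done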
- https://github.com/kubernetes-sigs/azuredisk-csi-driver/blob/master/deploy/example/azure.json
- https://github.com/kubernetes-sigs/azuredisk-csi-driver/blob/master/deploy/example/e2e_usage.md
Create /etc/kubernetes/azure.json on every k8s master node before running helm install:
{
  "cloud": "AzurePublicCloud",
  "tenantId": "<YOURID>",
  "subscriptionId": "<YOURID>",
  "resourceGroup": "<YOURGROUP>",
  "location": "westus2",
  "aadClientId": "<YOURSERVICEID>",
  "aadClientSecret": "<YOURSERVICESECRET>",
  "useManagedIdentityExtension": false,
  "userAssignedIdentityID": "",
  "useInstanceMetadata": true,
  "vmType": "standard",
  "subnetName": "<YOURSUBNETNAME>",
  "vnetName": "<YOURVNETNAME>",
  "vnetResourceGroup": "",
  "cloudProviderBackoff": true
}
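The tenant/subscription IDs and service principal credentials above can be collected with the Azure CLI, roughly like this (a sketch; the service principal name k8s-azuredisk-sp is a placeholder):
az account show --query '{tenantId:tenantId, subscriptionId:id}' -o json
az ad sp create-for-rbac --name k8s-azuredisk-sp \
  --role Contributor \
  --scopes "/subscriptions/<YOURID>/resourceGroups/<YOURGROUP>"
# appId -> aadClientId, password -> aadClientSecret, tenant -> tenantId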
Testing
Azure Disk built-in (in-tree) storage class
https://kubernetes.io/docs/concepts/storage/storage-classes/
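For comparison with the CSI driver, the built-in provisioner from that page looks roughly like this (a sketch based on the upstream example; the class name is arbitrary):
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azure-disk-standard   # example name
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Standard_LRS
  kind: managed
EOF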
More
https://blog.mycloudit.com/4-differences-between-the-azure-vm-storage-types
https://docs.microsoft.com/en-us/azure/virtual-machines/windows/convert-disk-storage
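Converting a managed disk between Standard and Premium follows the flow in the convert-disk-storage doc; roughly (a sketch, VM and disk names are placeholders):
az vm deallocate --resource-group myResourceGroupDisk --name myVM
az disk update --resource-group myResourceGroupDisk --name myOSDisk --sku Premium_LRS   # myOSDisk is a placeholder disk name
az vm start --resource-group myResourceGroupDisk --name myVM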
Restore disk
# Create snapshot
osdiskid=$(az vm show \
-g myResourceGroupDisk \
-n myVM \
--query "storageProfile.osDisk.managedDisk.id" \
-o tsv)
az snapshot create \
--resource-group myResourceGroupDisk \
--source "$osdiskid" \
--name osDisk-backup
# Create disk from snapshot
az disk create \
--resource-group myResourceGroupDisk \
--name mySnapshotDisk \
--source osDisk-backup
# Create a new virtual machine from the snapshot disk
az vm create \
--resource-group myResourceGroupDisk \
--name myVM \
--attach-os-disk mySnapshotDisk \
--os-type linux
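Quick checks on what the restore commands created (a sketch):
az snapshot list --resource-group myResourceGroupDisk -o table
az disk list --resource-group myResourceGroupDisk -o table
az vm show --resource-group myResourceGroupDisk --name myVM --query "storageProfile.osDisk.name" -o tsv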
https://github.com/kubernetes-sigs/azuredisk-csi-driver/tree/master/charts