# Rook Ceph objectstore test

## Test via s5cmd

Install the s5cmd tool (mc and the AWS CLI are other options):
```
curl -LO https://github.com/peak/s5cmd/releases/download/v2.3.0/s5cmd_2.3.0_linux_amd64.deb && sudo apt install -y ./s5cmd_2.3.0_linux_amd64.deb
```
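
To confirm the install worked (assuming the package puts s5cmd on the PATH):
```
# Print the installed s5cmd version
s5cmd version
```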

Upload/download test script, test-cephobjectstore-upload-download.sh:
```
#!/bin/bash
set -eu

export AWS_HOST="https://my-objectstore-ingress.example.com"
export PORT=443
export BUCKET_NAME=$(kubectl get cm my-bucket -o jsonpath='{.data.BUCKET_NAME}')
export AWS_ACCESS_KEY_ID=$(kubectl get secret my-bucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)
export AWS_SECRET_ACCESS_KEY=$(kubectl get secret my-bucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)

echo "This is the S3 storage test $(date)" > /tmp/myFile
s5cmd --endpoint-url $AWS_HOST:$PORT cp /tmp/myFile s3://$BUCKET_NAME
s5cmd --endpoint-url $AWS_HOST:$PORT cp s3://$BUCKET_NAME/myFile /tmp/myDownloadedFile
cat /tmp/myDownloadedFile
```
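
The script above assumes a ConfigMap and Secret named my-bucket, which Rook creates for an ObjectBucketClaim. A minimal sketch of such a claim, assuming the chart's default ceph-bucket bucket storage class name (adjust storageClassName to whatever your object store's storage class is called):
```
# Hypothetical ObjectBucketClaim matching the my-bucket name used by the test script
cat <<EOF | kubectl apply -f -
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: my-bucket
spec:
  generateBucketName: my-bucket
  storageClassName: ceph-bucket
EOF
```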

Example Helm deploy of rook-ceph-cluster with the object store and its ingress enabled:
```
helm -n rook-ceph-cluster show values rook-release/rook-ceph-cluster > default.rook-ceph-cluster.values.yaml

helm upgrade --install rook-ceph-cluster \
  --set 'cephFileSystems[0].storageClass.enabled=false' \
  --set 'cephObjectStores[0].storageClass.enabled=true' \
  --set 'cephObjectStores[0].storageClass.parameters.region=us-west-1' \
  --set 'cephClusterSpec.dashboard.enabled=false' \
  --set 'cephObjectStores[0].ingress.enabled=true' \
  --set 'cephObjectStores[0].ingress.host.name=my-objectstore-ingress.example.com' \
  --set 'cephObjectStores[0].ingress.host.path=/' \
  --set 'cephObjectStores[0].ingress.tls[0].hosts[0]=my-objectstore-ingress.example.com' \
  --set 'cephObjectStores[0].ingress.tls[0].secretName=use-default' \
  --set 'cephObjectStores[0].ingress.ingressClassName=nginx' \
  --set operatorNamespace=rook-ceph \
  --set toolbox.enabled=true \
  rook-release/rook-ceph-cluster -f rook-ceph-cluster.values.yaml
```
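
After the deploy, a quick sanity check that the object store and its ingress came up (assuming the release was installed into the rook-ceph-cluster namespace):
```
# The CephObjectStore resource should reach a healthy phase once the RGW pods are running
kubectl -n rook-ceph-cluster get cephobjectstore
# The ingress should list the my-objectstore-ingress.example.com host
kubectl -n rook-ceph-cluster get ingress
```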

## Python boto3

```
import boto3

# Example: replace these with the decoded values from the bucket Secret and ConfigMap
access_key = "YOUR_DECODED_ACCESS_KEY"
secret_key = "YOUR_DECODED_SECRET_KEY"
endpoint_url = "http://your-ceph-rgw-endpoint:port"
bucket_name = "my-bucket"  # or as provided in the bucket ConfigMap

# Create a session and S3 resource
session = boto3.session.Session()
s3 = session.resource(
    's3',
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    endpoint_url=endpoint_url
)

# Now you can interact with the bucket - list all object keys
bucket = s3.Bucket(bucket_name)
for obj in bucket.objects.all():
    print(obj.key)
```
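
The placeholder credentials above can be pulled from the same my-bucket ConfigMap and Secret used by the shell script earlier (a sketch assuming those resource names; the endpoint can be the ingress host used above or an in-cluster RGW service address):
```
# Print the decoded bucket name and S3 credentials to plug into the boto3 example
kubectl get cm my-bucket -o jsonpath='{.data.BUCKET_NAME}'; echo
kubectl get secret my-bucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode; echo
kubectl get secret my-bucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode; echo
```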