We have developed a guide for our Binero Cloud customers to protect their backups with secure cloud storage. This guide is for all cloud users who want to ensure that their backups cannot be deleted accidentally or by unauthorised persons. In the guide, we describe a solution that uses Binero Cloud's object storage, which is API-compatible with AWS S3, to create a secure and locked backup environment.
Compare your current setup with our process – there may be something more you can do to protect your backups. Follow the steps below to see how you can implement the same solution and increase the security of your data.
Background
Our customers were looking for a solution to store their backups securely. The requirement was that the backups had to be protected so that they could not be deleted, either accidentally or by unauthorised persons who gained access to the system.
Secure cloud storage with object storage in Binero Cloud
As part of our ambition to always ensure secure cloud storage for our customers, Binero Cloud offers object storage that is API-compatible with AWS S3. This makes it possible to lock objects. The feature requires that the bucket where the files are stored is configured for this.
By following the steps below, we were able to ensure that the data in the cloud was stored securely – with extra security in backups.
Example
Here is an example of how deleted objects continue to exist – invisible in the interface and impossible to delete until a configured time limit has passed. Commands that interact with Binero Cloud are instructed to return JSON, which is then parsed with the tool jq.
Preparations
Generate authentication credentials
Authentication credentials are generated in EC2 format for use with the aws client and assigned to environment variables for interaction with the object store.
$ read -r AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY \
<<<$(jq -r '"\(.access) \(.secret)"' \
<(openstack ec2 credentials create -f json))
Read API addresses
The API address for the object storage is retrieved from Binero Cloud's service catalogue. The last part of the URL is removed as it points to the Swift API, which is not relevant.
$ AWS_ENDPOINT_URL=$(openstack catalog list -f json \
| jq -r '.[] |select(.Type=="object-store") |.Endpoints[] |select(.interface=="public").url' \
| cut -d/ -f1-3)
Export environment variables
All variables are made available to the aws client, which must be installed. The bucket name is also stored in a variable. Note that aws is used only as a client for the S3-compatible object storage in Binero Cloud – all data is stored at AWS_ENDPOINT_URL.
$ export AWS_ENDPOINT_URL AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY
$ export BACKUP_BUCKET=bup
Verify preparations
For future manual interaction or automation, the three AWS-specific values can be stored on the file system. Ensure that the file permissions are not open. Note that values in the active environment take precedence – if the environment is configured correctly, any configuration on the file system is ignored.
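As a minimal sketch – assuming the aws CLI's default file locations and a profile name (binero) of our own choosing – the values can be persisted like this. Storing endpoint_url in the config file requires a reasonably recent aws CLI (v2.13 or later); note that the sketch overwrites any existing files at those paths.

```shell
# Persist the credentials for later sessions. The profile name "binero"
# is an arbitrary choice; this overwrites any existing ~/.aws files.
mkdir -p ~/.aws

cat > ~/.aws/credentials <<EOF
[binero]
aws_access_key_id = $AWS_ACCESS_KEY_ID
aws_secret_access_key = $AWS_SECRET_ACCESS_KEY
EOF

cat > ~/.aws/config <<EOF
[profile binero]
endpoint_url = $AWS_ENDPOINT_URL
EOF

# Close the permissions so only the owner can read the secrets.
chmod 600 ~/.aws/credentials ~/.aws/config
```

With this in place, later commands can select the profile with --profile binero instead of relying on exported environment variables.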
- Information in the API:
$ openstack ec2 credentials show $AWS_ACCESS_KEY_ID -c access -c secret
+--------+----------------------------------+
| Field | Value |
+--------+----------------------------------+
| access | d7855ef4cce44ec692c966e6cc10262d |
| secret | fd3004c7e25240b2b07a931b76b757e5 |
+--------+----------------------------------+
- Information in the environment:
$ env | grep AWS
AWS_SECRET_ACCESS_KEY=
AWS_ACCESS_KEY_ID=
AWS_ENDPOINT_URL=https://object-eu-se-1a.binero.cloud
$ env |grep BACKUP
BACKUP_BUCKET=bup
- Information in the client:
$ aws configure list
Name Value Type Location
---- ----- ---- --------
profile None None
access_key ****************262d env
secret_key ****************57e5 env
region None None
Create bucket
A bucket is created as usual, with a storage policy and LocationConstraint. We select europe-se-1 in our example to enable replication if a policy that supports it has been selected.
$ aws s3api create-bucket \
--bucket $BACKUP_BUCKET \
--create-bucket-configuration LocationConstraint=europe-se-1 \
--object-lock-enabled-for-bucket
Configure lock mechanism
The storage mode is configured to one of two options: COMPLIANCE or GOVERNANCE. COMPLIANCE provides the highest level of security, as even administrators cannot delete objects within the set time limit, which in this case is one day.
$ aws s3api put-object-lock-configuration --bucket $BACKUP_BUCKET \
--object-lock-configuration '{"ObjectLockEnabled": "Enabled", "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 1}}}'
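To double-check that the lock took effect, the configuration can be read back from the bucket. With the settings above, the response should report ObjectLockEnabled as Enabled and a COMPLIANCE default retention of one day.

```shell
# Read back the object lock configuration for the bucket; the response
# echoes the ObjectLockEnabled flag and the default retention rule.
aws s3api get-object-lock-configuration --bucket $BACKUP_BUCKET
```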
Verify bucket
The ability to lock objects is built on object versioning, which is enabled automatically when a bucket is created with object lock.
Existence
$ aws s3api list-buckets | jq '.Buckets[] | select(.Name == env.BACKUP_BUCKET)'
{
"Name": "bup",
"CreationDate": "2024-04-10T10:28:18.146000+00:00"
}
Version management
$ jq .Status <(aws s3api get-bucket-versioning --bucket $BACKUP_BUCKET)
"Enabled"
Upload file
The file tux.svg is stored in the bucket.
$ aws s3 cp tux.svg s3://$BACKUP_BUCKET
upload: ./tux.svg to s3://bup/tux.svg
Verify existence of the object
Ensure that the object is returned as the only object in the store.
$ aws s3 ls $BACKUP_BUCKET
2024-04-10 12:31:11 49983 tux.svg
We then inspect the metadata stored by the API.
$ aws s3api list-objects --bucket $BACKUP_BUCKET
{
"Contents": [
{
"Key": "tux.svg",
"LastModified": "2024-04-10T10:31:11.177000+00:00",
"ETag": "\"8a6487c7872a9b825c8f2d4533067c6d\"",
"Size": 49983,
"StorageClass": "STANDARD",
"Owner": {
"DisplayName": "5002424",
"ID": ""
}
}
],
"RequestCharged": null
}
Delete object
When the object is deleted, a DeleteMarker is created. The object appears to be deleted but is in fact only hidden. Note the VersionId – it identifies the delete marker, which we will use later to restore the object.
$ aws s3api delete-object --bucket $BACKUP_BUCKET --key tux.svg
{
"DeleteMarker": true,
"VersionId": "mc-Yai46SHaLrf4xqeJGgT1.3ry6bUZ"
}
If the objects in the bucket are listed, either as below or through another interface, tux.svg appears to have been deleted – the listing comes back empty, even via the API.
$ aws s3 ls $BACKUP_BUCKET
$ aws s3api list-objects --bucket $BACKUP_BUCKET
{
"RequestCharged": null
}
Backup is intact
If the versions of the objects are listed, we can see that the object still exists. What happened when the object was deleted was that a DeleteMarker was created.
$ aws s3api list-object-versions --bucket $BACKUP_BUCKET
{
"Versions": [
{
"ETag": "\"8a6487c7872a9b825c8f2d4533067c6d\"",
"Size": 49983,
"StorageClass": "STANDARD",
"Key": "tux.svg",
"VersionId": "-IrCCqAYQt5ySttlBnI6VEw2jpw6f2.",
"IsLatest": false,
"LastModified": "2024-04-10T10:31:11.177000+00:00",
"Owner": {
"DisplayName": "5002424",
"ID": ""
}
}
],
"DeleteMarkers": [
{
"Owner": {
"DisplayName": "5002424",
"ID": ""
},
"Key": "tux.svg",
"VersionId": "mc-Yai46SHaLrf4xqeJGgT1.3ry6bUZ",
"IsLatest": true,
"LastModified": "2024-04-11T08:25:42.985000+00:00"
}
],
"RequestCharged": null
}
Restore deleted item
The clever thing about a DeleteMarker is that if it is deleted, the file is restored!
$ aws s3api delete-object --bucket $BACKUP_BUCKET --key tux.svg \
--version-id "mc-Yai46SHaLrf4xqeJGgT1.3ry6bUZ"
{
"DeleteMarker": true,
"VersionId": "mc-Yai46SHaLrf4xqeJGgT1.3ry6bUZ"
}
$ aws s3api list-objects --bucket $BACKUP_BUCKET
{
"Contents": [
{
"Key": "tux.svg",
"LastModified": "2024-04-10T10:31:11.177000+00:00",
"ETag": "\"8a6487c7872a9b825c8f2d4533067c6d\"",
"Size": 49983,
"StorageClass": "STANDARD",
"Owner": {
"DisplayName": "5002424",
"ID": ""
}
}
],
"RequestCharged": null
}
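To be sure the restore is complete, the object can also be downloaded again and compared with the original (the local file name tux-restored.svg is our own choice, and the comparison assumes the original tux.svg is still present locally):

```shell
# Download the restored object and compare checksums with the original;
# both should match the ETag reported by the API above.
aws s3 cp s3://$BACKUP_BUCKET/tux.svg tux-restored.svg
md5sum tux.svg tux-restored.svg
```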
If a hacker – even with knowledge of version management – attempts to delete the backup, the attempt will fail. Once the deadline has passed, the version can be deleted – but never before!
$ aws s3api delete-object --bucket $BACKUP_BUCKET \
--key tux.svg \
--version-id "-IrCCqAYQt5ySttlBnI6VEw2jpw6f2."
An error occurred (AccessDenied) when calling the DeleteObject operation: forbidden by object lock
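The retention that blocks the deletion can also be inspected per version. The response contains the mode and a RetainUntilDate; the exact date depends on when the object was uploaded.

```shell
# Show the retention mode and expiry for the locked version; until
# RetainUntilDate has passed, delete-object on this version is denied.
aws s3api get-object-retention --bucket $BACKUP_BUCKET \
  --key tux.svg \
  --version-id "-IrCCqAYQt5ySttlBnI6VEw2jpw6f2."
```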
Conclusion: secure cloud storage for backups
This example shows how you can securely store your backups in Binero Cloud's object storage and ensure that they remain intact even if unauthorised persons gain access to them. Secure cloud storage – world class!