The first approach uses the Snapshot object returned by the create_snapshot method:
snapshot = conn.create_snapshot(volume_id, "This shows up in the description column")
snapshot.add_tags({
    'foo': 'bar',
    'fie': 'baz'
})
Or, you can use the generic create_tags method, which can add tags to any taggable resource:
conn.create_tags('snap-12345678', {
    'foo': 'bar',
    'fie': 'baz'
})
With boto3 you can instead tag the snapshot at creation time. Here is a snippet doing just that:
# Assumes ec (an EC2 client), volume_id, instance_id, and snapshot (a name string) are already defined
snap = ec.create_snapshot(
    Description="Recent Snapshot",
    VolumeId=volume_id,
    TagSpecifications=[{
        'ResourceType': 'snapshot',
        'Tags': [
            {
                'Key': 'Name',
                'Value': snapshot
            },
            {
                'Key': 'InstanceId',
                'Value': instance_id
            }
        ]
    }]
)
Example:
snapshot = conn.create_snapshot(VolumeId=volume_id, Description="This shows up in the description column")
conn.create_tags(
    Resources=[
        snapshot['SnapshotId'],
    ],
    Tags=[{
        'Key': 'Name',
        'Value': 'myTagValue'
    }]
)
For cross-Region RDS operations, the key request parameters include:
DestinationRegion - The name of the Amazon Web Services Region that the DB cluster snapshot is to be created in.
TargetDBClusterSnapshotIdentifier - The identifier for the new copy of the DB cluster snapshot in the destination Amazon Web Services Region.
DestinationRegion - The name of the Amazon Web Services Region that the Aurora read replica will be created in.
SourceDBClusterSnapshotIdentifier - The DB cluster snapshot identifier for the encrypted DB cluster snapshot to be copied. This identifier must be in the ARN format for the source Amazon Web Services Region and is the same value as the SourceDBClusterSnapshotIdentifier in the presigned URL.
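To show how the snapshot-copy parameters fit together in code, here is a minimal boto3 sketch of a cross-Region copy_db_cluster_snapshot call; the Region names, snapshot ARN, and KMS key are placeholder assumptions, and boto3 generates the required pre-signed URL when SourceRegion is supplied:
import boto3

# The client is created in the destination Region (assumed to be us-east-2 here)
rds = boto3.client('rds', region_name='us-east-2')

response = rds.copy_db_cluster_snapshot(
    # ARN of the encrypted snapshot in the source Region (placeholder)
    SourceDBClusterSnapshotIdentifier='arn:aws:rds:us-east-1:123456789012:cluster-snapshot:my-source-snapshot',
    # Identifier for the new copy in the destination Region
    TargetDBClusterSnapshotIdentifier='my-copied-snapshot',
    # KMS key in the destination Region used to re-encrypt the copy (placeholder)
    KmsKeyId='alias/aws/rds',
    # Source Region; boto3 uses this to build the pre-signed URL for you
    SourceRegion='us-east-1'
)
print(response['DBClusterSnapshot']['DBClusterSnapshotIdentifier'])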
import boto3
client = boto3.client('rds')
response = client.add_role_to_db_cluster(
    DBClusterIdentifier='string',
    RoleArn='string',
    FeatureName='string'
)
response = client.add_role_to_db_instance(
    DBInstanceIdentifier='string',
    RoleArn='string',
    FeatureName='string'
)
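For a concrete call, the request might look like the sketch below; the instance identifier, role ARN, and feature name are hypothetical and must match a DB instance and IAM role that already exist in your account:
import boto3

client = boto3.client('rds')

# Hypothetical identifiers: replace with your own instance, role ARN, and feature name
response = client.add_role_to_db_instance(
    DBInstanceIdentifier='mypostgresqlinstance',
    RoleArn='arn:aws:iam::123456789012:role/rds-s3-import-role',
    FeatureName='s3Import'
)
print(response)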
response = client.add_source_identifier_to_subscription(
    SubscriptionName='string',
    SourceIdentifier='string'
)
{
    'EventSubscription': {
        'CustomerAwsId': 'string',
        'CustSubscriptionId': 'string',
        'SnsTopicArn': 'string',
        'Status': 'string',
        'SubscriptionCreationTime': 'string',
        'SourceType': 'string',
        'SourceIdsList': [
            'string',
        ],
        'EventCategoriesList': [
            'string',
        ],
        'Enabled': True | False,
        'EventSubscriptionArn': 'string'
    }
}
response = client.add_source_identifier_to_subscription(
    SourceIdentifier='mymysqlinstance',
    SubscriptionName='mymysqleventsubscription',
)
print(response)
This article covers how to use Python to interact with the Amazon EC2 service to create, list, describe, search, and delete EBS volume snapshots and AMIs using the Boto3 library. Note that creating an EBS volume snapshot is a long-running operation: the immediate response you get from the code below means only that the snapshot operation has been started and will continue in the background.
EBS snapshots are commonly used for EBS volume backups; they help you copy EBS volume data between Regions or save data before shutting down an instance.
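As a quick sketch of the cross-Region use case, an existing snapshot can be copied with the EC2 client's copy_snapshot call; the Region names and snapshot ID below are placeholders:
import boto3

# copy_snapshot is called in the destination Region and pulls the snapshot
# from the source Region (Regions and snapshot ID are placeholders)
ec2_client = boto3.client('ec2', region_name='us-west-2')

response = ec2_client.copy_snapshot(
    SourceRegion='us-east-2',
    SourceSnapshotId='snap-0123456789abcdef0',
    Description='Cross-Region copy created with Boto3'
)
print(f"Started copy: {response['SnapshotId']}")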
The following script creates a snapshot of the specified EBS volume and tags it:
#!/usr/bin/env python3
import boto3

AWS_REGION = "us-east-2"
EC2_RESOURCE = boto3.resource('ec2', region_name=AWS_REGION)
VOLUME_ID = 'vol-0c0ce77e0b27ed800'

snapshot = EC2_RESOURCE.create_snapshot(
    VolumeId=VOLUME_ID,
    TagSpecifications=[
        {
            'ResourceType': 'snapshot',
            'Tags': [
                {
                    'Key': 'Name',
                    'Value': 'hands-on-cloud-ebs-snapshot'
                },
            ]
        },
    ]
)

print(f'Snapshot {snapshot.id} created for volume {VOLUME_ID}')
Here’s an extended example with the waiter implementation:
#!/usr/bin/env python3
import boto3

AWS_REGION = "us-east-2"
EC2_RESOURCE = boto3.resource('ec2', region_name=AWS_REGION)
VOLUME_ID = 'vol-0c0ce77e0b27ed800'

snapshot = EC2_RESOURCE.create_snapshot(
    VolumeId=VOLUME_ID,
    TagSpecifications=[
        {
            'ResourceType': 'snapshot',
            'Tags': [
                {
                    'Key': 'Name',
                    'Value': 'hands-on-cloud-ebs-snapshot'
                },
            ]
        },
    ]
)

snapshot.wait_until_completed()

print(f'Snapshot {snapshot.id} created for volume {VOLUME_ID}')
To list the snapshots available in your account, you can use the snapshots collection's all() method. Attention: the all() method returns the snapshots owned by you and all publicly available snapshots (more than 17,000 snapshots). We strongly recommend you use the filter() method (described in the next section) to reduce the number of returned records.
#!/usr/bin/env python3
import boto3

AWS_REGION = "us-east-2"
EC2_RESOURCE = boto3.resource('ec2', region_name=AWS_REGION)

snapshots = EC2_RESOURCE.snapshots.all()

for snapshot in snapshots:
    print(f'Snapshot {snapshot.id} created for volume {snapshot.volume_id}')
To filter EBS volume Snapshots by Volume ID, you need to use the same filter()
method but specify the Filters
argument:
#!/usr/bin/env python3
import boto3

AWS_REGION = "us-east-2"
EC2_RESOURCE = boto3.resource('ec2', region_name=AWS_REGION)
STS_CLIENT = boto3.client('sts')
CURRENT_ACCOUNT_ID = STS_CLIENT.get_caller_identity()['Account']

snapshots = EC2_RESOURCE.snapshots.filter(
    Filters=[{
        'Name': 'volume-id',
        'Values': [
            'vol-0c0ce77e0b27ed800'
        ]
    }]
)

for snapshot in snapshots:
    print(f'Snapshot {snapshot.id} created for volume {snapshot.volume_id}')
To filter EBS volume Snapshots by Tag, you need to use the same filter()
method but specify the Filters
argument:
#!/usr/bin/env python3
import boto3

AWS_REGION = "us-east-2"
EC2_RESOURCE = boto3.resource('ec2', region_name=AWS_REGION)
STS_CLIENT = boto3.client('sts')
CURRENT_ACCOUNT_ID = STS_CLIENT.get_caller_identity()['Account']

snapshots = EC2_RESOURCE.snapshots.filter(
    Filters=[{
        'Name': 'tag:Name',
        'Values': [
            'hands-on-cloud-ebs-snapshot'
        ]
    }]
)

for snapshot in snapshots:
    print(f'Snapshot {snapshot.id} created for volume {snapshot.volume_id}')
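Deleting a snapshot, which the article summary above also mentions, takes a single call on the Snapshot resource; here is a minimal sketch with a placeholder snapshot ID:
#!/usr/bin/env python3
import boto3

AWS_REGION = "us-east-2"
EC2_RESOURCE = boto3.resource('ec2', region_name=AWS_REGION)

# Placeholder snapshot ID; deletion is permanent
snapshot = EC2_RESOURCE.Snapshot('snap-0123456789abcdef0')
snapshot.delete()

print('Snapshot snap-0123456789abcdef0 deleted')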
In this post, we'll cover how to automate EBS snapshots for your AWS infrastructure using Lambda and CloudWatch. We'll build a solution that creates nightly snapshots for volumes attached to EC2 instances and deletes any snapshots older than 10 days. This will work across all AWS regions. The code will create snapshots for any in-use volumes across all regions. It will also add the name of the volume to the snapshot Name tag so it's easier for us to identify whenever we view the list of snapshots. The default timeout for Lambda functions is 3 seconds, which is too short for our task, so increase the timeout to 1 minute under Advanced Settings; this will give the function enough time to kick off the snapshot process for each volume.
The Lambda function's execution role needs an IAM policy that allows it to write logs, describe EC2 resources, and create, tag, and delete snapshots:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:*"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Effect": "Allow",
            "Action": "ec2:Describe*",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:CreateSnapshot",
                "ec2:DeleteSnapshot",
                "ec2:CreateTags",
                "ec2:ModifySnapshotAttribute",
                "ec2:ResetSnapshotAttribute"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
# Backup all in-use volumes in all regions

import boto3

def lambda_handler(event, context):
    ec2 = boto3.client('ec2')

    # Get list of regions
    regions = ec2.describe_regions().get('Regions', [])

    # Iterate over regions
    for region in regions:
        print("Checking region %s" % region['RegionName'])
        reg = region['RegionName']

        # Connect to region
        ec2 = boto3.client('ec2', region_name=reg)

        # Get all in-use volumes in the region
        result = ec2.describe_volumes(Filters=[{'Name': 'status', 'Values': ['in-use']}])

        for volume in result['Volumes']:
            print("Backing up %s in %s" % (volume['VolumeId'], volume['AvailabilityZone']))

            # Create snapshot
            result = ec2.create_snapshot(
                VolumeId=volume['VolumeId'],
                Description='Created by Lambda backup function ebs-snapshots'
            )

            # Get snapshot resource
            ec2resource = boto3.resource('ec2', region_name=reg)
            snapshot = ec2resource.Snapshot(result['SnapshotId'])

            volumename = 'N/A'

            # Find name tag for volume if it exists
            if 'Tags' in volume:
                for tags in volume['Tags']:
                    if tags["Key"] == 'Name':
                        volumename = tags["Value"]

            # Add volume name to snapshot for easier identification
            snapshot.create_tags(Tags=[{'Key': 'Name', 'Value': volumename}])
11:00:19 START RequestId: bb6def8d-f26d-11e6-8983-89eca50275e0 Version: $LATEST
11:00:21 Backing up volume vol-0c0b66f7fd875964a in us-east-2a
11:00:22 END RequestId: bb6def8d-f26d-11e6-8983-89eca50275e0
11:00:22 REPORT RequestId: bb6def8d-f26d-11e6-8983-89eca50275e0 Duration: 3256.15 ms Billed Duration: 3300 ms Memory Size: 128 MB Max Memory Used: 40 MB
# Delete snapshots older than retention period

import boto3
from botocore.exceptions import ClientError
from datetime import datetime, timedelta

def delete_snapshot(snapshot_id, reg):
    print("Deleting snapshot %s" % (snapshot_id))
    try:
        ec2resource = boto3.resource('ec2', region_name=reg)
        snapshot = ec2resource.Snapshot(snapshot_id)
        snapshot.delete()
    except ClientError as e:
        print("Caught exception: %s" % e)
    return

def lambda_handler(event, context):
    # Get current timestamp in UTC (snapshot StartTime is UTC)
    now = datetime.utcnow()

    # AWS Account ID
    account_id = '1234567890'

    # Define retention period in days
    retention_days = 10

    # Create EC2 client
    ec2 = boto3.client('ec2')

    # Get list of regions
    regions = ec2.describe_regions().get('Regions', [])

    # Iterate over regions
    for region in regions:
        print("Checking region %s" % region['RegionName'])
        reg = region['RegionName']

        # Connect to region
        ec2 = boto3.client('ec2', region_name=reg)

        # Filtering by snapshot timestamp comparison is not supported,
        # so we grab all snapshot ids
        result = ec2.describe_snapshots(OwnerIds=[account_id])

        for snapshot in result['Snapshots']:
            print("Checking snapshot %s which was created on %s" % (snapshot['SnapshotId'], snapshot['StartTime']))

            # Remove timezone info from snapshot in order for comparison to work below
            snapshot_time = snapshot['StartTime'].replace(tzinfo=None)

            # Subtracting snapshot time from now returns a timedelta;
            # check if the timedelta is greater than retention days
            if (now - snapshot_time) > timedelta(retention_days):
                print("Snapshot is older than configured retention of %d days" % (retention_days))
                delete_snapshot(snapshot['SnapshotId'], reg)
            else:
                print("Snapshot is newer than configured retention of %d days so we keep it" % (retention_days))
The function we wrote for creating snapshots used a filter when calling ec2.describe_volumes that looked for a status of in-use:
result = ec2.describe_volumes(Filters=[{
    'Name': 'status',
    'Values': ['in-use']
}])
If you only want to back up volumes that carry a specific tag, the same call can filter on a Backup tag instead:
result = ec2.describe_volumes(Filters=[{
    'Name': 'tag:Backup',
    'Values': ['Yes']
}])
You can assign a tag to a DB instance using the AWS Management Console, the AWS CLI, or the RDS API; the examples that follow use the CLI. To add one or more tags to an Amazon RDS resource, use the AWS CLI command add-tags-to-resource, and to list the tags on a resource, use list-tags-for-resource. When you create or restore a DB instance, you can also specify that the tags from the DB instance are copied to snapshots of the DB instance. Copying tags ensures that the metadata for the DB snapshots matches that of the source DB instance, and that any access policies for the DB snapshots also match those of the source DB instance.
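The tag-copying option can also be set from Python: the boto3 RDS client exposes a CopyTagsToSnapshot flag on create_db_instance and modify_db_instance. A minimal sketch, with a placeholder instance identifier:
import boto3

rds = boto3.client('rds')

# Enable tag copying on an existing DB instance (identifier is a placeholder)
rds.modify_db_instance(
    DBInstanceIdentifier='dev-test-db-instance',
    CopyTagsToSnapshot=True,
    ApplyImmediately=True
)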
When working with XML using the Amazon RDS API, tags use the following schema:
<Tagging>
    <TagSet>
        <Tag>
            <Key>Project</Key>
            <Value>Trinity</Value>
        </Tag>
        <Tag>
            <Key>User</Key>
            <Value>Jones</Value>
        </Tag>
    </TagSet>
</Tagging>
The commands and APIs for tagging work with ARNs. That way, they can work seamlessly across AWS Regions, AWS accounts, and different types of resources that might have identical short names. You can specify the ARN instead of the DB instance ID in CLI commands that operate on DB instances. Substitute the name of your own DB instance for dev-test-db-instance. In subsequent commands that use ARN parameters, substitute the ARN of your own DB instance. The ARN includes your own AWS account ID and the name of the AWS Region where your DB instance is located.
$ aws rds describe-db-instances --db-instance-identifier dev-test-db-instance \
    --query "*[].{DBInstance:DBInstanceArn}" --output text
arn:aws:rds:us-east-1:123456789102:db:dev-test-db-instance
The name for this tag is chosen by you. Using a tag like this is an alternative to devising a naming convention that encodes all
the relevant information in the name of the DB instance (or other types of resources). Because this example
treats the tag as an attribute that is either present or absent, it omits the Value=
part of the --tags
parameter.
$ aws rds add-tags-to-resource \
    --resource-name arn:aws:rds:us-east-1:123456789102:db:dev-test-db-instance \
    --tags Key=stoppable
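If you prefer Python over the CLI, the equivalent call through the boto3 RDS client is sketched below, reusing the placeholder ARN from above:
import boto3

rds = boto3.client('rds', region_name='us-east-1')

# Tag the DB instance by ARN; an empty Value mirrors the CLI example above
rds.add_tags_to_resource(
    ResourceName='arn:aws:rds:us-east-1:123456789102:db:dev-test-db-instance',
    Tags=[{'Key': 'stoppable', 'Value': ''}]
)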
These commands retrieve the tag information for the DB instance in JSON format and in plain tab-separated text.
$ aws rds list-tags-for-resource \
    --resource-name arn:aws:rds:us-east-1:123456789102:db:dev-test-db-instance
{
    "TagList": [
        {
            "Key": "stoppable",
            "Value": ""
        }
    ]
}
$ aws rds list-tags-for-resource \
    --resource-name arn:aws:rds:us-east-1:123456789102:db:dev-test-db-instance --output text
TAGLIST stoppable
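The same lookup can be done from Python; a minimal boto3 sketch using the placeholder ARN above:
import boto3

rds = boto3.client('rds', region_name='us-east-1')

# ResourceName takes the full ARN, exactly as in the CLI examples above
response = rds.list_tags_for_resource(
    ResourceName='arn:aws:rds:us-east-1:123456789102:db:dev-test-db-instance'
)
print(response['TagList'])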
This Linux example uses shell scripting to save the list of DB instance ARNs to a temporary file and then perform CLI commands for each DB instance.
$ aws rds describe-db-instances --query "*[].[DBInstanceArn]" --output text > /tmp/db_instance_arns.lst

$ for arn in $(cat /tmp/db_instance_arns.lst)
do
    match="$(aws rds list-tags-for-resource --resource-name $arn --output text | grep stoppable)"
    if [[ ! -z "$match" ]]
    then
        echo "DB instance $arn is tagged as stoppable. Stopping it now."
        # Note that you need to get the DB instance identifier from the ARN.
        dbid=$(echo $arn | sed -e 's/.*://')
        aws rds stop-db-instance --db-instance-identifier $dbid
    fi
done

DB instance arn:aws:rds:us-east-1:123456789102:db:dev-test-db-instance is tagged as stoppable. Stopping it now.
{
    "DBInstance": {
        "DBInstanceIdentifier": "dev-test-db-instance",
        "DBInstanceClass": "db.t3.medium",
        ...
For Linux, macOS, or Unix:
aws rds add-tags-to-resource \
    --resource-name arn:aws:rds:us-east-1:123456789012:db:new-orcl-db \
    --tags Key=BackupPlan,Value=Test
For Windows:
aws rds add-tags-to-resource ^
    --resource-name arn:aws:rds:us-east-1:123456789012:db:new-orcl-db ^
    --tags Key=BackupPlan,Value=Test