boto3
AWS SDK for Python
I wanted to enable the Multi-AZ feature for an RDS instance using Boto3, but the script below doesn't do it. I also updated my policy for RDS instances, but the instance is still not updated. I am using the following script:
modified_rds_attributes = rds_conn_boto3.modify_db_instance(DBInstanceIdentifier=id, MultiAZ=True)
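One thing worth checking (a hedged sketch, not a confirmed fix for this particular instance): modify_db_instance changes are deferred until the next maintenance window unless ApplyImmediately=True is passed:

```python
def enable_multi_az(instance_id, apply_now=True):
    """Request Multi-AZ on an RDS instance; without ApplyImmediately
    the change waits for the next maintenance window."""
    import boto3  # deferred import so the helper can be defined without the SDK
    rds = boto3.client('rds')
    return rds.modify_db_instance(
        DBInstanceIdentifier=instance_id,
        MultiAZ=True,
        ApplyImmediately=apply_now,
    )
```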
Source: (StackOverflow)
I know how to obtain the private key of a AWS Key Pair in boto3:
import boto3
client = boto3.client('ec2')
dict_key_pair = client.create_key_pair(KeyName="temp-1")
private_key = dict_key_pair['KeyMaterial']
But I'd prefer to get an EC2.KeyPair instance instead of a dict.
I understand that the way to create such an instance is:
service_resource = boto3.resource('ec2')
entity_key_pair = service_resource.create_key_pair(KeyName="temp-2")
Unfortunately I cannot find out how to extract the private key from the newly created object.
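For what it's worth, the EC2.KeyPair object returned by the resource-level create_key_pair exposes the same private key as an attribute; a minimal sketch (function name is illustrative):

```python
def create_key_and_get_private_key(name):
    import boto3  # deferred import; this is just a sketch
    ec2 = boto3.resource('ec2')
    key_pair = ec2.create_key_pair(KeyName=name)
    # key_material is the resource-level counterpart of dict_key_pair['KeyMaterial']
    return key_pair.key_material
```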
Source: (StackOverflow)
AWS sets limits on the number of resources per account. I need to figure out, in a script, how many of those resources my account is currently using, with boto3 and Python. Is there any way to do this? I am a beginner in both boto3 and Python.
For example: the EBS limits, which cover several resource types, each with its own limit.
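One starting point (a sketch; it assumes the EC2 account attributes cover the limits you need, and other services expose their limits differently): ec2.describe_account_attributes returns name/value pairs such as max-instances.

```python
def attribute_map(response):
    """Flatten a describe_account_attributes response into {name: [values]}."""
    return {
        attr['AttributeName']: [v['AttributeValue'] for v in attr['AttributeValues']]
        for attr in response['AccountAttributes']
    }

def ec2_account_limits():
    import boto3  # deferred so attribute_map() works without the SDK installed
    ec2 = boto3.client('ec2')
    return attribute_map(ec2.describe_account_attributes())
```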
Source: (StackOverflow)
I am trying to create EC2 spot instances using the boto3 API; so far I am able to get the spot price history, spin up a spot instance, etc.
But I don't know how to get the price we are paying for a spot instance using the boto API.
Does anyone know how to do this?
Thanks
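A hedged sketch: spot instances are billed at the current spot price for their availability zone and instance type, so querying describe_spot_price_history filtered to the instance's AZ and type should approximate what you pay (the exact billed figure would come from billing reports, not this API):

```python
def latest_spot_price(history):
    """Pick the most recent price out of a SpotPriceHistory list."""
    newest = max(history, key=lambda entry: entry['Timestamp'])
    return float(newest['SpotPrice'])

def current_spot_price(instance_type, availability_zone):
    import boto3  # deferred so latest_spot_price() is usable without the SDK
    ec2 = boto3.client('ec2')
    response = ec2.describe_spot_price_history(
        InstanceTypes=[instance_type],
        ProductDescriptions=['Linux/UNIX'],
        AvailabilityZone=availability_zone,
        MaxResults=10,
    )
    return latest_spot_price(response['SpotPriceHistory'])
```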
Source: (StackOverflow)
I am trying to get the public ip address of all the running instances.
I am using boto3 and python version 2.7.6.
>>> instances = ec2.instances.filter(
... Filters=[{'Name': 'instance-state-name', 'Values': ['running']}])
>>> for instance in instances:
... print(instance.public_ip_address, instance.platform, instance.public_dns_name)
This lists all running instances, including ones that have no public IP address assigned:
(None, None, '')
Is there any way to filter out the instances that do not have a public IP while populating them with ec2.instances.filter?
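I'm not aware of a documented server-side filter that is guaranteed to do this, but filtering client-side is straightforward (sketch):

```python
def with_public_ip(instances):
    """Drop instances whose public_ip_address is None."""
    return [inst for inst in instances if inst.public_ip_address is not None]

def running_instances_with_public_ip():
    import boto3  # deferred import; the filter above is pure Python
    ec2 = boto3.resource('ec2')
    running = ec2.instances.filter(
        Filters=[{'Name': 'instance-state-name', 'Values': ['running']}])
    return with_public_ip(running)
```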
Source: (StackOverflow)
I'm trying to do a "hello world" with new boto3 client for AWS.
The use-case I have is fairly simple: get object from S3 and save it to the file.
In boto 2.X I would do it like this:
import boto
key = boto.connect_s3().get_bucket('foo').get_key('foo')
key.get_contents_to_filename('/tmp/foo')
In boto3 I can't find a clean way to do the same thing, so I'm manually iterating over the streaming body:
import boto3
key = boto3.resource('s3').Object('fooo', 'docker/my-image.tar.gz').get()
with open('/tmp/my-image.tar.gz', 'wb') as f:  # 'wb': the object is binary
    chunk = key['Body'].read(1024*8)
    while chunk:
        f.write(chunk)
        chunk = key['Body'].read(1024*8)
or
import boto3
key = boto3.resource('s3').Object('fooo', 'docker/my-image.tar.gz').get()
with open('/tmp/my-image.tar.gz', 'wb') as f:  # 'wb' again for binary data
    for chunk in iter(lambda: key['Body'].read(4096), b''):
        f.write(chunk)
And it works fine. I was wondering: is there any "native" boto3 function that will do the same task?
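Since this question was written, boto3 has gained managed transfer methods; download_file on the Object resource (or on the client) should cover this use case (sketch):

```python
def fetch_to_file(bucket, key, path):
    import boto3  # deferred import; this is just a sketch
    # the managed transfer handles streaming (and multipart for large objects)
    boto3.resource('s3').Object(bucket, key).download_file(path)
```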
Source: (StackOverflow)
In regular boto 2.38 I used to access instance metadata (e.g. get current stack-name), through boto's
boto.utils.get_instance_metadata()
Is there an equivalent in boto3, or do I need to drop down to the raw HTTP metadata endpoint to fetch information about the running instance?
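As far as I know, boto3 does not ship an equivalent helper, so querying the instance metadata endpoint directly is the usual workaround (sketch; this only works from inside an EC2 instance):

```python
from urllib.request import urlopen

METADATA_BASE = 'http://169.254.169.254/latest/meta-data/'

def instance_metadata(path=''):
    """Fetch e.g. instance_metadata('instance-id'); an empty path lists categories."""
    with urlopen(METADATA_BASE + path, timeout=2) as response:
        return response.read().decode()
```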
Source: (StackOverflow)
For example, I have this code:
import boto3
ec2 = boto3.resource('ec2')
# Where is the client???
Do I need to call boto3.client('ec2')
or is there another way?
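A resource object carries the low-level client it wraps, so a second connection shouldn't be necessary (sketch):

```python
def client_of(resource):
    # every boto3 resource exposes its underlying low-level client here
    return resource.meta.client
```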
Source: (StackOverflow)
I've written a Python script that takes a screenshot of my PC at a certain interval and sends it to my S3 bucket. When I run the script with the python command it works, but when I run it as a background task with pythonw.exe, the screenshot capturing still works, yet nothing is uploaded to S3.
Here is my code:
import os
import sys
import time
import Image
import ImageGrab
import getpass
import boto3
import threading
from random import randint
s3 = boto3.resource('s3')
username = getpass.getuser()
#---------------------------------------------------------
#User Settings:
SaveDirectory=r'C:\Users\Md.Rezaur\Dropbox\Screepy_App\screenshot'
ImageEditorPath=r'C:\WINDOWS\system32\mspaint.exe'
def capture_and_send():
    interval = randint(10, 30)
    threading.Timer(interval, capture_and_send).start()
    img = ImageGrab.grab()
    saveas = os.path.join(SaveDirectory, 'ScreenShot_'+time.strftime('%Y_%m_%d_%H_%M_%S')+'.jpg')
    fname = 'ScreenShot_'+time.strftime('%Y_%m_%d_%H_%M_%S')+'.jpg'
    img.save(saveas, quality=50, optimize=True)
    editorstring = '""%s" "%s"' % (ImageEditorPath, saveas)
    data = open(fname, 'rb')
    s3.Bucket('screepy').put_object(Key=username+'/'+fname, Body=data)

capture_and_send()
If you haven't configured your AWS credentials yet, install the aws-cli and run:
aws configure
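One likely culprit (a guess, not a confirmed diagnosis): the script opens fname, a bare filename resolved against the current working directory, which can differ when launched via pythonw.exe, and pythonw has no console to surface the traceback. A sketch that uploads by absolute path and logs failures:

```python
import logging

log = logging.getLogger('screepy')

def upload_screenshot(saveas, bucket_name, key):
    """Upload using the absolute path and log any failure, since
    pythonw.exe silently swallows uncaught exceptions."""
    import boto3  # deferred import; sketch only
    try:
        with open(saveas, 'rb') as data:  # absolute path, not a bare filename
            boto3.resource('s3').Bucket(bucket_name).put_object(Key=key, Body=data)
    except Exception:
        log.exception('upload of %s failed', saveas)
```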
Source: (StackOverflow)
I have been able to view the attributes of the PreparedRequest that botocore sends, but I'm wondering how I can view the exact request string that is sent to AWS. I need the exact request string to be able to compare it to another application I'm testing AWS calls with.
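One option that may be enough here: turn on botocore's debug logging, which prints each prepared request (method, URL, headers, body) before it is sent (sketch):

```python
import logging

def enable_wire_logging():
    import boto3  # deferred import; sketch only
    # botocore logs every outgoing request at DEBUG level
    boto3.set_stream_logger('botocore', logging.DEBUG)
```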
Source: (StackOverflow)
I'm new to AWS using Python and I'm trying to learn the boto API; however, I notice there are two major versions/packages for Python: boto and boto3.
I haven't been able to find an article laying out the major advantages/disadvantages or differences between these packages.
Thank you.
Source: (StackOverflow)
How can I see what's inside a bucket in S3 with boto3? (i.e. do an "ls")?
Doing the following:
import boto3
s3 = boto3.resource('s3')
my_bucket = s3.Bucket('some/path/')
returns:
s3.Bucket(name='some/path/')
How do I see its contents?
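Bucket() takes only the bucket name; any path belongs in the object keys. Iterating the bucket's objects collection gives the "ls" (sketch):

```python
def list_keys(bucket_name, prefix=''):
    import boto3  # deferred import; sketch only
    bucket = boto3.resource('s3').Bucket(bucket_name)  # name only, no slashes
    return [obj.key for obj in bucket.objects.filter(Prefix=prefix)]
```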
Source: (StackOverflow)
I want to encrypt my existing RDS instance. I am using this boto script to modify the DB instance:
modified_rds_attributes = rds_conn_boto3.modify_db_instance(DBInstanceIdentifier=id,StorageEncrypted=True)
Is it possible to encrypt an existing RDS DB instance?
If yes, how can I achieve this?
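My understanding (hedged; verify against the current RDS documentation) is that storage encryption cannot be toggled on a live instance; the usual route is snapshot, then encrypted copy, then restore. A sketch:

```python
def encrypt_via_snapshot(rds, source_snapshot, encrypted_snapshot,
                         kms_key_id, new_instance_id):
    """Copy a snapshot with a KMS key, then restore a new (encrypted)
    instance from the copy. Waiting between steps is omitted."""
    rds.copy_db_snapshot(
        SourceDBSnapshotIdentifier=source_snapshot,
        TargetDBSnapshotIdentifier=encrypted_snapshot,
        KmsKeyId=kms_key_id,
    )
    # wait until the copy is 'available' before restoring
    return rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier=new_instance_id,
        DBSnapshotIdentifier=encrypted_snapshot,
    )
```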
Source: (StackOverflow)
I'm using Python 3 with the boto3 package and running describe_instances() to describe all of my instances. However, the return type is a dictionary, with lists and further dictionaries nested inside it.
What I want to do for example is return only the "InstanceId" string or if I could return the entire "Instances" list that wouldn't be bad either.
import pprint
import boto3

ec2 = boto3.client('ec2')  # make the connection here (region/credentials as needed)
response = ec2.describe_instances()
pp = pprint.PrettyPrinter(indent=2)
pp.pprint(response)
The return type and the response structure are documented here: http://boto3.readthedocs.org/en/latest/reference/services/ec2.html#EC2.Client.describe_instances
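The response nests Instances lists inside Reservations, so a small comprehension pulls out the IDs (this parsing sketch follows the documented response shape):

```python
def instance_ids(response):
    """Collect every InstanceId from a describe_instances response."""
    return [instance['InstanceId']
            for reservation in response['Reservations']
            for instance in reservation['Instances']]
```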
Source: (StackOverflow)
The Issue
I'm trying to upload images directly to S3 from the browser and am getting stuck applying the content-length-range permission via boto's S3Connection.generate_url method.
There's plenty of information about signing POST forms, setting policies in general and even a heroku method for doing a similar submission. What I can't figure out for the life of me is how to add the "content-length-range" to the signed url.
With boto's generate_url method (example below), I can specify policy headers and have got it working for normal uploads. What I can't seem to add is a policy restriction on max file size.
Server Signing Code
## django request handler
from boto.s3.connection import S3Connection
from django.conf import settings
from django.http import HttpResponse
import mimetypes
import json
conn = S3Connection(settings.S3_ACCESS_KEY, settings.S3_SECRET_KEY)
object_name = request.GET['objectName']
content_type = mimetypes.guess_type(object_name)[0]
signed_url = conn.generate_url(
    expires_in=300,
    method="PUT",
    bucket=settings.BUCKET_NAME,
    key=object_name,
    headers={'Content-Type': content_type, 'x-amz-acl': 'public-read'})
return HttpResponse(json.dumps({'signedUrl': signed_url}))
On the client, I'm using the ReactS3Uploader which is based on tadruj's s3upload.js script. It shouldn't be affecting anything as it seems to just pass along whatever the signedUrls covers, but copied below for simplicity.
ReactS3Uploader JS Code (simplified)
uploadFile: function() {
    new S3Upload({
        fileElement: this.getDOMNode(),
        signingUrl: '/api/get_signing_url/',  // quoted; a bare path is a syntax error
        onProgress: this.props.onProgress,
        onFinishS3Put: this.props.onFinish,
        onError: this.props.onError
    });
},

render: function() {
    return this.transferPropsTo(
        React.DOM.input({type: 'file', onChange: this.uploadFile})
    );
}
S3upload.js
S3Upload.prototype.signingUrl = '/sign-s3';
S3Upload.prototype.fileElement = null;

S3Upload.prototype.onFinishS3Put = function(signResult) {
    return console.log('base.onFinishS3Put()', signResult.publicUrl);
};

S3Upload.prototype.onProgress = function(percent, status) {
    return console.log('base.onProgress()', percent, status);
};

S3Upload.prototype.onError = function(status) {
    return console.log('base.onError()', status);
};

function S3Upload(options) {
    if (options == null) {
        options = {};
    }
    for (option in options) {
        if (options.hasOwnProperty(option)) {
            this[option] = options[option];
        }
    }
    this.handleFileSelect(this.fileElement);
}

S3Upload.prototype.handleFileSelect = function(fileElement) {
    this.onProgress(0, 'Upload started.');
    var files = fileElement.files;
    var result = [];
    for (var i = 0; i < files.length; i++) {
        var f = files[i];
        result.push(this.uploadFile(f));
    }
    return result;
};

S3Upload.prototype.createCORSRequest = function(method, url) {
    var xhr = new XMLHttpRequest();
    if (xhr.withCredentials != null) {
        xhr.open(method, url, true);
    } else if (typeof XDomainRequest !== "undefined") {
        xhr = new XDomainRequest();
        xhr.open(method, url);
    } else {
        xhr = null;
    }
    return xhr;
};

S3Upload.prototype.executeOnSignedUrl = function(file, callback) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', this.signingUrl + '&objectName=' + file.name, true);
    xhr.overrideMimeType && xhr.overrideMimeType('text/plain; charset=x-user-defined');
    xhr.onreadystatechange = function() {
        if (xhr.readyState === 4 && xhr.status === 200) {
            var result;
            try {
                result = JSON.parse(xhr.responseText);
            } catch (error) {
                this.onError('Invalid signing server response JSON: ' + xhr.responseText);
                return false;
            }
            return callback(result);
        } else if (xhr.readyState === 4 && xhr.status !== 200) {
            return this.onError('Could not contact request signing server. Status = ' + xhr.status);
        }
    }.bind(this);
    return xhr.send();
};

S3Upload.prototype.uploadToS3 = function(file, signResult) {
    var xhr = this.createCORSRequest('PUT', signResult.signedUrl);
    if (!xhr) {
        this.onError('CORS not supported');
    } else {
        xhr.onload = function() {
            if (xhr.status === 200) {
                this.onProgress(100, 'Upload completed.');
                return this.onFinishS3Put(signResult);
            } else {
                return this.onError('Upload error: ' + xhr.status);
            }
        }.bind(this);
        xhr.onerror = function() {
            return this.onError('XHR error.');
        }.bind(this);
        xhr.upload.onprogress = function(e) {
            var percentLoaded;
            if (e.lengthComputable) {
                percentLoaded = Math.round((e.loaded / e.total) * 100);
                return this.onProgress(percentLoaded, percentLoaded === 100 ? 'Finalizing.' : 'Uploading.');
            }
        }.bind(this);
    }
    xhr.setRequestHeader('Content-Type', file.type);
    xhr.setRequestHeader('x-amz-acl', 'public-read');
    return xhr.send(file);
};

S3Upload.prototype.uploadFile = function(file) {
    return this.executeOnSignedUrl(file, function(signResult) {
        return this.uploadToS3(file, signResult);
    }.bind(this));
};

module.exports = S3Upload;
Any help would be greatly appreciated here as I've been banging my head against the wall for quite a few hours now.
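A possible way around this (it swaps boto2's generate_url for boto3, so treat it as a sketch): content-length-range is a POST-policy condition rather than something a presigned PUT URL can carry, and boto3's generate_presigned_post accepts such conditions directly. The browser would then submit a multipart POST form instead of a PUT:

```python
def signed_post(bucket, key, max_bytes):
    import boto3  # deferred import; sketch only
    s3 = boto3.client('s3')
    return s3.generate_presigned_post(
        Bucket=bucket,
        Key=key,
        Fields={'acl': 'public-read'},
        Conditions=[
            ['content-length-range', 0, max_bytes],  # enforced by S3 at upload time
            {'acl': 'public-read'},
        ],
        ExpiresIn=300,
    )
```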
Source: (StackOverflow)