amazon-s3 interview questions
Top frequently asked amazon-s3 interview questions
I am using an Amazon S3 bucket to upload and download data from my .NET application. My question is: I want to access my S3 bucket using SSL. Is it possible to enable SSL access for an Amazon S3 bucket?
Thanks in advance.
Source: (StackOverflow)
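A note on the question above: S3's REST endpoint already serves both HTTP and HTTPS, so nothing needs to be enabled on the bucket; the client just has to use an https:// URL (most SDKs, including the .NET one, can be pointed at the HTTPS endpoint). Bucket names containing dots break the wildcard certificate in virtual-hosted style, which is why path style is shown. A minimal command-line sketch with placeholder bucket and key names:

# Fetch an object over SSL; the same https endpoint works from any SDK
$ curl https://s3.amazonaws.com/my-bucket/my-key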
Pretty basic question, but I haven't been able to find an answer. Using Transit I can "move" files from one S3 bucket on one AWS account to another S3 bucket on another AWS account, but what it actually does is download the files from the first and then upload them to the second.
Is there a way to move files directly from one S3 account to another without downloading them in between?
Source: (StackOverflow)
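Yes: S3 can copy objects server side, so the bytes never pass through your machine. A sketch using the AWS CLI (a newer tool than the question; the bucket names are placeholders, and the credentials used must be allowed to read the source bucket and write to the destination, e.g. via a bucket policy):

# Recursive server-side copy between buckets, then delete the originals
$ aws s3 cp s3://source-bucket/ s3://dest-bucket/ --recursive
$ aws s3 rm s3://source-bucket/ --recursive

(aws s3 mv does both steps in one command.)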
Does anyone know how to do this? So far I haven't been able to find anything useful via Google.
I'd really like to set up a local repo and use git push to publish it to S3, the idea being to have local version control over assets but remote storage on S3.
Can this be done, and if so, how?
Source: (StackOverflow)
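One common pattern, sketched below rather than prescribed: push to a bare repo you control, and let its post-receive hook check out the pushed tree and mirror it to S3. The bucket name, branch, and scratch path are hypothetical, and the hook assumes a configured AWS CLI:

#!/bin/sh
# hooks/post-receive on the bare repo (make it executable): check out
# the pushed tree into a scratch directory and mirror it to the bucket
mkdir -p /tmp/site-export
GIT_WORK_TREE=/tmp/site-export git checkout -f master
aws s3 sync /tmp/site-export s3://my-asset-bucket --delete

This keeps full version control in git while S3 only ever sees the latest checkout.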
Background:
I'm designing the authentication scheme for a REST web service. This doesn't "really" need to be secure (it's more of a personal project) but I want to make it as secure as possible as an exercise/learning experience. I don't want to use SSL since I don't want the hassle and, mostly, the expense of setting it up.
These SO questions were especially useful to get me started:
I'm thinking of using a simplified version of Amazon S3's authentication (I like OAuth but it seems too complicated for my needs). I'm adding a randomly generated nonce, supplied by the server, to the request, to prevent replay attacks.
To get to the question:
Both S3 and OAuth rely on signing the request URL along with a few selected headers. Neither of them signs the request body for POST or PUT requests. Isn't this vulnerable to a man-in-the-middle attack that keeps the URL and headers and replaces the request body with any data the attacker wants?
It seems like I can guard against this by including a hash of the request body in the string that gets signed. Is this secure?
Source: (StackOverflow)
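To the closing question: yes, folding a hash of the body into the signed string closes that hole, since an attacker who swaps the body can no longer produce a matching signature without the secret key (this is essentially what Amazon later did in Signature Version 4 with its x-amz-content-sha256 header). A sketch of the idea with openssl; the field layout and nonce are made up for illustration:

# Hash the body, include the hash in the string to sign, then HMAC
# the whole string with the shared secret
$ BODY_HASH=$(openssl dgst -sha256 -binary body.json | base64)
$ STRING_TO_SIGN="POST\n/api/resource\nnonce=8f3a2b\n$BODY_HASH"
$ SIGNATURE=$(printf "$STRING_TO_SIGN" | openssl dgst -sha256 -hmac "$SECRET" -binary | base64)

Note this protects integrity, not confidentiality: without SSL, a man in the middle can still read everything in transit.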
I noticed that there doesn't seem to be an option to download an entire S3 bucket from the AWS Management Console.
Is there an easy way to grab everything in one of my buckets? I was thinking about making the root folder public, using wget to grab it all, and then making it private again, but I don't know if there's an easier way.
Source: (StackOverflow)
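There is an easier way than the public-plus-wget trick: the AWS CLI (released after this question was asked) can mirror a whole bucket in one command. The bucket name and local path are placeholders:

# Copy every object in the bucket to a local directory, mapping keys to paths
$ aws s3 sync s3://my-bucket ./my-bucket-backup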
How do I change the key pair for my EC2 instance in the AWS Management Console? I can stop the instance and I can create a new key pair, but I don't see any link to modify the instance's key pair.
Source: (StackOverflow)
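There is indeed no console control for this: the key pair chosen at launch is only used to seed ~/.ssh/authorized_keys on first boot. While you still have SSH access, you can rotate the key yourself, roughly as follows (the ec2-user login and file names are assumptions that vary by AMI):

# Generate a new key pair locally, then append its public half to the
# instance's authorized_keys over the existing connection
$ ssh-keygen -t rsa -f ~/.ssh/new-ec2-key
$ ssh -i ~/.ssh/old-ec2-key ec2-user@<instance-address> \
    "cat >> ~/.ssh/authorized_keys" < ~/.ssh/new-ec2-key.pub

After confirming the new key works, remove the old line from authorized_keys.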
Are there use cases that lend themselves better to Amazon CloudFront over S3, or the other way around? I'm trying to understand the difference between the two through examples.
Source: (StackOverflow)
I've been looking for ways to make my site load faster, and one way that I'd like to explore is making greater use of CloudFront.
Because CloudFront was originally not designed as a custom-origin CDN and because it didn't support gzipping, I have so far been using it to host all my images, which are referenced by their CloudFront CNAME in my site code and optimized with far-future headers.
CSS and JavaScript files, on the other hand, are hosted on my own server, because until now I was under the impression that they couldn't be served gzipped from CloudFront, and that the gain from gzipping (about 75 per cent) outweighs that from using a CDN (about 50 per cent): Amazon S3 (and thus CloudFront) did not support serving gzipped content in the standard way, using the HTTP Accept-Encoding header that browsers send to indicate their support for gzip compression, and so it could not gzip and serve components on the fly.
Thus I was under the impression, until now, that one had to choose between two alternatives:
move all assets to Amazon CloudFront and forget about gzipping;
keep components self-hosted and configure our server to detect incoming requests and perform on-the-fly gzipping as appropriate, which is what I chose to do so far.
There were workarounds to solve this issue, but essentially they didn't work. [link].
Now, it seems Amazon CloudFront supports custom origins, and that it is now possible to use the standard HTTP Accept-Encoding method for serving gzipped content if you are using a custom origin [link].
I haven't so far been able to implement the new feature on my server. The blog post linked above, which is the only one I found detailing the change, seems to imply that you can only enable gzipping (bar workarounds, which I don't want to use) if you opt for a custom origin, which I'd rather not: I find it simpler to host the corresponding files on my CloudFront server and link to them from there. Despite carefully reading the documentation, I don't know:
whether the new feature means the files should be hosted on my own domain server via a custom origin, and if so, what code setup will achieve this;
how to configure the CSS and JavaScript headers to make sure they are served gzipped from CloudFront.
Source: (StackOverflow)
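For reference, the S3-origin workaround alluded to above is to compress files yourself and upload the gzipped bytes with explicit headers; the drawback is that every client receives gzip whether it asked for it or not. A sketch with the AWS CLI and placeholder names:

# Pre-compress, then upload under the original name with the headers
# browsers need in order to decode it
$ gzip -9 -c style.css > style.css.gz
$ aws s3 cp style.css.gz s3://my-bucket/style.css \
    --content-encoding gzip --content-type text/css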
I've been interacting with Amazon S3 through S3Fox and I can't seem to delete my buckets. I select a bucket, hit delete, confirm the delete in a popup, and... nothing happens. Is there another tool that I should use?
Source: (StackOverflow)
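The usual cause is a non-empty bucket: S3 refuses to delete a bucket that still contains objects, and some clients swallow the error silently. Emptying the bucket first fixes it; with the AWS CLI (a newer tool; the bucket name is a placeholder) one command does both:

# --force deletes every object in the bucket, then removes the bucket itself
$ aws s3 rb s3://my-bucket --force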
I am trying to set up FTP on an Amazon cloud server, but without luck...
I've searched the web and there are no concrete steps for how to do it...
Since there are no steps on the web, can someone help me find them or write them here?
I usually use a dedicated server or shared hosting, but I am not that good with these cloud servers...
I found these commands to run:
$ yum install vsftpd                 # install the FTP daemon
$ ec2-authorize default -p 20-21     # open the FTP control/data ports in the security group
$ ec2-authorize default -p 1024-1048 # open the passive-mode port range
$ vi /etc/vsftpd/vsftpd.conf
# --- Add the following lines at the end of the file ---
pasv_enable=YES
pasv_min_port=1024
pasv_max_port=1048
pasv_address=<Public IP of your instance>
$ /etc/init.d/vsftpd restart         # restart vsftpd so the config takes effect
But I don't know where to enter these commands...
Help?
Source: (StackOverflow)
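As to where the commands are entered: on the instance itself. SSH in with the key pair the instance was launched with and switch to root first; the ec2-user login below is typical for Amazon Linux but varies by AMI, and the key path is a placeholder:

# Connect to the instance, then run the vsftpd steps above as root
$ ssh -i /path/to/your-key.pem ec2-user@<public DNS of your instance>
$ sudo su -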
There has been a long-standing issue with Firefox not loading fonts from a different origin than the current webpage. Usually the issue arises when the fonts are served from CDNs.
Various solutions have been proposed in other questions:
css @font-face not working with firefox, but working with chrome and IE
With the introduction of Amazon S3 CORS, is there a solution using CORS to address the font loading issue in Firefox?
Thanks in advance!
edit: It would be great to see a sample of the S3 CORS configuration.
edit2: I have found a working solution without actually understanding what it did. If anyone could provide a more detailed explanation of the config and the background magic in Amazon's interpretation of it, it would be greatly appreciated, as it would be by nzifnab, who put up a bounty for it.
Source: (StackOverflow)
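Since the edits ask for a sample and an explanation: a minimal CORS policy that allows cross-origin GETs (all that font loading needs) looks roughly like the sketch below; tighten AllowedOrigins from * to your site's origin where possible. The bucket name is a placeholder, and the policy is applied here with the AWS CLI:

$ cat > cors.json <<'EOF'
{
  "CORSRules": [
    {
      "AllowedOrigins": ["*"],
      "AllowedMethods": ["GET"],
      "AllowedHeaders": ["*"],
      "MaxAgeSeconds": 3000
    }
  ]
}
EOF
$ aws s3api put-bucket-cors --bucket my-font-bucket --cors-configuration file://cors.json

The "background magic" is the CORS handshake itself: Firefox sends an Origin header with the font request, S3 matches it against these rules and replies with Access-Control-Allow-Origin, and that response header is what persuades Firefox to use the font.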
I'm working on a little webapp (all client-side) that I want to host on Amazon S3. I've found several guides on this and have managed to create a bucket (with the same name as my domain), set it up as a website, and upload some content.
Where I'm struggling, and where all the documentation starts to get a bit vague, is how to properly configure my DNS.
All my registrar (123-reg) could suggest was web forwarding, which gives me mydomain.com.s3.amazonaws.com.
What do I have to configure, and where (i.e. at 123-reg or Amazon), to get a clean URL?
Source: (StackOverflow)
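What this generally takes (a sketch; the region, registrar menus, and names below will differ): name the bucket exactly after the host you want to serve, enable website hosting on it, and add a CNAME record at the registrar pointing that host at the bucket's website endpoint rather than plain s3.amazonaws.com:

# Enable website hosting on the bucket (its name must match the host name)
$ aws s3 website s3://www.mydomain.com --index-document index.html

# Then, at 123-reg, add a CNAME such as:
#   www.mydomain.com -> www.mydomain.com.s3-website-us-east-1.amazonaws.com

Note that a CNAME only works on a subdomain like www; the bare domain needs the registrar's forwarding (to www) or an alias record in a DNS service such as Route 53.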