Rackspace interview questions
Top frequently asked Rackspace interview questions
I have a cloud server (CentOS) on Rackspace. A long time ago I disabled root login and created a user with SSH public-key authentication. A hard drive on my laptop broke down recently and I lost my private key. How can I access my server now?
Source: (StackOverflow)
I'm copying a file from S3 to Cloud Files, and I would like to avoid writing the file to disk. The python-cloudfiles library has an object.stream() call that looks to be what I need, but I can't find an equivalent call in boto. I'm hoping to be able to do something like:
shutil.copyfileobj(s3Object.stream(),rsObject.stream())
Is this possible with boto (or, I suppose, any other S3 library)?
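A minimal sketch of one way this might work, assuming boto's Key behaves as a file-like object and python-cloudfiles' Object.write() accepts one; the credentials, bucket, container, and object names below are placeholders:

import cloudfiles
from boto.s3.connection import S3Connection

# Placeholders -- substitute real credentials and names.
s3 = S3Connection('AWS_ACCESS_KEY', 'AWS_SECRET_KEY')
src = s3.get_bucket('source-bucket').get_key('photo.jpg')

cf = cloudfiles.get_connection('rs_username', 'rs_api_key')
dst = cf.get_container('dest-container').create_object('photo.jpg')

# boto's Key implements read(), so it can be handed to write() as a
# file-like object and streamed without touching the local disk.
dst.write(src)
src.close()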
Source: (StackOverflow)
We are trying to fetch all the servers registered to our Rackspace account under a certain tag. Using pyrax, Rackspace's Python bindings for OpenStack, we haven't found a way to do this. Is there some way to achieve it with that library, or is there another Python library that would do it?
Thanks very much!
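A rough sketch of one possible approach, assuming the tag is stored in each server's metadata (the metadata key "tag" and value "web" below are illustrative, not from the question): list the servers with pyrax and filter client-side.

import pyrax

# Placeholder credentials.
pyrax.set_setting("identity_type", "rackspace")
pyrax.set_credentials("username", "api_key")

cs = pyrax.cloudservers
# Filter client-side on a hypothetical "tag" metadata entry.
tagged = [s for s in cs.servers.list() if s.metadata.get("tag") == "web"]
for server in tagged:
    print(server.name, server.id)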
Source: (StackOverflow)
I currently have a Rackspace Cloud Server that I'd like to migrate to an Azure Virtual Machine. I recently got an MSDN subscription which gives me a certain level of Azure hosting at no cost, whereas I'm currently paying for that level of service with Rackspace.
However, one of the nice things about Rackspace is that I can schedule nightly/weekly backups of the VM image. Is there any mechanism for doing this on Azure? I'm worried about protecting against corruption of the database (i.e., what if someone were to run an UPDATE statement and forget the WHERE clause?).
I know the VMs are stored as .VHD files in my local Azure storage, but the VM image is 127 GB. Downloading that nightly, even with FiOS internet, isn't really going to fly as a solution.
Source: (StackOverflow)
I am hosting my Rails app on Rackspace with the nginx web server.
When calling any Rails API, I see this message in /var/log/nginx/error.log:
*49 connect() failed (111: Connection refused) while connecting to upstream, client: 10.189.254.5, server: , request: "POST /api/v1/users/sign_in HTTP/1.1", upstream: "http://127.0.0.1:3001/api/v1/users/sign_in", host: "anthemapp.com"
- What is the upstream block?
- What is /etc/nginx/sites-available/default? Is this where I can configure this?
- Why am I receiving the error above?
I spent several hours with 5-6 different Rackspace tech people (they didn't know how to resolve this). This all started when I took the server into rescue mode and followed the steps here: https://community.rackspace.com/products/f/25/t/69. Once I came out of rescue mode and rebooted the server, I started receiving the error above. Thanks!
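For context, the "upstream" in that log line is the backend nginx proxies requests to -- here, the Rails application server expected to be listening on 127.0.0.1:3001, as configured in /etc/nginx/sites-available/default or another enabled site file -- and "connection refused" usually just means nothing is listening on that port after the reboot. A hedged sketch of a quick check from the server itself:

import socket

# Probe the upstream address nginx reports in the error log.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(2)
result = sock.connect_ex(("127.0.0.1", 3001))  # 0 means something is listening
sock.close()
print("upstream is listening" if result == 0 else "upstream is down (errno %d)" % result)

If nothing is listening, restarting the Rails application server (and making sure it starts on boot) would be the first thing to try.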
Source: (StackOverflow)
I am getting a security exception while using itextsharp.dll on a Rackspace Cloud Site. This is the exception:
[SecurityException: That assembly does not allow partially trusted callers.]
Can anyone help with this?
Source: (StackOverflow)
I have been trying to upload files to Rackspace storage from my website. I followed this API guide to create the form that uploads files to Rackspace:
http://docs.rackspace.com/files/api/v1/cf-devguide/content/FormPost-d1a555.html
(sections 7.2, 7.2.1, and 7.2.2)
It works fine if I do a normal form submit: the file gets uploaded to Rackspace storage and the request returns status 201 with a blank message. I checked the container and the file is uploaded successfully.
But the problem comes when I try to integrate a progress bar using the blueimp jQuery File Upload plugin.
Here's my code to initialize the fileupload plugin:
$(function () {
    'use strict';
    // Initialize the jQuery File Upload widget:
    $('#fileupload').fileupload({maxChunkSize: 10000000});
    if (window.location.hostname === 'blueimp.github.com') {
        // Demo settings:
        $('#fileupload').fileupload('option', {
            url: '//jquery-file-upload.appspot.com/',
            maxFileSize: 5000000,
            acceptFileTypes: /(\.|\/)(gif|jpe?g|png)$/i,
            process: [
                {
                    action: 'load',
                    fileTypes: /^image\/(gif|jpeg|png)$/,
                    maxFileSize: 20000000 // 20MB
                },
                {
                    action: 'resize',
                    maxWidth: 1440,
                    maxHeight: 900
                },
                {
                    action: 'save'
                }
            ]
        });
        // Upload server status check for browsers with CORS support:
        if ($.support.cors) {
            $.ajax({
                url: '//jquery-file-upload.appspot.com/',
                type: 'HEAD'
            }).fail(function () {
                $('<span class="alert alert-error"/>')
                    .text('Upload server currently unavailable - ' + new Date())
                    .appendTo('#fileupload');
            });
        }
    } else {
        // Load existing files:
        console.log("mukibul");
        $('#fileupload').each(function () {
            var that = this;
            console.log("result1");
            $.getJSON(this.action, function (result) {
                if (result && result.length) {
                    console.log("result");
                    console.log(result);
                    $(that).fileupload('option', 'done')
                        .call(that, null, {result: result});
                }
            });
        });
    }
});
Here's the web form to upload files
<form id="fileupload" action="https://storage101.dfw1.clouddrive.com/v1/MossoCloudFS_4d6c7b53-7b5a-458c-8a2d-957971f293bb/tranceyatralocal/${sessionScope.tyUser.userID}/${albumDetails.albumId}" method="POST" enctype="multipart/form-data">
<!-- The fileupload-buttonbar contains buttons to add/delete files and start/cancel the upload -->
<input type="hidden" name="redirect" value="http://localhost:8080/impianlabs/home/uploadResponse.htm" />
<input type="hidden" name="max_file_size" value="${max_file_size}" />
<input type="hidden" name="max_file_count" value="10" />
<input type="hidden" name="expires" value="${expires}" />
<input type="hidden" name="signature" value="${hmac}" />
<div class="row fileupload-buttonbar" style="margin:10px;">
<div class="span7" style="">
<!-- The fileinput-button span is used to style the file input field as button -->
<span class="btn btn-success fileinput-button">
<i class="icon-plus icon-white"></i>
<span>Add files...</span>
<input type="file" name="files[]" multiple>
</span>
<button type="submit" class="btn btn-primary start">
<i class="icon-upload icon-white"></i>
<span>Start upload</span>
</button>
<button type="reset" class="btn btn-warning cancel">
<i class="icon-ban-circle icon-white"></i>
<span>Cancel upload</span>
</button>
<button type="button" class="btn btn-danger delete">
<i class="icon-trash icon-white"></i>
<span>Delete</span>
</button>
<input type="checkbox" class="toggle">
</div>
<!-- The global progress information -->
<div class="span5 fileupload-progress fade">
<!-- The global progress bar -->
<div class="progress progress-success progress-striped active" role="progressbar" aria-valuemin="0" aria-valuemax="100">
<div class="bar" style="width:0%;"></div>
</div>
<!-- The extended global progress information -->
<div class="progress-extended"> </div>
</div>
</div>
<!-- The loading indicator is shown during file processing -->
<div class="fileupload-loading"></div>
<br>
<!-- <div>
<ul id="filePreview">
</ul>
</div> -->
<!-- The table listing the files available for upload/download -->
<table role="presentation" class="table table-striped"><tbody class="files" data-toggle="modal-gallery" data-target="#modal-gallery"></tbody></table>
</form>
When I upload files, the upload starts normally and the progress bar shows up as expected. In Chrome's Inspect -> Network tab I can see two requests to the Rackspace server: an OPTIONS request, which returns 200, and a POST request, which stays Pending until the progress bar reaches 100%. As soon as it reaches 100%, the POST changes to cancelled and the jQuery File Upload plugin prints error true on the page. I don't understand why the plugin is reporting an error; I checked the container and the file was uploaded successfully.
I have used curl to set all the headers required for CORS in Rackspace, but I'm not sure what I am doing wrong. Any help resolving the issue would be appreciated.
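Two things may be worth double-checking (a hedged sketch, not a confirmed diagnosis): the signature must be computed over exactly the same path, redirect, max_file_size, max_file_count, and expires values the form submits, and a cross-origin redirect to localhost:8080 issued in response to a CORS POST is often displayed by the browser as a cancelled request; the FormPost mechanism also accepts an empty redirect value, in which case the POST is answered directly with a status code. A small server-side signing sketch following the linked guide, with placeholder values:

import hmac
from hashlib import sha1
from time import time

# Placeholders -- use the real account/container/object prefix and your Temp URL key.
path = '/v1/MossoCloudFS_xxxx/my-container/my-prefix'
redirect = ''                  # empty string: answer the POST directly instead of redirecting
max_file_size = 104857600
max_file_count = 10
expires = int(time() + 600)
key = 'my-temp-url-key'

# FormPost signature: HMAC-SHA1 over the newline-joined form values.
hmac_body = '%s\n%s\n%s\n%s\n%s' % (path, redirect, max_file_size, max_file_count, expires)
signature = hmac.new(key.encode(), hmac_body.encode(), sha1).hexdigest()
print(expires, signature)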
Note: I'm using Spring MVC for the application.
Thanks,
Mukibul
Source: (StackOverflow)
I'm trying to use ElasticSearch for an application I'm building, and I am hosting it on Rackspace servers. However, the auto-discovery feature is not working. I thought that was because auto-discovery uses broadcast and multicast to find other nodes with a matching cluster name. I found this article saying that Rackspace now supports multicast and broadcast with their new Cloud Networks feature. Following the article's instructions, I created a network and added it to both of the servers the nodes were running on. I then tried restarting ElasticSearch on both nodes, but they didn't find each other, and each declared itself the "master" (here's the output from the logs):
[2013-04-03 22:14:03,516][INFO ][node ] [Nemesis] {0.20.6}[2752]: initializing ...
[2013-04-03 22:14:03,530][INFO ][plugins ] [Nemesis] loaded [], sites []
[2013-04-03 22:14:07,873][INFO ][node ] [Nemesis] {0.20.6}[2752]: initialized
[2013-04-03 22:14:07,873][INFO ][node ] [Nemesis] {0.20.6}[2752]: starting ...
[2013-04-03 22:14:08,052][INFO ][transport ] [Nemesis] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/166.78.177.149:9300]}
[2013-04-03 22:14:11,117][INFO ][cluster.service ] [Nemesis] new_master [Nemesis][3ih_VZsNQem5W4csDk-Ntg][inet[/166.78.177.149:9300]], reason: zen-disco-join (elected_as_master)
[2013-04-03 22:14:11,168][INFO ][discovery ] [Nemesis] elasticsearch/3ih_VZsNQem5W4csDk-Ntg
[2013-04-03 22:14:11,202][INFO ][http ] [Nemesis] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/166.78.177.149:9200]}
[2013-04-03 22:14:11,202][INFO ][node ] [Nemesis] {0.20.6}[2752]: started
[2013-04-03 22:14:11,275][INFO ][gateway ] [Nemesis] recovered [0] indices into cluster_state
The other node's log:
[2013-04-03 22:13:54,538][INFO ][node ] [Jaguar] {0.20.6}[3364]: initializing ...
[2013-04-03 22:13:54,546][INFO ][plugins ] [Jaguar] loaded [], sites []
[2013-04-03 22:13:58,825][INFO ][node ] [Jaguar] {0.20.6}[3364]: initialized
[2013-04-03 22:13:58,826][INFO ][node ] [Jaguar] {0.20.6}[3364]: starting ...
[2013-04-03 22:13:58,977][INFO ][transport ] [Jaguar] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/166.78.63.101:9300]}
[2013-04-03 22:14:02,041][INFO ][cluster.service ] [Jaguar] new_master [Jaguar][WXAO9WOoQDuYQo7Z2GeAOw][inet[/166.78.63.101:9300]], reason: zen-disco-join (elected_as_master)
[2013-04-03 22:14:02,094][INFO ][discovery ] [Jaguar] elasticsearch/WXAO9WOoQDuYQo7Z2GeAOw
[2013-04-03 22:14:02,129][INFO ][http ] [Jaguar] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/166.78.63.101:9200]}
[2013-04-03 22:14:02,129][INFO ][node ] [Jaguar] {0.20.6}[3364]: started
[2013-04-03 22:14:02,211][INFO ][gateway ] [Jaguar] recovered [0] indices into cluster_state
Is adding the network not enough (Rackspace also gave me an IP for this network)? Do I need to somehow specify in the config file that it should use that network when multicasting to find other nodes?
I also found this article, which offered a different approach. Per its instructions, I put this into /config/elasticsearch.yml:
cloud:
    account: account #
    key: account key
compute:
    type: rackspace
discovery:
    type: cloud
However, when I then tried to restart ElasticSearch, I got this:
Stopping ElasticSearch...
Stopped ElasticSearch.
Starting ElasticSearch...
Waiting for ElasticSearch.......
WARNING: ElasticSearch may have failed to start.
And it did fail to start. I checked the log file for errors, but this was all that was there:
[2013-04-03 22:31:00,788][INFO ][node ] [Chamber] {0.20.6}[4354]: initializing ...
[2013-04-03 22:31:00,797][INFO ][plugins ] [Chamber] loaded [], sites []
And it stopped there without any errors and without continuing.
Has anyone successfully gotten ElasticSearch to work in the Rackspace cloud before? I know that the unicast option is also available, but I'd prefer not to have to specify each IP address individually, as I would like it to be easy to add other nodes later. Thanks!
UPDATE
I haven't solved the issue yet, but after some searching I found this post saying that the "old" cloud plugin was discontinued and replaced with just an EC2 plugin for Amazon's cloud, which explains why the changes I made to the config file do not work.
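With the cloud plugin gone, one hedged fallback that avoids hand-maintaining the host list is to generate the unicast hosts from the Rackspace API itself, for example with pyrax. The network label "elasticsearch-net" and the zen discovery settings shown in the comments are illustrative assumptions, not values from the question:

import pyrax

# Placeholder credentials.
pyrax.set_setting("identity_type", "rackspace")
pyrax.set_credentials("username", "api_key")

# Collect the address each server has on the isolated Cloud Network.
hosts = []
for server in pyrax.cloudservers.servers.list():
    for ip in server.networks.get("elasticsearch-net", []):
        hosts.append("%s:9300" % ip)

# The result can be pasted into elasticsearch.yml, e.g.:
#   discovery.zen.ping.multicast.enabled: false
#   discovery.zen.ping.unicast.hosts: ["10.x.x.x:9300", "10.x.x.y:9300"]
print(", ".join('"%s"' % h for h in hosts))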
Source: (StackOverflow)
I have an ASP.NET file upload; PostedFile.InputStream gives us a System.IO.Stream. Is this file stream similar to what you get from System.IO.File.OpenRead("filename")?
I have a Rackspace file content saver that takes the input as a Stream; it does not display the correct image when given PostedFile.InputStream.
Source: (StackOverflow)
We have a bunch of data (images) on S3 but just started reading about Mosso Cloud Files (Rackspace). Sometime this month they are going to add CDN capabilities, so any file you upload becomes part of the Limelight CDN.
Is anyone using this service? It's not as well documented or publicized as S3.
Source: (StackOverflow)
I'm stuck authenticating to the European Rackspace cloud with Paperclip and fog. I also added this line to the credentials:
:rackspace_auth_url => "lon.auth.api.rackspacecloud.com"
But this doesn't change anything; it still tries to authenticate with the US cloud.
Has anyone got this up and running?
Thanks in advance!
Source: (StackOverflow)
I would like to improve the speed of my script that uploads a small 20 KB file to Cloud Files. Currently it takes 3 seconds, but I have seen it take more, up to about 7 seconds.
Basically it does the following:
- Authenticates
- Connects
- Gets a container
- Creates an object
- Loads data into object from filename
I tried using Cachegrind and Webgrind to figure out which part of the script is slow, and it turns out it's the cURL side of things.
An interesting post here, "CURL with PHP - Very slow", suggests it may relate to DNS lookups, but I'm not 100% sure how to monitor my traffic on Windows. Any suggestions?
Do any other users have suggestions on how to figure out why my cURL request is slow?
Source: (StackOverflow)
I've been using EC2 for deployment all along and now I want to give Rackspace a try. My application has to be scalable, so I use RabbitMQ as the main queuing system. Actions on the front end can lead to a very large number of jobs that need execution, which I want to queue somewhere.
Due to the expected load profile of the application, it makes sense to use a scalable infrastructure like the Rackspace cloud. Now I am wondering where it would be best to queue the jobs. Queuing them on the front-end servers means the number of front-end servers can only be scaled back down once the queues are processed, which is a waste of resources: once the peak load on the front end is over, we want to scale it down and scale up the machines that process the queue items.
If we queue them on the database server, we add load onto a single machine which, in the current setup, is already the most likely bottleneck. How would you design this?
Is there any built-in queuing for Rackspace, something like Amazon SQS?
Source: (StackOverflow)
I have some video files on Rackspace Cloud Files, but since I use HTML5 canvas functions (.toDataURL()), "SECURITY_ERR: DOM Exception 18" keeps getting thrown. My code works fine when I use a video file on my own server.
So I read up on CORS and modified my Rackspace Cloud Files headers like this:
access-control-allow-credentials: true
access-control-allow-origin: [my domain here]
access-control-allow-headers: Content-Type, Depth, User-Agent, X-File-Size, X-Requested-With, If-Modified-Since, X-File-Name, Cache-Control
access-control-allow-methods: OPTIONS, GET, POST
access-control-expose-headers: X-File-Size, X-Requested-With, If-Modified-Since, X-File-Name
Content-Type: video/webm
But the DOM Exception 18 error keeps getting thrown, and I don't know what the problem is. I used web-sniffer.net to check whether the HTTP headers are being returned for my video files on Rackspace, and they are. So what's the problem? Why doesn't it work?
I have tried it in IE9, Chrome 19, Safari 5.1.2, and Aurora 12.0a2; it doesn't work in any of those browsers, so I'm certain this is not a browser issue.
I just have to get rid of this DOM Exception 18 error.
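One hedged thing to verify, not a confirmed fix: canvas calls like toDataURL() only avoid tainting the canvas if the media element itself opts into CORS (a crossorigin attribute on the video tag), and Cloud Files CORS is commonly configured as container metadata via the X-Container-Meta-Access-Control-Allow-Origin header rather than per-object response headers. A sketch of setting that container metadata with the requests library; the token and container URL are placeholders:

import requests

# Placeholders -- use a real auth token and your container URL.
TOKEN = "your-auth-token"
CONTAINER_URL = "https://storage101.dfw1.clouddrive.com/v1/MossoCloudFS_xxxx/videos"

# POST to the container with an X-Container-Meta-* header to store the CORS setting.
resp = requests.post(
    CONTAINER_URL,
    headers={
        "X-Auth-Token": TOKEN,
        "X-Container-Meta-Access-Control-Allow-Origin": "http://example.com",
    },
)
resp.raise_for_status()  # a 2xx response (typically 204 No Content) means the metadata was accepted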
Source: (StackOverflow)