
rackspace-cloud interview questions

Top rackspace-cloud frequently asked interview questions

Pros/cons of using multiple databases vs. a single database

I need to design a Windows application that represents multiple "customers" in SQL Server. Each customer has the same data model, but its data is independent.

What are the pros/cons of using multiple databases vs. a single database?

Which is the best way to do this? If going with a single database, what are the steps to set that up?

Edited:

One thing to note: the database will be hosted in a cloud (Rackspace) account.
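
If the single-database route is chosen, the usual pattern is a shared schema with a customer (tenant) key on every table, and every query filtered by that key. Below is a minimal sketch of the idea in Python, with SQLite standing in for SQL Server; the table and column names are illustrative assumptions, not the asker's actual model:

import sqlite3  # stand-in for SQL Server; the pattern carries over

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL,  -- tenant key on every table
        total       REAL
    )
""")
conn.execute("INSERT INTO orders (customer_id, total) VALUES (1, 9.99)")
conn.execute("INSERT INTO orders (customer_id, total) VALUES (2, 4.50)")

# Every query is scoped to one customer, so tenants never see each other's rows.
rows = conn.execute(
    "SELECT id, total FROM orders WHERE customer_id = ?", (1,)
).fetchall()
print(rows)  # [(1, 9.99)]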


Source: (StackOverflow)

How to turn on/off cloud instances during office hours

I've got my head around creating cloud instances in AWS, Azure, and Rackspace. However, I need to turn my instances off at the end of the day and on in the morning, as this will halve my hosting cost (they are for development).

I've looked at a few management services but they blew my brains out. Is there a simple way to do this?
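
A simple approach that avoids management services altogether is a pair of scheduled jobs (cron on any box that stays up) that stop the instances in the evening and start them in the morning. Here is a minimal sketch for the AWS side using boto3; the region and instance IDs are placeholders. Note the savings are approximate: stopped EBS-backed instances still accrue storage charges, and whether stopping pauses billing at all varies by provider.

import boto3

# Placeholders: substitute your own region and development instance IDs.
ec2 = boto3.client("ec2", region_name="us-east-1")
DEV_INSTANCES = ["i-0123456789abcdef0"]

def stop_dev():
    """Run from an evening cron job."""
    ec2.stop_instances(InstanceIds=DEV_INSTANCES)

def start_dev():
    """Run from a morning cron job, before the working day starts."""
    ec2.start_instances(InstanceIds=DEV_INSTANCES)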


Source: (StackOverflow)


Automatically deploying assets to Rackspace CDN via git and updating references to those assets?

I'm looking for some help in designing a strategy to automate deployment of a web application's assets (images, CSS, JS) to Rackspace's Cloud Files (CDN) service.

I currently use git push to deploy the web app to a remote server. So here's one way I'm thinking this could happen. Are there any better/cleaner methods?

  • Dev makes changes to an asset file (css, js, or an image)
  • Dev commits his changes
  • Dev pushes his changes to the server
  • Assets are automatically renamed to eliminate cache issues (append git version?) and sent to the CDN (see the sketch below)
  • Referencing code would be automatically updated to new filename

FYI, this is a PHP app in CodeIgniter 2.x if it matters.

Happy to hear any ideas, alternative or not.
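
One way to sketch the rename-and-upload step in Python with pyrax (a git post-receive hook could invoke something like this): content-hash each asset, upload it under the hashed name, and record the mapping in a manifest the app reads when rendering asset URLs. The credentials and container name are placeholders, and wiring the manifest into CodeIgniter is left open:

import hashlib
import os
import pyrax

pyrax.set_setting("identity_type", "rackspace")
pyrax.set_credentials("USERNAME", "API_KEY")          # placeholders
container = pyrax.cloudfiles.get_container("assets")  # assumed container name

def deploy_asset(path):
    """Upload one asset under a content-hashed name to bust stale caches."""
    with open(path, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()[:8]
    base, ext = os.path.splitext(os.path.basename(path))
    obj_name = "%s.%s%s" % (base, digest, ext)  # e.g. app.3f2a91bc.js
    pyrax.cloudfiles.upload_file(container, path, obj_name=obj_name)
    return obj_name  # write this into a manifest for the referencing code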


Source: (StackOverflow)

Check if an object exists in Cloud Files (PHP API)

I've just started working with the PHP API for Rackspace Cloud Files. So far, so good, but I am using it as a sort of poor man's memcache, storing key/value pairs of serialized data.

My app attempts to grab the existing cached object by its key ('name' in the API language) using something like this:

$obj = $this->container->get_object($key);

The problem is, if the object doesn't exist, the API throws an exception (which surfaces as a PHP fatal error when uncaught) rather than simply returning false. The "right" way to do this with the API would probably be to do a

$objs = $this->container->list_objects();

and then check for my $key value in that list. However, this seems way more time/CPU intensive than just returning false from the get_object request.

Is there a way to do a "search for object" or "check if object exists" in Cloud Files?
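
For what it's worth, the usual pattern with these bindings is to treat the not-found exception as the "false" case instead of listing the whole container. Here is a sketch with the Python pyrax library (an assumption on my part, since the question uses the PHP bindings; those appear to throw a NoSuchObjectException from get_object, which could be caught the same way):

import pyrax
from pyrax import exceptions as exc

def get_cached(container, key):
    """Return the stored object, or None if no object has that name."""
    try:
        return container.get_object(key)
    except exc.NoSuchObject:
        return None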

Thanks


Source: (StackOverflow)

Comparison between Amazon Web Services (AWS) and Rackspace cloud servers [closed]

There are two major cloud computing offerings: Amazon's AWS and the Rackspace Cloud. I want to know more about the pros/cons of one platform over the other; that will help me decide on a platform for my future applications.


Source: (StackOverflow)

Django upload image - From a form to Rackspace/S3 with no manipulation

I simply want to upload an image (JPG) using a form, then send that image to Rackspace 'Cloud Files' or Amazon 'S3'.

  • No manipulating the file.
  • No saving to disk; everything in memory (I'm hosted on a cloud server)
  • Image size won't exceed 75kb

Update (Two Caveats):

  • One: It also needs to work when data is posted from a phone app.
  • Two: It needs to be sent to Rackspace Cloud Files as well as S3 (starting with CF).

The code below works but it is way WAY too heavy.

import cloudfiles as cf
from StringIO import StringIO  # missing import in the original snippet
from PIL import Image          # missing import in the original snippet

def uploadImage(request, id):
    cf_con = cf.get_connection(username='YYY', api_key='XXX', serviceNet=True)
    container = cf_con.get_container('container_name')

    file = request.FILES["item_photo"]
    f = StringIO(file.read())
    f = Image.open(f)

    ### Only works if I resize for some reason, otherwise uploads a broken file
    image = f.resize((600,600), Image.ANTIALIAS)
    o = StringIO()
    image.save(o, "JPEG", quality=80)
    image = o.getvalue()

    file_name = "%s/%s" % (id, '600x600.jpeg')

    ### This simply uploads to Rackspace Cloud Files (the asker's helper, not shown).
    put_file(container, file_name, image)
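
If the goal is to pass the uploaded bytes through untouched, the PIL round trip can probably be dropped entirely; the python-cloudfiles object API writes raw data directly. A minimal sketch under that assumption (same connection/container setup as above, and untested against the asker's put_file helper); the broken-file symptom may well disappear once the image is no longer decoded and re-encoded:

def upload_raw(container, file_name, uploaded_file):
    """Send the uploaded bytes straight to Cloud Files, with no re-encoding."""
    obj = container.create_object(file_name)
    obj.content_type = "image/jpeg"
    obj.write(uploaded_file.read())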

Thanks so much, Hope all is well ...

d.


Source: (StackOverflow)

Installing sshpass on an Amazon Linux AMI-based EC2 instance

I am planning to automate AWS-to-Rackspace server migration. I am following the official Rackspace documentation (https://github.com/cloudnull/InstanceSync/blob/master/rsrsyncLive.sh), which uses rsync to migrate. I have modified the code to use sshpass to supply the login password dynamically when making an SSH connection to the remote server.

sshpass -p "YOUR_PASSWORD" ssh -o StrictHostKeyChecking=no username@IPAddress

But I am facing trouble installing sshpass package.

  • Debian-based distros: installed successfully
  • CentOS: installed successfully
  • Red Hat: package not found (yum list available | grep sshpass)
  • Amazon Linux: package not found (yum list available | grep sshpass)

I even tried 'yum update' and then 'yum -y install sshpass' but it didn't work.
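
For what it's worth, sshpass lives in the EPEL repository, and the classic Amazon Linux AMI ships with an EPEL repo definition that is disabled by default, so enabling it is the commonly suggested fix (worth verifying against your AMI version; on Red Hat the equivalent is installing the EPEL release RPM first):

sudo yum-config-manager --enable epel
sudo yum -y install sshpass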

Thanks,


Source: (StackOverflow)

Input on decision: file hosting with amazon s3 or similar and php

I appreciate your comments to help me decide on the following.

My requirements:

  • I have a site hosted on a shared server, and I'm going to provide content to my users: about 60 GB of content (about 2000 files of 30 MB each; users will have access to only 20 files at a time). I calculate about 100 GB of monthly bandwidth usage.

  • Once a user registers for the content, links will be accessible for the user to download, but I want the links to expire in 7 days, with the possibility of increasing the expiration time.

  • I think the disk space and bandwidth call for a service like Amazon S3 or Rackspace Cloud Files (or is there an alternative?).

  • To manage the expiration, I plan to somehow obtain links that expire (I think S3 has that feature, not Rackspace; see the sketch below), OR control the expiration date in my database and have a batch process that renames, on a daily basis, all 200 files on the cloud and in my database (so if a user copied the direct link, it won't work the next day; only my webpage will have the updated links). PHP is used for programming.

So what do you think? Is cloud file hosting the way to go? Which one? Does managing the links make sense that way, or is it too difficult to do through programming (sending commands to the cloud server...)?
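
On the S3 side, the expiring-link feature is real: a presigned URL carries its own expiry (up to seven days under the current signing scheme), which matches the requirement exactly and avoids the daily mass-rename. A sketch in Python with boto3; the question's PHP stack would use the AWS SDK for PHP equivalently, and the bucket/key names here are placeholders:

import boto3

s3 = boto3.client("s3")

# Placeholder bucket/key; ExpiresIn is in seconds (7 days here).
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-content-bucket", "Key": "downloads/file-001.zip"},
    ExpiresIn=7 * 24 * 3600,
)
print(url)  # hand this to the registered user; it stops working after 7 days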

EDIT: Some hosting companies have unlimited space and bandwidth on their shared plans. I asked their support staff, and they said they really honor the "unlimited" deal. So 100 GB of transfer a month is OK; the only thing to watch out for is CPU usage. So shared hosting is one more alternative to choose from.

FOLLOW-UP: Digging more into this, I found that the TOS of the unlimited plans say it is not permitted to use the space primarily to host multimedia files. So I decided to go with Amazon S3 and the solution provided by Tom Andersen.

Thanks for the input.


Source: (StackOverflow)

Using fog for rackspace cloudfiles (EU) with paperclip

I'm stuck authenticating to the European Rackspace cloud with Paperclip and fog. I also added this line to the credentials:

:rackspace_auth_url => "lon.auth.api.rackspacecloud.com"

But this doesn't change anything; it still tries to authenticate against the US cloud.

Has anyone got this up and running?

Thanks in advance!


Source: (StackOverflow)

Speed up often used Django random query

I've got a query set up that puts 28 random records from a database into a JSON response. This page is hit often, every few seconds, but is currently too slow for my liking.

In the JSON response I have:

  • IDs
  • Usernames
  • a Base64 thumbnail

These all come from three linked tables.
I'd be keen to hear of some other solutions, instead of users simply hitting a page, looking up 28 random records, and spitting back the response. One idea I had:

  • Have a process running that creates a cached page every 30 seconds or so with the JSON response (sketched below).

Is this a good option? If so, I'd be keen to hear how this would be done.
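
The 30-second cache idea maps directly onto Django's cache framework, with no separate process needed: the first request after expiry rebuilds the payload, and everyone else gets the cached copy. A minimal sketch, where build_random_payload is a hypothetical stand-in for the existing three-table, 28-random-records query:

import json

from django.core.cache import cache
from django.http import HttpResponse

CACHE_KEY = "random-28-json"

def build_random_payload():
    # Hypothetical stand-in for the existing query and JSON serialization.
    return json.dumps({"results": []})

def random_records(request):
    payload = cache.get(CACHE_KEY)
    if payload is None:
        payload = build_random_payload()
        cache.set(CACHE_KEY, payload, 30)  # keep for 30 seconds
    return HttpResponse(payload, content_type="application/json")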

Thanks again,
Hope everyone is well


Source: (StackOverflow)

How do I make Cloudfiles FormPost return the "Access-Control-Allow-Origin" header to enable CORS?

I want to enable CORS on my Rackspace CloudFiles container, so after reading the docs, I see I have to set some container metadata (I'm using Python and pyrax):

from pyrax import cloudfiles

cloudfiles.set_container_metadata(container_name, {
    'X-Container-Meta-Access-Control-Allow-Origin': 'localhost:8000',
    'X-Container-Meta-Access-Control-Expose-Headers': 'Access-Control-Allow-Origin',
    'X-Container-Meta-Access-Control-Max-Age': '10',
})
print cloudfiles.get_container_metadata(container_name)

And I get as output:

{'x-container-meta-access-control-allow-origin': 'localhost:8000',
 'x-container-meta-access-control-expose-headers': 'Access-Control-Allow-Origin',
 'x-container-meta-access-control-max-age': '10',
 'x-container-meta-access-log-delivery': 'false'}

But the browser is not getting an Access-Control-Allow-Origin header in the response to the OPTIONS preflight request, so it cancels the AJAX call:

HTTP/1.1 401 Unauthorized
Content-Length: 131
Content-Type: text/html; charset=UTF-8
Allow: HEAD, GET, PUT, POST, COPY, OPTIONS, DELETE
X-Trans-Id: txXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Date: Wed, 13 Nov 2013 20:07:34 GMT
Connection: keep-alive

What's missing?
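
One detail worth checking first: browsers send the Origin header with a scheme (for example http://localhost:8000), and the CORS match is exact, so a bare localhost:8000 in the metadata will never match. A hedged guess at a fix rather than a confirmed one (the 401 on the preflight could also mean the request is hitting an authenticated URL rather than a public FormPost/TempURL one):

cloudfiles.set_container_metadata(container_name, {
    'X-Container-Meta-Access-Control-Allow-Origin': 'http://localhost:8000',
    'X-Container-Meta-Access-Control-Expose-Headers': 'Access-Control-Allow-Origin',
    'X-Container-Meta-Access-Control-Max-Age': '10',
})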

Thanks!


Source: (StackOverflow)

Uploading photos to server/Rackspace Cloud Files - PHP/CodeIgniter

I'm working on uploading photos from my computer to the server, using this repository. When I try to upload a file called mylogo.jpg, I get an error that says:

Could not open file for reading: /tmp/php0d4X5Ny/mylogo.jpg

I need to figure out why I'm getting this error.

Here's my code from the controller (cloudfiles.php):

        public function add_local_file()
        {
                $file_location = $_FILES['userfile']['tmp_name'].'/';
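                // Note: tmp_name is already the full path of PHP's temporary
                // upload file (e.g. /tmp/php0d4X5Ny), not a directory, so
                // appending '/' and the original name yields a path that does
                // not exist, which matches the error above.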
                $file_name = $_FILES['userfile']['name'];

                $this->cfiles->do_object('a', $file_name, $file_location);

                $this->_show_errors('Image Added!');
        }

The do_object function is here (line 151). And here is the upload view:

<?php echo form_open_multipart('cloudfiles/add_local_file'); ?>

<input type="file" name="userfile" size="20" />

<input type="submit" value="upload" />

<?php echo form_close(); ?>

I searched the cloudfiles library and found that the "could not open file for reading" error is an exception in this function:

function load_from_filename($filename, $verify=true)
    {
        $fp = @fopen($filename, "r");
        if (!$fp) {
            throw new IOException("Could not open file for reading: ".$filename);
        }

        clearstatcache();

        $size = (float) sprintf("%u", filesize($filename));
        if ($size > MAX_OBJECT_SIZE) {
            throw new SyntaxException("File size exceeds maximum object size.");
        }

        $this->_guess_content_type($filename);

        $this->write($fp, $size, $verify);
        fclose($fp);
        return True;
    }

I've been looking at this for several hours and can't see what's going wrong. I'm not seeing any PHP errors, by the way, just the one error noted above. I'm new to Stack Overflow (frequent browser, new account), so thanks in advance for your help.


Source: (StackOverflow)

Rackspace Cloud Sites and ASP.NET 4

I am looking at signing up for the Cloud Sites service from Rackspace, but I am looking for practical experience anyone has had with their ASP.NET 4 'beta' program. I plan to develop in MVC 3 moving forward and cannot consider the Cloud Sites service if ASP.NET 4 doesn't work well/correctly on their platform. Has anyone tried hosting an ASP.NET 4 / MVC 3 site on Cloud Sites? What was your experience like?


Source: (StackOverflow)

Set SSH Host IP Address on Rackspace for Ansible

The Question

When using the rax module to spin up servers and get inventory, how do I tell Ansible to connect to the IP address on an isolated network rather than the server's public IP?

Note: Ansible is being run from a server on the same isolated network.

The Problem

I spin up a server in the Rackspace Cloud using Ansible with the rax module, and I add it to an isolated/private network. I then add it to inventory and begin configuring it. The first thing I do is lock down SSH, in part by telling it to bind only to the IP address given to the host on the isolated network. The catch is, that means Ansible can't connect over the public IP address, so I also set ansible_ssh_host to the private IP. (This happens when I add the host to inventory.)

- name: Add servers to group
  local_action:
    module: add_host
    hostname: "{{ item.name }}"
    ansible_ssh_host: "{{ item.rax.addresses.my_network_name[0].addr }}"
    groups: launched
  with_items: rax_response.success
  when: rax_response.action == 'create'

This works just fine on that first run of creating and configuring new instances. Unfortunately, the next time I try to connect to these servers, the connection is refused because Ansible is trying an IP address on which SSH isn't listening. This happens because:

  1. Ansible tries to connect to ansible_ssh_host...
  2. But the rax.py inventory script has set ansible_ssh_host to the accessIPv4 returned by Rackspace...
  3. And Rackspace has set accessIPv4 to the public IP address of the server.

Now, I'm not sure what to do about this. Rackspace does allow an API call to update a server and set its accessIPv4, so I thought I could run another local_action after creating the server to do that. Unfortunately, the rax module doesn't appear to allow updating a server, and even if it did it depends on pyrax which in turn depends on novaclient, and novaclient only allows updating the name of the server, not accessIPv4.

Surely someone has done this before. What is the right way to tell Ansible to connect on the isolated network when getting dynamic inventory via the rax module?


Source: (StackOverflow)

connect() failed (111: Connection refused) while connecting to upstream

I am hosting my Rails app on Rackspace with nginx webserver.

When calling any Rails API, I see this message in /var/log/nginx/error.log: *49 connect() failed (111: Connection refused) while connecting to upstream, client: 10.189.254.5, server: , request: "POST /api/v1/users/sign_in HTTP/1.1", upstream: "http://127.0.0.1:3001/api/v1/users/sign_in", host: "anthemapp.com"

  1. What is the upstream block? (See the sketch below.)
  2. What is /etc/nginx/sites-available/default? Is this where I can configure this?
  3. Why am I receiving the error above?

I spent several hours with 5-6 different Rackspace tech people (they didn't know how to resolve this). This all started when I took the server into rescue mode and followed the steps here: https://community.rackspace.com/products/f/25/t/69. Once I came out of rescue mode and rebooted the server, I started receiving the error above. Thanks!
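
To the first two questions: an upstream block in nginx names a pool of backend servers that proxy_pass can reference, and on Debian/Ubuntu-style installs /etc/nginx/sites-available/default is indeed the default site config where it is usually declared. A sketch of the shape involved, using the address from the log line above; the "(111: Connection refused)" part almost always means nothing is listening on 127.0.0.1:3001, i.e. the Rails application server likely did not come back up after the reboot:

upstream rails_app {
    server 127.0.0.1:3001;  # where the Rails app server should be listening
}

server {
    listen 80;
    location / {
        proxy_pass http://rails_app;
    }
}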


Source: (StackOverflow)