
CoreOS interview questions

Top CoreOS frequently asked interview questions

How can I find out which version of CoreOS is running on a given machine?

I can SSH to a box running CoreOS, but I can't seem to find a way to check which version of CoreOS is running there. I could not find anything in the documentation either.
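
One way to check is to read the standard release files on the CoreOS host itself (field names can vary slightly between releases):

cat /etc/os-release      # NAME, VERSION and VERSION_ID describe the running release
cat /etc/lsb-release     # DISTRIB_ID / DISTRIB_RELEASE, on images that ship this file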


Source: (StackOverflow)

Docker Compose to CoreOS

I'm currently learning Docker and have put together a nice, simple Docker Compose setup: 3 containers, each with its own Dockerfile. How could I go about converting this to work on CoreOS so I can set up a cluster later on?

web:
  build: ./app
  ports:
    - "3030:3000"
  links:
    - "redis"

newrelic:
  build: ./newrelic
  links:
    - "redis"

redis:
  build: ./redis
  ports:
    - "6379:6379"
  volumes:
    - /data/redis:/data
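
A rough starting point rather than an exact translation: fleet schedules plain systemd units, so each Compose service becomes a unit that runs an already-built image (fleet cannot build, so the image would have to be pushed to a registry first; my-registry/redis below is a hypothetical name). Links have no direct equivalent and are usually replaced by published ports plus discovery via etcd. A sketch for the redis service:

[Unit]
Description=Redis (converted from the Compose service above)
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker kill redis
ExecStartPre=-/usr/bin/docker rm redis
ExecStartPre=/usr/bin/docker pull my-registry/redis
ExecStart=/usr/bin/docker run --name redis -p 6379:6379 -v /data/redis:/data my-registry/redis
ExecStop=/usr/bin/docker stop redis

Such a unit is then scheduled with fleetctl submit redis.service followed by fleetctl start redis.service.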

Source: (StackOverflow)


How do I clean up my Docker host machine?

As I create and debug Docker images and containers, Docker seems to leave all sorts of artifacts on my system (at one point there was a 48-image limit). The last time I ran docker images, it listed 20-25 images.

So the overarching questions are:

  • How does one properly clean up?
  • As I was manually deleting images, more kept appearing. Why?
  • How much disk space should I really allocate to the host?
  • Will running daemons really restart after the next reboot?

And the meta question: what questions have I not asked that need to be?
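
For the cleanup part, a sketch of the commands commonly used for this (newer Docker releases also bundle much of it into docker system prune):

# remove containers that have exited
docker rm $(docker ps -a -q -f status=exited)

# remove dangling (untagged) intermediate images left over from builds
docker rmi $(docker images -q -f dangling=true)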


Source: (StackOverflow)

CoreOS Authentication failure on vagrant up

I tried out CoreOS today. I followed the getting-started guide and executed the following commands:

git clone https://github.com/coreos/coreos-vagrant.git

cd coreos-vagrant

vagrant up

The coreos-vagrant folder contains some configuration files, such as config.rb and user-data.

config.rb:

$update_channel='alpha'

user-data:

#cloud-config

coreos:
  etcd:
    addr: $public_ipv4:4001
    peer-addr: $public_ipv4:7001
  fleet:
    public-ip: $public_ipv4
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start

users:
  - name: carbonell
    passwd: $1$BulVX1y9$8W/3RHZAed3fb.wmbZYGi0
    groups:
      - docker

The command result:

devops@devops-server:~/workspace/coreos-vagrant$ vagrant up
Bringing machine 'core-01' up with 'virtualbox' provider...
==> core-01: Importing base box 'coreos-alpha'...
==> core-01: Matching MAC address for NAT networking...
==> core-01: Setting the name of the VM: coreos-vagrant_core-01_1405929178704_22375
==> core-01: Clearing any previously set network interfaces...
==> core-01: Preparing network interfaces based on configuration...
    core-01: Adapter 1: nat
    core-01: Adapter 2: hostonly
==> core-01: Forwarding ports...
    core-01: 22 => 2222 (adapter 1)
==> core-01: Running 'pre-boot' VM customizations...
==> core-01: Booting VM...
==> core-01: Waiting for machine to boot. This may take a few minutes...
    core-01: SSH address: 127.0.0.1:2222
    core-01: SSH username: vagrant
    core-01: SSH auth method: private key
    core-01: Warning: Connection timeout. Retrying...
    core-01: Warning: Authentication failure. Retrying...
    core-01: Warning: Authentication failure. Retrying...
    core-01: Warning: Authentication failure. Retrying...
    core-01: Warning: Authentication failure. Retrying...
    core-01: Warning: Authentication failure. Retrying...
    core-01: Warning: Authentication failure. Retrying...
    core-01: Warning: Authentication failure. Retrying...
    core-01: Warning: Authentication failure. Retrying...
    core-01: Warning: Authentication failure. Retrying...
    core-01: Warning: Authentication failure. Retrying...
    core-01: Warning: Authentication failure. Retrying...
    core-01: Warning: Authentication failure. Retrying...
    core-01: Warning: Authentication failure. Retrying...
    core-01: Warning: Authentication failure. Retrying...
    core-01: Warning: Authentication failure. Retrying...
    core-01: Warning: Authentication failure. Retrying...
    core-01: Warning: Authentication failure. Retrying...
    core-01: Warning: Authentication failure. Retrying...
    core-01: Warning: Authentication failure. Retrying...
    core-01: Warning: Authentication failure. Retrying...
    core-01: Warning: Authentication failure. Retrying...
    core-01: Warning: Authentication failure. Retrying...
    core-01: Warning: Authentication failure. Retrying...
    core-01: Warning: Authentication failure. Retrying...
    core-01: Warning: Authentication failure. Retrying...
    core-01: Warning: Authentication failure. Retrying...
    core-01: Warning: Authentication failure. Retrying...
    core-01: Warning: Authentication failure. Retrying...
    core-01: Warning: Authentication failure. Retrying...
Timed out while waiting for the machine to boot. This means that
Vagrant was unable to communicate with the guest machine within
the configured ("config.vm.boot_timeout" value) time period.

If you look above, you should be able to see the error(s) that
Vagrant had when attempting to connect to the machine. These errors
are usually good hints as to what may be wrong.

If you're using a custom box, make sure that networking is properly
working and you're able to connect to the machine. It is a common
problem that networking isn't setup properly in these boxes.
Verify that authentication configurations are also setup properly,
as well.

Secondary reference: https://github.com/coreos/coreos-vagrant.git
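
One way to narrow this down is to check which credentials Vagrant is actually using and then try the SSH connection by hand; a debugging sketch, assuming the default insecure key location and the forwarded port shown above:

vagrant ssh-config
ssh -p 2222 -i ~/.vagrant.d/insecure_private_key core@127.0.0.1

It is also worth noting that the output above shows Vagrant authenticating as the vagrant user, while CoreOS Vagrant boxes normally log in as the core user; comparing that against what vagrant ssh-config reports may already point at the problem.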


Source: (StackOverflow)

Zero Downtime app deployment with CoreOS

I have a Docker container that I want to deploy to a CoreOS cluster; the container has to download my app from a git repo.

Let's say the app container runs nginx / nodejs

How should I update it?

If I submit the container and start it, that works the first time. But the second time I'll have to stop and start the container with fleetctl, and then I'll obviously have downtime. Should I start up new containers derived from that container?
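
A common pattern is to run the app from a fleet template unit, bring the new version up next to the old one behind a reverse proxy, and only stop the old instance once the new one is healthy. A sketch, where myapp@.service is a hypothetical template unit:

fleetctl submit myapp@.service
fleetctl start myapp@2.service       # new version starts alongside the running myapp@1.service
# wait until the new instance passes its health check and the proxy points at it
fleetctl stop myapp@1.service
fleetctl destroy myapp@1.service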


Source: (StackOverflow)

(CoreOS) How to auto-restart a Docker container after a reboot?

Assuming the Docker daemon is restarted automatically by an init.d- or systemd-like process when the OS is restarted, what is the preferred way to restart one or more Docker containers? For example, I might have a number of web servers behind a reverse proxy, or a DB server.
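
On CoreOS the usual approach is a systemd unit per container, enabled so that it comes back after a reboot. A minimal sketch (my-web and my-web-image are hypothetical names):

[Unit]
Description=my-web container
After=docker.service
Requires=docker.service

[Service]
Restart=always
ExecStartPre=-/usr/bin/docker rm -f my-web
ExecStart=/usr/bin/docker run --name my-web -p 80:80 my-web-image
ExecStop=/usr/bin/docker stop my-web

[Install]
WantedBy=multi-user.target

Copied to /etc/systemd/system/my-web.service, it is activated with sudo systemctl enable my-web.service && sudo systemctl start my-web.service. Alternatively, docker run --restart=always hands the restarting over to the Docker daemon itself.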


Source: (StackOverflow)

Should I use forever/pm2 within a (Docker) container?

I am refactoring a couple of Node.js services. All of them used to be started with forever on virtual servers; if a process crashed, it was simply relaunched.

Now, moving to a containerised and stateless application structure, I think the process should exit and the container should be restarted on failure.

Is that correct? Are there benefits or disadvantages?
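
Letting the process exit and having the platform restart the container is indeed the usual containerised approach; the role forever/pm2 played is taken over by a restart policy at the Docker (or systemd) level. A minimal sketch, with a hypothetical image and command:

docker run --restart=on-failure:5 --name api my-node-image node server.js

The main trade-off is that crashes and restarts now show up in the platform's tooling (docker ps, systemd state, orchestrator health checks) instead of being hidden inside the container.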


Source: (StackOverflow)

How to load balance containers?

How do I load balance Docker containers running a simple web application?

I have 3 web containers running on a single host. How do I load balance across them?
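
One common setup is a fourth container (or a host-level service) running nginx or HAProxy in front of the three web containers. A minimal nginx sketch, assuming the web containers publish their app on host ports 8081-8083 (hypothetical ports); the snippet belongs inside the http { } context of nginx.conf or a conf.d file:

upstream web_backend {
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
    server 127.0.0.1:8083;
}

server {
    listen 80;
    location / {
        proxy_pass http://web_backend;
    }
}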


Source: (StackOverflow)

How to configure a high-availability cluster of MariaDB and Redis in Mesos or CoreOS

In most tutorials, presentations, and demos, only stateless services are presented; they are load balanced either via DNS (SkyDNS, skydock, etc.) or via a reverse proxy such as HAProxy or Vulcand, configured through etcd or ZooKeeper.

Is there a best practice for deploying a cluster of MariaDB and Redis using:

  1. CoreOS + fleet + Docker;

  2. Mesos + Marathon + Docker; or

  3. any other cluster management solution?

How can one configure a Redis cluster and a MariaDB (Galera) cluster when the host running the master may change?

https://github.com/sheldonh/coreos-vagrant/tree/master/redis

http://www.severalnines.com/blog/how-deploy-galera-cluster-mysql-using-docker-containers
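
There is no single established recipe for this, but a building block most of these setups share is service discovery through etcd: whichever node currently runs the master publishes its address, and proxies or clients watch that key and reconfigure themselves. A sketch with the etcdctl that ships with CoreOS (the key name is hypothetical; COREOS_PRIVATE_IPV4 comes from /etc/environment):

# on the node that currently holds the Redis (or Galera) master
etcdctl set /services/redis/master "${COREOS_PRIVATE_IPV4}:6379"

# consumers, e.g. an HAProxy config generator, read or watch the key
etcdctl get /services/redis/master
etcdctl watch /services/redis/master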


Source: (StackOverflow)

CoreOS systemd journal remote logging

I run multiple CoreOS instances on Google Compute Engine (GCE). CoreOS uses systemd's journal for logging. How can I push all logs to a remote destination? As I understand it, the systemd journal doesn't come with remote logging abilities. My current workaround looks like this:

journalctl -o short -f | ncat <addr> <port>

With https://logentries.com, using their token-based input via TCP:

journalctl -o short -f | awk '{ print "<token>", $0; fflush(); }' | ncat data.logentries.com 10000

Are there better ways?

EDIT: https://medium.com/coreos-linux-for-massive-server-deployments/defb984185c5
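
If you stay with the journalctl pipe, it can at least be made persistent by wrapping it in a unit of its own, so it starts at boot and is restarted on failure (a sketch; keep your own <addr> and <port>):

[Unit]
Description=Ship the systemd journal to a remote endpoint
After=systemd-journald.service

[Service]
ExecStart=/bin/sh -c 'journalctl -o short -f | ncat <addr> <port>'
Restart=always

[Install]
WantedBy=multi-user.target

Newer systemd releases also ship systemd-journal-upload / systemd-journal-remote for native forwarding, but whether they are present on a given CoreOS release needs checking.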


Source: (StackOverflow)

Can I clean /var/lib/docker/tmp?

My server runs CoreOS. There are a lot of files in /var/lib/docker/tmp with names like "GetV2ImageBlob998303926".

Together, the GetV2ImageBlobxxxxxxxx files take up 640 MB.

Can I remove all files in /var/lib/docker/tmp?
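
These look like leftovers from interrupted image pulls, and the directory is generally safe to empty. A cautious way to do it (stopping the daemon first is my own precaution, to avoid racing a pull that is still in progress):

du -sh /var/lib/docker/tmp          # confirm how much space the leftovers use
sudo systemctl stop docker
sudo rm -rf /var/lib/docker/tmp/*
sudo systemctl start docker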


Source: (StackOverflow)

My CoreOS/fleet-deployed service is dying and I can't tell why

I'm trying to deploy nsqlookupd using fleet on a shiny new CoreOS cluster in EC2. Here is my systemd unit file:

[Unit]
Description=nsqlookupd service
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/etc/environment
ExecStartPre=-/usr/bin/docker kill nsqlookupd
ExecStartPre=-/usr/bin/docker rm nsqlookupd
ExecStart=/usr/bin/docker run -d --name=nsqlookupd -e BROADCAST_ADDRESS=$COREOS_PUBLIC_IPV4 -p 4160:4160 -p 4161:4161 mikedewar/nsqlookupd
ExecStartPost=/usr/bin/etcdctl set /nsqlookupd_broadcast_address $COREOS_PUBLIC_IPV4
ExecStop=/usr/bin/docker stop -t 1 nsqlookupd
ExecStopPost=/usr/bin/etcdctl rm /nsqlookupd_broadcast_address

I've verified that the container works fine if I just run the ExecStart command by itself. My docker logs look like this:

~ $ docker logs nsqlookupd
2014/08/08 02:23:58 nsqlookupd v0.2.29-alpha (built w/go1.2.2)
2014/08/08 02:23:58 TCP: listening on [::]:4160
2014/08/08 02:23:58 HTTP: listening on [::]:4161

and my fleetctl journal looks like this:

$ fleetctl journal nsqlookupd.service
-- Logs begin at Sun 2014-08-03 12:49:00 UTC, end at Fri 2014-08-08 02:30:06 UTC. --
Aug 08 02:23:57 ip-10-147-9-249 systemd[1]: Starting nsqlookupd service...
Aug 08 02:23:57 ip-10-147-9-249 docker[6140]: Error response from daemon: No such container: nsqlookupd
Aug 08 02:23:57 ip-10-147-9-249 docker[6140]: 2014/08/08 02:23:57 Error: failed to kill one or more containers
Aug 08 02:23:57 ip-10-147-9-249 docker[6148]: Error response from daemon: No such container: nsqlookupd
Aug 08 02:23:57 ip-10-147-9-249 docker[6148]: 2014/08/08 02:23:57 Error: failed to remove one or more containers
Aug 08 02:23:57 ip-10-147-9-249 etcdctl[6157]: 54.198.93.169
Aug 08 02:23:57 ip-10-147-9-249 systemd[1]: Started nsqlookupd service.
Aug 08 02:23:57 ip-10-147-9-249 docker[6155]: 0fce4465f61c092541ba9d4c4e89ce13c4d6bedc096519034ed585d7adb5e0d7
Aug 08 02:23:59 ip-10-147-9-249 docker[6194]: nsqlookupd

Both of these look just fine. But the container dies quietly, and my fleetctl list-units gives:

$ fleetctl list-units
UNIT                STATE       LOAD    ACTIVE          SUB     DESC                MACHINE
nsqlookupd.service  launched    loaded  deactivating    stop    nsqlookupd service  1320802c.../10.147.9.249

Running docker images is a little worrying:

$ docker images
REPOSITORY             TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
<none>                 <none>              8ef9d8f9d18d        9 minutes ago       710 MB
mikedewar/nsqadmin     latest              432af572bda8        2 days ago          710 MB
mikedewar/nsqd         latest              00bd4e474964        2 days ago          710 MB
<none>                 <none>              adf0ed97208e        3 weeks ago         710 MB
mikedewar/nsqlookupd   latest              2219c0e783d9        3 weeks ago         710 MB
<none>                 <none>              35d2212f8932        3 weeks ago         710 MB
mikedewar/nsq          latest              f9794fe056e1        3 weeks ago         710 MB
busybox                latest              a9eb17255234        9 weeks ago         2.433 MB
zmarcantel/cassandra   latest              b1168b45b4f8        4 months ago        738 MB

as I've been updating mikedewar/nsqlookupd quite regularly over the last 3 weeks. Maybe that's when I first pushed something to Docker Hub? I'd love to be sure that the image I'm working with is the up-to-date one. I've tried docker rmi mikedewar/nsqlookupd followed by docker pull mikedewar/nsqlookupd, but the CREATED column still says it was created 3 weeks ago.

I don't know if this is useful, but the ExecStopPost=/usr/bin/etcdctl rm /nsqlookupd_broadcast_address command seems to have worked - the etcdctl log line in the fleet journal suggests I managed to set the key to my IP, but after the container dies I can't get that key from etcd.

Any help on where to look next for clues, or any ideas why this is happening would be greatly appreciated! As is probably clear I'm rather new to this sort of thing...
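
One thing that stands out in the unit above (an observation, not a confirmed diagnosis): ExecStart uses docker run -d, which detaches immediately, so the process systemd tracks exits straight away, the unit is considered finished, and ExecStop / ExecStopPost run. That would explain both the quiet death and the etcd key disappearing. Running the container in the foreground keeps the unit alive:

ExecStart=/usr/bin/docker run --name=nsqlookupd -e BROADCAST_ADDRESS=$COREOS_PUBLIC_IPV4 -p 4160:4160 -p 4161:4161 mikedewar/nsqlookupd

(i.e. the same command without -d.)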


Source: (StackOverflow)

Using rsync on windows with vagrant running a CoreOS VM

I am using a Windows 8.1 Pro PC running Vagrant and Cygwin's rsync.

I am configuring the synced folder as follows:

config.vm.synced_folder "../sharedFolder", "/vagrant_data", type: "rsync"

And when I execute vagrant up I get the following error:

C:\dev\vagrantBoxes\coreOS>vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Checking if box 'yungsang/coreos' is up to date...
==> default: Clearing any previously set forwarded ports...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
==> default: Forwarding ports...
    default: 22 => 2222 (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2222
    default: SSH username: core
    default: SSH auth method: private key
    default: Warning: Connection timeout. Retrying...
==> default: Machine booted and ready!
==> default: Rsyncing folder: /c/dev/vagrantBoxes/sharedFolder/ => /vagrant_data
There was an error when attempting to rsync a synced folder.
Please inspect the error message below for more info.

Host path: /c/dev/vagrantBoxes/sharedFolder/
Guest path: /vagrant_data
Command: rsync --verbose --archive --delete -z --copy-links --chmod=ugo=rwX --no-perms --no-owner --no-group --rsync-path sudo rsync -e ssh -p 2222 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i 'C:/Users/aaron.axisa/.vagrant.d/insecure_private_key' --exclude .vagrant/ /c/dev/vagrantBoxes/sharedFolder/ core@127.0.0.1:/vagrant_data
Error: Warning: Permanently added '[127.0.0.1]:2222' (RSA) to the list of known hosts.
rsync: change_dir "/c/dev/vagrantBoxes/sharedFolder" failed: No such file or directory (2)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at /usr/src/ports/rsync/rsync-3.0.9-1/src/rsync-3.0.9/main.c(1052) [sender=3.0.9]

I assume it is an issue with how the directory path is converted to /c/dev rather than C:\dev.
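
One way to check whether this is a path-mapping problem is to ask Cygwin how it maps the Windows path itself (cygpath ships with Cygwin):

cygpath -u 'C:\dev\vagrantBoxes\sharedFolder'
# typically prints /cygdrive/c/dev/vagrantBoxes/sharedFolder,
# whereas the rsync command above was handed /c/dev/vagrantBoxes/sharedFolder/

If the two disagree, Cygwin's rsync cannot resolve the /c/... form unless the cygdrive prefix is remapped to /, which would explain the "No such file or directory" error.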


Source: (StackOverflow)

Is it safe to use etcd across multiple data centers?

Is it safe to use etcd across multiple data centers, given that this exposes the etcd port to the public internet? Do I have to use client certificates in this case, or does etcd have some sort of authentication?
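
etcd can serve clients over TLS with client-certificate authentication; the flag names vary between etcd versions, but a v2-era invocation looks roughly like this (certificate paths and the hostname are hypothetical):

etcd --name node1 \
  --cert-file=/etc/ssl/etcd/server.pem \
  --key-file=/etc/ssl/etcd/server-key.pem \
  --ca-file=/etc/ssl/etcd/ca.pem \
  --listen-client-urls=https://0.0.0.0:2379 \
  --advertise-client-urls=https://node1.example.com:2379

Even with TLS, spanning data centers adds WAN latency to every consensus write, and exposing the ports to the public internet is usually avoided in favour of a VPN or private peering between the sites.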


Source: (StackOverflow)

Docker container logs taking all my disk space

I am running a container on a VM. By default, my container writes its logs to /var/lib/docker/containers/CONTAINER_ID/CONTAINER_ID-json.log until the disk is full.

Currently, I have to delete this file manually to keep the disk from filling up. I read that Docker 1.8 will add a parameter to rotate the logs. What would you recommend as a workaround in the meantime?
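
Two things are commonly done here (CONTAINER_ID stays as the placeholder from above, and my-image below is a placeholder too). First, truncate the log in place rather than deleting it, since deleting the file leaves the space held by Docker's open file handle. Second, once on Docker 1.8+, pass rotation options to the json-file log driver:

# immediate workaround: empty the log file without removing it
sudo truncate -s 0 /var/lib/docker/containers/CONTAINER_ID/CONTAINER_ID-json.log

# from Docker 1.8 onwards: cap the log size per container
docker run --log-driver=json-file --log-opt max-size=10m --log-opt max-file=3 my-image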


Source: (StackOverflow)