EzDevInfo.com

upstart interview questions

Top upstart frequently asked interview questions

Node.js upstart vs forever

I am looking to daemonize my Node.js application. What's the difference between upstart and forever? Also, are there other packages I might want to consider looking at?
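
For context, forever is a userland Node.js process monitor that you run yourself, while Upstart is Ubuntu's system init daemon, so an Upstart job comes back after reboots without extra tooling. A minimal sketch of an Upstart job for a Node.js app (job name and paths here are hypothetical) might look like:

# /etc/init/myapp.conf (hypothetical)
description "my node.js app"

start on runlevel [2345]
stop on runlevel [016]
respawn

exec /usr/local/bin/node /opt/myapp/app.js >> /var/log/myapp.log 2>&1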


Source: (StackOverflow)

Gunicorn and Django with Upstart and Nginx

First of all, I have many Django instances set up and running like this.

In each project I have a script.sh shell script that starts gunicorn etc.:

#!/bin/bash
set -e
LOGFILE=/var/log/gunicorn/app_name.log
LOGDIR=$(dirname $LOGFILE)
NUM_WORKERS=3
# user/group to run as
USER=root
GROUP=root
PORT=8060
IP=127.0.0.1
cd /var/www/webapps/app_name
source ../bin/activate
test -d $LOGDIR || mkdir -p $LOGDIR
exec /var/www/webapps/bin/gunicorn_django -b $IP:$PORT -w $NUM_WORKERS \
  --user=$USER --group=$GROUP --log-level=debug --log-file=$LOGFILE 2>>$LOGFILE

When I run this script from the command line with bash script.sh, the site works perfectly, so Nginx is set up correctly.

As soon as I start it through Upstart with service app_name start, the app starts and then immediately stops. It does not even write to the log file.

This is the app_name.conf file in /etc/init/app_name.conf :

description "Test Django instance"
start on runlevel [2345]
stop on runlevel [06]
respawn
respawn limit 10 5
exec /var/www/webapps/app_name/script.sh

So what is the problem here? Running from the command line works, but going through Upstart does not, and I don't know where to look to find out what's wrong.
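
One way to see what is going wrong is to let Upstart capture the job's output. A sketch, assuming Upstart 1.4 or newer (which supports the console log stanza); the job's stdout/stderr then lands in /var/log/upstart/app_name.log:

# /etc/init/app_name.conf -- same job, with output capture for debugging
description "Test Django instance"
start on runlevel [2345]
stop on runlevel [06]
respawn
respawn limit 10 5
console log
exec /var/www/webapps/app_name/script.sh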


Source: (StackOverflow)

How can I get an upstart script to properly manage running a docker image?

I have a local docker-registry that I'd like to manage with upstart.

I have the following script (in /etc/init/docker-registry.conf):

description "docker registry" 
author "me" 
start on filesystem and started docker 
stop on runlevel [!2345] 
respawn 
script
    /usr/bin/docker.io run -a stdout --rm --name=docker-registry \
    -v /var/local/docker-registry:/var/local/docker-registry \
    -p 5000:5000 mysite:5000/docker-registry
end script

I can start my docker registry fine with:

sudo start docker-registry

Response: docker-registry start/running, process 8620

Checking to confirm it's running:

sudo status docker-registry

Response: docker-registry start/running, process 8620

Trying to stop it with:

sudo stop docker-registry

Response: docker-registry stop/waiting

However, it doesn't actually stop. The process is still alive, the container is running, and it's still functioning perfectly.


It does stop perfectly with:

docker stop docker-registry

I've tried adding this to the upstart script:

post-stop script
    docker stop docker-registry
end script

But it just returns: stop: Job failed while stopping
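
One pattern that may help, sketched below: the job is tracking the docker.io client process rather than the container itself, so stopping the job does not necessarily stop the container. Stopping the container by name in pre-stop lets the attached client exit on its own (the container name and binary path are taken from the job above):

pre-stop script
    # Stop the container itself; the attached docker.io client in the main
    # script then exits, so Upstart can finish stopping the job cleanly.
    /usr/bin/docker.io stop docker-registry
end script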


Source: (StackOverflow)

Increase max open files for Ubuntu/Upstart (initctl)

This is on an Ubuntu 12.04.3 LTS server.

I've added the following to /etc/security/limits.conf (my Golang processes run as root):

*      hard   nofile   50000
*      soft   nofile   50000
root   hard   nofile   50000
root   soft   nofile   50000

I've added the following to /etc/pam.d/common-session

session required pam_limits.so

I've added the following to /etc/sysctl.conf:

fs.file-max = 50000

Yet when I cat /proc/{PID}/limits, I get:

Limit                     Soft Limit           Hard Limit           Units     
Max open files            1024                 4096                 files     

This happens only when I start the process from Upstart via sudo initctl start service_name. If I start the process myself, it acknowledges my settings.

How do I fix this?
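
For reference, limits.conf is applied through PAM at login, and jobs spawned by Upstart (PID 1) never go through a PAM session, so those settings are not inherited. Upstart has its own per-job limit stanza; a sketch using the 50000 value from above and a hypothetical job name:

# /etc/init/service_name.conf -- add alongside the existing stanzas
limit nofile 50000 50000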


Source: (StackOverflow)

Need help running Python app as service in Ubuntu with Upstart

I have written a logging application in Python that is meant to start at boot, but I've been unable to start the app with Ubuntu's Upstart init daemon. When run from the terminal with sudo /usr/local/greeenlog/main.pyw, the application works perfectly. Here is what I've tried for the Upstart job:

/etc/init/greeenlog.conf

# greeenlog

description     "I log stuff."

start on startup
stop on shutdown

script
    exec /usr/local/greeenlog/main.pyw
end script

My application starts one child thread, in case that is important. I've tried the job with the expect fork stanza without any change in the results. I've also tried this with sudo and without the script statements (just a lone exec statement). In all cases, after boot, running status greeenlog returns greeenlog stop/waiting and running start greeenlog returns:

start: Rejected send message, 1 matched rules; type="method_call", sender=":1.61" (uid=1000 pid=2496 comm="start) interface="com.ubuntu.Upstart0_6.Job" member="Start" error name="(unset)" requested_reply=0 destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init"))

Can anyone see what I'm doing wrong? I appreciate any help you can give. Thanks.
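
A couple of hedged observations: the "Rejected send message ... uid=1000" error is D-Bus refusing a start request from an unprivileged user, so starting the job by hand needs sudo start greeenlog. Also, start on startup fires very early in boot, before most filesystems and services are up; a sketch of a more conventional trigger, assuming nothing else in the job needs to change:

# /etc/init/greeenlog.conf -- sketch with runlevel-based triggers
description     "I log stuff."

start on runlevel [2345]
stop on runlevel [016]

exec /usr/local/greeenlog/main.pyw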


Source: (StackOverflow)

initctl too old upstart check

I am trying to do a syntax check on an upstart script using init-checkconf. However, when I run it, it returns ERROR: version of /sbin/initctl too old.

I have no idea what to do; I have tried reinstalling upstart, but nothing changes. This is being run from within a Docker container (ubuntu:14.04), which might have something to do with it.


Source: (StackOverflow)

ubuntu: start (upstart) second instance of mongodb

The standard upstart script that comes with mongodb works fine:

# Ubuntu upstart file at /etc/init/mongodb.conf

limit nofile 20000 20000

kill timeout 300 # wait 300s between SIGTERM and SIGKILL.

pre-start script
    mkdir -p /var/lib/mongodb/
    mkdir -p /var/log/mongodb/
end script

start on runlevel [2345]
stop on runlevel [06]

script
  ENABLE_MONGODB="yes"
  if [ -f /etc/default/mongodb ]; then . /etc/default/mongodb; fi
  if [ "x$ENABLE_MONGODB" = "xyes" ]; then exec start-stop-daemon --start --quiet --chuid mongodb --exec  /usr/bin/mongod -- --config /etc/mongodb.conf; fi
end script

To run a second instance of mongod, I thought I would just copy both /etc/mongodb.conf -> /etc/mongodb2.conf and /etc/init/mongodb.conf -> /etc/init/mongodb2.conf, change the standard port in the first conf file, and then adjust the script above to start with the newly created /etc/mongodb2.conf.

I can then just say start mongodb2 and the service starts... but it is killed right after starting. What do I change to get both processes up and running?

 # Ubuntu upstart file at /etc/init/mongodb2.conf

limit nofile 20000 20000

kill timeout 300 # wait 300s between SIGTERM and SIGKILL.

pre-start script
    mkdir -p /var/lib/mongodb2/
    mkdir -p /var/log/mongodb2/
end script

start on runlevel [2345]
stop on runlevel [06]

script
  ENABLE_MONGODB="yes"
  if [ -f /etc/default/mongodb ]; then . /etc/default/mongodb; fi
  if [ "x$ENABLE_MONGODB" = "xyes" ]; then exec start-stop-daemon --start --quiet --chuid mongodb --exec  /usr/bin/mongod -- --config /etc/mongodb2.conf; fi
end script
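
One likely gap, sketched here as an assumption: the copied /etc/mongodb2.conf also needs its own data directory, log file, and port, probably because the first instance already holds the lock on /var/lib/mongodb. For example:

# /etc/mongodb2.conf -- settings that must differ from the first instance
dbpath=/var/lib/mongodb2
logpath=/var/log/mongodb2/mongodb.log
logappend=true
port=27018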

Source: (StackOverflow)

How to write an Ubuntu Upstart job for Celery (django-celery) in a virtualenv

I really enjoy using upstart. I currently have upstart jobs to run different gunicorn instances in a number of virtualenvs. However, the 2-3 examples I found for Celery upstart scripts on the interwebs don't work for me.

So, with the following variables, how would I write an Upstart job to run django-celery in a virtualenv?

Path to Django Project:

/srv/projects/django_project

Path to this project's virtualenv:

/srv/environments/django_project

Path to the Celery settings, which is the Django project settings file (django-celery):

/srv/projects/django_project/settings.py

Path to the log file for this Celery instance:

/srv/logs/celery.log

For this virtual env, the user:

iamtheuser

and the group:

www-data

I want to run the Celery daemon with celerybeat, so the command I want to pass to django-admin.py (or manage.py) is:

python manage.py celeryd -B

It'll be even better if the script starts after the gunicorn job starts, and stops when the gunicorn job stops. Let's say the file for that is:

/etc/init/gunicorn.conf
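
A sketch of one possible job, using only the paths and names given above and the su-based pattern that appears elsewhere on this page; treat it as a starting point rather than a known-good recipe:

# /etc/init/celery.conf (hypothetical name)
description "django-celery worker with celerybeat"

start on started gunicorn
stop on stopping gunicorn
respawn

script
    cd /srv/projects/django_project
    exec su -s /bin/sh -c \
        'exec /srv/environments/django_project/bin/python manage.py celeryd -B >> /srv/logs/celery.log 2>&1' \
        iamtheuser
end script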

Source: (StackOverflow)

ubuntu - too many open files?

I have a websocket service. Strangely, it hits the error "too many open files", even though I have set the system configuration:

/etc/security/limits.conf
*               soft    nofile          65000
*               hard    nofile          65000

/etc/sysctl.conf
net.ipv4.ip_local_port_range = 1024 65000

ulimit -n
//output 6500

So I think my system configuration is right.

My service is managed by supervisor. Could supervisor be imposing the limit?

Checking a process started by supervisor:

cat /proc/815/limits
Max open files            1024                 4096                 files 

Checking a process started manually:

cat /proc/900/limits
Max open files            65000                 65000                 files 

The problem seems to come from managing the service with supervisor. If I restart supervisor and restart the child process, its "max open files" is correct (65000), but it is wrong (1024) when supervisor is started automatically after a system reboot.

Maybe supervisor starts too early in the boot sequence, before the system configuration takes effect?

edit:

system: ubuntu 12.04 64bit

It's not a supervisor problem: every process started automatically after a system reboot ignores the system configuration (max open files = 1024), but restarting it manually works fine.

update

Maybe the problem is:

Now the question is how to set a global nofile limit, because I don't want to set a nofile limit in every upstart script that needs it.
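
For what it's worth, there does not appear to be a single global knob for jobs spawned by init; limits.conf only reaches PAM logins. The per-job stanza, sketched below for a hypothetical /etc/init/supervisor.conf, is the usual workaround, even though it has to be repeated in each job file:

# /etc/init/supervisor.conf (hypothetical) -- per-job limit, repeated in
# each job that needs it; limits.conf does not reach init-spawned jobs.
limit nofile 65000 65000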


Source: (StackOverflow)

Upstart script for node.js app

I'm having trouble starting an Upstart script.

Here's the script (app.conf in /etc/init/):

description "node.js server"
author      "kvz"

start on startup
stop on shutdown

script
   # We found $HOME is needed. Without it, we ran into problems
   export HOME="/root"

   exec sudo -u /usr/local/bin/node \
                /var/www/vhosts/travelseguro.com/node/app.js \
                2>&1 >> /var/log/node.log
end script

When I run sudo start app, I get:

start: Unknown job: app

How can I make this work?
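
Two quick checks that may narrow this down (both tools ship with Upstart): reloading the job definitions and validating the file's syntax, since a parse error also leaves the job unknown:

sudo initctl reload-configuration    # re-read /etc/init/*.conf
init-checkconf /etc/init/app.conf    # syntax-check the job file
initctl list | grep app              # confirm the job is now registered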


Source: (StackOverflow)

Upstart node.js working directory

I'm starting Node.js with Upstart. When the app tries to access files, it cannot find them without using the full path; I need the job to run from the app's working directory.

start on startup
stop on shutdown

script
        echo $$ > /var/run/mynodeapp.pid
        exec sudo -u mynodeapp node server.js >> /var/log/mynodeapp.sys.log 2>&1
end script

pre-start script
        echo "Starting" >> /var/log/mynodeapp.sys.log
end script

pre-stop script
        rm /var/run/mynodeapp.pid
        echo "Stopping" >> /var/log/mynodeapp.sys.log
end script
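
Two possible approaches, depending on the Upstart version: newer releases (1.4 and later) have a chdir stanza, and on older ones a cd inside the script block achieves the same thing. A sketch, with /path/to/mynodeapp standing in for the real application directory:

# Option 1: chdir stanza (Upstart 1.4+)
chdir /path/to/mynodeapp

# Option 2: change directory inside the script block
script
        cd /path/to/mynodeapp
        echo $$ > /var/run/mynodeapp.pid
        exec sudo -u mynodeapp node server.js >> /var/log/mynodeapp.sys.log 2>&1
end script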

Source: (StackOverflow)

Can upstart expect/respawn be used on processes that fork more than twice?

I am using upstart to start/stop/automatically restart daemons. One of the daemons forks 4 times. The upstart cookbook states that it only supports forking twice. Is there a workaround?

How it fails

If I try to use expect daemon or expect fork, upstart uses the PID of the second fork. When I try to stop the job, nothing responds to Upstart's SIGKILL signal, and it hangs until you exhaust the PID space and loop back around. It gets worse if you add respawn: Upstart thinks the job died and immediately starts another one.

Bug acknowledged by upstream

A bug has been filed against upstart. The solutions presented are to stick with the old sysvinit, rewrite your daemon, or wait for a rewrite. RHEL is close to two years behind the latest upstart package, so by the time the rewrite is released and we get the update, the wait will probably be four years. The daemon is written by a subcontractor of a subcontractor of a contractor, so it will not be fixed any time soon either.


Source: (StackOverflow)

How to use foreman to export to upstart?

I am trying to export my application to another process management format/system (specifically, upstart). In doing so, I have come across a number of roadblocks, mostly due to a lack of documentation.

As a non-root user, I ran the following command (as shown here):

-bash> foreman export upstart /etc/init
ERROR: Could not create: /etc/init

I "could not create" the directory due to inadequate permissions, so I used sudo:

-bash> sudo foreman export upstart /etc/init
Password:
ERROR: Could not chown /var/log/app to app

I "could not chown... to app" because there is no user named app.

Where is app coming from?

How should I use foreman to export to upstart?
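
If it helps, foreman's exporter accepts flags for the app name, user, and log directory, which sidesteps the default app name and the chown of /var/log/app. A hedged example (the app and user names here are made up):

sudo foreman export upstart /etc/init -a myapp -u deploy -l /var/log/myapp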


Source: (StackOverflow)

Setting memory consumption limits with Upstart

I've recently become quite fond of Upstart. Previously I've been using God, Monit, and Bluepill, but I don't really like those solutions, so I'm giving Upstart a try.

I've been using the Foreman gem to generate some basic Upstart configuration files for my processes in /etc/init. However, these generated files only handle respawning a crashed process. I was wondering whether it's possible to tell Upstart to restart a process that is consuming, for example, more than 150 MB of memory, as you would with Monit, God, or Bluepill.

I read through the Upstart docs, and this looks like what I'm looking for, though I have no clue how to configure something like it.

What I basically want is quite simple: I want to restart my web process if its memory usage exceeds 150 MB of RAM. These are the files I have:

|-- myapp-web-1.conf
|-- myapp-web-2.conf
|-- myapp-web-3.conf
|-- myapp-web.conf
|-- myapp.conf

And their contents are:

myapp.conf

pre-start script

bash << "EOF"
  mkdir -p /var/log/myapp
  chown -R deployer /var/log/myapp
EOF

end script

myapp-web.conf

start on starting myapp
stop on stopping myapp

myapp-web-1.conf / myapp-web-2.conf / myapp-web-3.conf

start on starting myapp-web
stop on stopping myapp-web
respawn

exec su - deployer -c 'cd /var/applications/releases/20110607140607; cd myapp && bundle exec unicorn -p $PORT >> /var/log/myapp/web-1.log 2>&1'

Any help much appreciated!
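
One thing worth noting, as an assumption about how far Upstart goes: its limit stanza applies kernel rlimits at exec time, so it can cap memory, but it does not monitor usage and restart the way Monit, God, or Bluepill do. A rough sketch, assuming an Upstart version whose limit stanza accepts the as (address space) resource:

# Hypothetical addition to myapp-web-1.conf: hard-cap the address space at
# roughly 150 MB via setrlimit. Allocations beyond the cap fail (likely
# crashing the worker, which respawn then restarts); Upstart itself does
# not watch memory usage over time.
limit as 157286400 157286400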


Source: (StackOverflow)

Starting multiple upstart instances automatically

We use PHP Gearman workers to run various tasks in parallel. Everything works just fine, and I have a silly little shell script to spin them up when I want them. Being a programmer (and therefore lazy), I wanted to see if I could spin these up via an upstart script.

I figured out how to use the instance stanza, so I could start them with an instance number:

description "Async insert workers"
author      "Mike Grunder"

env SCRIPT_PATH="/path/to/my/script"

instance $N

script
    php $SCRIPT_PATH/worker.php
end script

And this works great; I can start them like so:

sudo start async-worker N=1
sudo start async-worker N=2

The way I want to use these workers is to spin up some number of them (maybe one per core, etc), and I would like to do this on startup. To be clear, I don't need the upstart script to detect the number of cores. I'm happy to just say "do 8 instances", but that's why I want multiple running. Is there a way for me to use the "start on" clause in an upstart script to do this automatically?

For example, start instances 1, 2, 3, and 4, then have them exit properly on shutdown?

I suppose I could hook this into an init.d script, but I was wondering if upstart can handle something like this, or if anyone has figured out this issue.

Cheers guys!
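
One common pattern, sketched under the assumption of a fixed worker count: a small wrapper job whose only purpose is to start and stop the numbered instances. The wrapper name here (async-workers) is made up:

# /etc/init/async-workers.conf (hypothetical wrapper job)
description "Spin up a fixed set of async-worker instances"

start on runlevel [2345]
stop on runlevel [06]

pre-start script
    for i in 1 2 3 4 5 6 7 8; do
        start async-worker N=$i || true
    done
end script

post-stop script
    for i in 1 2 3 4 5 6 7 8; do
        stop async-worker N=$i || true
    done
end script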


Source: (StackOverflow)