EzDevInfo.com

python-daemon

Python daemonizer for Unix, Linux and OS X

Using python, daemonizing a process

Okay, I have looked at python-daemon, and also at various other daemon-related code recipes. Are there any 'hello world' tutorials out there that can help me get started with a Python-based daemonized process?
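For orientation, a minimal 'hello world' sketch (assuming the python-daemon package is installed; the log path and interval are arbitrary choices for illustration) could look like this:

```python
import time

LOG_PATH = "/tmp/hello_daemon.log"  # arbitrary path for this sketch

def payload(path=LOG_PATH):
    # The daemon's actual work: append one line per call.
    with open(path, "a") as f:
        f.write("hello from the daemon\n")

def main():
    import daemon  # pip install python-daemon
    # DaemonContext forks, detaches from the terminal and closes stdio;
    # everything inside the with-block runs in the daemonized process.
    with daemon.DaemonContext():
        while True:
            payload()
            time.sleep(10)

if __name__ == "__main__":
    main()
```

After starting it, tail -f /tmp/hello_daemon.log should show a new line every 10 seconds while the detached process runs.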


Source: (StackOverflow)

How to fix the daemonize import error in graphite?

I am configuring a graphite monitoring system. When following the tutorial on https://gist.github.com/surjikal/2777886 I ran into the following import error:

python /opt/graphite/bin/carbon-cache.py start

Traceback (most recent call last):
  File "/opt/graphite/bin/carbon-cache.py", line 28, in <module>
    from carbon.util import run_twistd_plugin
  File "/opt/graphite/lib/carbon/util.py", line 21, in <module>
    from twisted.scripts._twistd_unix import daemonize
ImportError: cannot import name daemonize

Googling around I found several possible solutions for this issue:

1) Remove the daemonize imports from /opt/graphite/lib/carbon/util.py (https://answers.launchpad.net/graphite/+question/239063):

from time import sleep, time
from twisted.python.util import initgroups
from twisted.scripts.twistd import runApp
# from twisted.scripts._twistd_unix import daemonize
# daemonize = daemonize # Backwards compatibility

2) Use Twisted 13.1.0 instead of a newer Twisted version.

3) Install daemonize via pip and import it directly (https://www.digitalocean.com/community/tutorials/installing-and-configuring-graphite-and-statsd-on-an-ubuntu-12-04-vps):

# from twisted.scripts._twistd_unix import daemonize
import daemonize

What is the most stable and proven solution to fix this import issue in a Twisted environment?
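As a quick diagnostic, one can probe which implementation is importable on a given box (the import paths are the ones from the traceback and the linked guides; the wrapper function itself is just an illustration):

```python
def find_daemonize():
    # Newer Twisted releases removed this private helper, which is
    # exactly what breaks carbon-cache.py.
    try:
        from twisted.scripts._twistd_unix import daemonize  # noqa: F401
        return "twisted"
    except ImportError:
        pass
    # The standalone package installed via 'pip install daemonize'.
    try:
        import daemonize  # noqa: F401
        return "daemonize package"
    except ImportError:
        return None

print(find_daemonize())
```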


Source: (StackOverflow)


calling a script from daemon

I am trying to call a script from python-daemon, but it's not working. This is what I am trying to do; is it correct?

I also want to pass a random argument to that script; currently I have hard-coded it.

import daemon
import time
import subprocess
import os

def interval_monitoring():
    print "Inside interval monitoring"
    while True:
        print "its working"
#         os.system("XYZ.py 5416ce0eac3d94693cf7dbd8") Tried this too but not working
        subprocess.Popen("XYZ.py 5416ce0eac3d94693cf7dbd8", shell=False)
        time.sleep(60)
        print "condition true"




def run():
    print daemon.__file__
    with daemon.DaemonContext():
        interval_monitoring()

if __name__ == "__main__":
    run()
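For reference, subprocess.Popen with shell=False expects an argument list, and a bare "XYZ.py" is neither on PATH nor necessarily executable; a hedged sketch of the usual correction (script name and argument copied from the question) is:

```python
import subprocess
import sys

def build_command(script, argument):
    # With shell=False, pass an argv list and run the script through the
    # interpreter so it needs neither a shebang nor the executable bit.
    return [sys.executable, script, str(argument)]

def launch(script, argument):
    # Prefer an absolute script path: DaemonContext changes the working
    # directory to / by default, so relative paths stop resolving.
    return subprocess.Popen(build_command(script, argument))
```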

Source: (StackOverflow)

Paramiko inside Python Daemon causes IOError

I'm trying to execute ssh commands using paramiko from inside a python daemon process. I'm using the following implementation for the daemon: https://pypi.python.org/pypi/python-daemon/

When the program is started, pycrypto raises an IOError ('Bad file descriptor') when paramiko tries to connect. If I remove the daemon code (uncomment the last line and comment out the two above it), the SSH connection is established as expected.

The code for a short test program looks like this:

#!/usr/bin/env python2
from daemon import runner
import paramiko

class App():

    def __init__(self):
        self.stdin_path = '/dev/null'
        self.stdout_path = '/dev/tty'
        self.stderr_path = '/dev/tty'
        self.pidfile_path =  '/tmp/testdaemon.pid'
        self.pidfile_timeout = 5

    def run(self):
        ssh = paramiko.SSHClient()
        ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        ssh.load_system_host_keys()
        ssh.connect("hostname", username="username")
        ssh.close()

app = App()
daemon_runner = runner.DaemonRunner(app)
daemon_runner.do_action()
#app.run()

The trace looks like this:

Traceback (most recent call last):
  File "./daemon-test.py", line 31, in <module>
    daemon_runner.do_action()
  File "/usr/lib/python2.7/site-packages/daemon/runner.py", line 189, in do_action
    func(self)
  File "/usr/lib/python2.7/site-packages/daemon/runner.py", line 134, in _start
    self.app.run()
  File "./daemon-test.py", line 22, in run
    ssh.connect("hostname", username="username")
  File "/usr/lib/python2.7/site-packages/paramiko/client.py", line 311, in connect
    t.start_client()
  File "/usr/lib/python2.7/site-packages/paramiko/transport.py", line 460, in start_client
    Random.atfork()
  File "/usr/lib/python2.7/site-packages/Crypto/Random/__init__.py", line 37, in atfork
    _UserFriendlyRNG.reinit()
  File "/usr/lib/python2.7/site-packages/Crypto/Random/_UserFriendlyRNG.py", line 224, in reinit
    _get_singleton().reinit()
  File "/usr/lib/python2.7/site-packages/Crypto/Random/_UserFriendlyRNG.py", line 171, in reinit
    return _UserFriendlyRNG.reinit(self)
  File "/usr/lib/python2.7/site-packages/Crypto/Random/_UserFriendlyRNG.py", line 99, in reinit
    self._ec.reinit()
  File "/usr/lib/python2.7/site-packages/Crypto/Random/_UserFriendlyRNG.py", line 62, in reinit
    block = self._osrng.read(32*32)
  File "/usr/lib/python2.7/site-packages/Crypto/Random/OSRNG/rng_base.py", line 76, in read
    data = self._read(N)
  File "/usr/lib/python2.7/site-packages/Crypto/Random/OSRNG/posix.py", line 65, in _read
    d = self.__file.read(N - len(data))
IOError: [Errno 9] Bad file descriptor

I'm guessing this has something to do with the stream redirection when the daemon spawns. I've tried setting all the streams to /dev/tty, or even to a normal file, but nothing works.

When I run the program with strace I can see that something tries to close a file twice and that's when I get the error. But I couldn't find out which file the descriptor actually points to (strace shows a memory location that doesn't seem to be set anywhere).
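A workaround often suggested for this class of failure (a sketch, not verified against every paramiko/pycrypto combination) is to defer the paramiko import until after the daemon has forked, so that pycrypto opens its /dev/urandom descriptor inside the daemonized process instead of inheriting one that DaemonContext has already closed:

```python
class App(object):
    def __init__(self):
        self.stdin_path = '/dev/null'
        self.stdout_path = '/dev/tty'
        self.stderr_path = '/dev/tty'
        self.pidfile_path = '/tmp/testdaemon.pid'
        self.pidfile_timeout = 5

    def run(self):
        # Imported here, after the fork: pycrypto's RNG now opens
        # /dev/urandom with a descriptor the daemon actually owns.
        import paramiko
        ssh = paramiko.SSHClient()
        ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        ssh.load_system_host_keys()
        ssh.connect("hostname", username="username")
        ssh.close()

def main():
    from daemon import runner
    runner.DaemonRunner(App()).do_action()

if __name__ == "__main__":
    main()
```

The alternative is to add the RNG's descriptor to DaemonContext's files_preserve list, but identifying that descriptor is exactly what the strace session above struggled with.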


Source: (StackOverflow)

Python Daemon Process Memory Management

I'm currently writing a Python daemon process that monitors a log file in realtime and updates entries in a Postgresql database based on their results. The process only cares about a unique key that appears in the log file and the most recent value it's seen from that key.

I'm using a polling approach, and process a new batch every 10 seconds. To reduce the overall set of data and avoid extraneous updates to the database, I'm only storing the key and the most recent value in a dict. Depending on how much activity there has been in the last 10 seconds, this dict can vary from 10 to 1000 unique entries. Then the dict gets "processed" and those results are sent to the database.

My main concern revolves around memory management and the dict over time (days, weeks, etc.). Since this is a daemon process that's constantly running, memory usage bloats based on the size of the dict, but never shrinks appropriately. I've tried resetting the dict with a standard dereference, and calling dict.clear() after processing a batch, but noticed no changes in memory usage (FreeBSD/top). Forcing a gc.collect() does recover some memory, but usually only around 50%.

Do you guys have any advice on how I should proceed? Is there something more I could be doing in my process? Feel free to chime in if you see a different road around the issue :)
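CPython tends to hold on to allocator arenas even after dict entries are freed, which matches the top behaviour described above. One common workaround (a sketch; summarize is a hypothetical stand-in for the real per-batch processing) is to run each batch in a short-lived child process, whose memory is returned to the OS in full when it exits:

```python
import multiprocessing

def summarize(batch):
    # Hypothetical stand-in for the real work: reduce a batch to the
    # rows that must be written to the database.
    return dict(batch)

def _worker(batch, result_queue):
    result_queue.put(summarize(batch))

def process_batch(batch):
    # The child's peak memory is reclaimed by the OS on exit, so the
    # long-running daemon's footprint stays flat.
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=_worker, args=(batch, q))
    p.start()
    result = q.get()
    p.join()
    return result
```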


Source: (StackOverflow)

How to find the reason for a python daemon process dying?

I've got a daemon implemented in python using the python-daemon library.

The daemon appears to periodically die (or is killed) however, where periodically varies from one day to several months.

I've tried to find the reason for the daemon dying by catching exceptions, logging them to a file, and mailing them to me. The daemon part of my script looks roughly like:

import daemon

context = daemon.DaemonContext(
    working_directory='/foo/',
    pidfile=lockfile.FileLock('/foo/foo.pid')
)

try:
    with context:
        do_stuff()
except Exception, e:
    log_exception_to_file(e)
    mail_exeption_to_me(e)

I've had quite a few exceptions logged and mailed to me, so I know the code generally works.

For the majority of cases, I get nothing, and a watchdog script alerts me to the fact that the daemon is no longer running. Is there some way I can find out or track why the daemon is either dying or being killed?
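If the process is being killed from outside, no exception is raised, so the try/except never fires; signals have to be trapped explicitly. A sketch using python-daemon's signal_map (the list stands in for whatever logging is already in place; SIGKILL cannot be trapped, so an empty log after a death still narrows things down to kill -9 or the OOM killer):

```python
import signal

received_signals = []

def log_and_exit(signum, frame):
    # Record which signal is terminating the daemon before exiting.
    received_signals.append(signum)
    raise SystemExit(1)

def main():
    import daemon
    import lockfile
    context = daemon.DaemonContext(
        working_directory='/foo/',
        pidfile=lockfile.FileLock('/foo/foo.pid'),
        signal_map={
            signal.SIGTERM: log_and_exit,
            signal.SIGINT: log_and_exit,
            signal.SIGHUP: log_and_exit,
        },
    )
    with context:
        do_stuff()  # the question's long-running body
```

For deaths this cannot catch, dmesg is worth checking for OOM-killer entries.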


Source: (StackOverflow)

remotely start Python program in background

I need to use a fabfile to remotely start a program on remote boxes from time to time, and get the results. Since the program takes a long while to finish, I want it to run in the background so I don't have to wait. I tried os.fork(): when I ssh to the remote box and start the program with os.fork() there, it runs in the background fine, but when I try to start it remotely with fabfile's run or sudo, os.fork() doesn't work and the program just dies silently.

So I switched to python-daemon to daemonize the program. For quite a while it worked perfectly. But now that my program reads some Python shelve dicts, python-daemon no longer works. It seems that under python-daemon the shelve dicts cannot be loaded correctly, and I don't know why. Besides os.fork() and python-daemon, what else can I try to solve my problem?


Source: (StackOverflow)

Ubuntu upstart will hang on start/stop/etc

I've got several services on Ubuntu which are started using upstart. They work as intended, but when I use 'stop/start/restart {myservice}' the command hangs (although it DOES do what was requested).

I understand it has something to do with forking.

My services are Python scripts which create new threads on startup. One script creates 1 new thread (and continues running on the main thread as well), the second creates 2 new threads and also continues on the main thread, and the third creates no new threads.

All of them hang on the command.

All use the same code in /etc/init as follows:

description "my service"
version "1.0"
author "my name, 2013"

expect fork

start on runlevel [2345]
stop on runlevel [!2345]
respawn


chdir <to script dir>

exec /usr/bin/python ./scriptname/

What do you think might be the problem? Does 'fork' have anything to do with creating new threads?
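Threads are not forks: 'expect fork' tells upstart to follow exactly one fork(2) call, so if the script never forks, upstart waits for a fork that never happens and start/stop hangs while tracking the wrong pid. If the scripts really do not fork, one hedged fix is simply to drop the stanza (paths left as in the question):

```
description "my service"
version "1.0"
author "my name, 2013"

# no 'expect' stanza: the script does not fork, so upstart can track
# the exec'd python process directly

start on runlevel [2345]
stop on runlevel [!2345]
respawn

chdir <to script dir>

exec /usr/bin/python ./scriptname/
```

If a script instead daemonizes itself with a double fork, the matching stanza would be 'expect daemon'.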


Source: (StackOverflow)

python-daemon 2.0.5 won't install with pip

I am getting errors when trying to install python-daemon 2.0.5 with pip and Python 2.6. I know there are other questions that refer to python-daemon 2.0.3 having this problem, but those answers indicate it should be fixed by now.

I've tried installing older versions as well, without luck. If I start over with a fresh virtualenv, I am able to install 1.5.6. However, in this virtualenv I get the same error with both 2.0.5 and 1.5.6.

(py26)[brianb@api proj]$ pip install python-daemon
Downloading/unpacking python-daemon
 Downloading python-daemon-2.0.5.tar.gz (71Kb): 71Kb downloaded
  Running setup.py egg_info for package python-daemon
    Traceback (most recent call last):
      File "<string>", line 14, in <module>
  File "/home/brianb/py26/build/python-daemon/setup.py", line 26, in <module>
        import version
      File "version.py", line 438
        for item in versions}
      ^
    SyntaxError: invalid syntax
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):

  File "<string>", line 14, in <module>

  File "/home/brianb/py26/build/python-daemon/setup.py", line 26, in <module>

    import version

  File "version.py", line 438

    for item in versions}

  ^

SyntaxError: invalid syntax

----------------------------------------
Command python setup.py egg_info failed with error code 1
Storing complete log in /home/brianb/.pip/pip.log
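The SyntaxError points at a dict comprehension ({... for item in versions}) in version.py; dict comprehensions were only added in Python 2.7, so Python 2.6 cannot even parse the setup script. A minimal illustration (the versions data here is made up):

```python
# Hypothetical data standing in for whatever version.py iterates over.
versions = [("alpha", "0.1"), ("beta", "0.2")]

# The Python 2.7+/3.x spelling; this is the construct 2.6 rejects:
modern = {item[0]: item[1] for item in versions}

# An equivalent spelling that Python 2.6 parses fine:
legacy = dict((item[0], item[1]) for item in versions)

assert modern == legacy
```

The practical options are to run pip under Python 2.7 or newer, or to pin a release old enough to support 2.6 (the question shows 1.5.6 installing in a fresh virtualenv).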

Source: (StackOverflow)

Python-daemon vs start-stop-daemon

I am writing a daemon program in Python for use on Debian. I did some thorough research, and I have two candidate solutions left:

  • Python-daemon, a Python library
  • start-stop-daemon, a Linux command used a lot in init.d scripts

What are the pros and cons of each solution, and which one should I pick?


Source: (StackOverflow)

accessing Dictionary from different programs

I am creating dictionary out of a large file.

def make_dic():
    big_dic = {}
    for foo in open(bar):
        key, value = do_something(foo)
        big_dic[key] = value
    return big_dic

def main():
    make_dic()  # this takes time

I have to access this dictionary many times, but from completely different programs. It takes a lot of time to read this file and build the dictionary. Is it possible to make a dictionary which remains in memory even after the program that built it exits? That way I could create it once, but use it again and again from different programs...
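Keeping a dict alive after its owning process exits requires some kind of server process (e.g. a small daemon queried over a socket) or an external store such as Redis or memcached. Often, though, the expensive part is re-parsing the text file rather than loading the data, in which case building the dict once and pickling it is enough; the other programs just unpickle it, which is typically much faster. A sketch (the function names are illustrative):

```python
import pickle

def save_dic(dic, path):
    # Do the slow parse once, then persist the finished dict.
    with open(path, "wb") as f:
        pickle.dump(dic, f, protocol=pickle.HIGHEST_PROTOCOL)

def load_dic(path):
    # Every other program loads the ready-made dict in one call.
    with open(path, "rb") as f:
        return pickle.load(f)
```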


Source: (StackOverflow)

running python-daemon as a non-privileged user and keeping group memberships

I'm writing a daemon in Python, using the python-daemon package. The daemon is started at boot time (init.d) and needs to access various devices. The daemon is to run on an embedded system (BeagleBone) running Ubuntu.

Now my problem is that I want to run the daemon as an unprivileged user (e.g. mydaemon) rather than root.

In order to allow the daemon to access the devices, I added that user to the required groups. In the Python code I use daemon.DaemonContext(uid=uidofmydaemon).

The process started by root daemonizes nicely and is owned by the correct user, but I get permission-denied errors when trying to access the devices. I wrote a small test application, and it seems that the process does not inherit the group memberships of the user.

#!/usr/bin/python
import logging, daemon, os

if __name__ == '__main__':
  lh=logging.StreamHandler()
  logger = logging.getLogger()
  logger.setLevel(logging.INFO)
  logger.addHandler(lh)

  uid=1001 ## UID of the daemon user
  with daemon.DaemonContext(uid=uid,
                            files_preserve=[lh.stream],
                            stderr=lh.stream):
    logger.warn("UID : %s" % str(os.getuid()))
    logger.warn("groups: %s" % str(os.getgroups()))

When I run the above code as the user with uid=1001, I get something like:

$ ./testdaemon.py
UID: 1001
groups: [29,107,1001]

whereas when I run the above code as root (or via sudo), I get:

$ sudo ./testdaemon.py
UID: 1001
groups: [0]

How can I create a daemon process started by root but with a different effective uid and intact group memberships?
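This matches how a plain setuid() works: it changes the uid but leaves the supplementary group list untouched, so the daemon keeps root's group list instead of picking up the target user's. The usual fix is to set the group list while the process is still root, before the uid drop. A sketch (run it inside the DaemonContext body instead of passing uid=; newer python-daemon releases also accept an initgroups argument on DaemonContext, so check your version first):

```python
import grp
import os
import pwd

def supplementary_groups(username, primary_gid):
    # All gids the user belongs to, as initgroups(3) would compute them.
    gids = [g.gr_gid for g in grp.getgrall() if username in g.gr_mem]
    if primary_gid not in gids:
        gids.append(primary_gid)
    return sorted(gids)

def drop_privileges(username):
    pw = pwd.getpwnam(username)
    # Order matters: changing groups requires root, so do it before
    # giving up the uid.
    os.setgroups(supplementary_groups(pw.pw_name, pw.pw_gid))
    os.setgid(pw.pw_gid)
    os.setuid(pw.pw_uid)
```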


Source: (StackOverflow)

No shell prompt message, just a blinking cursor after starting a Python script as a daemon?

  • python-daemon-1.5.2-1.el6.noarch

Below is the script that I received from a developer:

import threading
import multiprocessing, os, signal, time, Queue
import time
from suds.client import Client
from hotqueue import HotQueue
from config import config

queue = HotQueue(config['redis_hotqueue_list'], host=config['redis_host'], port=int(config['redis_port']),password=config['redis_pass'], charset="utf-8",db=0)
@queue.worker()
def sendMail(item):    
    key = item[0]        
    domain = item[1]
    fromemail = item[2]
    fromname = item[3]
    subject = item[4]
    content = item[5]
    toemail = item[6]            
    cc = item[7]
    bcc = item[8]
    replyto = item[9]

    # Convert to string variable
    url = config['sendmail_tmdt_url']
    client = Client(url)        
    client.service.send_mail(key,domain, fromemail,subject, content, toemail,fromname, '','','');               
for i in range(10):
    t = threading.Thread(target=sendMail)
    t.setDaemon(True)
    t.start()
while True:
    time.sleep(50)

As you can see, he's using the threading module so that it can be run as a daemon.

I'm going to switch to using the daemon library, following this blog post.

Here's my first try:

from daemon import runner
import logging
import time
import threading
import multiprocessing, os, signal, time, Queue
import time
from suds.client import Client
from hotqueue import HotQueue
from config import config

class Mail():
    def __init__(self):
        self.stdin_path = '/dev/null'
        self.stdout_path = '/dev/tty'
        self.stderr_path = '/dev/tty'
        self.pidfile_path = '/var/run/sendmailworker/sendmailworker.pid'
        self.pidfile_timeout = 1

    def run(self):    
        while True:
            queue = HotQueue(config['redis_hotqueue_list'], host=config['redis_host'], port=int(config['redis_port']), password=config['redis_pass'], charset=r"utf-8", db=0)
            @queue.worker()
            def sendMail(item):
                key = item[0]        
                domain = item[1]
                fromemail = item[2]
                fromname = item[3]
                subject = item[4]
                content = item[5]
                toemail = item[6]            
                cc = item[7]
                bcc = item[8]
                replyto = item[9]

                # Convert to string variable
                url = config['sendmail_tmdt_url']
                client = Client(url)        
                client.service.send_mail(key,domain, fromemail,subject, content, toemail, fromname, '', '', '');            
                logger.debug("result")
            #sleep(50)

mail = Mail()

logger = logging.getLogger("sendmailworker")
logger.setLevel(logging.INFO)
formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
handler = logging.FileHandler("/var/log/sendmailworker/sendmailworker.log")
handler.setFormatter(formatter)
logger.addHandler(handler)

daemon_runner = runner.DaemonRunner(mail)
daemon_runner.daemon_context.files_preserve=[handler.stream]
daemon_runner.do_action()

It works, but I have to press Ctrl-C to get the shell prompt back after starting:

/etc/init.d/sendmailworker start

Starting server
# started with pid 2586
^C
#

How can I get rid of this problem?


Appending an ampersand doesn't help:

# /etc/init.d/sendmailworker start &
[1] 4094
# Starting server
started with pid 4099
^C
[1]+  Done                    /etc/init.d/sendmailworker start
#

As @Celada pointed out, I actually already had my shell prompt; it just doesn't display [root@hostname ~]# as usual, only a blinking cursor. Simply pressing Enter makes my shell prompt reappear. So the question should be: how can I make started with pid xxxxx come first, on the same line as Starting server, and then display my shell prompt?


The stop function is working fine:

[root@hostname ~]# /etc/init.d/sendmailworker stop
Stopping server
Terminating on signal 15
[root@hostname ~]# 

How can I do something similar for the start function? Something like this:

[root@hostname ~]# /etc/init.d/sendmailworker start
Starting server
started with pid 30624
[root@hostname ~]# 
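One low-tech way to get that clean transcript (a sketch; only the stream paths change relative to the code above) is to stop pointing the daemon's stdout/stderr at /dev/tty, so the runner's messages cannot interleave with the interactive prompt:

```python
class Mail(object):
    def __init__(self):
        self.stdin_path = '/dev/null'
        # Log files instead of /dev/tty: the daemon's output no longer
        # races with the shell prompt on the controlling terminal.
        self.stdout_path = '/var/log/sendmailworker/stdout.log'
        self.stderr_path = '/var/log/sendmailworker/stderr.log'
        self.pidfile_path = '/var/run/sendmailworker/sendmailworker.pid'
        self.pidfile_timeout = 1
```

The prompt is actually already there, as noted; with the tty out of the picture it simply stays visible.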

Source: (StackOverflow)

Python: terminate a multithreading program after some time using daemon thread

I want to implement a program that will terminate after running for some time t, where t is read from the command line using ArgumentParser. Currently I have the following code (omitting some details):

def run():
    parser = create_arg_parser()
    args = parser.parse_args()
    class_instance = MultiThreadClass(args.arg1, args.arg2)
    class_instance.run()

if __name__ == '__main__':
    run_thread = Thread(target=run)
    run_thread.daemon = True
    run_thread.start()
    time.sleep(3.0)

The program works as I expect (it terminates after running for 3 seconds). But as mentioned before, the running time (3.0 in the code snippet above) should come from the command line (e.g. args.arg3 = 3.0) instead of being hard-coded. Apparently I cannot put time.sleep(args.arg3) directly. I was wondering if there is any approach that could solve my problem? Answers without using a daemon thread are also welcome! Thanks.

PS. If I put the argument parsing code outside of the run function, like:

def run(args):
    class_instance = MultiThreadClass(args.arg1, args.arg2)
    class_instance.run()

if __name__ == '__main__':
    parser = create_arg_parser()
    args = parser.parse_args()
    run_thread = Thread(target=run(args))
    run_thread.daemon = True
    run_thread.start()
    time.sleep(args.arg3)

The program will not terminate after args.arg3 seconds, and I'm confused about the reason. I would also be very appreciative if anyone could explain the magic behind all of this... Thanks a lot!
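The culprit in the second snippet is Thread(target=run(args)): this calls run(args) immediately, in the main thread, and hands its return value (None) to Thread as the target, so the time.sleep line is only reached after run has already finished, or never if run loops forever. Passing the callable and its arguments separately restores the intended behaviour; a sketch with a hypothetical worker standing in for MultiThreadClass:

```python
import threading
import time

def worker(label):
    # Hypothetical stand-in for MultiThreadClass(...).run(): never returns.
    while True:
        time.sleep(0.05)

def main(timeout):
    # The callable and its arguments are handed over separately; the
    # Thread machinery performs the call, not this line.
    t = threading.Thread(target=worker, args=("job",))
    t.daemon = True  # daemon threads die when the main thread exits
    t.start()
    time.sleep(timeout)  # e.g. timeout = args.arg3 from the parser
```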


Source: (StackOverflow)

Signal handling in python-daemon

I installed python-daemon and now I'm trying to get the signal handling right. My code:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import signal, time, syslog
import daemon

def runDaemon():
    context = daemon.DaemonContext()

    context.signal_map = { signal.SIGTERM: programCleanup }

    context.open()
    with context:
        doMainProgram()

def doMainProgram():
    while True:
        syslog.syslog("pythonDaemon is running")
        time.sleep(5)

def programCleanup():
    syslog.syslog("pythonDaemon STOP")

if __name__ == "__main__":
    runDaemon()

When I start the code, everything works as expected: the text pythonDaemon is running gets written to /var/log/syslog every 5 seconds. But when I terminate the daemon with kill -TERM *PID*, the daemon is terminated but the text pythonDaemon STOP is missing from syslog.

What am I doing wrong?

NB: I am not working with from daemon import runner here, because that gives me an error (it looks like I need an older version of lockfile), and I will not fix this unless it is the only possibility to get the signal handling right.
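A likely cause (a sketch of the fix, hedged since python-daemon versions differ): entries in signal_map are installed as real signal handlers, so they are invoked with (signal_number, stack_frame). A zero-argument programCleanup therefore raises TypeError inside the handler and the process dies before reaching syslog. Giving the handler the right signature, and letting the with statement open the context (a separate context.open() is redundant), looks like this:

```python
import signal
import syslog
import time

def program_cleanup(signum, frame):
    # Invoked as a signal handler: (signum, frame) are mandatory.
    syslog.syslog("pythonDaemon STOP (signal %d)" % signum)
    raise SystemExit(0)

def run_daemon():
    import daemon  # python-daemon
    context = daemon.DaemonContext()
    context.signal_map = {signal.SIGTERM: program_cleanup}
    with context:  # the with statement opens the context
        while True:
            syslog.syslog("pythonDaemon is running")
            time.sleep(5)
```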


Source: (StackOverflow)