EzDevInfo.com

flower

Flower is a real-time web-based monitoring and administration tool for Celery.

How to configure Celery to send email alerts when tasks fail?

How can Celery be configured to send email alerts when tasks are failing?

For example I want Celery to notify me when more than 3 tasks fail or more than 10 tasks are being retried.

Is this possible using Celery or a utility (e.g. Flower), or do I have to write my own plugin?
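
For reference, a minimal sketch using the error-email settings Celery 3.x ships with (addresses and SMTP host are hypothetical placeholders); note these mail ADMINS on every failure, so a threshold like "more than 3 failures" would still need custom logic, e.g. a task_failure signal handler that keeps a counter:

# celeryconfig.py (sketch)
CELERY_SEND_TASK_ERROR_EMAILS = True   # email ADMINS whenever a task fails
ADMINS = (
    ('Ops', 'ops@example.com'),
)
SERVER_EMAIL = 'celery@example.com'    # From: address for the alerts
EMAIL_HOST = 'localhost'               # SMTP server
EMAIL_PORT = 25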


Source: (StackOverflow)

How to use environment variables in supervisord commands

How can I use an environment variable in a supervisord command? I tried:

flower --broker=$MYVAR

but it doesn't work (the variable is not expanded), so I tried using an inline Python script:

command=python -c "import os;os.system('flower --broker={0}'.format(os.environ['MYVAR']))"

The command above works, but then I'm unable to terminate the process using supervisorctl stop ... I get "stopped" back, but the process is actually still running! How can I solve this? (I don't want to put that parameter inline.)
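
For reference: supervisord does not run commands through a shell, so $MYVAR is never expanded, but supervisord 3.0+ interpolates %(ENV_VAR)s in the command line itself. A minimal sketch, assuming MYVAR is present in supervisord's own environment at startup:

[program:flower]
command=flower --broker=%(ENV_MYVAR)s

This also removes the os.system() wrapper, which is what breaks supervisorctl stop: supervisor kills the parent python process while the flower child keeps running.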


Source: (StackOverflow)


Supervisord using environment variables in command

My supervisor configuration file

environment=USER=%(ENV_FLOWER_USER_NAME),PASS=%(ENV_FLOWER_PASSWORD)
command=/usr/local/opt/python/bin/flower --basic_auth=%(ENV_USER}:%(ENV_PASS)

When I start supervisord, I receive the following error

Restarting supervisor: Error: Format string 'USER=%(ENV_FLOWER_USER_NAME),PASS=%(ENV_FLOWER_PASSWORD)' for 'environment' is badly formatted

Any ideas?
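
For reference: supervisord expands these placeholders with Python string interpolation, so each one needs a trailing type character (normally s) and matching parentheses; the snippet above is missing the s on every placeholder and closes one with } instead of ). A corrected sketch, assuming FLOWER_USER_NAME and FLOWER_PASSWORD are set in supervisord's own startup environment:

environment=USER=%(ENV_FLOWER_USER_NAME)s,PASS=%(ENV_FLOWER_PASSWORD)s
command=/usr/local/opt/python/bin/flower --basic_auth=%(ENV_FLOWER_USER_NAME)s:%(ENV_FLOWER_PASSWORD)s

Note that %(ENV_x)s reads supervisord's startup environment, not values set on the environment= line, which is why the command references the FLOWER_* variables directly.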


Source: (StackOverflow)

Correct setup for multiple web sites with nginx, django and celery

I'm trying to find some information on the correct way of setting up multiple Django sites on a Linode (Ubuntu 12.04.3 LTS, GNU/Linux 3.9.3-x86_64-linode33 x86_64).

Here is what I have now:

Webserver: nginx

Every site is contained in a .virtualenv

Django and other packages are installed using pip in each .virtualenv

RabbitMQ is installed using sudo apt-get install rabbitmq-server, and a new user and vhost are created for each site.

Each site is started using a supervisor script:

[group:<SITENAME>]
programs=<SITENAME>-gunicorn, <SITENAME>-celeryd, <SITENAME>-celerycam


[program:<SITENAME>-gunicorn]
directory = /home/<USER>/.virtualenvs/<SITENAME>/<PROJECT>/
command=/home/<USER>/.virtualenvs/<SITENAME>/bin/gunicorn <PROJECT>.wsgi:application -c /home/<USER>/.virtualenvs/<SITENAME>/<PROJECT>/server_conf/<SITENAME>-gunicorn.py

user=<USER>
autostart = true
autorestart = true
stderr_events_enabled = true
redirect_stderr = true
logfile_maxbytes=5MB


[program:<SITENAME>-celeryd]
directory=/home/<USER>/.virtualenvs/<SITENAME>/<PROJECT>/
command=/home/<USER>/.virtualenvs/<SITENAME>/bin/python /home/<USER>/.virtualenvs/<SITENAME>/<PROJECT>/manage.py celery worker -E -n <SITENAME> --broker=amqp://<SITENAME>:<SITENAME>@localhost:5672//<SITENAME> --loglevel=ERROR
environment=HOME='/home/<USER>/.virtualenvs/<SITENAME>/<PROJECT>/',DJANGO_SETTINGS_MODULE='<PROJECT>.settings.staging'

user=<USER>
autostart=true
autorestart=true
startsecs=10
stopwaitsecs = 600

[program:<SITENAME>-celerycam]
directory=/home/<USER>/.virtualenvs/<SITENAME>/<PROJECT>/
command=/home/<USER>/.virtualenvs/<SITENAME>/bin/python /home/<USER>/.virtualenvs/<SITENAME>/<PROJECT>/manage.py celerycam
environment=HOME='/home/<USER>/.virtualenvs/<SITENAME>/<PROJECT>/',DJANGO_SETTINGS_MODULE='<PROJECT>.settings.staging'

user=<USER>
autostart=true
autorestart=true
startsecs=10

Question 1: Is this the correct way, or is there a better way to do this?

Question 2: I have tried to install celery flower, but how does that work with multiple sites? Do I need to install one flower package in each .virtualenv, or can I use one installation for all sites? How do I set up nginx to serve the flower page(s) on my server?
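
For what it's worth, a common sketch: one Flower instance per site (installed in each .virtualenv, so each site's dependencies stay isolated), each bound to its own port and URL prefix, then proxied by nginx. The port and prefix here are hypothetical:

/home/<USER>/.virtualenvs/<SITENAME>/bin/celery flower --broker=amqp://<SITENAME>:<SITENAME>@localhost:5672//<SITENAME> --port=5555 --url_prefix=flower

# nginx server block for the site (sketch)
location /flower/ {
    proxy_pass http://127.0.0.1:5555;
    proxy_set_header Host $host;
}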


Source: (StackOverflow)

Worker always offline in Celery flower

I have Celery and Flower running on my server and the tasks run just fine and are correctly registered and updated for me to monitor within the Flower UI. However, the worker status is always Offline, no matter whether I restart the workers or Flower itself, and my log file (as given by the --log_file_prefix option) is empty, so no errors, nothing.

The only thing I can see is that the Chrome dev tools show a WebSocket handshake error, along with the message "CAUTION: Provisional headers are shown".

I read that I need to make my server to respond with the Upgrade: websocket and Connection: upgrade headers for the Websocket handshake to be successful. I'm using apache, then I tried so by specifying the following in /etc/apache2/sites-enabled/mysite.conf:

Header set Upgrade "websocket"
Header set Connection "upgrade"

but it didn't work.

Does anyone have a clue on this error? Let me know if you need any more info.

Thanks!!
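
A hedged aside: Header set only stamps static headers onto responses; it cannot perform the WebSocket upgrade itself. Apache 2.4+ can proxy the handshake with mod_proxy_wstunnel, along these lines (the paths are placeholders, and the exact ws:// path depends on the Flower version, so check the requests in dev tools):

# a2enmod proxy proxy_http proxy_wstunnel
ProxyPass        /flower/ws  ws://127.0.0.1:5555/ws
ProxyPassReverse /flower/ws  ws://127.0.0.1:5555/ws
ProxyPass        /flower     http://127.0.0.1:5555/flower
ProxyPassReverse /flower     http://127.0.0.1:5555/flower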


Source: (StackOverflow)

Celery - Activate a task via command line or HTTP requests

I have a predefined celery task in my code, say my_proj.tasks.my_celery_task

I want to activate the task via the command line/HTTP request (not via my application).

I searched the documentation (I saw the Flower and curl options), but there isn't a really good example of calling a predefined task there. How can I achieve this?
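
For reference, two minimal sketches, assuming a running worker and Flower on localhost:5555; the arguments are hypothetical (omit them for a no-argument task). From the command line, celery call queues a registered task by name:

celery -A my_proj call my_proj.tasks.my_celery_task --args='[1, 2]'

Over HTTP, Flower's async-apply endpoint does the same:

curl -X POST -d '{"args": [1, 2]}' http://localhost:5555/api/task/async-apply/my_proj.tasks.my_celery_task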


Source: (StackOverflow)

Celery Flower for several projects

I have several projects on one server, which use the celery package with different BROKER_URLs. Flower accepts one BROKER_URL as a command option:

celery flower --broker=amqp://guest:guest@localhost:5672//

How can I run one Flower process for all brokers?
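
As far as I know, a Flower instance monitors exactly one broker, so the usual sketch is one instance per project, each on its own port (the ports here are hypothetical):

celery flower --broker=amqp://guest:guest@localhost:5672/vhost_a --port=5555
celery flower --broker=amqp://guest:guest@localhost:5672/vhost_b --port=5556

A front-end proxy such as nginx can then expose them under one hostname with distinct URL prefixes.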


Source: (StackOverflow)

What methods are available in the Flower HTTP API?

I want to use the Flower HTTP API to monitor Celery, but I can't seem to find any documentation of the available REST methods, other than the few examples in the README. Can anyone point me in the right direction, or is reading the source code the only option?
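
Pending proper docs, a few endpoints visible in the README and source, as a non-exhaustive sketch (localhost:5555 assumed):

curl http://localhost:5555/api/workers                 # worker list and status
curl http://localhost:5555/api/tasks                   # tasks seen by Flower
curl http://localhost:5555/api/task/info/<task-id>     # details of one task
curl -X POST -d '{"args": [1, 2]}' http://localhost:5555/api/task/async-apply/<task-name>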


Source: (StackOverflow)

Celery Flower - how can I load previously captured tasks?

I started using Celery Flower for task monitoring and it is working like a charm. I have one concern, though: how can I "reload" info about monitored tasks after a Flower restart? I use Redis as a broker, and I need to be able to check on tasks even in case of an unexpected restart of the service (or the server).

Thanks in advance
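
For reference, a minimal sketch: Flower can persist its captured event state to disk so it survives a restart (the db path is a hypothetical placeholder):

celery flower --broker=redis://localhost:6379/0 --persistent=True --db=/var/lib/flower/flower.db

Note this only restores what Flower itself had captured; events fired while Flower was down are still lost.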


Source: (StackOverflow)

How to view all graphs in Celery Flower Monitor tab

I am running Celery 3.1.16 with a RabbitMQ 3.4.1 back end and using Flower 0.7.3 on Python 3.4 to monitor my Celery tasks. I have several tasks running and I can view their results in the Tasks tab of Celery Flower.

In the Monitor tab, there are 4 sections: succeeded tasks, failed tasks, task times, and broker. Of these 4, only the Broker view shows a 'traffic' graph. Is there a setting to make the other graphs show some statistics?

flowerconfig.py

# Broker settings
BROKER_URL = 'amqp://guest:guest@localhost:5672//'

# RabbitMQ management api
broker_api = 'http://guest:guest@localhost:15672/api/'

#Port
port = 5555

# Enable debug logging
logging = 'INFO'

Supervisor: flower.conf

[program:flower]
command=/opt/apps/venv/my_app/bin/celery flower --app=celery_conf.celeryapp --conf=flowerconfig
directory=/opt/apps/my_app/celery_conf
user=www-data
autostart=true
autorestart=true
startsecs=10
redirect_stderr=true
stderr_logfile=/var/log/celery/flower.err.log
stdout_logfile=/var/log/celery/flower.out.log

While we're at it: in the Broker graph, I have two queues, one green and the other red. However, only the red one shows in the graph, yet both are running and I can view their results from the Tasks window.

I've noticed something peculiar in the Config tab under the Workers tab in Flower. CELERY_ROUTES and CELERY_QUEUES are showing as empty lists, while all other fields look like they picked the correct data out of the celeryconfig file:

BROKER_URL  amqp://guest:********@localhost:5672//
CELERYBEAT_SCHEDULE {}
CELERYD_PREFETCH_MULTIPLIER 0
CELERY_ALWAYS_EAGER False
CELERY_AMQP_TASK_RESULT_EXPIRES 60
CELERY_CREATE_MISSING_QUEUES    False
CELERY_DEFAULT_EXCHANGE default
CELERY_DEFAULT_QUEUE    default
CELERY_DEFAULT_ROUTING_KEY  ********
CELERY_IMPORTS  ['student.admission', 'student.schedule']
CELERY_INCLUDE  ['celery.app.builtins', 'student.schedule', 'student.admission']
CELERY_QUEUES   [{}, {}, {}, {}, {}]     #<==== Should it show an empty list?
CELERY_RESULT_BACKEND   amqp://guest:guest@localhost:5672//
CELERY_ROUTES   [{}, {}, {}, {}]     #<==== Should it show an empty list?
CELERY_STORE_ERRORS_EVEN_IF_IGNORED True
CELERY_TASK_RESULT_EXPIRES  3600

The celeryconfig.py looks like below:

from kombu import Exchange, Queue  # needed for CELERY_QUEUES below

BROKER_URL = 'amqp://guest:guest@localhost:5672//'
CELERY_RESULT_BACKEND = 'amqp://guest:guest@localhost:5672//'

#Task settings
CELERY_TASK_RESULT_EXPIRES = 3600
CELERY_AMQP_TASK_RESULT_EXPIRES = 60
CELERYD_PREFETCH_MULTIPLIER = 0 
CELERY_ALWAYS_EAGER = False
CELERY_CREATE_MISSING_QUEUES = False
CELERY_STORE_ERRORS_EVEN_IF_IGNORED = True

#Scripts to be imported 
CELERY_IMPORTS=('student.admission', 'student.schedule')

#Celery Exchanges, Queues, Routes
default_exchange = Exchange('default', type='direct')
student_admission_exchange = Exchange('student_admission_exchange', type='direct', durable=False)

CELERY_QUEUES = (
    Queue('default', default_exchange, routing_key='default'),
    Queue('student_admission_queue', student_admission_exchange, routing_key='admission', durable=False),
)
CELERY_ROUTES = (
    {'student.admission.admit': {'queue': 'student_admission_queue', 'routing_key': 'admission'}},
)
CELERY_DEFAULT_QUEUE = 'default'
CELERY_DEFAULT_EXCHANGE = 'default'
CELERY_DEFAULT_ROUTING_KEY = 'default'

Edit

As it seems I am not the only one stuck on this, I thought I'd include a screenshot of the "missing" graphs as a guide.

Celery: Uncharted Graphs
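
One hedged lead: the Succeeded/Failed/Times graphs are drawn from Celery task events, while the Broker graph is fed by the RabbitMQ management API, which would explain why only Broker charts anything. Task events must be enabled on the worker side:

celery -A celery_conf.celeryapp worker -E --loglevel=INFO

# and, for task-sent statistics, in celeryconfig.py:
CELERY_SEND_TASK_SENT_EVENT = True

The app name mirrors the supervisor config above; check your actual worker command for the -E flag.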


Source: (StackOverflow)

"Unknown task" error in Celery Flower when posting a new task

I'm running celery 3.1.11 and flower 0.6.0 .

I have a celery application configured as such;

# myapp.tasks.celery.py
from __future__ import absolute_import    

from celery import Celery


class Config(object):
    BROKER_URL = 'amqp://'
    CELERY_RESULT_BACKEND = 'amqp'

    CELERY_TASK_RESULT_EXPIRES = None
    CELERY_RESULT_SERIALIZER = 'json'
    CELERY_INCLUDE = [
        'myapp.tasks.source',
        'myapp.tasks.page',
        'myapp.tasks.diffusion',
        'myapp.tasks.place',
    ]

celery = Celery('myapp')
celery.config_from_object(Config)    


if __name__ == '__main__':
    celery.start()

I execute the celery worker using the following command:

$ celery -A myapp.tasks worker --loglevel=INFO -E -Q celery

I can see the complete list of available tasks in the worker output.

[tasks]
  ...    
  . myapp.tasks.diffusion.post_activity
  ...

I then execute the flower server with the following command:

$ celery -A myapp.tasks flower

Now, whenever I try to post a new task via the Flower REST API, I get a 404 error with an error message "Unknown task TASK_NAME".

[W 140423 12:16:17 web:1302] 404 POST /api/task/async-apply/myapp.tasks.diffusion.post_activity (82.225.61.194): Unknown task 'myapp.tasks.diffusion.post_activity'
[W 140423 12:16:17 web:1728] 404 POST /api/task/async-apply/myapp.tasks.diffusion.post_activity (82.225.61.194) 4.68ms

I've put a pdb breakpoint in the Flower API handler, and it appears that the only tasks available when the request is handled are the following:

ipdb> pp celery.tasks
{'celery.backend_cleanup': <@task: celery.backend_cleanup of yoda.tasks.celery:0x7fb9191eb490>,
 'celery.chain': <@task: celery.chain of yoda.tasks.celery:0x7fb9191eb490>,
 'celery.chord': <@task: celery.chord of yoda.tasks.celery:0x7fb9191eb490>,
 'celery.chord_unlock': <@task: celery.chord_unlock of yoda.tasks.celery:0x7fb9191eb490>,
 'celery.chunks': <@task: celery.chunks of yoda.tasks.celery:0x7fb9191eb490>,
 'celery.group': <@task: celery.group of yoda.tasks.celery:0x7fb9191eb490>,
 'celery.map': <@task: celery.map of yoda.tasks.celery:0x7fb9191eb490>,
 'celery.starmap': <@task: celery.starmap of yoda.tasks.celery:0x7fb9191eb490>}

No tasks seem to be available. However, when I use the task's apply_async() method in a shell, the task is executed by the worker.

Any idea what I'm doing wrong? Thank you!

Edit: with celery 3.0.19 and flower 0.5.0, it works seamlessly.
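
A hedged observation based on the pdb output: Flower 0.7 appears to validate the posted name against the task registry of its own app instance, so the CELERY_INCLUDE modules must be imported in Flower's process too, not only in the worker. Pointing Flower at the module that actually defines the app may be enough:

celery -A myapp.tasks.celery flower

If the registry stays bare, the next thing to check is that those include modules import cleanly from Flower's working directory.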


Source: (StackOverflow)

Bitbucket git repository printed strange colored flower after push. WTH? [duplicate]

[screenshot: colored ASCII-art flower printed in the push output]

When I pushed a tag named v2.4.4 to Bitbucket, it gave me this response. WTH is this??

Everything was fine, no error, push accepted and new tag was created.


Source: (StackOverflow)

Celery and Flower: nothing in broker tab

I'm trying to configure Flower, Celery's monitoring tool. This works OK overall, but I cannot see anything under the Broker tab. I can see stuff under the Workers, Tasks, and Monitor tabs, and the graphs are updating. I'm using the following to start Flower:

celery flower --broker=amqp://<username>:<password>@<ipaddress>:5672/vhost_ubuntu --broker_api=http://<username>:<password>@<ipaddress>:15672/api

The relevant error message I'm receiving is: Unable to get broker info: 401 Client Error: Unauthorized

I can log in to the RabbitMQ management UI via http://:15672/ with username guest and password guest.

Any ideas as to why I can't see the messages under the broker tab?
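
A hedged check: the 401 comes from the management API behind --broker_api, so first confirm those exact credentials work against it directly:

curl -u <username>:<password> http://<ipaddress>:15672/api/overview

Also worth knowing: RabbitMQ 3.3+ only lets the default guest user log in from localhost, so guest/guest succeeding in a local browser while Flower connects via the machine's IP would produce exactly this 401.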


Source: (StackOverflow)

monitor celery queue pending tasks with or without flower

I am trying to monitor a Celery queue so that if the number of tasks in a queue increases, I can choose to spawn more workers.

How can I do this, with or without Flower (the Celery monitoring tool)?

For example, I can get a list of all the workers like this:

curl -X GET http://localhost:5555/api/workers

{
    "celery@ip-172-0-0-1": {
        "status": true,
        "queues": [
            "tasks"
        ],
        "running_tasks": 0,
        "completed_tasks": 0,
        "concurrency": 1
    },
    "celery@ip-172-0-0-2": {
        "status": true,
        "queues": [
            "tasks"
        ],
        "running_tasks": 0,
        "completed_tasks": 5,
        "concurrency": 1
    },
    "celery@ip-172-0-0-3": {
        "status": true,
        "queues": [
            "tasks"
        ],
        "running_tasks": 0,
        "completed_tasks": 5,
        "concurrency": 1
    }
}

Similarly, I need a list of pending tasks by queue name so I can start a worker on that queue.

Thanks for not down voting this question.
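
For the without-Flower route, a sketch against the RabbitMQ management API (assumes the management plugin on port 15672 and the requests library; credentials are placeholders):

import requests

def pending_by_queue(host='localhost', user='guest', password='guest'):
    # messages_ready = messages waiting in the queue, not yet delivered to any worker
    resp = requests.get('http://{0}:15672/api/queues'.format(host),
                        auth=(user, password))
    resp.raise_for_status()
    return dict((q['name'], q['messages_ready']) for q in resp.json())

print(pending_by_queue())

A count above your threshold for a given queue is the signal to spawn another worker on that queue.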


Source: (StackOverflow)

Deploying Flower to Heroku

I'm following the instructions at https://github.com/jorilallo/celery-flower-heroku to deploy the Flower Celery monitoring app to Heroku.

After configuring and deploying my app, I see the following in the Heroku logs:

Traceback (most recent call last):
  File "/app/.heroku/python/bin/flower", line 9, in <module>
    load_entry_point('flower==0.7.0', 'console_scripts', 'flower')()
  File "/app/.heroku/python/lib/python2.7/site-packages/flower/__main__.py", line 11, in main
    flower.execute_from_commandline()
  File "/app/.heroku/python/lib/python2.7/site-packages/celery/bin/base.py", line 306, in execute_from_commandline
    return self.handle_argv(self.prog_name, argv[1:])
  File "/app/.heroku/python/lib/python2.7/site-packages/flower/command.py", line 99, in handle_argv
    return self.run_from_argv(prog_name, argv)
  File "/app/.heroku/python/lib/python2.7/site-packages/flower/command.py", line 75, in run_from_argv
    **app_settings)
  File "/app/.heroku/python/lib/python2.7/site-packages/flower/app.py", line 40, in __init__
    max_tasks_in_memory=max_tasks)
  File "/app/.heroku/python/lib/python2.7/site-packages/flower/events.py", line 60, in __init__
    state = shelve.open(self._db)
  File "/app/.heroku/python/lib/python2.7/shelve.py", line 239, in open
    return DbfilenameShelf(filename, flag, protocol, writeback)
  File "/app/.heroku/python/lib/python2.7/shelve.py", line 223, in __init__
    Shelf.__init__(self, anydbm.open(filename, flag), protocol, writeback)
  File "/app/.heroku/python/lib/python2.7/anydbm.py", line 85, in open
    return mod.open(file, flag, mode)
  File "/app/.heroku/python/lib/python2.7/dumbdbm.py", line 250, in open
    return _Database(file, mode)
  File "/app/.heroku/python/lib/python2.7/dumbdbm.py", line 71, in __init__
    f = _open(self._datfile, 'w')
IOError: [Errno 2] No such file or directory: 'postgres://USERNAME:PASSWORD@ec2-HOST.compute-1.amazonaws.com:5432/DBNAME.dat'

Notice the .dat suffix there? I have no idea where it comes from; it's not present in my DATABASE_URL env variable.

Furthermore, the error above is with Flower 0.7. I also tried installing 0.6, with which I do get further (namely, the DB is correctly recognized and a connection established), but I then get the following warnings once Flower starts:

2014-06-19T15:14:02.464424+00:00 app[web.1]: [E 140619 15:14:02 state:138] Failed to inspect workers: '[Errno 104] Connection reset by peer', trying again in 128 seconds
2014-06-19T15:14:02.464844+00:00 app[web.1]: [E 140619 15:14:02 events:103] Failed to capture events: '[Errno 104] Connection reset by peer', trying again in 128 seconds.

Loading flower in my browser does show a few tabs of stuff, but there is no data.

How do I resolve these issues?
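
On the .dat mystery, the traceback is the hint: Flower 0.7's persistence stores event state through shelve, and Python's dumbdbm backend appends .dat to whatever filename it gets, so something in the setup is passing DATABASE_URL to Flower as its --db path. A hedged workaround sketch: point --db at a real file path, or disable persistence (the broker env var name is a guess; use whatever your Heroku add-on provides):

flower --broker=$CLOUDAMQP_URL --persistent=False

The later [Errno 104] Connection reset by peer errors suggest the broker side is dropping Flower's connections, which is a separate issue from the shelve path.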


Source: (StackOverflow)