gevent
Coroutine-based concurrency library for Python
I have a process that needs to perform a bunch of actions "later" (usually after 10-60 seconds). The problem is that there can be a lot of those "later" actions (thousands), so using a thread per task is not viable. I know of tools like gevent and eventlet, but one problem is that the process uses ZeroMQ for communication, so I would need some integration (eventlet already has it).
What I'm wondering is: what are my options? Suggestions are welcome along the lines of libraries (if you've used any of the ones mentioned, please share your experiences), techniques (Python's coroutine support, one thread that sleeps for a while and checks a queue), ways to use ZeroMQ's poll or event loop to do the job, or something else.
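One of the techniques mentioned above, a single thread that sleeps and works through a queue of due actions, can be sketched with the standard library's sched module; the delays and the recorded actions here are made up for illustration:

```python
import sched
import time

# One scheduler drives many pending "later" actions from a single
# thread, instead of one thread per task.
s = sched.scheduler(time.time, time.sleep)

fired = []
for i in range(3):
    # In the real process the delay would be 10-60 seconds; short
    # delays keep the sketch quick.
    s.enter(0.01 * (i + 1), 1, fired.append, argument=(i,))

s.run()  # blocks until every scheduled action has fired
print(fired)  # -> [0, 1, 2]
```

With thousands of pending actions this stays a single thread; the scheduler simply sleeps until the next deadline.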
Source: (StackOverflow)
I've been using the Python requests library for some time, and recently had a need to make a request asynchronously, meaning I would like to send off the HTTP request, have my main thread continue to execute, and have a callback called when the request returns.
Naturally, I was led to the grequests library (https://github.com/kennethreitz/grequests), but I'm confused about the behavior. For example:
import grequests

def print_res(res):
    from pprint import pprint
    pprint(vars(res))

req = grequests.get('http://www.codehenge.net/blog', hooks=dict(response=print_res))
res = grequests.map([req])

for i in range(10):
    print i
The above code will produce the following output:
<...large HTTP response output...>
0
1
2
3
4
5
6
7
8
9
The grequests.map() call obviously blocks until the HTTP response is available. It seems likely I misunderstood the 'asynchronous' behavior here, and the grequests library is just for performing multiple HTTP requests concurrently and sending all responses to a single callback. Is this accurate?
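For comparison, the behavior being asked for (send the request, keep the main thread running, run a callback on completion) can be sketched with the standard library's concurrent.futures; the fetch function here is a made-up stand-in for a real HTTP call, not grequests' API:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    time.sleep(0.05)  # stand-in for the network latency of a request
    return 'response for %s' % url

results = []

with ThreadPoolExecutor(max_workers=2) as pool:
    future = pool.submit(fetch, 'http://www.codehenge.net/blog')
    # The callback fires when the "request" completes, while the main
    # thread keeps executing the loop below.
    future.add_done_callback(lambda f: results.append(f.result()))
    for i in range(3):
        print(i)  # interleaves with the in-flight "request"

# Leaving the with-block waits for outstanding work to finish,
# so the callback has run by this point.
```

grequests.map(), by contrast, is a gather step: it waits for the whole batch before returning.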
Source: (StackOverflow)
I am making use of gevent in my Python application (Django based). However, I am now wondering how to run it in production. What server should I use? During development, I use gevent.pywsgi, but is that production-ready? I have also heard about gunicorn, but I've seen some pretty bad benchmarks about it.
Note: I need SSL.
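One common production setup is gunicorn with its gevent worker class, which can also terminate SSL itself. A sketch of the invocation, where the module path myproject.wsgi:application and the certificate paths are placeholders:

```shell
# Requires the gevent package; adjust worker count to your cores.
gunicorn --worker-class gevent --workers 4 \
    --certfile /etc/ssl/myproject.crt --keyfile /etc/ssl/myproject.key \
    --bind 0.0.0.0:443 myproject.wsgi:application
```

Benchmarks vary a lot with workload, so it is worth measuring gunicorn and gevent.pywsgi against your own traffic rather than relying on published numbers.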
Source: (StackOverflow)
Both 'pypy' and 'gevent' are supposed to provide high performance. Pypy is supposedly faster than CPython, while gevent is based on co-routines and greenlets, which supposedly makes for a faster web server.
However, they're not compatible with each other.
I'm wondering which setup is more efficient (in terms of speed/performance):
- The builtin Flask server running on pypy
or:
- The gevent server, running on CPython
Source: (StackOverflow)
Those two libraries share a similar philosophy and, as a result, similar design decisions. But this popular WSGI benchmark says eventlet is way slower than gevent. What makes their performance so different?
As far as I know, the key differences between them are:
- gevent intentionally depends on and is coupled to libev (previously libevent), while eventlet defines an independent reactor interface and implements particular adapters using select, epoll, and the Twisted reactor behind it. Does the additional reactor interface incur a critical performance hit?
- gevent is mostly written in Cython, while eventlet is written in pure Python. Is natively compiled Cython that much faster than pure Python for not-so-computational, IO-bound programs?
- gevent's primitives emulate the standard libraries' interfaces, while eventlet's primitives differ from the standard and provide an additional layer to emulate them. Does the additional emulation layer make eventlet slower?
- Is the implementation of eventlet.wsgi just worse than gevent.pywsgi?
I really wonder, because overall they look so similar to me.
Source: (StackOverflow)
I just wrote a simple piece of code to perf test Redis + gevent to see how async helps performance, and I was surprised to find bad performance. Here is my code. If you remove the first two lines that monkey patch this code, you will see the "normal execution" timing.
On an Ubuntu 12.04 LTS VM, I am seeing these timings:
- Without monkey patch: 54 seconds
- With monkey patch: 61 seconds
Is there something wrong with my code / approach? Is there a perf issue here?
#!/usr/bin/python
from gevent import monkey
monkey.patch_all()

import timeit
import redis
from redis.connection import UnixDomainSocketConnection

def UxDomainSocket():
    pool = redis.ConnectionPool(connection_class=UnixDomainSocketConnection, path='/var/redis/redis.sock')
    r = redis.Redis(connection_pool=pool)
    r.set("testsocket", 1)
    for i in range(100):
        r.incr('testsocket', 10)
    r.get('testsocket')
    r.delete('testsocket')

print timeit.Timer(stmt='UxDomainSocket()',
                   setup='from __main__ import UxDomainSocket').timeit(number=1000)
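A likely factor is that the Redis commands above run strictly one after another, so cooperative scheduling has nothing to overlap and only adds switching overhead. That concurrency, not patching alone, is what produces a speed-up can be sketched with standard-library threads standing in for greenlets; the sleep is a made-up stand-in for one Redis round trip:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_io(_):
    time.sleep(0.05)  # stand-in for one blocking Redis round trip

# Serial: each wait happens one after another, as in the benchmark.
start = time.time()
for i in range(4):
    fake_io(i)
serial = time.time() - start

# Concurrent: the four waits overlap.
start = time.time()
with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(fake_io, range(4)))
concurrent = time.time() - start

print(serial, concurrent)  # serial is roughly 4x the concurrent time
```

In the benchmark above there is only ever one outstanding command, so the monkey-patched version can only be as fast as the serial one, minus the cost of the event loop.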
Source: (StackOverflow)
After much searching and googling I am coming back to the well.
I have Django 1.4 and am looking for a decent working example to figure out getting Django to work with gevent.
I like the Django framework, but I need it to handle long polling.
I already have a working server using gevent on its own that handles long polling requests, as well as image streaming via HTTP at about 10 frames/second. I would like to use all the goodies in Django to provide a framework for this part.
There are many examples out there, but unfortunately none of these seem to work out of the box! It would really help to have a working example to understand how these two things are working together.
Here is what I have found so far and the problems:
http://codysoyland.com/2011/feb/6/evented-django-part-one-socketio-and-gevent/
problem:
ImportError: Could not import settings 'webchat.settings' (Is it on sys.path?): No module named webchat.settings
https://github.com/codysoyland/django-socketio-example/blob/master/README.rst
Problem: installation fails with permission problem getting gevent
Tried manually getting it from GitHub. The example runs, but generates these errors when the browsers connect.
These are informative but do not provide the basic answer.
Need help understanding Comet in Python (with Django)
https://bitbucket.org/denis/gevent/src/tip/examples/webchat/chat/views.py
http://blog.gevent.org/2009/10/10/simpler-long-polling-with-django-and-gevent/
What I hope someone can explain (please, pretty please....) is this:
I have a basic site created using Django 1.4 - the tutorial here https://docs.djangoproject.com/en/1.4/intro/tutorial01/ is excellent.
So now I need to understand what changes to make in order to use gevent and be able to handle asynchronous events. I am sure it is not difficult - I just need someone who understands it to explain what to do and also what is happening (with things like monkey_patch).
Thanks.
Source: (StackOverflow)
I am attempting to use multiprocessing's pool to run a group of processes, each of which will run a gevent pool of greenlets. The reason for this is that there is a lot of network activity, but also a lot of CPU activity, so to maximise my bandwidth and all of my CPU cores, I need multiple processes AND gevent's async monkey patching. I am using multiprocessing's manager to create a queue which the processes will access to get data to process.
Here is a simplified fragment of the code:
import multiprocessing
from gevent import monkey
monkey.patch_all(thread=False)
manager = multiprocessing.Manager()
q = manager.Queue()
Here is the exception it produces:
Traceback (most recent call last):
  File "multimonkeytest.py", line 7, in <module>
    q = manager.Queue()
  File "/usr/local/Cellar/python/2.7.2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/managers.py", line 667, in temp
    token, exp = self._create(typeid, *args, **kwds)
  File "/usr/local/Cellar/python/2.7.2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/managers.py", line 565, in _create
    conn = self._Client(self._address, authkey=self._authkey)
  File "/usr/local/Cellar/python/2.7.2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/connection.py", line 175, in Client
    answer_challenge(c, authkey)
  File "/usr/local/Cellar/python/2.7.2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/connection.py", line 409, in answer_challenge
    message = connection.recv_bytes(256)         # reject large message
IOError: [Errno 35] Resource temporarily unavailable
I believe this must be due to some difference between the behaviour of the normal socket module and gevent's socket module.
If I monkey patch within the subprocess, the queue is created successfully, but when the subprocess tries to get() from the queue, a very similar exception occurs. The socket module does need to be monkey patched, because the subprocesses make large numbers of network requests.
My version of gevent, which I believe is the latest:
>>> gevent.version_info
(1, 0, 0, 'alpha', 3)
Any ideas?
Source: (StackOverflow)
A quickie here that needs more domain expertise on pymongo than I have right now:
Are the right parts of the pymongo driver written in Python for me to call gevent's monkey.patch_all() and successfully alter pymongo's blocking behavior on reads/writes within gevent's "asynchronous" greenlets?
If this will require a little more legwork on gevent and pymongo, but is feasible, I would be more than willing to put in the time, as long as I can get a little guidance over IRC.
Thanks!
Note: At small scale, mongo writes are not a big problem, because we are just queuing a write "request" before unblocking. But talking to fiorix about his Twisted async mongo driver (https://github.com/fiorix/mongo-async-python-driver), even mongo's quick write (requests) can cause problems in asynchronous applications at scale. (And of course, non-blocking reads could cause problems from the start!)
Source: (StackOverflow)
I am using gevent and I am monkey patching everything.
It seems like the monkey patching causes the threading to work serially.
My code:
import threading
from gevent import monkey; monkey.patch_all()

class ExampleThread(threading.Thread):
    def run(self):
        do_stuff()  # takes a few minutes to finish
        print 'finished working'

if __name__ == '__main__':
    worker = ExampleThread()
    worker.start()
    print 'this should be printed before the worker finished'
So the thread is not working as expected, but if I remove the monkey.patch_all() it works fine. The problem is that I need monkey.patch_all() in order to use gevent (not shown in the code above).
My solution: I changed monkey.patch_all() to monkey.patch_all(thread=False), so that the thread module is not patched.
Source: (StackOverflow)
I'd like to use Celery as a queue for my tasks, so my web app can enqueue a task, return a response, and have the task processed in the meantime / someday / ... I'm building a kind of API, so I don't know in advance what sorts of tasks there will be: in the future, there may be tasks dealing with HTTP requests or other IO, but also CPU-consuming tasks. In respect to that, I'd like to run Celery's workers on processes, as these are the universal kind of parallelism in Python.
However, I'd like to use gevent in my tasks too, so I could have a single task spawning many HTTP requests, etc. The problem is, when I do this:
from gevent import monkey
monkey.patch_all()
Celery stops working. It starts, but no tasks can be effectively enqueued: they seem to go to the broker, but the Celery worker doesn't collect and process them; it only starts and waits. If I delete those lines and perform the task without any gevent parallelization, everything works.
I think it could be because gevent also patches threading. So I tried
from gevent import monkey
monkey.patch_all(thread=False)
...but then Celery doesn't even start; it crashes without giving a reason (even with the debug level of logging turned on).
Is it possible to use Celery for enqueuing tasks and gevent for doing some stuff inside a single task? How? What do I do wrong?
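One commonly suggested direction (a sketch, not a verified fix for this exact crash) is to let Celery manage the gevent integration itself through its gevent execution pool, instead of monkey patching inside the task module; proj here is a placeholder project name:

```shell
# Run a worker whose task pool is gevent greenlets (requires the
# gevent package); -c sets the number of concurrent greenlets.
celery worker -A proj -P gevent -c 100
```

Since a worker's pool choice applies to all of its tasks, the CPU-bound tasks can be routed to a separate worker started with the default prefork pool.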
Source: (StackOverflow)
I have been playing with gevent, and I like it a lot. However, I have run into a problem: breakpoints are not being hit, and debugging doesn't work (using both Visual Studio Python Tools and Eclipse PyDev). This happens after monkey.patch_all() is called.
This is a big problem for me, and unfortunately this is a blocker for the use of gevent. I have found a few threads that seem to indicate that gevent breaks debugging, but I would imagine there is a solution for that.
Does anyone know how to make debugging and breakpoints work with gevent and monkey patching?
Source: (StackOverflow)
I'm writing a Python client+server that uses gevent.socket for communication. Are there any good ways of testing the socket-level operation of the code (for example, verifying that SSL connections with an invalid certificate will be rejected)? Or is it simplest to just spawn a real server?
Edit: I don't believe that "naive" mocking will be sufficient to test the SSL components, because of the complex interactions involved. Am I wrong about that? Or is there a better way to test SSL'd stuff?
Source: (StackOverflow)
I'm currently researching websocket support in Python and am a bit confused by the offerings.
On one hand it's possible to use Flask + gevent. On the other hand, uWSGI has websocket support, and lastly there is an extension that bundles both uWSGI and gevent.
What's the problem with implementing websockets with only one of these? What do I win by mixing them?
Changing the question:
What does adding gevent do that threaded uWSGI won't?
Source: (StackOverflow)
For this code:
import sys
import gevent
from gevent import monkey
monkey.patch_all()
import requests
import urllib2

def worker(url, use_urllib2=False):
    if use_urllib2:
        content = urllib2.urlopen(url).read().lower()
    else:
        content = requests.get(url, prefetch=True).content.lower()
    title = content.split('<title>')[1].split('</title>')[0].strip()

urls = ['http://www.mail.ru']*5

def by_requests():
    jobs = [gevent.spawn(worker, url) for url in urls]
    gevent.joinall(jobs)

def by_urllib2():
    jobs = [gevent.spawn(worker, url, True) for url in urls]
    gevent.joinall(jobs)

if __name__ == '__main__':
    from timeit import Timer
    t = Timer(stmt="by_requests()", setup="from __main__ import by_requests")
    print 'by requests: %s seconds' % t.timeit(number=3)
    t = Timer(stmt="by_urllib2()", setup="from __main__ import by_urllib2")
    print 'by urllib2: %s seconds' % t.timeit(number=3)
    sys.exit(0)
This is the result:
by requests: 18.3397213892 seconds
by urllib2: 2.48605842363 seconds
In a sniffer it looks like this:
Description: the first 5 requests are sent by the requests library, the next 5 requests are sent by the urllib2 library. Red is time when the work was frozen, dark is when data was being received... wtf?!
How is this possible if the socket library is patched and the libraries should work identically?
How can I use requests without requests.async for asynchronous work?
Source: (StackOverflow)