python-gearman
Gearman API - Client, worker, and admin client interfaces
I need to get the status of Gearman jobs by their unique IDs, not by the job handles described everywhere I've looked.
Is this possible using python-gearman v2.x?
Thanks for any assistance!
Source: (StackOverflow)
I want to implement Gearman in a project, but I don't know how to install or configure Gearman on Windows. Can anyone point me to a resource for getting started with Gearman on Windows?
Source: (StackOverflow)
When connecting to a gearman daemon, if the daemon URL or port is incorrect and no connection can be made, an exception is raised:
File "/usr/lib/python2.7/dist-packages/gearman/client.py", line 205, in establish_request_connection
raise ServerUnavailable('Found no valid connections: %r' % self.connection_list)
gearman.errors.ServerUnavailable: Found no valid connections: [<GearmanConnection localhost:4700 connected=False>]
I want to catch the exception and handle it gracefully, but the code below doesn't do that. The exception and traceback are displayed as though I hadn't tried to catch the exception.
The code that generates and tries to trap the exception is:
import gearman
from gearman.errors import ConnectionError, InvalidAdminClientState, ServerUnavailable

try:
    gmClient = gearman.GearmanClient(['localhost:4730'])
except gearman.errors.ServerUnavailable, e:
    # I've also tried `except ServerUnavailable, e:` - same result.
    print(e)
How do I correctly catch gearman client connection exceptions?
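For what it's worth, a likely explanation (assuming python-gearman 2.x, which the traceback suggests): the GearmanClient constructor only records the host list; the actual connection attempt, and therefore ServerUnavailable, happens when a job is submitted. So the try/except has to wrap submit_job rather than the constructor. A minimal sketch of the pattern, with FlakyClient and the local exception class standing in for the real gearman objects so the snippet is self-contained:

```python
class ServerUnavailable(Exception):
    """Stand-in for gearman.errors.ServerUnavailable (illustration only)."""

class FlakyClient(object):
    """Stand-in for a GearmanClient whose gearmand is unreachable."""
    def submit_job(self, task, data):
        # python-gearman connects lazily, so the failure surfaces here,
        # not in the constructor.
        raise ServerUnavailable('Found no valid connections')

def submit_safely(client, task, data):
    # Wrap the submit call, not the client construction.
    try:
        return client.submit_job(task, data)
    except ServerUnavailable as exc:
        print('Could not reach gearmand: %s' % exc)
        return None

submit_safely(FlakyClient(), 'reverse', 'Hello World!')
```

With the real library the same shape applies: construct the client normally, then catch gearman.errors.ServerUnavailable around submit_job.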
Source: (StackOverflow)
I am trying to run a basic example using the python-gearman library, available here. I am running Python 2.7.3.
Worker:
import gearman

gm_worker = gearman.GearmanWorker(['localhost:4730'])

def task_listener_reverse(gearman_worker, gearman_job):
    print 'reporting status'
    return reversed(gearman_job.data)

gm_worker.set_client_id('testclient')
gm_worker.register_task('reverse', task_listener_reverse)
gm_worker.work()
Client:
import gearman

gm_client = gearman.GearmanClient(['localhost:4730'])

print 'Sending job...'
request = gm_client.submit_job('reverse', 'Hello World!')
print "Result: " + request.result
I am getting the following error (full trace available here):
File "/Users/developer/gearman/connection_manager.py", line 27, in _enforce_byte_string
raise TypeError("Expecting byte string, got %r" % type(given_object))
TypeError: Expecting byte string, got <type 'reversed'>
Any help would be appreciated!
Thanks.
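For reference, a sketch of the likely cause: reversed() returns an iterator object, while python-gearman requires the handler to return a byte string (that is exactly what the TypeError says). Joining or slicing keeps the result a string. FakeJob below is only a stand-in for the gearman_job argument so the sketch runs on its own:

```python
class FakeJob(object):
    """Stand-in for the gearman_job object passed to a handler."""
    def __init__(self, data):
        self.data = data

def task_listener_reverse(gearman_worker, gearman_job):
    # Slicing keeps the result a (byte) string; reversed() would return
    # an iterator, which the connection manager rejects.
    return gearman_job.data[::-1]

print(task_listener_reverse(None, FakeJob('Hello World!')))  # -> !dlroW olleH
```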
Source: (StackOverflow)
I have 5 workers doing different tasks. I pass the same input to all 5 workers and retrieve their collective output.
I am using the Python gearman library and have tried:
m_client.submit_multiple_jobs(site, background=False, wait_until_complete=False)
Now I need to make sure that all 5 workers are working in parallel.
How can I achieve that?
Thanks in advance.
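A sketch of the usual pattern (python-gearman 2.x method names assumed): submit everything with wait_until_complete=False so all five jobs are queued before anything blocks, then collect the results. Parallelism itself comes from running one worker process per task; the client just has to send every job before it starts waiting. FakeClient below stands in for gearman.GearmanClient so the sketch is self-contained:

```python
def run_jobs_in_parallel(gm_client, jobs, timeout=30.0):
    # Send every job before waiting on any of them; gearmand hands them
    # to whichever registered workers are free.
    requests = gm_client.submit_multiple_jobs(
        jobs, background=False, wait_until_complete=False)
    done = gm_client.wait_until_jobs_completed(requests, poll_timeout=timeout)
    return [r.result for r in done]

class FakeRequest(object):
    def __init__(self, result):
        self.result = result

class FakeClient(object):
    """Stand-in for gearman.GearmanClient (illustration only)."""
    def submit_multiple_jobs(self, jobs, background, wait_until_complete):
        return [FakeRequest(j['data'][::-1]) for j in jobs]
    def wait_until_jobs_completed(self, requests, poll_timeout=None):
        return requests

jobs = [{'task': 'reverse', 'data': 'abc'}, {'task': 'reverse', 'data': 'xyz'}]
print(run_jobs_in_parallel(FakeClient(), jobs))  # -> ['cba', 'zyx']
```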
Source: (StackOverflow)
How can I access Yii models outside of the framework?
I have some gearman workers performing tasks, managed using BrianMoon's GearmanManager. I'd like to be able to access a few Yii models in the worker's script without having to load the whole Yii framework. What do I need to import to load the models in my script (CActiveRecord, CDbConnection, etc.)?
A worker looks like this:
As a simple function:
<?php
function reverse_string($job, &$log) {
    $workload = $job->workload();
    $result = strrev($workload);
    $log[] = "Success";
    return $result;
}
?>
or as a class:
<?php
class Sum {
    private $cache = array();
    private $foo = 0;

    public function run($job, &$log) {
        $workload = $job->workload();
        if (empty($this->cache[$workload])) {
            $dat = json_decode($workload, true);
            $sum = 0;
            foreach ($dat as $d) {
                $sum += $d;
                sleep(1);
            }
            $this->cache[$workload] = $sum + 0;
        } else {
            $sum = $this->cache[$workload] + 0;
        }
        $log[] = "Answer: ".$sum;
        $this->foo = 1;
        return $sum;
    }
}
?>
I'd like to be able to access a few models and perform operations within the worker like so:
$foo=Foo::model()->findByPk($id);
$foo->attribute="bar";
$foo->save();
Source: (StackOverflow)
I am trying a very basic example of string reversal using the Python gearman module.
My localhost has been set up using IIS on port 4730.
However, I am getting the error:
raise UnknownCommandError(missing_callback_msg)
UnknownCommandError: Could not handle command: 'GEARMAN_COMMAND_TEXT_COMMAND' - {'raw_text': 'HTTP/1.1 400 Bad Request\r'}
Client.py
import gearman

# setup client, connect to Gearman HQ
gm_client = gearman.GearmanClient(['localhost:4730'])

print 'Sending job...'
request = gm_client.submit_job('reverse', 'Hello World!')
print "Result: " + request.result
Worker.py
import gearman

gm_worker = gearman.GearmanWorker(['localhost:4730'])

# define method to handle 'reverse' work
def task_listener_reverse(gearman_worker, gearman_job):
    print 'reporting status'
    return reversed(gearman_job.data)

gm_worker.set_client_id('your_worker_client_id_name')
gm_worker.register_task('reverse', task_listener_reverse)
gm_worker.work()
Any suggestions as to why this might occur and how to resolve it?
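A hedged reading of the error: gearmand is its own server with its own protocol, and IIS cannot act as a Gearman job server. The raw_text 'HTTP/1.1 400 Bad Request' in the exception strongly suggests IIS owns port 4730 and is answering with HTTP, which python-gearman then fails to parse. A trivial check on whatever banner you read back from the port (the socket probing itself is left out; this only classifies the reply):

```python
def looks_like_http(first_reply):
    # gearmand never answers with an HTTP status line; if the first bytes
    # from the port start with 'HTTP/', a web server (here IIS) is bound
    # to 4730 instead of gearmand.
    return first_reply.startswith(b'HTTP/')

print(looks_like_http(b'HTTP/1.1 400 Bad Request\r\n'))  # -> True
```

If that is the case, the fix is to run an actual gearmand (or move IIS off port 4730), then point the client and worker at it.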
Source: (StackOverflow)
I'm trying to use gearman with background tasks and to get progress data from the worker.
The documentation describes the methods send_job_data and send_job_status, but with a background job the first doesn't work (I don't see the data in job.data_updates), although the status changes do appear in job.status.
I use this code for the test worker:
from gearman import GearmanWorker
import time

worker = GearmanWorker(['192.168.1.79:4730'])

def long_task(work, job):
    work.send_job_data(job, 'long task')
    work.send_job_status(job, 0, 3)
    time.sleep(60)
    work.send_job_data(job, 'long task2')
    work.send_job_status(job, 1, 3)
    time.sleep(120)
    work.send_job_status(job, 3, 3)
    return "COMPLETE ALL"

worker.register_task('pool', long_task)
worker.work()
And this code for the client:
from gearman import GearmanClient
client = GearmanClient(['192.168.1.79:4730'])
This (blocking) code works normally:
In [6]: pool = client.submit_job('pool', '')
In [7]: pool.result
Out[7]: 'COMPLETE ALL'
In [8]: pool.data_updates
Out[8]: deque(['long task', 'long task2'])
In [9]: pool.status
Out[9]:
{'denominator': 3,
'handle': 'H:dhcp94:22',
'known': True,
'numerator': 3,
'running': True,
'time_received': 1322755490.691739}
And this client does not work normally (the status for the task is not updated, and I get no data/result):
In [10]: pool = client.submit_job('pool', '', background=True)
In [11]: pool = client.get_job_status(pool)
In [12]: pool.status
Out[12]:
{'denominator': 3,
'handle': 'H:dhcp94:23',
'known': True,
'numerator': 0,
'running': True,
'time_received': 1322755604.695123}
In [13]: pool.data_updates
Out[13]: deque([])
In [14]: pool = client.get_job_status(pool)
In [15]: pool.data_updates
Out[15]: deque([])
In [16]: pool.status
Out[16]:
{'denominator': 0,
'handle': 'H:dhcp94:23',
'known': False,
'numerator': 0,
'running': False,
'time_received': 1322755863.306605}
How can I get this data properly? My background task will run for a few hours and needs to send status information in messages.
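One hedged explanation: for a background job no client stays attached, so WORK_DATA packets have nowhere to go, and gearmand forgets the handle shortly after completion (which matches the known: False / denominator: 0 status above). A common workaround is to persist progress yourself, keyed by the job handle or by your own unique id. The plain dict below stands in for a shared store such as Redis or a database:

```python
progress_store = {}  # stand-in for a shared store (Redis, a DB table, ...)

def report_progress(store, handle, numerator, denominator, message):
    # Called from the worker instead of (or alongside) send_job_data /
    # send_job_status, so a detached client can read progress later
    # by looking the handle up in the store.
    store[handle] = {'numerator': numerator,
                     'denominator': denominator,
                     'message': message}

report_progress(progress_store, 'H:dhcp94:23', 1, 3, 'long task2')
print(progress_store['H:dhcp94:23']['message'])  # -> long task2
```

The client then polls the store rather than get_job_status, which keeps working even after gearmand has discarded the handle.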
Source: (StackOverflow)
Can anyone recommend or create a tutorial on how to make a news feed similar to that of Facebook's only using Django, Tastypie (webservice API framework for Django), Redis (key-value store) and Gearman (task queue)?
Currently I have user model, post model, favorites model and a comment model. I have created Tastypie resources with these models to allow for favoriting, liking, commenting and posting.
I would like to know how to generate feed actions that apply directly to the user. For example:
User1 commented on your post. (2 seconds ago)
User2 liked your post. (3 mins ago)
User2 & User1 favorited your post (5 mins ago)
I really require in depth examples and tutorials on how to build an activity feed using the technologies above. Any help would be appreciated.
Source: (StackOverflow)
How can I tell whether a background job or a non-blocking request submitted by a gearman client succeeded?
while True:
    jobs = getJobs()
    submitted_requests = gm_client.submit_multiple_jobs(jobs, background=False, wait_until_complete=False)
    # check status in a non-blocking mode
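A sketch of one way to check, assuming python-gearman 2.x: each returned request object carries a state attribute (with constants such as JOB_COMPLETE and JOB_FAILED in gearman.job) plus a timed_out flag; for background jobs you would instead poll get_job_status, since no result travels back. The constants and FakeRequest below are local stand-ins (values assumed) so the sketch runs on its own:

```python
# Local stand-ins for the gearman.job state constants (values assumed).
JOB_COMPLETE = 'COMPLETE'
JOB_FAILED = 'FAILED'

def job_succeeded(request):
    # After the requests finish (or on a later poll), a foreground
    # request succeeded iff it reached COMPLETE and did not time out.
    return (request.state == JOB_COMPLETE
            and not getattr(request, 'timed_out', False))

class FakeRequest(object):
    """Stand-in for a GearmanJobRequest (illustration only)."""
    def __init__(self, state):
        self.state = state
        self.timed_out = False

print(job_succeeded(FakeRequest('COMPLETE')))  # -> True
print(job_succeeded(FakeRequest('FAILED')))    # -> False
```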
Source: (StackOverflow)
Is it possible to call a remote gearman worker from a local system? I tried calling it using my remote Azure server's IP.
Client on the local system:
import sys
import gearman

gm_client = gearman.GearmanClient(['204.43.9.41:4730'])
sent = sys.argv[1]
completed_job_request = gm_client.submit_job("load_db", sent)
Remote worker:
def __init__(self):
    self.gm_worker = gearman.GearmanWorker(['204.43.9.41:4730'])
    self.context = self.init_context()
    res = self.gm_worker.register_task('load_db', self.run_query)
When I kept the worker running on the remote server and called it from the local client, I got this error:
gearman.errors.ServerUnavailable: Found no valid connections: [<GearmanConnection 204.43.9.41:4730 connected=False>]
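A hedged note: connected=False at the transport level usually means nothing answered on that host/port at all, which points at networking rather than gearman itself; typical culprits are the Azure endpoint/firewall rule for port 4730 not being open, or gearmand listening only on 127.0.0.1 (gearmand's -L/--listen option controls the bind address). A small stdlib probe to separate the two cases:

```python
import socket

def can_reach(host, port, timeout=3.0):
    # If this returns False, fix the network first (open the Azure
    # endpoint for 4730, or start gearmand listening on 0.0.0.0);
    # if True, the problem is on the gearman side.
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
        sock.close()
        return True
    except (socket.error, socket.timeout):
        return False

# Example probe against a local port (substitute the remote IP to test):
print(can_reach('127.0.0.1', 4730))
```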
Source: (StackOverflow)
I'm attempting to change the tasks available on a python-gearman worker during its work cycle. My reason for doing this is to allow me a little bit of control over my worker processes and allowing them to reload from a database. I need every worker to reload at regular intervals, but I don't want to simply kill the processes, and I want the service to be constantly available which means that I have to reload in batches. So I would have 4 workers reloading while another 4 workers are available to process, and then reload the next 4 workers.
Process:
- Start the reload task 4 times. The reload task:
  - unregisters the reload task
  - reloads the dataset
  - registers a finishReload task
  - returns
- Repeat step 1 until there are no workers with the reload task registered.
- Start the finishReload (1) task until there are no workers with the finishReload task available.
(1) The finishReload task unregisters the finishReload task, registers the reload task, and then returns.
Now, the problem that I'm running into is that the job fails when I change the tasks that are available to the worker process. There are no error messages or exceptions, just an "ERROR" in the gearmand log. Here's a quick program that replicates the problem.
WORKER
import gearman

def reversify(gmWorker, gmJob):
    return "".join(gmJob.data[::-1])

def strcount(gmWorker, gmJob):
    gmWorker.unregister_task('reversify')  # problem line
    return str(len(gmJob.data))

worker = gearman.GearmanWorker(['localhost:4730'])
worker.register_task('reversify', reversify)
worker.register_task('strcount', strcount)

while True:
    worker.work()
CLIENT
import gearman

client = gearman.GearmanClient(['localhost:4730'])

a = client.submit_job('reversify', 'spam and eggs')
print a.result
>>> sgge dna maps
a = client.submit_job('strcount', 'spam and eggs')
...
Please let me know if there is anything I can elucidate.
EDIT: I know someone will ask to see the log I mentioned. I've posted this question to the gearman group on Google as well, and the log is available there.
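A hedged workaround sketch: rather than calling unregister_task inside a running handler (which plausibly races with sending the job's completion packet, matching the bare ERROR in the gearmand log), record the requested change and apply it between jobs; in python-gearman, overriding the GearmanWorker.after_poll hook is one candidate place to do that. The class below is a self-contained stand-in, not a real GearmanWorker subclass:

```python
class DeferredTaskChanges(object):
    """Stand-in sketch: queue task changes during a job, apply them later."""
    def __init__(self, tasks):
        self.registered = set(tasks)  # stands in for register_task state
        self._pending = []

    def request_unregister(self, task):
        # Called from inside a handler instead of unregister_task().
        self._pending.append(('unregister', task))

    def request_register(self, task):
        self._pending.append(('register', task))

    def apply_pending(self):
        # Called once the current job has been fully answered,
        # e.g. from an after_poll override in a real worker.
        for action, task in self._pending:
            if action == 'unregister':
                self.registered.discard(task)
            else:
                self.registered.add(task)
        self._pending = []

w = DeferredTaskChanges(['reversify', 'strcount'])
w.request_unregister('reversify')
print(sorted(w.registered))  # unchanged until apply_pending runs
w.apply_pending()
print(sorted(w.registered))  # -> ['strcount']
```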
Source: (StackOverflow)
I am trying to install the Python version of gearman on CentOS. I cloned https://git.openstack.org/cgit/openstack-infra/gear to the machine, then ran python setup.py install and got the output below.
running install
running build
running build_py
running egg_info
writing pbr to gear.egg-info/pbr.json
writing requirements to gear.egg-info/requires.txt
writing gear.egg-info/PKG-INFO
writing top-level names to gear.egg-info/top_level.txt
writing dependency_links to gear.egg-info/dependency_links.txt
writing entry points to gear.egg-info/entry_points.txt
[pbr] Processing SOURCES.txt
[pbr] In git context, generating filelist from git
warning: no files found matching 'AUTHORS'
warning: no files found matching 'ChangeLog'
warning: no previously-included files matching '*.pyc' found anywhere in distribution
reading manifest template 'MANIFEST.in'
warning: no files found matching 'AUTHORS'
warning: no files found matching 'ChangeLog'
warning: no previously-included files found matching '.gitignore'
warning: no previously-included files found matching '.gitreview'
warning: no previously-included files matching '*.pyc' found anywhere in distribution
writing manifest file 'gear.egg-info/SOURCES.txt'
running install_lib
running install_egg_info
removing '/usr/lib/python2.6/site-packages/gear-0.5.7-py2.6.egg-info' (and everything under it)
Copying gear.egg-info to /usr/lib/python2.6/site-packages/gear-0.5.7-py2.6.egg-info
running install_scripts
Installing geard script to /usr/bin
Anyway, when I try to start gearman using gearman -d or gearmand, it says command not found.
What do I need to do to install it?
Source: (StackOverflow)
I'm getting an error in gearman. I have tried sending just about anything - a single letter, a number, a numeric string - but I always get this error. Please help (the same code works in another view).
Request Method: POST
Request URL: http://local.example.com:8000/business/user-panel
Django Version: 1.6.6
Exception Type: ProtocolError
Exception Value:
Received non-binary arguments: {'unique': 'ab69c55005d118f92e27dcaa3a9bb5d7', 'task': u'task_name', 'data': "1010"}
Exception Location: /home/xcoder/NopyFlexiEnv/lib/python2.7/site-packages/gearman/protocol.py in pack_binary_command, line 242
Python Executable: /home/xcoder/NopyFlexiEnv/bin/python2.7
Python Version: 2.7.6
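The Exception Value contains the hint: the 'task' field is a unicode string (u'task_name'), and python-gearman's pack_binary_command rejects anything that is not a byte string. A hedged sketch of normalizing arguments before submitting (Python 2 semantics assumed, where str is the byte type; Django often hands you unicode):

```python
def to_bytes(value, encoding='utf-8'):
    # python-gearman's protocol layer only accepts byte strings; encode
    # unicode task names and payloads before calling submit_job.
    if isinstance(value, bytes):
        return value
    return value.encode(encoding)

print(to_bytes(u'task_name'))
```

Usage would look like gm_client.submit_job(to_bytes(task_name), to_bytes(data)), applied to every string that reaches the client.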
Source: (StackOverflow)
I'm using gearman to synchronize data on different servers. We have 1 main server and, for example, 10 local servers. Let me describe one possible situation. Say gearman starts working and 5 jobs are done, so the data on those 5 servers is synced. When the next job starts, suppose we lose the connection to that server and it's unavailable right now. Gearman's logic is to retry again and again, so the remaining jobs (for servers 7, 8, 9 and 10) will not be executed until the 6th is done. A better solution would be to postpone the job, put it at the end of the queue, and continue with jobs 7-10.
If someone knows how to do that, please post the way.
PS: I'm using python.
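A sketch of the rotate-on-failure idea in plain Python, independent of gearman's own retry behaviour: treat the servers as a queue on the client side, and when one sync fails, push it to the back (with a retry cap) instead of blocking the rest. Here sync_one is whatever function actually submits the gearman job for one server and reports success:

```python
from collections import deque

def sync_all(servers, sync_one, max_attempts=3):
    # sync_one(server) -> True on success, False on failure.
    pending = deque((server, 0) for server in servers)
    synced, failed = [], []
    while pending:
        server, attempts = pending.popleft()
        if sync_one(server):
            synced.append(server)
        elif attempts + 1 < max_attempts:
            # Postpone: move to the back so servers 7-10 aren't blocked.
            pending.append((server, attempts + 1))
        else:
            failed.append(server)
    return synced, failed

# Illustration: server6 is down, everything else succeeds.
down = {'server6'}
ok, bad = sync_all(['server%d' % i for i in range(1, 11)],
                   lambda s: s not in down)
print(bad)  # -> ['server6']
```

The same shape works with background jobs: submit, check status, and requeue the server rather than letting one dead host stall the loop.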
Source: (StackOverflow)