sucker_punch
Sucker Punch is a Ruby asynchronous processing library using Celluloid, heavily influenced by Sidekiq and girl_friday.
In my Rails app, I'm running delayed jobs with the Sucker Punch gem. I'm looping through some phone numbers and sending an SMS message to each with Twilio.
If there is an error when Twilio sends it, I can capture it with a rescue just fine, but is there a way to notify the user without raising an exception that shows an error page?
Ideally, it would be a flash message (or other notification) when it happens. Is there some way to do this?
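Since the job runs after the response has already been sent, a flash can't reach that request directly; one common pattern is to have the job record failures somewhere the next request (or a polling endpoint) can read and surface as a notification. A minimal sketch with invented names (NotificationStore and SmsJob are not part of Sucker Punch or Twilio):

```ruby
require "thread"

# Invented stand-in for "somewhere the UI can read", e.g. a DB table in Rails
class NotificationStore
  @messages = []
  @mutex = Mutex.new

  class << self
    def push(user_id, text)
      @mutex.synchronize { @messages << { user_id: user_id, text: text } }
    end

    # drain and return this user's pending messages (what a controller
    # would call to build a flash-style notice on the next request)
    def pop_for(user_id)
      @mutex.synchronize do
        mine, rest = @messages.partition { |m| m[:user_id] == user_id }
        @messages = rest
        mine.map { |m| m[:text] }
      end
    end
  end
end

class SmsJob
  def perform(user_id, number)
    send_sms(number)
  rescue StandardError => e
    # instead of re-raising, record the failure for the user to see later
    NotificationStore.push(user_id, "SMS to #{number} failed: #{e.message}")
  end

  def send_sms(number)
    # stand-in for the Twilio API call; numbers starting "000" fail here
    raise "Twilio error" if number.start_with?("000")
  end
end

SmsJob.new.perform(1, "0005551234") # fails, recorded
SmsJob.new.perform(1, "5105551234") # succeeds, nothing recorded
puts NotificationStore.pop_for(1)   # → SMS to 0005551234 failed: Twilio error
```

In Rails the store would be a table or cache entry rather than an in-process array, since the job and the next request may live in different processes.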
Source: (StackOverflow)
I'm attempting to get ActiveJob to use the sucker_punch adapter with the code below in config/initializers/sucker_punch.rb:
Rails.application.configure do
  config.active_job.queue_adapter = :sucker_punch
end
With this code in place, ActiveJob still uses the inline adapter. If I move this code into config/application.rb it works, no problem.
I can log from my custom initializer, so I know it is being called.
Versions:
- Ruby 2.2.1
- Rails 4.2.1
- Sucker Punch 1.4.0
Could someone please help?
Thanks!
Source: (StackOverflow)
Wondering if anyone has seen this problem.
I am running Rails 3.2 on Passenger 3 with sucker_punch gem version 1.1.
I have a long-running sucker_punch job (it takes around 10 hours); it's an overnight batch.
I am running on Phusion Passenger with (I think) 3 worker threads.
Status from passenger-status:
----------- General information -----------
max = 3
count = 0
active = 0
inactive = 0
Waiting on global queue: 0
My sucker_punch job is executed async; as part of the job, it executes other, smaller async sucker_punch jobs (each takes around 30 seconds).
I cannot exactly determine what is going on, but 'sometimes' my long running job just dies or seems to halt.
I did add some debug code around the entire sucker_punch job:
begin
  # ... the long-running job code ...
rescue Exception => e
  logger.error(e)
  raise e
end
However, I didn't see an exception, so I'm assuming my long-running sucker_punch job is being halted rather than killed? Or potentially some sort of deadlock?
The interesting part: sometimes my long-running job works fine, and sometimes it doesn't.
Source: (StackOverflow)
I'm using the newest versions of Rails and Ruby. Through a form I import a CSV, which then triggers an async job (I'm using the gem https://github.com/brandonhilkert/sucker_punch for this) that reads the file and handles the data.
All in all, everything works fine. However, my question is: how would I let the controller know when the background job is done? I'd like to redirect with a flash notice, but obviously I shouldn't do that from the model... What can I do here?
This is my sucker_punch job:
# app/jobs/import_job.rb
class ImportJob
  include SuckerPunch::Job

  def perform(import_file)
    ActiveRecord::Base.connection_pool.with_connection do
      ModelName.import_from_csv(import_file)
    end
  end
end
And this is my controller action:
def import
  ImportJob.new.async.perform(import_file)
  redirect_to list_data_path, notice: 'Data being imported...'
  # I get sent to list_data_path and stay there even though the
  # background job already finished successfully
end
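One approach, sketched (ImportRecord and update_status are invented stand-ins for a model with a status column, not the gem's API): the job updates a status that the browser can poll via a small controller action or JavaScript, and the page shows a notice when the status flips to "done".

```ruby
require "thread"

# Stand-in for an ActiveRecord model with a status column
class ImportRecord
  attr_reader :status

  def initialize
    @status = "pending"
    @mutex  = Mutex.new
  end

  def update_status(new_status)
    @mutex.synchronize { @status = new_status }
  end
end

class ImportJob
  def perform(record, rows)
    record.update_status("running")
    rows.each { |_row| }          # stand-in for the CSV processing
    record.update_status("done")  # the page polls for this value
  end
end

record = ImportRecord.new
Thread.new { ImportJob.new.perform(record, [1, 2, 3]) }.join
puts record.status # → done
```

The key point is that the controller never waits on the job; it only reads the persisted status on subsequent requests.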
Source: (StackOverflow)
I'm trying to run a background mailer and, depending on the params of the article, dump different users into the mailing list. I'm getting this error upon a request to make a new article:
Actor crashed!
NoMethodError: undefined method `email' for #<User::ActiveRecord_Relation:0x007fca99f657c8>
Here is the logic:
def create
  @article = Article.new(article_params)
  @all_users = []

  if @article.football == true
    @all_users << User.where(:sport => "Football").all
  elsif @article.basketball == true
    @all_users << User.where("users.sport LIKE ?", "%Basketball%").all
  elsif @article.volleyball == true
    @all_users << User.where(:sport => "Volleyball").all
  elsif @article.lacrosse == true
    @all_users << User.where(:sport => "Lacrosse").all
  else
    @all_users = User.all
  end

  if @article.save
    @all_users.each do |user|
      ArticleMailer.new.async.article_confirmation(user, @article)
    end
    redirect_to @article
  else
    render 'new'
  end
end
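The error message suggests what is happening: `<<` appends each whole relation as a single array element, so the mailer loop yields an ActiveRecord::Relation (which has no #email) rather than individual users. The difference between << and concat, reproduced with plain arrays standing in for relations:

```ruby
relation = [:alice, :bob]  # stands in for User.where(:sport => "Football")

broken = []
broken << relation         # appends the whole collection as ONE element
puts broken.length         # → 1
puts broken.first.inspect  # → [:alice, :bob]  (a collection, no #email here)

fixed = []
fixed.concat(relation)     # or: fixed += relation.to_a
puts fixed.length          # → 2
```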
Source: (StackOverflow)
I have a sucker_punch worker which is processing a CSV file. I initially had a problem with the CSV file disappearing when the dyno powered down; to fix that, I'm going to set up S3 for file storage.
But my current concern is whether a dyno powering down will stop my worker in its tracks.
How can I prevent that?
Source: (StackOverflow)
I'm seeing the following error:
Error message: undefined local variable or method `call_alert_path' for #<RoadrunnerTwilioAlert:0x007f34401bbd10>
However, I feel like call_alert_path is properly defined in the routes. This is corroborated by the fact that my tests pass. The main difference between test mode and production is that in production, the method that calls call_alert_path is in an async job. Perhaps that's throwing it off... anyway, I just want to confirm with the community that call_alert_path is otherwise correctly defined and there's nothing wrong with the code as written.
Controller code:
# calls async job in production
if Rails.env == "production"
  RoadrunnerTwilioAlert.new.async.perform(params[:rentalrequest_id])
else
  @alert = twilio_client.account.calls.create(
    from: ENV["Twilio_Verified_Phone"],
    to: ENV["Roadrunner_Phone"],
    url: call_alert_path,
    method: 'post'
  )
  @request.update_attributes(twilio_alert: "call")
end
Async job code:
def perform(rentalrequest_id)
  @request = Request.find(id)
  @alert = twilio_client.account.calls.create(
    from: ENV["Twilio_Verified_Phone"],
    to: ENV["Roadrunner_Phone"],
    url: call_alert_path,
    method: 'post'
  )
  @request.update_attributes(twilio_alert: "call")
end
Route:
match '/twilio/call_alert', to: 'twilio#call_alert', via: :post, as: "call_alert"
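A plausible cause (an assumption, not a confirmed diagnosis): Rails mixes URL helpers into controllers and views, but not into a plain job class, so call_alert_path is undefined inside the async job even though the route exists. Including Rails.application.routes.url_helpers in the job class would typically fix that. The mechanism, sketched with an invented stand-in module so the snippet runs outside Rails:

```ruby
# Stand-in for Rails.application.routes.url_helpers
module FakeUrlHelpers
  def call_alert_path
    "/twilio/call_alert"
  end
end

class AlertJob
  # In a real app: include Rails.application.routes.url_helpers
  include FakeUrlHelpers

  def perform
    call_alert_path # now resolves as an instance method of the job
  end
end

puts AlertJob.new.perform # → /twilio/call_alert
```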
Source: (StackOverflow)
The purpose of this code is to send an email to a user with an array of products whose discount percentages have reached a given threshold. The products are returned by user.notifications, which returns an array of 2-element arrays with the following format:
[[product, notification]]
A notification is an object composed of a discount percentage and a product_id. send_notification? checks whether a user has been sent a notification for the product being passed in within the last 7 days, and returns a boolean (true if they have not received an email in the last week, false if they have).
I have the following job and accompanying test:
class ProductNotificationEmailJob
  include SuckerPunch::Job

  def perform(user)
    user_notifications = user.notifications || []
    products = []
    notifications = []

    user_notifications.each do |notification|
      if notification[1].send_notification?
        products << notification[0]
        notifications << notification[1]
      end
    end

    NotificationMailer.notification_email(user, products).deliver

    notifications.each do |notification|
      notification.update(notification_date: Time.now)
    end
  end
end
test:
require 'rails_helper'

describe ProductNotificationEmailJob do
  it 'performs' do
    notification = ObjectCreation.create_notification
    expect(notification.notification_date).to be_nil
    user = notification.user
    stub = double("Object")
    expect(NotificationMailer).to receive(:notification_email).with(user, [notification.my_product.product]).and_return(stub)
    expect(stub).to receive(:deliver)
    ProductNotificationEmailJob.new.perform(user)
    expect(MyProductsNotification.last.notification_date).to_not be_nil
  end
end
When I take out the line include SuckerPunch::Job, the test passes fine, but I cannot get it to pass with that line in. For some reason, with the include SuckerPunch::Job line, it seems as though the object-creation method does not work and returns nil for all values. I apologize in advance if I didn't give enough detail, but I didn't want to post too much code. Leave a comment and I will include any details requested. Thank you for your time; I really appreciate it!
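One possible factor: with SuckerPunch::Job included, instantiation goes through Celluloid's actor machinery, so the spec may not exercise the object the way a plain method call would. The gem's README documents an inline testing mode, require 'sucker_punch/testing/inline', which makes jobs run synchronously in specs. The idea behind that mode, sketched without the gem (EmailJob and its async stub are invented here):

```ruby
class EmailJob
  def perform(user)
    "sent to #{user}"
  end

  # inline stand-in: async returns the job itself, so chained perform
  # calls run synchronously instead of being handed to an actor thread
  def async
    self
  end
end

puts EmailJob.new.async.perform("bob") # → sent to bob
```

With synchronous execution, side effects like record creation happen before the spec's expectations run, which is usually what a unit test wants.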
Source: (StackOverflow)
This is probably not the brightest question... I'm feeling especially dense about this. I'm using a really nifty gem, Fist of Fury, to do recurring background tasks with Sucker Punch: https://github.com/facto/fist_of_fury. As the author of the gem states, recurrence rules are built using the ice_cube gem: https://github.com/seejohnrun/ice_cube.
He gives an example of a recurring job as such:
class SayHiJob
  include SuckerPunch::Job
  include FistOfFury::Recurrent

  recurs { minutely }

  def perform
    Rails.logger.info 'Hi!'
  end
end
I read through the docs for Fist of Fury and Ice Cube, both as linked above, and just want to confirm my understanding:
- Fist of Fury requires an Ice Cube rule inside the recurs {} block; recurs is essentially a replacement for schedule from Ice Cube.
- You can use a predefined rule from Ice Cube, such as minutely in the example or daily(2) (every other day), or you can define your own rules.
If you define your own rules, you can just put the rule right above the recurs, since the ice_cube gem is already installed with Fist of Fury. Something like this (causes the activity to happen every 13th of the month):
rule = Rule.monthly.day_of_month(13)
recurs { rule }
If the latter is right, I'd love to know how to write a rule for a specific time of day. Something like rule = Rule.daily(2).time_of_day(3), meaning I want the activity to happen every other day at 3am.
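On the last point: if I read the ice_cube docs correctly, the time-of-day validation is hour_of_day rather than time_of_day, so "every other day at 3am" would be something like IceCube::Rule.daily(2).hour_of_day(3) (hedged: worth verifying against the gem's documentation). The occurrence pattern that rule is meant to produce, sketched in plain Ruby without the gem:

```ruby
require "time"

# every 2nd day at 03:00, starting 2015-01-01
start = Time.parse("2015-01-01 03:00:00")
occurrences = (0...3).map { |i| start + i * 2 * 24 * 3600 }

puts occurrences.map { |t| t.strftime("%Y-%m-%d %H:%M") }
# → 2015-01-01 03:00
#   2015-01-03 03:00
#   2015-01-05 03:00
```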
Source: (StackOverflow)
I've defined the following SuckerPunch Job:
class MyWorker
  include SuckerPunch::Job

  def perform(account)
    @account = account
  end

  def params
    @account
  end
end
And I want to test it using RSpec:
describe MyWorker do
  before { subject.perform("test@mail.nl") }

  its(:params) { should eq "test@mail.nl" }
end
This works fine when testing without include SuckerPunch::Job, probably because the subject refers to an instance of ActorProxy instead of MyWorker.
How should I test MyWorker? Or how should I get access to the instance of MyWorker? I've read the Gotchas described in the Celluloid wiki, but the #wrapped_object method doesn't seem to exist (anymore).
Source: (StackOverflow)
I am implementing sucker_punch with ActiveJob for the first time, and I can't understand why my classes won't initialize.
app/jobs/init_org_accounts.rb (I've tried app/jobs/init_org_accounts_job.rb as well):
class InitOrgAccountsJob < ActiveJob::Base
  # include SuckerPunch::Job
  queue_as :default

  def perform(org)
    ActiveRecord::Base.connection_pool.with_connection do
      org.process_accounts
    end
  end
end
config/initializers/sucker_punch.rb
Rails.application.configure do
  config.active_job.queue_adapter = :sucker_punch
end
In the console:
o = Organization.create(name: 'test')
InitOrgAccountsJob.new.perform(o)

NameError: uninitialized constant InitializeOrgAccountsJob
from (pry):2:in `<main>'
Can anyone assist me with figuring out the basics of ActiveJob with sucker_punch?
Source: (StackOverflow)
I have a simple SuckerPunch job, and I am trying to make it so only one job runs at any given time. I'm struggling to work it out; I've tried playing with Celluloid and Ruby concurrency.
What I have:
DataChangeJob.new.async.perform
with
class DataChangeJob
  include SuckerPunch::Job

  def perform
    value = Random.rand
    SuckerPunch.logger.info("starting #{value}")
    sleep(5)
    SuckerPunch.logger.info("running data change #{value}")
  end
end
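sucker_punch 1.x documents a workers class method for sizing a job's Celluloid pool, so adding `workers 1` inside DataChangeJob should make jobs run one at a time (worth confirming against the gem's README for your version). What a single-worker pool does, sketched with a plain one-consumer queue:

```ruby
require "thread"

queue   = Queue.new
results = []

# exactly one consumer thread, so queued jobs execute strictly one at a time
worker = Thread.new do
  while (job = queue.pop) != :stop
    results << job.call
  end
end

3.times { |i| queue << -> { i } }  # enqueue three "jobs"
queue << :stop
worker.join

puts results.inspect # → [0, 1, 2]
```

Jobs enqueued while one is running simply wait in the queue rather than running concurrently, which is the behavior the question asks for.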
Source: (StackOverflow)
I have a Classified model where I use an after_create callback to check a user's keywords and send an email notification.
This email is sent by a background job, using ActiveJob with sucker_punch as the backend driver.
I see in the logs that 3 jobs are being queued:
[ActiveJob] Enqueued ActionMailer::DeliveryJob (Job ID: 8843b126-18fe-4cc1-b2f3-41141a199bcb) to SuckerPunch(mailers) with arguments: "NotificationMailer", "keyword_found", "deliver_now", gid://clasificados/Classified/233, gid://clasificados/User/1
[ActiveJob] Enqueued ActionMailer::DeliveryJob (Job ID: 591ce6eb-34d1-4381-93ea-4b708171996f) to SuckerPunch(mailers) with arguments: "NotificationMailer", "keyword_found", "deliver_now", gid://clasificados/Classified/234, gid://clasificados/User/1
[ActiveJob] Enqueued ActionMailer::DeliveryJob (Job ID: 3b1de0ea-f48d-41f2-be5a-8f5b2369b8ea) to SuckerPunch(mailers) with arguments: "NotificationMailer", "keyword_found", "deliver_now", gid://clasificados/Classified/235, gid://clasificados/User/1
But I only received 2 emails...
I see errors in the logs like:
Terminating 6 actors...
Terminating task: type=:finalizer, meta={:method_name=>:__shutdown__}, status=:receiving
Celluloid::TaskFiber backtrace unavailable. Please try `Celluloid.task_class = Celluloid::TaskThread` if you need backtraces here.`
Terminating task: type=:call, meta={:method_name=>:perform}, status=:callwait
Celluloid::TaskFiber backtrace unavailable. Please try `Celluloid.task_class = Celluloid::TaskThread` if you need backtraces here.
Celluloid::PoolManager: async call `perform` aborted!
Celluloid::Task::TerminatedError: task was terminated
/home/angel/.gem/ruby/2.2.2/gems/celluloid-0.16.0/lib/celluloid/tasks/task_fiber.rb:34:in `terminate'
/home/angel/.gem/ruby/2.2.2/gems/celluloid-0.16.0/lib/celluloid/actor.rb:345:in `each'
/home/angel/.gem/ruby/2.2.2/gems/celluloid-0.16.0/lib/celluloid/actor.rb:345:in `cleanup'
/home/angel/.gem/ruby/2.2.2/gems/celluloid-0.16.0/lib/celluloid/actor.rb:329:in `shutdown'
/home/angel/.gem/ruby/2.2.2/gems/celluloid-0.16.0/lib/celluloid/actor.rb:164:in `run'
/home/angel/.gem/ruby/2.2.2/gems/celluloid-0.16.0/lib/celluloid/actor.rb:130:in `block in start'
/home/angel/.gem/ruby/2.2.2/gems/celluloid-0.16.0/lib/celluloid/thread_handle.rb:13:in `block in initialize'
/home/angel/.gem/ruby/2.2.2/gems/celluloid-0.16.0/lib/celluloid/actor_system.rb:32:in `block in get_thread'
/home/angel/.gem/ruby/2.2.2/gems/celluloid-0.16.0/lib/celluloid/internal_pool.rb:130:in `call'
/home/angel/.gem/ruby/2.2.2/gems/celluloid-0.16.0/lib/celluloid/internal_pool.rb:130:in `block in create'
Terminating task: type=:call, meta={:method_name=>:perform}, status=:callwait
Celluloid::TaskFiber backtrace unavailable. Please try `Celluloid.task_class = Celluloid::TaskThread` if you need backtraces here.
Celluloid::PoolManager: async call `perform` aborted!
Celluloid::Task::TerminatedError: task was terminated
/home/angel/.gem/ruby/2.2.2/gems/celluloid-0.16.0/lib/celluloid/tasks/task_fiber.rb:34:in `terminate'
/home/angel/.gem/ruby/2.2.2/gems/celluloid-0.16.0/lib/celluloid/actor.rb:345:in `each'
/home/angel/.gem/ruby/2.2.2/gems/celluloid-0.16.0/lib/celluloid/actor.rb:345:in `cleanup'
/home/angel/.gem/ruby/2.2.2/gems/celluloid-0.16.0/lib/celluloid/actor.rb:329:in `shutdown'
/home/angel/.gem/ruby/2.2.2/gems/celluloid-0.16.0/lib/celluloid/actor.rb:164:in `run'
/home/angel/.gem/ruby/2.2.2/gems/celluloid-0.16.0/lib/celluloid/actor.rb:130:in `block in start'
/home/angel/.gem/ruby/2.2.2/gems/celluloid-0.16.0/lib/celluloid/thread_handle.rb:13:in `block in initialize'
/home/angel/.gem/ruby/2.2.2/gems/celluloid-0.16.0/lib/celluloid/actor_system.rb:32:in `block in get_thread'
/home/angel/.gem/ruby/2.2.2/gems/celluloid-0.16.0/lib/celluloid/internal_pool.rb:130:in `call'
/home/angel/.gem/ruby/2.2.2/gems/celluloid-0.16.0/lib/celluloid/internal_pool.rb:130:in `block in create'
Terminating task: type=:finalizer, meta={:method_name=>:__shutdown__}, status=:receiving
Celluloid::TaskFiber backtrace unavailable. Please try `Celluloid.task_class = Celluloid::TaskThread` if you need backtraces here.
Model:
class Classified < ActiveRecord::Base
  after_create :find_matched_keywords

  def find_matched_keywords
    User.all.each do |u|
      u.profile.keywords.scan(/[a-zA-Z\d]+/) do |k|
        if self[:content].downcase.include?(k)
          SendEmailJob.new.async.perform(self, u)
          break
        end
      end
    end
  end
end
Job:
class SendEmailJob < ActiveJob::Base
  include SuckerPunch::Job
  queue_as :default

  def perform(classified, user)
    NotificationMailer.keyword_found(classified, user).deliver_later
  end
end
Any idea what could be happening?
Thanks in advance :D
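One plausible culprit (an assumption based on the code, not proven by the logs alone): SendEmailJob already runs asynchronously via the sucker_punch adapter, yet perform calls deliver_later, so every email makes a second hop onto the mailers queue; the "Terminating task" lines suggest the Celluloid pool shut down while such a hop was still pending, which would explain a missing email. The extra hop, sketched with invented stand-ins:

```ruby
require "thread"

QUEUE = Queue.new  # stands in for the mailers job queue
SENT  = []         # stands in for mail actually delivered

def deliver_later(mail)  # stand-in for ActionMailer's deliver_later
  QUEUE << mail          # only sent if a worker drains the queue in time
end

def deliver_now(mail)    # stand-in for deliver_now
  SENT << mail           # delivered immediately, inside the current job
end

deliver_later("keyword_found") # what the posted perform does: queues AGAIN
deliver_now("keyword_found")   # alternative: send inside the async job

puts SENT.inspect # → ["keyword_found"]
puts QUEUE.size   # → 1 (still pending; lost if the process shuts down now)
```

If that is the cause, dropping the include SuckerPunch::Job line (redundant inside an ActiveJob class) and using deliver_now inside perform would remove the second hop.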
Source: (StackOverflow)