EzDevInfo.com

locust

Locust: a scalable user load testing tool written in Python; a modern load testing framework.

How to keep session and CSRF token in a Locust test

I want to test my Django web app with locust.io. The form I am testing is secured with a CSRF token. I do the following:

from locust import TaskSet, task

class WebsiteTasks(TaskSet):
    def on_start(self):
        print("On start")

    @task
    def post_answer(self):
        self.client.get("/polls/2/vote")
        self.client.post("/polls/2/vote/", {"choice": "8"})

Why do I get a 403 error saying the POST is forbidden? The Locust documentation says that the client object keeps the session alive.
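A 403 from Django's CSRF middleware usually means the POST is missing the csrfmiddlewaretoken field (or, on HTTPS, the Referer header), not that the session was lost; the session cookie and the CSRF token are separate things. A sketch of extracting the token from the fetched form page (the helper and regex are illustrative, not part of Locust; you can also read the csrftoken cookie from response.cookies):

```python
import re

def extract_csrf_token(html):
    """Pull Django's hidden csrfmiddlewaretoken value out of a rendered form.

    Illustrative helper, not Locust API: in on_start you would GET the form
    page, extract the token from the body (or the csrftoken cookie), and
    include it in every POST.
    """
    match = re.search(
        r"name=['\"]csrfmiddlewaretoken['\"]\s+value=['\"]([^'\"]+)['\"]", html)
    return match.group(1) if match else None
```

In the task you would then do response = self.client.get("/polls/2/vote") and POST {"choice": "8", "csrfmiddlewaretoken": extract_csrf_token(response.text)} instead of the bare choice dict.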


Source: (StackOverflow)

Locust.io send requests at an exact rate

Is there a way in locust.io to set the rate at which the requests will be sent? I am using locust to see how my database will perform under increased load. I am not interested in the max request rate the database can take but rather the performance of the database when it receives a specific rate. For example I want to see the latency of the read operations under a specific write load.
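Locust's min_wait/max_wait give a randomized think time rather than a fixed rate, but a constant per-user rate can be approximated by subtracting the time the request itself took from a target interval before sleeping. A sketch (the interval value and function name are assumptions, not Locust API):

```python
TARGET_INTERVAL = 0.5  # assumed: seconds between request starts, per simulated user

def pacing_wait(task_duration, target_interval=TARGET_INTERVAL):
    """Remaining wait so consecutive requests start target_interval apart.

    If the request itself took longer than the interval, don't wait at all.
    """
    return max(0.0, target_interval - task_duration)
```

In a task you would record start = time.time(), issue the request, then sleep for pacing_wait(time.time() - start); total throughput is then roughly num_users / target_interval, independent of response-time jitter.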


Source: (StackOverflow)


How can I get a detailed error report in the Locust load testing tool?

When I hit a web page in the Locust load testing tool, it throws an error, and I want a detailed summary of that error.

How can I get the error report in detail? Is there a function like traceback in Locust that can be used to print the error list in detail?

My code is like this:

from locust import TaskSet, task
import traceback

class MyTaskSet(TaskSet):

    @task(1)
    def index(self):
        try:
            self.client.get("/")
        except Exception:
            # a class-level try/except only runs while the class is being
            # defined; catch inside the task and print the full traceback
            print(traceback.format_exc())

If I test a URL like https://www.python.org/, it throws an error like this:

Error :ConnectionError(ProtocolError('Connection aborted.', BadStatusLine('''',)),)

I want to get this error in detail.


Source: (StackOverflow)

Locust: How to invoke the test through an API

I would like to invoke Locust load tests through an API to be able to start tests from a CI tool.

I don't see much documentation about such a scenario; there is no "Runner" or similar class in the Locust API documentation.

I checked the "locust" command which becomes available after installation on Windows. It is a .exe, so I am not sure what it does or how it actually starts the test.

So, the specific question is: is there an interface to start the test from another Python program?
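The locust .exe is just a console-script wrapper around Locust's command-line interface, so one pragmatic way to drive it from CI or another Python program is to assemble and run the headless command with subprocess. The flags below are from pre-1.0 Locust (--no-web, -c, -r, -n); check `locust --help` against your installed version:

```python
import subprocess

def build_locust_command(locustfile, host, clients, hatch_rate, num_requests):
    """Assemble a headless Locust invocation (pre-1.0 flags; verify locally)."""
    return [
        "locust", "-f", locustfile,
        "--no-web",              # run without the web UI
        "-c", str(clients),      # number of concurrent simulated users
        "-r", str(hatch_rate),   # users spawned per second
        "-n", str(num_requests), # stop after this many requests
        "--host", host,
    ]

def run_load_test(**kwargs):
    """Run the test; the summary stats arrive on stdout/stderr."""
    return subprocess.run(build_locust_command(**kwargs),
                          capture_output=True, text=True)
```

A CI job can then call run_load_test(locustfile="locustfile.py", host="http://example.com", clients=50, hatch_rate=10, num_requests=10000) and fail the build on a nonzero return code.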


Source: (StackOverflow)

Way to use locust.io by supplying user list

I need to stress-test a system and http://locust.io seems like the best way to go about this. However, it looks like it is set up to use the same user every time. I need each spawn to log in as a different user. How do I go about setting that up? Alternatively, is there another system that would be good to use?
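A common pattern is to keep a shared pool of credentials and hand each spawned user the next entry from it in on_start. A sketch with hypothetical accounts (in practice you would load these from a CSV or config file):

```python
import itertools

# Hypothetical credential pool; in practice load this from a CSV or env config.
CREDENTIALS = [
    ("alice@example.com", "pw-alice"),
    ("bob@example.com", "pw-bob"),
    ("carol@example.com", "pw-carol"),
]

_pool = itertools.cycle(CREDENTIALS)

def next_credentials():
    """Hand each newly spawned simulated user the next login from the pool."""
    return next(_pool)
```

In on_start: username, password = next_credentials(), then POST them to the login endpoint. itertools.cycle wraps around when there are more spawned users than accounts.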


Source: (StackOverflow)

ec2 instance running locust.io issues

I'm trying to run a locust.io load test on an ec2 instance - a t2.micro. I fire up 50 concurrent users, and initially everything works fine, with the CPU load reaching ~15%. After an hour or so though, the network out shows a drop of about 80% -


Any idea why this is happening? It's certainly not due to CPU credits. Maybe I reached the network limits for a t2 micro instance?

Thanks


Source: (StackOverflow)

Making Locust log in to a web application

I want Locust to log in to my web application and start clicking the links inside it.

With this code I just get activity for the front page with the login, and I don't get any activity from inside the application.

Code:

import random
from locust import HttpLocust, TaskSet, task
from pyquery import PyQuery


class WalkPages(TaskSet):
    def on_start(self):
        self.client.post("/", {
            "UserName": "my@email.com",
            "Password": "2Password!",
            "submit": "Sign In"
        })
        self.index_page()

    @task(10)
    def index_page(self):
        r = self.client.get("/Dashboard.mvc")
        pq = PyQuery(r.content)
        link_elements = pq("a")
        self.urls_on_current_page = []
        for l in link_elements:
          if "href" in l.attrib:
            self.urls_on_current_page.append(l.attrib["href"])

    @task(30)
    def load_page(self):
        url = random.choice(self.urls_on_current_page)
        r = self.client.get(url)


class AwesomeUser(HttpLocust):
    task_set = WalkPages
    host = "https://myenv.beta.webapp.com"
    min_wait = 20  * 1000
    max_wait = 60  * 1000

I get the following message in the terminal after the first round.

[2015-02-13 12:08:43,740] webapp-qa/ERROR/stderr: Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/locust/core.py", line 267, in run
    self.execute_next_task()
  File "/usr/local/lib/python2.7/dist-packages/locust/core.py", line 293, in execute_next_task
    self.execute_task(task["callable"], *task["args"], **task["kwargs"])
  File "/usr/local/lib/python2.7/dist-packages/locust/core.py", line 305, in execute_task
    task(self, *args, **kwargs)
  File "/home/webapp/LoadTest/locustfile.py", line 31, in load_page
    url = random.choice(self.urls_on_current_page)
  File "/usr/lib/python2.7/random.py", line 273, in choice
    return seq[int(self.random() * len(seq))]  # raises IndexError if seq is empty
IndexError: list index out of range
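The IndexError in the traceback means urls_on_current_page was empty when load_page ran: typically the login POST failed, so /Dashboard.mvc returned a page (for example the login form) with no usable links. A minimal guard, with an assumed fallback URL:

```python
import random

def pick_url(urls, fallback="/Dashboard.mvc"):
    """Choose a crawled link, falling back to a known page when none were found."""
    return random.choice(urls) if urls else fallback
```

Using pick_url(self.urls_on_current_page) in load_page stops the crash, but the underlying login failure still needs fixing (check the POST target, field names, and any CSRF/anti-forgery token the form requires).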

Source: (StackOverflow)

Locust.io Load Testing getting "Connection aborted BadStatusLine" Errors

I'm using Locust.io to load test an application. I will get a random error that I am unable to pinpoint the problem:

1) ConnectionError(ProtocolError(\'Connection aborted.\', BadStatusLine("\'\'",)),)

2) ConnectionError(ProtocolError('Connection aborted.', error(104, 'Connection reset by peer')),)

The first one happens a few times every 1,000,000 requests or so, and it seems to come in groups: there will be 5-20 all at once, and then it is fine. The second only happens every couple of days.

CPU and memory usage are well below maximum load on all the servers: the database server, the app server, and the machine running locust.io.

The servers are medium-sized Linode servers running Ubuntu 14.04. The app is Django and the database is PostgreSQL. I have already increased the maximum open file limit, but I am wondering if something else on the server needs to be increased that could be leading to the occasional errors.

From what I have been able to gather by searching, the error might have something to do with the Python requests library.

Any help would be greatly appreciated.


Source: (StackOverflow)

Report on custom parameters without forking Locust

I'm measuring performance of various components of my app and I'd like to get some statistics using Locust load testing framework. Currently, there's a way to measure average response time using a built-in locust method.

Is it possible to get statistics on a custom parameter (which is a part of server response) and add it to reports without forking Locust? Perhaps some monkey-patching options are available?
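One option that doesn't require forking: Locust's stats tables are fed through event hooks, and nothing stops you from firing a synthetic "request" whose response_time is the custom number parsed out of the server response; it then shows up under its own name in the normal reports. A minimal sketch of the hook mechanics using a stub (real code would fire locust.events.request_success instead of the stub defined here):

```python
class EventHook(object):
    """Minimal stand-in for locust's EventHook: subscribe with +=, fire with kwargs."""
    def __init__(self):
        self._handlers = []

    def __iadd__(self, handler):
        self._handlers.append(handler)
        return self

    def fire(self, **kwargs):
        for handler in self._handlers:
            handler(**kwargs)

# Stand-in for locust.events.request_success; record the custom samples.
request_success = EventHook()
custom_samples = []
request_success += lambda name, response_time, **kw: custom_samples.append(
    (name, response_time))

def report_custom_metric(name, value_ms):
    """Feed a server-reported timing into the stats pipeline as a pseudo-request."""
    request_success.fire(request_type="CUSTOM", name=name,
                         response_time=value_ms, response_length=0)
```

In a real task you would parse the parameter out of the response body and call report_custom_metric, and the value would be aggregated alongside ordinary request timings.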


Source: (StackOverflow)

What is the purpose of response time distribution in locust.io?

I would like to ask the experts here: what is the purpose of the response time distribution in locust.io? I could not figure out what 50%, 60%, and so on mean in the downloaded CSV.
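The percentage columns are response-time percentiles: the "50%" value is the response time within which half of all requests completed (the median), "90%" covers nine requests out of ten, and so on. They describe the shape of the latency distribution rather than just its average. A nearest-rank sketch of how such a column can be computed (Locust's own bucketing may differ slightly):

```python
import math

def percentile(times_ms, pct):
    """Response time (ms) at or below which pct percent of requests completed.

    Nearest-rank method: sort the samples and take the smallest value whose
    rank covers the requested fraction.
    """
    data = sorted(times_ms)
    rank = math.ceil(pct / 100.0 * len(data))
    return data[max(rank - 1, 0)]
```

For the samples [100, 200, 300, 400, 500] ms, the 50% column would read 300 ms: half of the requests finished in 300 ms or less.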


Source: (StackOverflow)

How do I execute a specific task in my main TaskSet's 'on_start' when I have nested TaskSets?

My Locust structure is as follows:

MainTaskSet
   |
   +----> TaskSet1
   |          |
   |          +---> task11 (weight: 1)
   |          |
   |          +---> stop (weight: 5) (interrupts this TaskSet)
   |
   +----> TaskSet2
              |
              +---> task21 (weight: 1)
              |
              +---> task22 (weight: 1)
              |
              +---> stop (weight: 5) (interrupts this TaskSet)

According to the nested TaskSet section of the Locust docs, each of my nested TaskSets needs to have the stop task, which interrupts the respective nested TaskSet.

Now, task21 and task22 need task11 to have been executed at least once to do certain things. So what I'm doing currently is to call schedule_task to schedule TaskSet2 in MainTaskSet::on_start.

However, I find that this schedule_task call almost always calls the stop task in TaskSet1 which is to be expected since that task is 5 times more likely to be called.

So I then decided to do this: In task21 and task22, I check some state variable that I maintain to see if task11 has been run at least once. If not, I call interrupt. So this way, these tasks would never continue beyond a certain point until task11 has been executed.

The problem with this however is that it wastes a lot of time sometimes because task21 and task22 will be scheduled for quite a while before task11, and they'd just keep interrupting themselves => my load test is just wasting CPU.

I see another way to get around this problem, but it seems to me to be quite complicated for what I feel may have a simple solution that I do not know of. This way would be to set task11 to have a really large weight initially, and once it has run once, I reset the weight to something that is more realistic.

But is there a way I can call task11 directly from MainTaskSet::on_start?
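For what it's worth, in (pre-1.0) Locust the @task decorator only tags the method with a weight for the scheduler; the method itself remains an ordinary callable, so on_start can invoke it directly, for example TaskSet1(self).task11() (nested TaskSets take their parent as the constructor argument). A minimal stand-in decorator demonstrating why the direct call works:

```python
def task(weight=1):
    """Stand-in for locust's @task: it only annotates the function with a weight."""
    def decorator(fn):
        fn.locust_task_weight = weight  # attribute name mirrors locust's convention
        return fn
    return decorator

class TaskSet1(object):
    @task(1)
    def task11(self):
        # still a plain method: callable directly, no scheduler involved
        return "task11 ran"
```

Calling task11 directly this way from MainTaskSet's on_start guarantees it runs exactly once before any weighted scheduling starts, with no weight tricks or interrupt loops.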


Source: (StackOverflow)

Python Locust - herror: [Errno 1] Unknown host

I'm trying to test my application using python Locust, but I can't get a basic version to work. My locustfile.py file:

from locust import HttpLocust, TaskSet, task

class UserBehavior(TaskSet):

    @task(1)
    def test_get(self):
        self.client.get("/")

class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    min_wait=5000
    max_wait=9000

I ran

locust --host=http://example.com

And got this error:

[2015-08-04 23:10:11,734] my-macbook-pro.local/ERROR/stderr: herror: [Errno 1] Unknown host

Wondering if it's just me putting in the wrong host, I tried facebook, google, and other hosts with no success either.

What am I doing wrong here?


Source: (StackOverflow)

Default install of Python Locust throws exception

Operating System: CentOS 7.0 x64
Interpreter: Python 3.4.1 compiled from source with --enable-shared

Example command:

$ python3.4 -c "import locust"

Traceback:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/local/lib/python3.4/site-packages/locust/__init__.py", line 1, in <module>
    from core import HttpLocust, Locust, TaskSet, task
  File "/usr/local/lib/python3.4/site-packages/locust/core.py", line 106
    raise LocustError, LocustError("A task inside a Locust class' main TaskSet
    (`%s.task_set` of type `%s`) seems to have called interrupt() or raised
    an InterruptTaskSet exception. The interrupt() function is used to hand over execution
    to a parent TaskSet, and should never be called in the main TaskSet which a Locust
    class' task_set attribute points to." % (type(self).__name__, self.task_set.__name__)), sys.exc_info()[2]
                      ^
SyntaxError: invalid syntax

I'm confused... why is it raising a SyntaxError inside the exception string?

I am really not sure what to even try here. I did a strace and the result is basically 100% the same, so no extra clues there.
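The caret actually marks Python 2's three-argument raise statement (raise Type, value, traceback), which is invalid syntax in Python 3; the word in the string is incidental. Locust at that time supported only Python 2, so importing it under Python 3.4 fails while byte-compiling core.py. The Python 3 spelling of the same re-raise, as a runnable sketch (the function name is illustrative):

```python
import sys

def reraise_with_traceback(exc_type, message):
    """Python 3 equivalent of Python 2's `raise Type, value, tb` form."""
    try:
        raise ValueError("original failure")  # stand-in for the inner error
    except ValueError:
        # Python 2: raise exc_type, exc_type(message), sys.exc_info()[2]
        raise exc_type(message).with_traceback(sys.exc_info()[2])
```

So the fix here is not in your code at all: run Locust under Python 2, or use a Locust release that has been ported to Python 3.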


Source: (StackOverflow)

Locust.io: Controlling the request per second parameter

I have been load testing my API server using Locust.io on EC2 compute-optimized instances. It provides an easy-to-configure option for setting the consecutive request wait time and the number of concurrent users. In theory, rps = #_users / wait_time. However, while testing, this rule breaks down beyond a fairly low threshold of #_users (in my experiment, around 1200 users). The variables hatch_rate and #_of_slaves, including in a distributed test setting, had little to no effect on the rps.

Experiment info

The test was done on a C3.4x AWS EC2 compute node (AMI image) with 16 vCPUs, General Purpose SSD storage, and 30GB RAM. During the test, CPU utilization peaked at 60% max (depending on the hatch rate, which controls the concurrent processes spawned), staying under 30% on average.

Locust.io

Setup: uses pyzmq, with each vCPU core set up as a slave. A single POST request with a ~20 byte request body and a ~25 byte response body. Request failure rate: < 1%, with a mean response time of 6ms.

Variables: time between consecutive requests set to 450ms (min: 100ms, max: 1000ms), hatch rate at a comfy 30 per second, and RPS measured by varying #_users.

Locust.io throughput graph

The RPS follows the equation as predicted for up to 1000 users. Increasing #_users after that has diminishing returns, with a cap reached at roughly 1200 users. #_users isn't the only independent variable here; changing the wait time affects the RPS as well. However, changing the experiment setup to a 32-core instance (c3.8x) or to 56 cores (in a distributed setup) doesn't affect the RPS at all.

So really, what is the way to control the RPS? Is there something obvious I am missing here?
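For reference, with N users each looping "wait, then request", steady-state throughput is N divided by the mean cycle time (think time plus response time); once the client host or server starts queueing, the effective response time grows and the curve flattens regardless of N. A sketch of the expected figure:

```python
def expected_rps(num_users, mean_wait_s, mean_response_s=0.0):
    """Steady-state requests/sec: each user issues one request per cycle of
    (think time + response time) seconds."""
    return num_users / (mean_wait_s + mean_response_s)
```

By this model, 1000 users at a 450ms mean wait and 6ms response give roughly 1000 / 0.456, about 2193 rps; a plateau beyond that points at a bottleneck (client-side GIL, connection limits, server queueing) rather than at the formula.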


Source: (StackOverflow)

Locust - Python load testing. No documentation provided, so what do the ramping terms mean?

Locust gives the option to ramp up, for which you need to enter certain input values. I didn't find any documentation explaining the following terms, whereas the other ones (not mentioned) seem self-explanatory:

Ramping

Hatch stride
Precision (min value of hatch stride)
Calibration time (seconds)
Percentile (%)
Max percentile response time (ms)
Accepted fail ratio (%)

Thanks!


Source: (StackOverflow)