
I need to stress-test a system and http://locust.io seems like the best way to go about this. However, it looks like it is set up to use the same user every time. I need each spawn to log in as a different user. How do I go about setting that up? Alternatively, is there another system that would be good to use?

jloosli

4 Answers


Locust author here.

By default, each HttpLocust user instance has an HTTP client with its own separate session.

Locust doesn't have any feature for providing a list of user credentials or similar. However, your load-testing scripts are just Python code, and luckily it's trivial to implement this yourself.

Here's a short example:

# locustfile.py

from locust import HttpLocust, TaskSet, task

USER_CREDENTIALS = [
    ("user1", "password"),
    ("user2", "password"),
    ("user3", "password"),
]

class UserBehaviour(TaskSet):
    def on_start(self):
        if len(USER_CREDENTIALS) > 0:
            user, passw = USER_CREDENTIALS.pop()
            self.client.post("/login", {"username":user, "password":passw})

    @task
    def some_task(self):
        # user should be logged in here (unless the USER_CREDENTIALS ran out)
        self.client.get("/protected/resource")

class User(HttpLocust):
    task_set = UserBehaviour
    min_wait = 5000
    max_wait = 60000

The above code wouldn't work when running Locust distributed, since the same code runs on each slave node and the nodes don't share any state. Therefore you would have to introduce some external datastore which the slave nodes could use to share state (e.g. PostgreSQL, Redis, memcached, or something else).
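As a minimal sketch of that idea, the helper below keeps the store in an in-process list so it runs standalone; the class name `CredentialStore` is mine, and in a real distributed run you would back `pop_credential()` with an atomic operation on the external store (e.g. Redis's LPOP, shown in the comment) so that no two slave nodes can claim the same user:

```python
# Sketch of a shared credential store. The in-memory list below is a
# stand-in that runs without any external service; with Redis you would
# seed a list once and have every slave node pop from it atomically:
#
#     r = redis.StrictRedis(host="redis-host")
#     raw = r.lpop("user_credentials")   # atomic across all slave nodes

class CredentialStore(object):
    def __init__(self, credentials):
        self._credentials = list(credentials)

    def pop_credential(self):
        # Returns a (user, password) pair, or None when the list runs out --
        # mirroring the "credentials ran out" case in the example above.
        if self._credentials:
            return self._credentials.pop(0)
        return None

store = CredentialStore([("user1", "password"), ("user2", "password")])
```

`on_start()` would then call `store.pop_credential()` instead of `USER_CREDENTIALS.pop()`.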

heyman
  • What is the behaviour if the list is empty? Meaning what happens if we specify to spawn 100 users, but our user list only has 50 names? Will we spawn 100 users? – bearrito Sep 19 '14 at 00:52
  • Expanding on my previous question. What if we spawn 50 users but have 100 names in our list, is it possible for a locust to be killed. It seems like self.interrupt() might be able to accomplish this or could I just throw an exception. – bearrito Sep 19 '14 at 00:55
  • In the above example you would get an error if you'd spawn more users than you had user credentials. Locust instances (users) are not meant to die during a test run. If you want to simulate that a user leaves, and another arrives, it's better to re-cycle the running locust instances. – heyman Sep 23 '14 at 08:53
  • I think the best way is to send user information in the headers for login-required pages. – Mesut GUNES Oct 07 '15 at 10:16
  • What @heyman suggested would definitely work, but using a DB is more hassle for what it's worth tho. I think each slave can randomize the list and only work on a subset of the list. This should work relatively well... there might be some credentials that never get called, but then again, we're doing load testing here, not testing code correctness :) – the1plummie May 21 '16 at 00:40
  • Is there a way to put user back in the list after worker is done with it. So that test can run continuously but same user is not signed in in multiple sessions at the same time? – raitisd Sep 12 '16 at 12:35
  • Or, is there a way to reset the list once all users ran out? – raitisd Sep 12 '16 at 12:53
  • @raitisd check my solution below. You can use it continuously. – Mesut GUNES Dec 23 '16 at 13:33

Alternatively, you can create a users.py module to hold the user information your test cases need; in my example it holds an email and cookies. You can then pick users at random in your tasks. See below:

# locustfile.py
from locust import HttpLocust, TaskSet, task
import random

from user_agent import *  # assumed to provide get_user_agent()
from users import users_info


class UserBehaviour(TaskSet):
    def get_user(self):
        user = random.choice(users_info)
        return user

    @task(10)
    def get_siparislerim(self):
        user = self.get_user()
        user_agent = get_user_agent()  # module-level helper from user_agent
        r = self.client.get("/orders", headers={"Cookie": user[1], "User-Agent": user_agent})

class User(HttpLocust):
    task_set = UserBehaviour
    min_wait = 5000
    max_wait = 60000

The user and the user-agent can each be fetched by a function. This way, we can distribute the test across many users and different user-agents.
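The code above assumes a `get_user_agent` helper from the user_agent module, which isn't shown in the answer; a hypothetical minimal version could simply pick a User-Agent string at random:

```python
# user_agent.py -- hypothetical sketch; the real module isn't shown above
import random

USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
    "Mozilla/5.0 (X11; Linux x86_64)",
]

def get_user_agent():
    # Pick a different browser identity per request, so the load is
    # spread across user-agents as described above.
    return random.choice(USER_AGENTS)
```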

# users.py

users_info = [
    ['performancetest.1441926507@gmail.com', 'cookies_created_by_each_user'],
    ['performancetest.1441926506@gmail.com', 'cookies_created_by_each_user'],
    ['performancetest.1441926501@gmail.com', 'cookies_created_by_each_user'],
    ['performancetest.1441926499@gmail.com', 'cookies_created_by_each_user'],
    ['performancetest.1441926494@gmail.com', 'cookies_created_by_each_user'],
    ['performancetest.1441926493@gmail.com', 'cookies_created_by_each_user'],
    ['performancetest.1441926492@gmail.com', 'cookies_created_by_each_user'],
    ['performancetest.1441926491@gmail.com', 'cookies_created_by_each_user'],
    ['performancetest.1441926490@gmail.com', 'cookies_created_by_each_user'],
    ['performancetest.1441926489@gmail.com', 'cookies_created_by_each_user'],
    ['performancetest.1441926487@gmail.com', 'cookies_created_by_each_user'],
]
Mesut GUNES
  • Nice suggestion. Wonder what you might propose if one needed to have the data used in sequential order. Popping out from the array as in the other proposals? – David Jul 24 '17 at 01:44
  • @David if you need that, it is a good idea to pop the item out and then insert it again once you are done with it. – Mesut GUNES Jul 24 '17 at 05:07

Piggy-backing on @heyman's answer here. The example code will work, but repeatedly starting and stopping tests will eventually empty the USER_CREDENTIALS list and start throwing errors.

I ended up adding the following:

2023 Update

As of v1.3.0 the events API has changed.

USER_CREDENTIALS = generate_users()


@events.spawning_complete.add_listener
def spawn_complete_handler(**kw):
    global USER_CREDENTIALS
    USER_CREDENTIALS = generate_users()

Original answer

from locust import events # in addition to the other locust modules needed

def hatch_complete_handler(**kw):
    global USER_CREDENTIALS
    USER_CREDENTIALS = generate_users() # some function here to regenerate your list

events.hatch_complete += hatch_complete_handler

This refreshes your user list once your swarm finishes hatching.

Also keep in mind that you'll need a list longer than the number of users you wish to spawn.
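`generate_users()` is left undefined above; a hypothetical stand-in that rebuilds a fresh list on every call (sized larger than the swarm, per the note above) might look like:

```python
def generate_users(count=200):
    # Rebuild the full credential list; calling this again after each
    # hatch means repeated start/stop cycles never exhaust it.
    return [("user%d" % i, "password") for i in range(1, count + 1)]

USER_CREDENTIALS = generate_users()
```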

Grey Vugrin
  • events.hatch_complete is deprecated from Locust 1.3.0. Use events.spawning_complete: ``` USER_LIST = generate_users() @events.spawning_complete.add_listener def spawn_complete_handler(**kw): global USER_LIST USER_LIST = generate_users() ``` – Gh0sT May 17 '23 at 17:43

I took a slightly different approach when implementing this for a distributed system. I ran a very simple Flask server and made a GET request to it during the on_start portion of the TaskSet.

from flask import Flask, jsonify
app = Flask(__name__)

count = 0  # shared counter; not thread-safe, so run Flask single-threaded

@app.route("/")
def counter():
    global count

    count = count + 1
    tenant = count // 5 + 1
    user = count % 5 + 1

    return jsonify({
        'count': count,
        'tenant': "load_tenant_{}".format(tenant),
        'admin': "admin",
        'user': "load_user_{}".format(user),
    })

if __name__ == "__main__":
    app.run()

This way I now have an endpoint I can query at http://localhost:5000/ on whatever host runs it. I just need to make this endpoint accessible to the slave systems, and I won't have to worry about duplicate users or the round-robin effect caused by having a limited set of user_info.
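On the Locust side, each spawned user could then claim an identity with one GET to that endpoint from on_start. A sketch, where the field names match the Flask app above but the helper names (`claim_identity`, `parse_identity`, `COUNTER_URL`) are mine:

```python
import json
try:
    from urllib.request import urlopen   # Python 3
except ImportError:
    from urllib2 import urlopen          # Python 2

COUNTER_URL = "http://localhost:5000/"

def parse_identity(payload):
    # The counter service returns JSON such as
    # {"count": 1, "tenant": "load_tenant_1", "admin": "admin", "user": "load_user_2"}
    data = json.loads(payload)
    return data["tenant"], data["user"]

def claim_identity(url=COUNTER_URL):
    # One request per spawned user; the server-side counter guarantees
    # every caller receives a distinct (tenant, user) pair.
    return parse_identity(urlopen(url).read().decode("utf-8"))

# In a TaskSet's on_start:
#     self.tenant, self.user = claim_identity()
```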

user2943791
  • Aside for distributed mode, would be nice if this could be done built into locust to avoid having to run a separate server. locust does have a built in web service, although I'd prefer not having to make a local REST call to get the data. Would be nice to get a global hatch counter to reference. – David Jul 24 '17 at 01:39
  • Also, for a shared dataset and distributed testing, if the dataset is large enough, one could also avoid duplication from single dataset and not using your solution here, by making use of starting offsets. Each slave could use a different (row/key) offset to pull from the dataset, ensuring they don't start the same. And if dataset not large enough, at least they round robin with different overlaps. – David Jul 24 '17 at 01:43
  • I might be wrong, but I believe this approach would only work with a Flask server in dev/debug mode (i.e single-threaded) because if not, using a global variable is not thread-safe, and might lead to trouble. – cjauvin Nov 08 '17 at 19:31
  • @David The idea of a hatch number is being discussed in the issue [Expose an on_hatch event? And/or hatch number (e.g. we just hatched user number X) as well #634](https://github.com/locustio/locust/issues/634) and I'm definitely interested too, but nobody seems to be working on it until now. – Yushin Washio Jul 10 '18 at 09:28
  • @YushinWashio that was actually me proposing the on_hatch event with the hatch counter ;-) I'd look into the implementation myself if I ever find time to dig into locust source code. – David Jul 10 '18 at 23:56