
I am trying to run a stress test on a server using Python 3. The idea is to send an HTTP request to the API server every second for 30 minutes. I tried using requests and apscheduler to do this, but I keep getting

Execution of job "send_request (trigger: interval[0:00:01], next run at: 2017-05-23 11:05:46 EDT)" skipped: maximum number of running instances reached (1)

How can I make this work? Below is my code so far:

import requests, json, time, os
from apscheduler.schedulers.blocking import BlockingScheduler as scheduler

def send_request():
    url = 'http://api/url/'

    # Username and password
    credentials = { 'username': 'username', 'password': 'password'}

    # Header
    headers = { 'Content-Type': 'application/json', 'Client-Id': 'some string'}

    # Defining payloads
    payload = dict()

    payload['item1']    = 1234
    payload['item2'] = 'some string'
    data_array = [{"id": "id1", "data": "some value"}]
    payload['json_data_array'] = [{ "time": int(time.time()), "data": data_array }]

    # Posting data
    try:
        request = requests.post(url, headers=headers, data=json.dumps(payload))
    except (requests.Timeout, requests.ConnectionError, requests.HTTPError) as err:
        print("Error while trying to POST pid data")
        print(err)
        return None

    print(request.content)

    return request.content

if __name__ == '__main__':
    sched = scheduler()
    print(time.time())
    sched.add_job(send_request, 'interval', seconds=1)
    sched.start()
    print('Press Ctrl+{0} to exit'.format('Break' if os.name == 'nt' else 'C'))

    try:
        # This is here to simulate application activity (which keeps the main thread alive).
        while True:
            pass
    except (KeyboardInterrupt, SystemExit):
        # Not strictly necessary if daemonic mode is enabled but should be done if possible
        sched.shutdown()

I tried searching on Stack Overflow, but none of the other questions do what I want so far, or maybe I missed something. I would appreciate it if someone could point me to the correct thread if that is the case. Thank you very much!

OneCricketeer
user8054069
  • Possible duplicate of [python apscheduler - skipped: maximum number of running instances reached](https://stackoverflow.com/questions/34020161/python-apscheduler-skipped-maximum-number-of-running-instances-reached) – calico_ May 23 '17 at 15:36
  • @calico_ Thank you, will take a look at it soon. – user8054069 May 23 '17 at 15:43
  • @calico_ Yes, the problem is that the request takes longer than 1 second. But since it is a stress test, I cannot afford skipping the request if a request is already in place. What I would like the code to do is to make the API request even if previous request is not done/being returned yet. – user8054069 May 23 '17 at 17:00
  • Yes, sorry that other answer not thorough. I edited my answer to include a solution. – calico_ May 23 '17 at 17:16

2 Answers


I think your error is described well by the duplicate that I marked, as well as by the answer from @Jeff.

Edit: Apparently not, so here I'll describe how to fix the maximum instances problem:

Maximum instances problem

When you're adding jobs to the scheduler, there is an argument you can set for the maximum number of allowed concurrent instances of the job. You should read about it here: BaseScheduler.add_job()

So, fixing your problem is just a matter of setting this to something higher:

sch.add_job(myfn, 'interval', seconds=1, max_instances=10)

But, how many concurrent requests do you want? If they take more than one second to respond, and you request one per second, you will always eventually get an error if you let it run long enough...
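If you'd rather not let instances pile up inside the scheduler at all, you can hand each tick's work to a thread pool instead. Here is a standard-library-only sketch of that idea (the 0.25-second `slow_request` is a stand-in for a POST that outlives the tick interval; the 0.1-second tick is shortened from the question's 1 second so the demo finishes quickly):

```python
import concurrent.futures
import threading
import time

completed = []

def slow_request():
    # Stand-in for a POST that takes longer than the tick interval.
    time.sleep(0.25)
    completed.append(threading.get_ident())

start = time.monotonic()
# A pool with spare workers plays the same role as max_instances:
# each tick submits work and returns immediately, so a slow request
# never blocks (or skips) the next one.
with concurrent.futures.ThreadPoolExecutor(max_workers=10) as pool:
    for _ in range(5):       # five "ticks", 0.1 s apart
        pool.submit(slow_request)
        time.sleep(0.1)      # the real stress test would use 1 s
elapsed = time.monotonic() - start

print(len(completed), round(elapsed, 2))
```

All five requests complete even though each one overlaps the next tick, and the total wall time is well under what running them back to back would take.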

Schedulers

There are several scheduler options available, here are two:

BackgroundScheduler

You're importing the blocking scheduler - which blocks when started. So, the rest of your code is not being executed until after the scheduler stops. If you need other code to be executed after starting the scheduler, I would use the background scheduler like this:

from apscheduler.schedulers.background import BackgroundScheduler as scheduler

def myfn():
    # Insert your requests code here
    print('Hello')

sch = scheduler()
sch.add_job(myfn, 'interval', seconds=5)
sch.start()

# This code will be executed after the scheduler has started
try:
    print('Scheduler started, ctrl-c to exit!')
    while 1:
        # Notice that a bare "pass" here would create an unthrottled
        # loop: try swapping input() for "pass" and watch your CPU
        # usage. Another alternative is a short sleep: time.sleep(.1)
        input()
except KeyboardInterrupt:
    if sch.state:
        sch.shutdown()

BlockingScheduler

If you don't need other code to be executed after starting the scheduler, you can use the blocking scheduler and it's even easier:

from apscheduler.schedulers.blocking import BlockingScheduler as scheduler

def myfn():
    # Insert your requests code here
    print('Hello')

# Execute your code before starting the scheduler
print('Starting scheduler, ctrl-c to exit!')

sch = scheduler()
sch.add_job(myfn, 'interval', seconds=5)
sch.start()
calico_

I have never used the scheduler in Python before; however, this other Stack Overflow question seems to deal with that.

It means that the task is taking longer than one second and by default only one concurrent execution is allowed for a given job... – Alex Grönholm

In your case, I imagine using threading would meet your needs. You could create a class that inherits from threading.Thread, something like:

class Requester(threading.Thread):
  def __init__(self, url, credentials, payload):
    threading.Thread.__init__(self)
    self.url = url
    self.credentials = credentials
    self.payload = payload
  def run(self):
    # do the post request here
    # you may want to write output (errors and content) to a file
    # rather than just printing it out; sometimes when using threads
    # it gets really messy if you just print everything out

Then handle it just like before, with a slight change:

if __name__ == '__main__':
  url = 'http://api/url/'
# Username and password
  credentials = { 'username': 'username', 'password': 'password'}
# Defining payloads
  payload = dict()
  payload['item1']    = 1234
  payload['item2'] = 'some string'
  data_array = [{"id": "id1", "data": "some value"}]
  payload['json_data_array'] = [{ "time": int(time.time()), "data": data_array }]
  counter = 0
  while counter < 1800:
    req = Requester(url, credentials, payload)
    req.start()
    counter += 1
    time.sleep(1)

And of course, finish the rest of it however you would like; you could make it so that a KeyboardInterrupt is what actually finishes the script.

This of course is a way to get around the scheduler, if that is what the issue is.
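To fill in the `run` body, here is a self-contained version of this threaded approach. Note the substitutions: it uses only the standard library (urllib in place of requests, and a throwaway local HTTP server in place of the real API endpoint) so it can run anywhere; in the real test you would swap in your actual URL, headers, and requests.post call, and start one thread per second, 1800 in total.

```python
import http.server
import json
import threading
import time
import urllib.request

class Requester(threading.Thread):
    """One POST per instance; start() returns immediately, so a new
    request can begin before the previous one has finished."""
    def __init__(self, url, payload):
        threading.Thread.__init__(self)
        self.url = url
        self.payload = payload
        self.status = None

    def run(self):
        req = urllib.request.Request(
            self.url,
            data=json.dumps(self.payload).encode(),
            headers={'Content-Type': 'application/json'})
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                self.status = resp.status
        except OSError as err:
            self.status = err    # collect rather than print from a thread

# A tiny local server stands in for the real API endpoint so the
# sketch runs without network access.
hits = []

class Handler(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        hits.append(self.rfile.read(int(self.headers['Content-Length'])))
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

server = http.server.ThreadingHTTPServer(('127.0.0.1', 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = 'http://127.0.0.1:%d/' % server.server_address[1]

payload = {'item1': 1234, 'json_data_array': [{'time': int(time.time())}]}
threads = [Requester(url, payload) for _ in range(3)]
for t in threads:
    t.start()    # fire-and-forget; requests may overlap freely
for t in threads:
    t.join()
server.shutdown()
print(len(hits), [t.status for t in threads])
```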

Jeff
    Hmm, so the Requester will not wait until the next Requester starts? This is an interesting idea, I'll try it and see how it works. Thanks! – user8054069 May 23 '17 at 17:06
  • One question though, if we define `def run(self)`, instead of `req.start()`, shouldn't we have `req.run()` instead? I also updated the header part, I took it out together with some irrelevant code by mistake. – user8054069 May 23 '17 at 17:14
  • So when using threads typically you don't actually call the run method, run gets invoked from the start method. – Jeff May 23 '17 at 17:51
  • A side note. I come from more of a Java background, but I did see [Overriding python threading.Thread.run()](https://stackoverflow.com/questions/660961/overriding-python-threading-thread-run) with Jerub's answer, which is probably a more Pythonic way to work this solution. So if you didn't want to make a subclass of Thread you could instead do something like: `def makeRequest(url, headers, payload): # the actual handling of the request` (Sorry, I could not seem to figure out how to put the code in a code block) – Jeff May 23 '17 at 17:58
  • The `apscheduler` package accomplishes this goal in a clean object-oriented fashion, but does require understanding some intermediate Python principles. See my answer below for details. – calico_ May 24 '17 at 16:01