
I am using the Gradle plugin id "com.github.lkishalmi.gatling" version "3.2.9" to run my Gatling performance tests.

Below is my simulation code:

import io.gatling.http.Predef._
import io.gatling.core.Predef._

class OneSimulation extends Simulation {
  print("TIME " + System.currentTimeMillis())
  val httpConf = http.baseUrl("http://abc.io")
  val httpConf2 = http.baseUrl("  http://abc.io") // unused; note the stray leading spaces in the URL
  val scenario_name = "only1_1in10"
  val scn = scenario(scenario_name)
    .exec(
      http("370kb" + "_" + scenario_name)
        .post("/pulse/swift/upload?startTime=" + System.currentTimeMillis())
        //.body(StringBody("""{ "runId": """" + 0 + """", "imageName":"""" + imageName + """" }""")).asJson
        .bodyPart(RawFileBodyPart("file", "src/gatling/simulations/370kb.png")).asMultipartForm
    )
    .exec(
      http("370kb_next" + "_" + scenario_name)
        .post("/pulse/swift/upload?startTime=" + System.currentTimeMillis())
        //.body(StringBody("""{ "runId": """" + 0 + """", "imageName":"""" + imageName + """" }""")).asJson
        .bodyPart(RawFileBodyPart("file", "src/gatling/simulations/370kb.png")).asMultipartForm
    )
    .exec(
      http("370kb_next_next" + "_" + scenario_name)
        .post("/pulse/swift/upload?startTime=" + System.currentTimeMillis())
        //.body(StringBody("""{ "runId": """" + 0 + """", "imageName":"""" + imageName + """" }""")).asJson
        .bodyPart(RawFileBodyPart("file", "src/gatling/simulations/370kb.png")).asMultipartForm
    )

  setUp(
    scn.inject(
      constantUsersPerSec(1) during (10)
    )
  ).protocols(httpConf).assertions(forAll.failedRequests.percent.is(0))
}

I am just uploading images to my server. The server in turn pushes these images to a Kafka queue and responds with a 200.

The issue I am having is that all the requests in the first HTTP group are always slow, while the requests in the other HTTP groups are much faster. I am aware that the very first request will take a long time because the server needs some time to warm up. However, I am confused about why all 10 requests in the first group are slow.

Below is the response time distribution for the same image with the above code:

[Screenshots: response time distribution charts]

Can someone explain why the response time keeps improving? What is the difference between the first group of requests and the subsequent groups?
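One hypothesis worth testing: Gatling gives each virtual user its own connection pool, so the chained .exec calls made by a single user reuse a connection that is already established, while every brand-new user pays the connection setup cost. A minimal sketch of how to probe this, assuming Gatling 3.x's `shareConnections` protocol option (the variable name is illustrative):

```scala
// Sketch: make all virtual users share a single connection pool, so later
// users reuse connections that earlier users already opened. If the first
// group speeds up with this enabled, the slowdown was connection
// establishment rather than server-side processing.
val sharedConf = http
  .baseUrl("http://abc.io")
  .shareConnections // one pool for all virtual users instead of one per user
```

Running the same injection profile once with `httpConf` and once with `sharedConf` should show whether connection reuse explains the gap.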

My server is a simple Spring Boot server that takes a multipart request and pushes it to a Kafka queue.
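For context, a hypothetical sketch of the server side as described (class, topic, and endpoint names are illustrative assumptions, not the actual code), using Spring's standard `KafkaTemplate`:

```scala
// Illustrative sketch only: a Spring Boot endpoint that accepts a multipart
// upload and forwards the bytes to Kafka before returning 200.
import org.springframework.web.bind.annotation._
import org.springframework.web.multipart.MultipartFile
import org.springframework.kafka.core.KafkaTemplate

@RestController
class UploadController(kafka: KafkaTemplate[String, Array[Byte]]) {

  @PostMapping(Array("/pulse/swift/upload"))
  def upload(@RequestParam("file") file: MultipartFile,
             @RequestParam(value = "startTime", required = false) startTime: String): Unit = {
    // Fire-and-forget publish; the 200 response does not wait for Kafka acks.
    kafka.send("uploads", file.getBytes)
  }
}
```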


Code after separating the requests into different scenarios:

import io.gatling.http.Predef._
import io.gatling.core.Predef._

class OneSimulation extends Simulation {
  print("TIME " + System.currentTimeMillis())
  val httpConf = http.baseUrl("http://abc.io")
  val httpConf2 = http.baseUrl("  http://abc.io")
  val scenario_name = "only1_1in10"
  val scn = scenario(scenario_name)
    .exec(
      http("370kb" + "_" + scenario_name)
        .post("/pulse/swift/upload?startTime=" + System.currentTimeMillis())
        //.body(StringBody("""{ "runId": """" + 0 + """", "imageName":"""" + imageName + """" }""")).asJson
        .bodyPart(RawFileBodyPart("file", "src/gatling/simulations/370kb.png")).asMultipartForm
    )

  val scenario_name2 = "only1_1in10_2"
  val scn2 = scenario(scenario_name2)
    .exec(
      http("370kb" + "_" + scenario_name2)
        .post("/pulse/swift/upload?startTime=" + System.currentTimeMillis())
        //.body(StringBody("""{ "runId": """" + 0 + """", "imageName":"""" + imageName + """" }""")).asJson
        .bodyPart(RawFileBodyPart("file", "src/gatling/simulations/370kb.png")).asMultipartForm
    )

  val scenario_name3 = "only1_1in10_3"
  val scn3 = scenario(scenario_name3)
    .exec(
      http("370kb" + "_" + scenario_name3)
        .post("/pulse/swift/upload?startTime=" + System.currentTimeMillis())
        //.body(StringBody("""{ "runId": """" + 0 + """", "imageName":"""" + imageName + """" }""")).asJson
        .bodyPart(RawFileBodyPart("file", "src/gatling/simulations/370kb.png")).asMultipartForm
    )

  setUp(
    scn.inject(
      //atOnceUsers(20)
      //rampUsers(10) during (10)
      constantUsersPerSec(1) during (10)
    ),
    scn2.inject(
      constantUsersPerSec(1) during (10)
    ),
    scn3.inject(
      constantUsersPerSec(1) during (10)
    )
    //rampUsersPerSec(10) to (20) during (10) randomized
  ).protocols(httpConf).assertions(forAll.failedRequests.percent.is(0))
}

[Screenshots: response time distribution charts for the separated scenarios]

Separating the requests into different scenarios gives similar response times across all of them. However, putting all the requests in the same scenario gives a slower response time for the first group but better response times for the subsequent groups. Can someone help me explain this behavior?

user2973475
  • Are you by any chance restarting the server between Gatling runs? – James Warr Dec 11 '19 at 03:46
  • I'm curious about your use of System.currentTimeMillis() in the .post call. All the 1st calls will have the same value, same as for the 2nd and 3rd - but I wonder if the 1st, 2nd and 3rd calls all get constructed fast enough that they've got the same value? Would your app somehow cache if the same endpoint was hit? Gatling gives a separate connection to each user. – James Warr Dec 11 '19 at 05:10
  • I am not restarting the server between runs. Also, I can remove the System.currentTimeMillis(). I was just trying to pass it to the server and check the response time by logging on the server, but due to clock skew I'm not able to do that. The server doesn't do anything with this query param; it just logs it. I am really confused as to why the same server responds quickly for subsequent groups but responds slowly for all requests in the first request group. – user2973475 Dec 11 '19 at 05:22
  • Maybe split up the scenario and see what happens - i.e. move each call into its own scenario and have all 3 running in parallel with the same injection profile. This way each call will have its own connection. – James Warr Dec 11 '19 at 05:31
  • When I run them in parallel, all of them are slow, but equally slow. If I run them as in the code posted above, the subsequent requests are faster. Is this because Gatling doesn't reuse the connection and it takes time to set up the connection for each scenario, and that too for every request? I'm confused why there is a difference between parallel execution in separate scenarios vs. chained http .exec calls in the same scenario. – user2973475 Dec 11 '19 at 18:14
  • @JamesWarr I have separated the scenarios and I find that each one of them now takes the same time, but all are equally slow. I have edited my post and shared the screenshots. Why is grouping my requests in the same scenario speeding up the execution of all requests (except the first)? – user2973475 Dec 11 '19 at 18:26
  • By default, Gatling makes a request to gatling.io as the first request, so it shouldn't be a problem on the Gatling side. Could there be an overhead on your application or infrastructure side around handling a new connection? You could try overriding the .warmUp URL in the protocol definition to a different endpoint in your stack. – James Warr Dec 11 '19 at 23:42
  • @JamesWarr Even if there is an overhead for the 1st call, I'm confused how that explains overhead on the entire first group of requests (in the above case, all 10 requests in the scenario). Shouldn't 1 call take more time and the other 9 calls in the same group return fast, vs. all 10 being slow? Is there something that the 1st group shares that might be a bottleneck? Meanwhile I will set up some dummy endpoints which do no processing and run Gatling on these with separate scenarios vs. one scenario with multiple groups and check the behaviour. – user2973475 Dec 12 '19 at 00:09
  • I need to really understand what the difference is between one scenario making sequential requests and multiple scenarios (are requests fast in one scenario because the same user is making the requests, so maybe a connection is reused?). – user2973475 Dec 20 '19 at 04:43
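The .warmUp override suggested in the comments could look like the sketch below. Gatling's default warm-up request goes to gatling.io, which only primes DNS and the HTTP client, not your own infrastructure; pointing it at your stack warms your load balancer and app before the first measured request. The health-check path is an assumption, not something from the question:

```scala
// Sketch: point Gatling's warm-up request at your own stack so the first
// measured request doesn't pay your infrastructure's cold-start cost.
// "/actuator/health" is an assumed endpoint.
val warmedConf = http
  .baseUrl("http://abc.io")
  .warmUp("http://abc.io/actuator/health")
```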

0 Answers