
I have implemented a simple linear search and plotted the results on a graph with the StdDraw library. I ran the search for a randomly generated number on tables of sizes from 1000 to 100000 elements, incrementing the size by 1 each time. Each point on the graph represents the average time it took to find a random number in a table of the given size, averaged over 1000 runs on the same table size.
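The timing loop looks roughly like the sketch below. This is only an illustration of the setup described above, not my exact code: the class, method and variable names are placeholders, and the StdDraw plotting call is only indicated by a comment.

```java
import java.util.Random;

public class LinearSearchBenchmark {

    // Plain linear search over an int array; returns the index or -1.
    static int linearSearch(int[] table, int key) {
        for (int i = 0; i < table.length; i++) {
            if (table[i] == key) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        Random rnd = new Random();
        for (int n = 1000; n <= 100000; n++) {
            // Build a table of n random integers.
            int[] table = new int[n];
            for (int i = 0; i < n; i++) {
                table[i] = rnd.nextInt();
            }
            // Average the search time over 1000 runs on the same table size.
            long totalNanos = 0;
            for (int run = 0; run < 1000; run++) {
                int key = rnd.nextInt();
                long start = System.nanoTime();
                linearSearch(table, key);
                totalNanos += System.nanoTime() - start;
            }
            double avgNanos = totalNanos / 1000.0;
            // One point per table size, e.g. StdDraw.point(n, avgNanos);
            System.out.println(n + "\t" + avgNanos);
        }
    }
}
```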

However, there are large deviations visible in the graph which I do not know how to explain. Is it possible that this is due to interference from other background tasks requesting CPU time? Or could the spikes be caused by poorly generated pseudorandom integers, because the nextInt() method is called within a really tiny time slice, resulting in similar (very large or very small) random integers?
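To sanity-check the second suspicion, one could call nextInt() in a tight loop and summarise the values it returns; a minimal sketch of such a check (written for this question, not part of my benchmark) is below.

```java
import java.util.Random;

public class RapidNextIntCheck {
    public static void main(String[] args) {
        Random rnd = new Random();
        long min = Long.MAX_VALUE, max = Long.MIN_VALUE, sum = 0;
        final int calls = 1_000_000;
        // Call nextInt() as fast as possible and summarise the results;
        // if rapid calls really produced clustered (very large or very small)
        // integers, that should show up in these statistics.
        for (int i = 0; i < calls; i++) {
            int v = rnd.nextInt();
            min = Math.min(min, v);
            max = Math.max(max, v);
            sum += v;
        }
        System.out.println("min=" + min + " max=" + max
                + " mean=" + (double) sum / calls);
    }
}
```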

Generated graph

(The red line represents the linear search and the blue one binary search. Ignore the latter)

  • We have no idea what data you are plotting - how do you expect anyone to explain deviations? – lexicore Mar 10 '18 at 16:32
  • @lexicore Thank you for pointing that out. I have included the information in the original question now. – Damjan Kalšan Mar 10 '18 at 16:40
  • You should be more elaborate on your search. Where and how do you search? Is it a relational database? What are your "tables" exactly? – lexicore Mar 10 '18 at 16:43
  • See also [How do I write a correct micro-benchmark in Java?](https://stackoverflow.com/questions/504103/how-do-i-write-a-correct-micro-benchmark-in-java) – lexicore Mar 10 '18 at 16:44
  • I've mixed up the terminology. The 'tables' in question are actually simple integer arrays of n elements. Thank you for the link, it provided me with enough information to feed my curiosity and answer the questions. :) Most of the deviations were in fact the work of JIT compiler. – Damjan Kalšan Mar 10 '18 at 18:06

0 Answers