According to Alex Martelli in O'Reilly's Python in a Nutshell, the complexity class O(n) + O(n) = O(n). So I believe it, but I'm confused. He explains it by saying that "the sum of two linear functions of N is also a linear function of N."
According to Wikipedia, in functional analysis a linear function is a linear map, one defining property of which is f(x + y) = f(x) + f(y). I found what seems to be a simpler definition here, simply stating that "a linear function is a function whose graph is a straight line." It also includes a few examples that look less esoteric than the ones in the Wikipedia article:
y = f(x) = a + bx
and:
y = 25 + 5x
let x = 1, then:
y = 25 + 5(1) = 30
let x = 3, then:
y = 25 + 5(3) = 40
Maybe it would be fair to expect that the sum of two linear functions could also be represented on a graph as a straight line, one whose slope is the sum of the slopes of the two original lines (not an average between them).
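To check that expectation, here is a small sketch (the function g and its coefficients are made up for illustration; f is the 25 + 5x example above) showing that the sum of two linear functions is itself linear, with a slope equal to the sum of the two slopes:

```python
# f(x) = 25 + 5x, the example from above
def f(x):
    return 25 + 5 * x

# g(x) = 10 + 2x, a second (invented) linear function
def g(x):
    return 10 + 2 * x

# Their sum h(x) = f(x) + g(x) = 35 + 7x is also linear.
def h(x):
    return f(x) + g(x)

# The slope of h is the sum of the slopes (5 + 2 = 7):
# the rise per unit step is constant, so the graph is a straight line.
assert h(1) - h(0) == 7
assert h(3) - h(2) == 7
```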
So am I understanding correctly that, even though in the following two functions the "complex" one takes three times as long to run as the "simple" one, each is still O(n) in big-O notation? That is, the processing time of each would be represented by a straight, diagonal line on a plot, and the time difference would show up only in the fact that the line for the more complex function has a steeper slope?
from timer import Timer  # custom context-manager timer

def complex(it):
    result = []
    for i in it:
        result.append(i)
    result.reverse()
    return result

def simple(it):
    result = list(it)
    return result

longlist = list(range(0, 1000000))

with Timer() as t:
    complex(longlist)
print("=> elapsed time complex: %s." % t.secs)

with Timer() as t:
    simple(longlist)
print("=> elapsed time simple: %s" % t.secs)
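Since the `Timer` class above comes from a custom `timer` module, here is a self-contained variant using the standard library's `time.perf_counter` that makes the linear growth visible: doubling the input size should roughly double the elapsed time of each function, even though their constant factors differ. (The functions are renamed with a `_version` suffix here only to avoid shadowing the built-in `complex`.)

```python
import time

def complex_version(it):
    # Builds a copy element by element, then reverses it: several
    # passes over the data, but each pass is linear in len(it).
    result = []
    for i in it:
        result.append(i)
    result.reverse()
    return result

def simple_version(it):
    # A single linear copy.
    return list(it)

# Time both functions at n and 2n; for an O(n) function the
# elapsed time should roughly double when n doubles.
for n in (200_000, 400_000):
    data = list(range(n))
    for func in (complex_version, simple_version):
        start = time.perf_counter()
        func(data)
        elapsed = time.perf_counter() - start
        print("n=%d %s: %.4fs" % (n, func.__name__, elapsed))
```

Both lines of timings grow in proportion to n, which is exactly what "both are O(n)" means: the constant factor differs, the growth rate does not.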