
Is there a simple way to control the time complexity of functions in Haskell?

I have constructed a function that reverses a list of any type. The goal is to have it run in linear time complexity, which I have attempted to achieve like this:

rev :: [a] -> [a]
rev []     = []
rev (x:xs) = rev xs ++ [x]

My intention is to have it linear, O(n), but I am unsure whether it actually is, and I would therefore like to use some kind of tool that can analyze the code/function to show that it is in fact linear.

I would thus like to know:

Is this function linear? Can I show it with any analysis tools?

kalle konsida

2 Answers


(…) have it linear, O(n), but I am unsure whether it actually is, and I would therefore like to use some kind of tool that can analyze the code/function to show that it is in fact linear.

The function is not linear. The append function (++) :: [a] -> [a] -> [a] takes time linear in the length of its first argument. Since rev calls it once for every element of the list, we end up with O(n²) time complexity.
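To see why, here is a sketch of how (++) is defined (equivalent to the Prelude definition): it has to walk its entire first argument, so the costs of rev's recursive calls add up to a quadratic total.

```haskell
-- A sketch of (++): it rebuilds the whole left argument, so its
-- cost is linear in that argument's length.
append :: [a] -> [a] -> [a]
append []     ys = ys
append (x:xs) ys = x : append xs ys

-- rev appends left arguments of lengths n-1, n-2, ..., 1, 0,
-- so the total work is 0 + 1 + ... + (n-1) = n(n-1)/2 = O(n²).
```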

What we can do to make this a linear function is use an accumulator: an extra parameter that is passed along in updated form on each recursive call:

myrev :: [a] -> [a]
myrev = (`go` [])
  where go (x:xs) ys = go xs (x:ys)
        go [] ys = ys

We thus start the accumulator with an empty list. For each element x of the original list, we pop it from the first list and push it onto the second list. When the first list is exhausted, ys contains the original list in reverse.
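As a quick sanity check (not part of the original answer), myrev agrees with the Prelude's reverse:

```haskell
myrev :: [a] -> [a]
myrev = (`go` [])
  where go (x:xs) ys = go xs (x:ys)  -- move head onto the accumulator
        go []     ys = ys            -- input exhausted: accumulator is the answer

main :: IO ()
main = do
  print (myrev [1..5 :: Int])   -- [5,4,3,2,1]
  print (myrev "abc")           -- "cba"
  print (myrev ([] :: [Int]))   -- []
```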

Willem Van Onsem
  • [`print . head . rev $ [1..n]` is *O(n)*.](https://stackoverflow.com/a/14942678/849891) – Will Ness Apr 28 '21 at 13:03
  • 2
    @WillNess Sure, and `const 1 $ rev [1..n]` is O(1). But that's, like, ....not really the point? It's fairly traditional, I think, to talk about the time complexity if the entire output is forced. – Daniel Wagner Apr 28 '21 at 13:52
  • @DanielWagner yes it is the point, and yes, it's unfortunate that it is traditional to gloss over the important distinctions. IMO. put your imperative programmer hat on, then read this answer. you'll be left with an erroneous understanding of the problem. – Will Ness Apr 28 '21 at 15:02
  • @DanielWagner also, your counterpoint is invalid. it says nothing at all about operations of `rev` / `++`, unlike my remark. – Will Ness Apr 28 '21 at 15:13
  • ... so the point is, the function itself is linear, the structure it constructs is consumed fully in quadratic time. @DanielWagner – Will Ness Apr 28 '21 at 15:43

Using the GHCi interpreter:

As a rough first cut, you can use the GHCi interpreter. For simplicity, let's enter your definition of rev interactively rather than loading it from a source file, then switch statistics mode on:

$ ghci
 GHCi, version 8.8.4: https://www.haskell.org/ghc/  :? for help

 λ> 
 λ> let { rev :: [a] -> [a] ; rev [] = [] ; rev (x:xs) = (rev xs) ++ [x] }
 λ> 
 λ> :set +s
 λ> 

Then we benchmark your rev function on 10,000 elements, then on 20,000 elements. Calling length on the result forces the whole list to be built:

 λ> 
 λ> length $ rev [1..10000]
10000
(1.27 secs, 4,294,492,504 bytes)
 λ> 
 λ> length $ rev [1..20000]
20000
(5.69 secs, 17,511,565,464 bytes)
 λ> 

So it appears that doubling the workload makes the cost about 4 times larger. The rev function has quadratic cost.

Why so? Well, like everything else in Haskell, the left argument of operator (++) is immutable. Hence, in order to incorporate its left argument into its output, the function has to copy it.

One can try the same benchmark using the code provided in Willem's answer. Much better! To get significant numbers, it is better to try it with 1,000,000 elements rather than just 10,000.
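As an illustrative sketch of that benchmark (actual timings are machine-dependent, so none are quoted here), one can compile Willem's accumulator version and force much larger lists; with :set +s in GHCi, doubling the input should roughly double the runtime rather than quadruple it:

```haskell
-- Willem's accumulator version, copied from the answer above.
myrev :: [a] -> [a]
myrev = (`go` [])
  where go (x:xs) ys = go xs (x:ys)
        go []     ys = ys

main :: IO ()
main = do
  -- length forces the whole result, as in the GHCi session above.
  print (length (myrev [1 .. 1000000 :: Int]))  -- 1000000
  print (length (myrev [1 .. 2000000 :: Int]))  -- 2000000
```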

jpmarinier
  • it's not the function that is quadratic, but the full use of its result. [taking `head $ rev [1..n]` is O(n)](https://stackoverflow.com/a/14942678/849891). – Will Ness Apr 28 '21 at 13:05