I'm reviewing the formal definitions of Big O and the other asymptotic bounds, and something is tripping me up. In the book I'm reading (Skiena), Big O is defined as:
f(n) = O(g(n)) when there exists a constant c such that f(n) <= c*g(n) for all n >= n0 (for some constant n0)
This generally makes sense to me: we only care about values of n large enough that the growth rates actually matter. But why multiply g(n) by c? It seems like I could choose a very large value for c and make the whole thing arbitrary, by inflating smaller g(n) values until the inequality holds.
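To make my confusion concrete, here is a quick numeric sketch I put together (the functions, constants, and the helper satisfies_bound are just my own examples, not anything from the book):

```python
# A quick check of the definition over a finite range of n.

def satisfies_bound(f, g, c, n0, n_max):
    """Check that f(n) <= c*g(n) for every n with n0 <= n <= n_max."""
    return all(f(n) <= c * g(n) for n in range(n0, n_max + 1))

f = lambda n: n * n      # quadratic
g = lambda n: n          # linear

# With a huge c the inequality looks satisfied over a modest range of n...
print(satisfies_bound(f, g, c=10**6, n0=1, n_max=1000))       # True
# ...but it still fails once n grows past c, since n*n > c*n whenever n > c.
print(satisfies_bound(f, g, c=10**6, n0=1, n_max=2 * 10**6))  # False
```

So a big c seems to "buy" the bound only up to a point, and this is exactly where I want to check my understanding of the role the constant plays.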
Secondary question: when classifying an algorithm into a complexity class, is the general rule of thumb to choose the slowest-growing class for which the definition of Big O still holds? Going strictly by the definition, it seems valid to classify a constant-time algorithm as O(n!), simply because f(n) will be <= c*g(n). Of course this offers no value.
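To show what I mean, here is a tiny check (the constant 5 and the cutoff n0 = 1 are just arbitrary choices of mine):

```python
import math

# A constant-time "algorithm": f(n) is always 5, regardless of n.
f = lambda n: 5
g = math.factorial

# Pick c = 5 and n0 = 1. Since n! >= 1 for every n >= 1, we get
# f(n) = 5 <= 5 * n! = c * g(n), so the definition of O(n!) appears satisfied.
c, n0 = 5, 1
print(all(f(n) <= c * g(n) for n in range(n0, 30)))  # True
```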
Thanks!