I'm completely new to computational theory, and I've been asked to describe the computational cost (as a function of n) of a Python function. I'm a bit confused about how to do this.
Am I supposed to plug in inputs of different sizes and then compare the run times of the algorithm? And if I did that, how would I describe the result as a function of n?
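For example (and I'm not sure this is even the right approach), is something like the sketch below what's meant by measuring the cost? Here `my_function` is just a stand-in I made up, not the actual function I was given:

```python
import timeit

# Placeholder for the function I'm supposed to analyze;
# this one just sums a list, so I'd guess the cost grows linearly with n.
def my_function(data):
    total = 0
    for x in data:
        total += x
    return total

# Time the function on inputs of increasing size n and print the results,
# hoping the pattern (e.g. time roughly doubling when n doubles)
# tells me something about the cost as a function of n.
for n in [1_000, 10_000, 100_000, 1_000_000]:
    data = list(range(n))
    seconds = timeit.timeit(lambda: my_function(data), number=10)
    print(f"n = {n:>9,}: {seconds:.4f} s for 10 runs")
```

Is this the general idea, or is "computational cost" supposed to be worked out on paper (counting operations) rather than by timing?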
Sorry if this is a very beginner-level question; I'm new to this topic and would love some further explanation, as well as some guidance on how to apply it to a real function in Python.