In short, you treat relative execution time as a function of relative input size, and you choose that function based on the algorithm's assumed complexity. Specifically, a linear algorithm's execution time scales linearly with relative input size, a quadratic algorithm's scales quadratically, and so on.
In this example, you know the algorithm takes 0.5 ms on an input of size 100, and you want to know how it performs on an input of size 500. The first question to ask is: what is the relative change in input size (i.e., by what factor did it grow)? Answer: new_size/old_size = 500/100 = 5, so the input is 5 times larger.
Next, to estimate the new execution time, you apply the appropriate function to this scale factor and the original execution time, based on the algorithm's assumed complexity. Call this function T, the initial time t_0, and the input scale factor f. Evaluating T under each assumed complexity gives:
Linear: T(f, t_0) = f·t_0 = (5)(0.5 ms) = 2.5 ms
Linearithmic: here the ratio depends on the absolute sizes, not just f: T = t_0·(n_new·log(n_new))/(n_old·log(n_old)) = (5)(log(500)/log(100))(0.5 ms) ≈ 3.37 ms (the log base cancels in the ratio, so any base gives the same answer)
Quadratic: T(f, t_0) = (f^2)·t_0 = (25)(0.5 ms) = 12.5 ms
Cubic: T(f, t_0) = (f^3)·t_0 = (125)(0.5 ms) = 62.5 ms
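As a sanity check, here's a minimal Python sketch that computes these estimates; the function name estimate_times and its structure are my own, just for illustration:

    import math

    def estimate_times(t0, n_old, n_new):
        """Scale a measured time t0 (on input size n_old) to input size n_new
        under several assumed complexity classes."""
        f = n_new / n_old  # input scale factor
        return {
            "linear": f * t0,
            # linearithmic needs the absolute sizes, not just f;
            # the log base cancels in the ratio, so math.log (base e) is fine
            "linearithmic": t0 * (n_new * math.log(n_new)) / (n_old * math.log(n_old)),
            "quadratic": f**2 * t0,
            "cubic": f**3 * t0,
        }

    for name, t in estimate_times(0.5, 100, 500).items():
        print(f"{name}: {t:.2f} ms")
    # linear: 2.50 ms, linearithmic: 3.37 ms,
    # quadratic: 12.50 ms, cubic: 62.50 ms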
You can generalize this approach to any situation where you want to estimate an algorithm's execution time on a new input size, given its assumed complexity and one measured (input size, time) datapoint: compute the scale factor f from the two input sizes and plug the known time in for t_0.
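If you do this often, one way to package it (a sketch; the names are illustrative, not from any library) is to pass the assumed growth function in directly, since the estimate is just t_0 · g(n_new)/g(n_old):

    import math

    def estimate_time(t0, n_old, n_new, growth):
        """Scale a measured time t0 (on input size n_old) to input size n_new,
        assuming running time proportional to growth(n)."""
        return t0 * growth(n_new) / growth(n_old)

    print(estimate_time(0.5, 100, 500, lambda n: n**2))             # 12.5
    print(estimate_time(0.5, 100, 500, lambda n: n * math.log(n)))  # ~3.37

Passing the growth function also keeps the linearithmic case honest, since it sees the absolute sizes rather than just their ratio.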