Here is another answer, with a better time complexity of O(n log n).
Code
appointments = [[4, 5], [2, 3], [3, 6], [5, 7], [7, 8]]
appointments.sort(key=lambda x: x[0])  # sort appointments by start time

for i, a in enumerate(appointments):
    for i2 in range(i + 1, len(appointments)):
        b = appointments[i2]
        if a[1] > b[0]:  # the later appointment starts before the earlier one ends
            print(f"{a} and {b} conflict")
        else:
            break
Output
[3, 6] and [4, 5] conflict
[3, 6] and [5, 7] conflict
Explanation
Sorting first makes the code much more efficient. We start with the first appointment and compare its end time to the start time of the second appointment. If they overlap, we print the pair and keep comparing the first appointment to the third, fourth, fifth appointment, and so on, until they no longer overlap. Because the appointments are sorted by start time, once we find an appointment that starts after the current one ends, we know that all appointments after it cannot intersect it either, so we can stop early.
Then we continue by comparing the second appointment to the appointments following it, and so on.
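The benchmark further down calls my algorithm as a function named conflicts_w1, which isn't shown here. A minimal sketch of what that wrapper could look like, assuming it collects the conflicting pairs in a list instead of printing them (so that I/O doesn't dominate the timings):

def conflicts_w1(appointments):
    # same sweep as above, wrapped as a function; collecting the pairs
    # instead of printing them is my assumption about the benchmarked version
    appointments.sort(key=lambda x: x[0])
    conflicts = []
    for i, a in enumerate(appointments):
        for i2 in range(i + 1, len(appointments)):
            b = appointments[i2]
            if a[1] > b[0]:
                conflicts.append((a, b))
            else:
                break
    return conflicts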
Complexity
Python's built-in sort runs in O(n log n).
Assuming that there are only very few conflicting appointments and that they are of similar length, the loop that checks for conflicts is O(n): only the outer loop scales with n, while the inner loop does not necessarily grow at all as long as the appointment density stays constant, so it is O(1) on average. Combined, that gives O(n).
(The worst case would be O(n^2), but that only happens when every appointment intersects every other appointment, which would be unusual.)
But because the sorting beforehand takes O(n log n), the algorithm as a whole also runs in O(n log n).
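To make the average-vs-worst-case argument concrete, one can count the inner-loop comparisons directly. The helper below is my own illustration, not part of the benchmark later in this answer:

def count_comparisons(appointments):
    # counts how often the inner loop runs for a given input
    appointments.sort(key=lambda x: x[0])
    comparisons = 0
    for i, a in enumerate(appointments):
        for i2 in range(i + 1, len(appointments)):
            comparisons += 1
            if a[1] <= appointments[i2][0]:  # no overlap -> stop early
                break
    return comparisons

sparse = [[t, t + 1] for t in range(0, 1000, 2)]  # no overlaps at all
dense = [[0, 1000] for _ in range(500)]           # everything overlaps
print(count_comparisons(sparse))  # 499, i.e. roughly n
print(count_comparisons(dense))   # 124750, i.e. n*(n-1)/2

With constant appointment density the count stays proportional to n; only in the degenerate all-overlapping case does it grow to n^2.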
Verification of complexity
To see how the algorithm performs, we can time the execution for different sizes of appointment lists. I generated datasets that should be close to real-world data using the following function:
from random import choice, shuffle

def generate_test_data(test_size):
    appointments = []
    start_time = 0
    for i in range(test_size):
        # each appointment lasts 1 or 2 time units
        appointments.append([start_time, start_time + choice(range(1, 3))])
        # the next appointment starts 0 to 7 time units later
        start_time += choice(range(8))
    shuffle(appointments)
    return appointments
This produces lists like the example of @QiangSuper, but of a chosen length. Using this, we can time the algorithms for different n (the length of the input) and plot the runtime. I took my algorithm and @Paddy3118's algorithm as examples:
from copy import deepcopy
from timeit import default_timer as timer

w = []
p = []
repetitions = 10**2  # run each size 100 times for higher accuracy
for test_size in range(0, 1000):
    w.append(0); p.append(0)
    for i in range(repetitions):
        a = generate_test_data(test_size * 10)
        b = deepcopy(a)
        start = timer()
        conflicts_w1(b)  # my algorithm
        end = timer()
        w[test_size] += end - start
        b = deepcopy(a)
        start = timer()
        conflicts_p1(b)  # @Paddy3118's algorithm from his answer
        end = timer()
        p[test_size] += end - start
    print(f"{test_size*10}, {w[test_size]/repetitions}, {p[test_size]/repetitions}")
This produces the following result:

One can see that the blue line rises roughly linearly while the orange line behaves like a quadratic function, even though both implementations look very similar. This difference can be explained.
My algorithm scales with O(n log n), but because Python's sort function is implemented in C, its effect only becomes visible for much larger sets. So most of my algorithm's runtime can be attributed to the for loop, which scales linearly.
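To convince yourself that the C-implemented sort is indeed cheap at these sizes, you could time it in isolation; a quick sketch, reusing generate_test_data from above:

from timeit import timeit

data = generate_test_data(10000)
# time only the sort, producing a fresh sorted copy each run
print(timeit(lambda: sorted(data, key=lambda x: x[0]), number=100))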
For @Paddy3118's algorithm, the main difference is the use of a[i+1:]. Slices in Python scale with O(n), so the for loop in his algorithm scales with O(n^2).
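To illustrate the difference between the two loop styles (this is a sketch of the pattern, not @Paddy3118's actual code):

appointments = [[2, 3], [3, 6], [4, 5], [5, 7], [7, 8]]

# a[i+1:] builds a new list on every outer iteration, which is an O(n)
# copy each time -- O(n^2) in total, even when we break early:
for i, a in enumerate(appointments):
    for b in appointments[i + 1:]:
        if a[1] <= b[0]:
            break

# Indexing into the list (as in my code above) avoids the copy:
for i, a in enumerate(appointments):
    for i2 in range(i + 1, len(appointments)):
        if a[1] <= appointments[i2][0]:
            break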
If we plot the same data on a log-log diagram, we get the following:

We can see that @Paddy3118's algorithm indeed is faster, as he already claimed and successfully proved, but only for lists smaller than about 100 items. That's why the OP asked about complexity and not about speed in one specific case.
But we live in a free world, so everyone can choose the algorithm they like best.