The problem I'm trying to solve is finding all the intersections between line segments spanning a given rectangle. The segments are randomly generated up to a given count, on the order of 1E3-1E4, with all endpoints lying on the rectangle's boundary. It's essentially the rectangular case of generating random chords in a circle. At low densities the system looks like this.
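For concreteness, here is a minimal sketch of one way to generate such segments (not my actual code; the function names and the unit-rectangle defaults are just illustrative): sample points uniformly by arc length on the perimeter and join independent pairs.

```python
import numpy as np

def random_boundary_points(n, w=1.0, h=1.0, rng=None):
    """Sample n points uniformly (by arc length) on the boundary of a w-by-h rectangle."""
    rng = np.random.default_rng() if rng is None else rng
    p = 2 * (w + h)                        # perimeter length
    t = rng.uniform(0, p, n)               # position along the perimeter
    pts = np.empty((n, 2))
    # Walk the four sides in order: bottom, right, top, left.
    bottom = t < w
    right = (t >= w) & (t < w + h)
    top = (t >= w + h) & (t < 2 * w + h)
    left = t >= 2 * w + h
    pts[bottom] = np.column_stack([t[bottom], np.zeros(bottom.sum())])
    pts[right] = np.column_stack([np.full(right.sum(), w), t[right] - w])
    pts[top] = np.column_stack([2 * w + h - t[top], np.full(top.sum(), h)])
    pts[left] = np.column_stack([np.zeros(left.sum()), 2 * (w + h) - t[left]])
    return pts

def random_segments(n, w=1.0, h=1.0, rng=None):
    """Each segment joins two independent boundary points (the rectangular
    analogue of a random chord). Segments with both endpoints on the same
    side are degenerate and can be filtered out or resampled."""
    rng = np.random.default_rng() if rng is None else rng
    return random_boundary_points(n, w, h, rng), random_boundary_points(n, w, h, rng)
```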
The output I'm mainly interested in is the graph formed by the segments and their intersections, preferably in a form I can write to a .csv file and feed to matplotlib.
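One possible CSV layout I have in mind is one row per intersection, which would make reading it back for plotting trivial (a rough sketch; the column names are just placeholders):

```python
import csv

def save_intersections(path, pairs, points):
    """One row per intersection: the two segment indices and the point."""
    with open(path, "w", newline="") as f:
        wr = csv.writer(f)
        wr.writerow(["seg_i", "seg_j", "x", "y"])
        for (i, j), (x, y) in zip(pairs, points):
            wr.writerow([i, j, x, y])

# Reading it back for plotting would then be a one-liner, e.g.
#   xy = np.loadtxt("graph.csv", delimiter=",", skiprows=1, usecols=(2, 3))
#   plt.scatter(xy[:, 0], xy[:, 1], s=2)
```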
To get a sense of scale, I first went for a naive implementation, checking each pair of segments for intersections using this with NumPy. It works reasonably well, but, probably because I paid no attention whatsoever to memory management, it freezes at large segment counts.
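For reference, the pairwise test is the standard cross-product parametrisation; a rough NumPy sketch of the idea (not the exact code I linked) looks like this, vectorizing one segment against all later ones so peak memory stays O(n) per step rather than O(n^2) for the full pair matrix:

```python
import numpy as np

def cross2(a, b):
    """z-component of the 2-D cross product, broadcasting over leading axes."""
    return a[..., 0] * b[..., 1] - a[..., 1] * b[..., 0]

def naive_intersections(P0, P1):
    """All pairwise intersections of segments P0[i] -> P1[i]."""
    n = len(P0)
    r = P1 - P0                             # direction vectors
    pairs, points = [], []
    for i in range(n - 1):
        qp = P0[i + 1:] - P0[i]             # q - p for every later segment
        denom = cross2(r[i], r[i + 1:])     # 0 where the pair is parallel
        with np.errstate(divide="ignore", invalid="ignore"):
            t = cross2(qp, r[i + 1:]) / denom   # parameter along segment i
            u = cross2(qp, r[i]) / denom        # parameter along segment j
        # Parallel/collinear pairs (denom == 0) are simply skipped here.
        hit = (denom != 0) & (t >= 0) & (t <= 1) & (u >= 0) & (u <= 1)
        js = np.nonzero(hit)[0] + i + 1
        for j, tj in zip(js, t[hit]):
            pairs.append((i, j))
            points.append(P0[i] + tj * r[i])
    return np.array(pairs), np.array(points)
```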
For the past few days I have been trying to adapt my extremely messy code to the Bentley–Ottmann line sweep algorithm, but I reached the point where I realised I needed to rewrite it from scratch for it to work, mainly because I had been mixing lists with NumPy arrays and explicit indices instead of using proper data structures.
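For the rewrite I'm thinking along these lines for the two structures Bentley–Ottmann needs, a priority queue of events and an ordered sweep status. This is only a skeleton to illustrate the data structures, not a working implementation, and it assumes the third-party sortedcontainers package:

```python
import heapq
from sortedcontainers import SortedKeyList  # pip install sortedcontainers

# Event queue: a min-heap of (x, priority, payload) popped in sweep order;
# the priority field breaks ties between event kinds at the same x.
events = []
heapq.heappush(events, (0.40, 2, "end of segment A"))
heapq.heappush(events, (0.10, 0, "start of segment A"))
heapq.heappush(events, (0.25, 1, "intersection of A and B"))
print([heapq.heappop(events) for _ in range(len(events))])  # sorted by x

# Sweep status: segments currently crossed by the sweep line, ordered by
# their y-coordinate at the current sweep x, so that only neighbours in
# this list ever need an intersection test.
sweep_x = 0.5

def y_at_sweep(seg):
    (x0, y0), (x1, y1) = seg
    return y0 + (sweep_x - x0) / (x1 - x0) * (y1 - y0)

status = SortedKeyList(key=y_at_sweep)
status.add(((0.0, 0.2), (1.0, 0.8)))  # y = 0.50 at the sweep line
status.add(((0.0, 0.9), (1.0, 0.3)))  # y = 0.60
status.add(((0.0, 0.1), (1.0, 0.2)))  # y = 0.15
i = status.index(((0.0, 0.2), (1.0, 0.8)))
print("neighbours:", status[i - 1], status[i + 1])

# Caveat: the ordering key depends on sweep_x, so a segment must be removed
# before sweep_x advances past an event and re-added afterwards; that is
# how two segments swap places at an intersection event.
```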
My question is: in this particular case of very dense, very long segments, where the number of intersections can itself approach the number of segment pairs, is a line sweep algorithm necessarily faster than the naive O(n²) approach, and by what margin?