Your complexity analysis is wrong, as the cost depends on more than just a single parameter, and the weights of those parameters are very different between the two rendering methods...
A standard 2D rasterizer depends on (see the rasterizer loop sketch below):
- sum of the rendered/visible areas of the primitives (or of their BBOXes) in pixels (huge impact)
- sum of the number_of_vertices of the primitives (big impact)
- number_of_primitives (medium impact)
- view resolution ("tiny" impact if there is enough memory)
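To make that scaling visible, here is a minimal C++ sketch of a bounding-box triangle fill. The `Triangle`/`Vec2` types, the flat white fill, and the missing clipping/z-buffer are my simplifications for illustration, not a real pipeline:

```cpp
// minimal rasterizer loop sketch (hypothetical Triangle/Vec2 types,
// no clipping, no z-buffer, CCW winding assumed)
#include <vector>
#include <cstdint>
#include <cmath>
#include <algorithm>

struct Vec2     { float x, y; };
struct Triangle { Vec2 p[3]; };

// signed area test: >= 0 when c lies on the inner side of edge a->b
float edge(const Vec2& a, const Vec2& b, const Vec2& c)
{
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

void rasterize(const std::vector<Triangle>& tris,
               std::vector<uint32_t>& fb, int w, int h)
{
    for (const Triangle& t : tris)              // O(number_of_primitives)
    {
        // BBOX of the primitive, clamped to the screen
        int x0 = std::max(0,     (int)std::floor(std::min({ t.p[0].x, t.p[1].x, t.p[2].x })));
        int y0 = std::max(0,     (int)std::floor(std::min({ t.p[0].y, t.p[1].y, t.p[2].y })));
        int x1 = std::min(w - 1, (int)std::ceil (std::max({ t.p[0].x, t.p[1].x, t.p[2].x })));
        int y1 = std::min(h - 1, (int)std::ceil (std::max({ t.p[0].y, t.p[1].y, t.p[2].y })));
        // the inner loops visit only the BBOX pixels, so the cost is
        // the sum of BBOX/covered areas, not resolution * primitive count
        for (int y = y0; y <= y1; y++)
            for (int x = x0; x <= x1; x++)
            {
                Vec2 c = { x + 0.5f, y + 0.5f };
                if (edge(t.p[0], t.p[1], c) >= 0.0f &&
                    edge(t.p[1], t.p[2], c) >= 0.0f &&
                    edge(t.p[2], t.p[0], c) >= 0.0f)
                    fb[y * w + x] = 0xFFFFFFFFu;    // flat white fill
            }
    }
}
```

Note that the view resolution only appears as a clamp here: a primitive covering a few pixels costs a few pixels no matter how big the screen is.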
whereas a raytracer depends on (see the raytracer loop sketch below):
- number_of_primitives (huge impact)
- view resolution (huge impact)
- max level of recursion
- material settings
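And a matching naive raytracer sketch under the same disclaimer: the `Sphere`/`Ray` types are hypothetical, the scene is spheres only, there is no BVH or other acceleration structure, and the per-pixel camera ray setup is stubbed out:

```cpp
// naive raytracer loop sketch (hypothetical Ray/Sphere types, no BVH)
#include <vector>
#include <cstdint>
#include <cmath>

struct Vec3   { float x, y, z; };
struct Ray    { Vec3 o, d; };       // origin, direction (assumed unit length)
struct Sphere { Vec3 c; float r; };

// returns hit distance along the ray, or a negative value on miss
float intersect(const Ray& ray, const Sphere& s)
{
    Vec3 oc = { ray.o.x - s.c.x, ray.o.y - s.c.y, ray.o.z - s.c.z };
    float b    = oc.x*ray.d.x + oc.y*ray.d.y + oc.z*ray.d.z;
    float c    = oc.x*oc.x + oc.y*oc.y + oc.z*oc.z - s.r*s.r;
    float disc = b*b - c;
    return (disc < 0.0f) ? -1.0f : -b - std::sqrt(disc);
}

uint32_t trace(const Ray& ray, const std::vector<Sphere>& scene, int depth)
{
    if (depth <= 0) return 0x00000000u;     // max level of recursion reached
    float best = 1e30f; int hit = -1;
    for (size_t i = 0; i < scene.size(); i++)   // O(number_of_primitives) per ray!
    {
        float t = intersect(ray, scene[i]);
        if (t > 0.0f && t < best) { best = t; hit = (int)i; }
    }
    if (hit < 0) return 0x00000000u;        // background
    // material settings would decide here whether to spawn
    // reflection/refraction rays -> further recursive trace(...) calls
    return 0xFFFFFFFFu;                     // flat white hit
}

void render(const std::vector<Sphere>& scene,
            std::vector<uint32_t>& fb, int w, int h, int maxDepth)
{
    for (int y = 0; y < h; y++)             // O(view resolution)
        for (int x = 0; x < w; x++)
        {
            // direction through pixel (x,y) omitted for brevity
            Ray ray = { { 0, 0, 0 }, { 0, 0, 1 } };
            fb[y * w + x] = trace(ray, scene, maxDepth);
        }
}
```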
The weights of the parameters also depend on the HW used and on the rendering technique (the order might change slightly).
However, the major difference is that a rasterizer does not need to check each primitive on a per-pixel basis, while a raytracer has to (even multiple times per pixel once recursion kicks in)...
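Very roughly (hand-waving away vertex transform and setup costs): rasterizer ~ `O(number_of_primitives + covered_pixels)` versus raytracer ~ `O(view_resolution * number_of_primitives * max_recursion)`, and with branching rays (reflection + refraction per hit) the recursion term grows exponentially in `max_recursion`. Spatial acceleration structures (BVH, KD-tree) can cut the raytracer's per-ray primitive term toward `O(log(number_of_primitives))`, but at the cost of building and updating the structure.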
Another aspect is the HW used. Common GFX cards are designed for rasterization, and raytracing on them is/was achieved only with huge pain to work around the HW design (at least until compute shaders arrived). You can look at it as the speed difference between a HW-accelerated and a SW render.
Both of these aspects are in favor of rasterization, and that is why rasterization performs better.
However, raytracing HW is finally starting to emerge (for a few years now), and on it raytracers are fast (even at a relatively small clock, like a 66 MHz FPGA)...