If this question is more suited to the Mathematics Stack Exchange, please let me know.
I know the basic idea of vector images: the image is represented by mathematical equations rather than a bitmap, which makes it infinitely scalable.
Computing this seems straightforward for something like a straight line, but not for a more complex curve, such as:
https://www.wolframalpha.com/input/?i=zoidberg-like+curve
I am wondering how the program would even begin to produce that equation as output.
Does it break the image into little curve segments and try to approximate each one? What if a large, multi-segment part of the image could be represented efficiently by a single equation, but the computer never realizes this because it only "sees" one segment at a time? Would the computer have to test every combination of segments?
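To make that more concrete, here is a toy sketch in Python of the kind of process I am imagining. Everything in it is made up for illustration (the function name, the greedy left-to-right scan, the straight-line-only fitting, the tolerance), so it is at best a caricature of what a real vectorizer does:

```python
def fit_segments(points, tolerance=0.01):
    """Greedily cover sampled points with straight-line segments.

    Extend the current segment point by point; once some interior point
    deviates from the chord between the endpoints by more than `tolerance`,
    close the segment and start a new one.  Returns (start, end) pairs.
    """
    if len(points) < 2:
        return []
    segments = []
    start = 0
    for end in range(2, len(points) + 1):
        x0, y0 = points[start]
        x1, y1 = points[end - 1]
        worst = 0.0
        for x, y in points[start + 1:end - 1]:
            if x1 != x0:
                # Vertical deviation of (x, y) from the chord through the endpoints.
                y_chord = y0 + (y1 - y0) * (x - x0) / (x1 - x0)
                worst = max(worst, abs(y - y_chord))
        if worst > tolerance:
            # The chord no longer fits; close the segment at the previous point.
            segments.append((points[start], points[end - 2]))
            start = end - 2
    segments.append((points[start], points[-1]))
    return segments


if __name__ == "__main__":
    import math

    # Sample one arch of a sine curve and see how many straight segments
    # the greedy left-to-right scan needs to stay within the tolerance.
    pts = [(i / 100.0, math.sin(i / 100.0 * math.pi)) for i in range(101)]
    segments = fit_segments(pts, tolerance=0.01)
    print(len(segments), "segments")
    for p0, p1 in segments:
        print(p0, "->", p1)
```

The part I cannot picture is the step that would replace a pile of segments like these with one compact equation.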
I was just curious and wondering if anyone could provide a high-level description of the basic process.
As an example, consider an image represented by equation (1):
y = abs(x)
It could also be represented as (2):
y = -x on (-inf, 0)
y = x on [0, inf)
But you would only be able to figure out that it could be represented as (1) if you knew what the entire image looked like. If you were scanning from left to right and trying to represent the image as an equation, then you would end up with (2).
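Just to illustrate what I mean, here is a trivial check (purely for illustration) that the two forms describe the same curve, even though a left-to-right scan would naturally land on form (2):

```python
# Purely illustrative: check that representation (1), y = abs(x), and
# representation (2), the two half-lines, give the same y at some sample points.
def rep1(x):
    return abs(x)              # single equation (1)

def rep2(x):
    return -x if x < 0 else x  # piecewise form (2), what a left-to-right scan would produce

for x in [-2.0, -0.5, 0.0, 0.5, 2.0]:
    assert rep1(x) == rep2(x)
print("representations (1) and (2) agree at the sample points")
```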