Say we are creating a calibration lookup table for a device, shown in the plot below. Theta represents phase and r represents magnitude. The calibration setpoints are shown as blue circles, taken every N degrees of phase and at N magnitude levels. For every setpoint we measure the actual device output, giving the red points, which describe the resulting phase and magnitude. So for every blue setpoint, the device actually outputs the corresponding red point.
Now the question: I want the device output to land on the green circle with the orange ring. How do I calculate the setpoint (plain green circle) that I should command to the device so that its output comes out at the green/orange point?
The issue I am having is that the mapping is 2D-to-2D: each (magnitude, phase) setpoint produces a measured (magnitude, phase) output. On top of that, magnitude and phase are not independent: if I fix the phase setpoint and sweep only the magnitude, the measured phase changes as well.
So what basic math/logic should I use to perform the necessary interpolation?
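To make the question concrete, here is a minimal sketch of the kind of approach I am imagining (Python with NumPy/SciPy; the grid spacing, the distortion model, and the target point are all made-up example values, not real device data): treat the measured red points as scattered sample locations in Cartesian coordinates, and interpolate the *commanded setpoint* as a function of the *measured output* at the desired green/orange location.

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical calibration grid: 5 magnitude levels, phase every 30 degrees.
set_mag, set_phase = np.meshgrid(np.linspace(0.2, 1.0, 5),
                                 np.deg2rad(np.arange(0, 360, 30)))
set_mag = set_mag.ravel()
set_phase = set_phase.ravel()

# Stand-in for the measured outputs (red points); in practice these come
# from the instrument. This distortion model is invented for illustration:
meas_mag = 1.05 * set_mag
meas_phase = set_phase + 0.1 * set_mag  # measured phase depends on magnitude too

# Work in Cartesian coordinates so phase wrap-around causes no seam,
# and pack each setpoint (mag, phase) into one complex number.
meas_xy = np.column_stack([meas_mag * np.cos(meas_phase),
                           meas_mag * np.sin(meas_phase)])
set_z = set_mag * np.exp(1j * set_phase)

# Desired output: the green/orange point (example values).
want_mag, want_phase = 0.6, np.deg2rad(45.0)
want_xy = np.array([[want_mag * np.cos(want_phase),
                     want_mag * np.sin(want_phase)]])

# Invert the map: griddata triangulates the scattered measured points
# (Delaunay) and linearly interpolates the setpoint at the target output.
z = griddata(meas_xy, set_z, want_xy, method='linear')[0]
print(f"commanded magnitude = {abs(z):.4f}, "
      f"commanded phase = {np.angle(z, deg=True):.2f} deg")
```

The complex encoding is just a way to keep magnitude and phase coupled during interpolation rather than treating them as two independent scalars. Is this triangulate-the-inverse-map idea the right basic logic, or is there a better-principled method?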