There is no one right answer to this. While the PVRUniSCo Editor is a great tool for relative estimates of shader performance, you can't just say that a fragment shader which consumes X estimated cycles will lead to Y framerate on a given device.
How heavy a particular shader is is just one piece of the puzzle. How many fragments will it cover onscreen? Is your performance bottleneck on the fragment processing side (is the Renderer Utilization statistic in the OpenGL ES Driver instrument near 100%)? Is blending enabled? All of these factors affect how long it takes to render a frame.
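One rough way to probe the fill-rate question, beyond watching the instrument, is to temporarily render into a smaller viewport and see whether the frame time drops roughly in proportion. This is only an illustrative sketch, assuming an OpenGL ES 2.0 context; `framebufferWidth`, `framebufferHeight`, and `drawScene()` are hypothetical placeholders for your own rendering code:

    #include <OpenGLES/ES2/gl.h>

    void drawScaledTestFrame(GLint framebufferWidth, GLint framebufferHeight,
                             void (*drawScene)(void))
    {
        // Render into the lower-left quarter of the framebuffer (half width,
        // half height). If frame time falls to roughly a quarter, fragment
        // processing is likely the bottleneck; if it barely changes, look at
        // vertex work or CPU-side overhead instead.
        glViewport(0, 0, framebufferWidth / 2, framebufferHeight / 2);
        drawScene();

        // Restore the full-size viewport for normal rendering.
        glViewport(0, 0, framebufferWidth, framebufferHeight);
    }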
The tile-based deferred renderers on iOS also have some interesting performance characteristics, where adjusting the cycle count for a particular fragment shader does not lead to a linear change in rendering time, even for a fill-rate-limited application. You can see an example of this in this question of mine, where I encountered sudden performance changes with slight variations of a fragment shader. In that case, adjusting the shader wasn't the primary solution; preventing the blending of unnecessary fragments was.
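To illustrate that last point in the simplest possible form (the drawing functions below are hypothetical placeholders, not the code from that question): keep blending disabled for everything opaque, and only enable it for the geometry that genuinely needs it, drawn after the opaque content.

    #include <OpenGLES/ES2/gl.h>

    void drawOpaqueGeometry(void);      /* assumed: your opaque draw calls */
    void drawTranslucentGeometry(void); /* assumed: your blended draw calls */

    void drawFrame(void)
    {
        // Opaque pass: with blending off, the tile-based deferred renderer can
        // reject hidden fragments instead of shading and blending them.
        glDisable(GL_BLEND);
        drawOpaqueGeometry();

        // Translucent pass: enable blending only for the fragments that need it.
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        drawTranslucentGeometry();

        glDisable(GL_BLEND);
    }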
In addition to the straight cycle counts reported by the profiler, there are limits on texture bandwidth to consider, as well as the severe effect that I've found cache misses can have in these shaders.
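As a rough sketch of the texture-side levers you can try when a shader looks bandwidth- or cache-limited rather than cycle-limited (this assumes an OpenGL ES 2.0 context, power-of-two texture dimensions, and pixel data already converted to 16-bit RGB565; the function and parameter names are mine, not from any profiler output):

    #include <OpenGLES/ES2/gl.h>

    void configureBandwidthFriendlyTexture(GLuint texture, GLsizei width, GLsizei height,
                                           const void *rgb565Pixels)
    {
        glBindTexture(GL_TEXTURE_2D, texture);

        // 16 bits per texel instead of 32 halves the bandwidth of each fetch.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                     GL_RGB, GL_UNSIGNED_SHORT_5_6_5, rgb565Pixels);

        // Mipmapped minification keeps neighboring fetches close together in
        // memory, which helps with the cache misses mentioned above.
        glGenerateMipmap(GL_TEXTURE_2D);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    }

Whether either adjustment actually helps depends on how the texture is sampled, so again the only way to know is to measure on device.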
What I'm trying to say is that the only real way to know how your shaders will perform in your application is to run them and see. There are general hints that can be used to tune something that isn't fast enough, but there are so many variables that every solution will be application-specific.