
Alright, so I know there are a lot of questions referring to normalized device coordinates here on SO, but none of them address my particular issue.

So, everything I draw is specified in 2D screen coordinates, where the top left is (0, 0) and the bottom right is (screenWidth, screenHeight). Then in my vertex shader I do this calculation to get NDC (basically, I'm rendering UI elements):

float ndcX = (screenX - ScreenHalfWidth) / ScreenHalfWidth;
float ndcY = 1.0 - (screenY / ScreenHalfHeight);

where screenX/screenY are pixel coordinates, for example (600, 700), and ScreenHalfWidth/ScreenHalfHeight are half of the screen width/height.
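For reference, that mapping can be sanity-checked off-GPU. Here is a small plain-C sketch of the same math (the function name `pixel_to_ndc` is mine, not from the shader):

```c
/* Pixel coordinates (origin at top-left) to OpenGL-style NDC:
 * (0, 0) maps to (-1, 1), (screenWidth, screenHeight) maps to (1, -1). */
static void pixel_to_ndc(float screenX, float screenY,
                         float halfW, float halfH,
                         float *ndcX, float *ndcY)
{
    *ndcX = (screenX - halfW) / halfW;
    *ndcY = 1.0f - (screenY / halfH);
}
```

For a 1200x1400 screen, the corners (0, 0) and (1200, 1400) land exactly on (-1, 1) and (1, -1), and the center (600, 700) lands on (0, 0).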

And the final position that I return from the vertex shader for the rasterization stage is:

gl_Position = vec4(ndcX, ndcY, Depth, 1.0);

This works perfectly fine in OpenGL ES.

Now the problem is that when I try it just like this in Metal 2, it doesn't work.

I know Metal's NDC volume is 2x2x1 and OpenGL's is 2x2x2, but I thought depth didn't play an important part in this equation since I am passing it in myself per vertex.
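For completeness, if a depth value was authored for OpenGL's [-1, 1] NDC range, it can be remapped into Metal's [0, 1] range with a single scale and bias. A minimal C sketch (function name is mine):

```c
/* Remap an OpenGL-style NDC depth in [-1, 1] to Metal's [0, 1] range. */
static float gl_depth_to_metal(float zGL)
{
    return zGL * 0.5f + 0.5f;
}
```

So -1 (the OpenGL near plane) maps to 0, and 1 (far) stays 1.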

I tried this link and this SO question but was confused, and the links weren't that helpful, since I am trying to avoid matrix calculations in the vertex shader while I am rendering everything in 2D for now.

So my questions: What is the formula to transform pixel coordinates to NDC in Metal? Is it possible without using an orthographic projection matrix? And why doesn't my equation work for Metal?

Miguel
  • "it doesn't work"? How, specifically, did it not work? – Ken Thomases Feb 22 '19 at 20:20
  • So, what I was getting was everything zoomed in after the viewport transformation. However, I figured out that the math wasn't the issue; it was how I was configuring the Metal drawable layer. [Here](https://developer.apple.com/library/archive/qa/qa1909/_index.html) and [here](https://developer.apple.com/library/archive/documentation/3DDrawing/Conceptual/MTLBestPracticesGuide/NativeScreenScale.html) helped me figure it out. – Miguel Feb 22 '19 at 22:01

1 Answer


It is of course possible without a projection matrix. Matrices are just a useful convenience for applying transformations. But it's important to understand how they work when situations like this arise, since using a general orthographic projection matrix would perform unnecessary operations to arrive at the same results.

Here are the formulae I might use to do this:

float xScale =  2.0f / drawableSize.x;
float yScale = -2.0f / drawableSize.y;
float xBias = -1.0f;
float yBias =  1.0f;

float clipX = position.x * xScale + xBias;
float clipY = position.y * yScale + yBias;

where drawableSize is the dimensions (in pixels) of the renderbuffer, which can be passed to the vertex shader in a buffer. You can also precompute the scale and bias factors and pass those in instead of the screen dimensions, to save some computation on the GPU.
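As a quick sanity check, the scale/bias form above computes the same mapping as dividing by the half-dimensions, just factored so the constants can be precomputed once per frame. A plain-C sketch of the same arithmetic (function names are mine):

```c
/* Scale/bias form of the pixel-to-NDC mapping: algebraically identical
 * to (px - halfW) / halfW and 1 - py / halfH, but the factors
 * 2/drawableSize and the biases can be precomputed on the CPU. */
static float ndc_x(float px, float drawableW)
{
    return px * (2.0f / drawableW) + (-1.0f);
}

static float ndc_y(float py, float drawableH)
{
    return py * (-2.0f / drawableH) + 1.0f;
}
```

With a 1024x512 drawable, pixel (0, 0) maps to (-1, 1) and (1024, 512) maps to (1, -1), matching the OpenGL-style formula from the question; the Y flip is folded into the negative yScale.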

warrenm
  • That's pretty much the same math I have, except that I do less calculation in the shader. I did find out why I thought it wasn't working: it had to do with the native scaling that happens on iOS. My math was correct, but my setup of the viewport and drawable surface was wrong. PS: I love your work with Metal; I've actually used it. – Miguel Feb 22 '19 at 21:34
  • 1
    Glad you found the issue. I've reworked my answer to be more computationally efficient, at the expense of some readability. – warrenm Feb 22 '19 at 22:21