
I'm trying to implement a screen waypoint system, like you've probably seen in dozens of games: an icon that is visible through walls whenever the target is in the player's viewport. Fortunately, that part happens automatically in my case, but covering objects that are off screen is the hard part.

Ideally the icon should be visible at the edge of the screen at all times, so the player can turn toward the icon to bring the target back into their viewport.

I found a way to do this through trial and error. The method projects the camera-to-target vector onto the screen plane and uses dot products to get the x and y components of a direction in pixel/screen space. I record the signs of those components so they aren't lost in the upcoming trigonometry. Then I calculate the point where that direction would cross each screen edge (if that turned out to be the edge it crosses at all) and constrain it to the padded boundary. The intercept calculation loses the relevant signs because of the trig involved, so I reapply the signs recorded earlier. The rest is just aligning the icon and a small arrow pointing off screen.
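To make the projection step concrete, here's a minimal sketch in pseudo-Python (a hypothetical `screen_space_direction` helper, with plain 3-tuples standing in for Vector3 and the camera basis passed in explicitly):

```python
import math

def screen_space_direction(to_target, look, right, up):
    """Project the unit camera-to-target vector onto the screen plane
    (the plane spanned by the camera's right/up vectors), then read off
    screen-space x/y via dot products. y is negated because screen-space
    y grows downward while the camera's up vector points up."""
    # Remove the component along the look vector: p = v - (v . n) * n
    d = sum(to_target[i] * look[i] for i in range(3))
    p = [to_target[i] - d * look[i] for i in range(3)]
    norm = math.sqrt(sum(c * c for c in p))
    p = [c / norm for c in p]
    x = sum(p[i] * right[i] for i in range(3))
    y = -sum(p[i] * up[i] for i in range(3))
    return x, y
```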

This method works perfectly, but it's a bit of a mess and probably wasteful. I'm not sure how much of it is necessary, or whether any of it could be simplified by better use of trig identities. I think the if/else chain could be collapsed into one block by applying both min and max to the intercepts, though I suspect that would be less performant. I saw a few other questions with a number of different answers to this problem, but either didn't find or overlooked this particular approach, which is the only one that worked for me.
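For clarity, here's the collapse I have in mind, sketched in Python (a hypothetical `intercept_to_edge` helper; coordinates are measured from the screen centre, matching the `signs`/`angle` variables in my code below):

```python
import math

def intercept_to_edge(x, y, half_w, half_h):
    """Single-block version of the quadrant if/else: compute both edge
    intercepts for a centre-relative direction (x, y), then clamp each
    axis with BOTH min and max so all four quadrants are covered.
    Note: purely horizontal or vertical directions (x == 0 or y == 0)
    would still need a separate guard, as in the original."""
    sx, sy = math.copysign(1.0, x), math.copysign(1.0, y)
    angle = math.atan2(y, x)
    # x where the ray meets the top/bottom edge (y = +/- half_h); cotangent
    horizontal_intercept = half_h * abs(math.cos(angle) / math.sin(angle)) * sx
    # y where the ray meets the left/right edge (x = +/- half_w); tangent
    vertical_intercept = half_w * abs(math.tan(angle)) * sy
    ex = max(-half_w, min(half_w, horizontal_intercept))
    ey = max(-half_h, min(half_h, vertical_intercept))
    return ex, ey
```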

Given that this sort of behaviour is extremely common in games, I feel like there must be a more "canonical" method out there. So I'm hoping to get some feedback on the performance and readability of this method, and, if there's a better method, an idea of why that method is better.
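For comparison, one trig-free alternative I suspect may be closer to "canonical" (this is an assumption on my part, not something I've verified against other answers) is to scale the centre-relative direction by whichever axis reaches its edge first, sketched here as a hypothetical `scale_to_edge` helper:

```python
import math

def scale_to_edge(x, y, half_w, half_h):
    """Trig-free edge clamp: scale the centre-relative direction (x, y)
    by the smallest factor t that puts one component exactly on its edge.
    Equivalent to the cot/tan intercepts, but with no angles or sign
    bookkeeping, and no per-quadrant branching."""
    if x == 0 and y == 0:
        return 0.0, 0.0  # degenerate: target dead ahead/behind
    tx = half_w / abs(x) if x != 0 else math.inf
    ty = half_h / abs(y) if y != 0 else math.inf
    t = min(tx, ty)
    return x * t, y * t
```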

Thanks in advance for any insight! If this question is deemed redundant, no worries.

local camera = workspace.CurrentCamera
local worldPoint = self.target.CFrame.Position;
local vector, inViewport = camera:WorldToScreenPoint(worldPoint);
local screenPoint = Vector2.new(vector.X, vector.Y);
local depth = vector.Z;

local screenGuiSizePixels = self.screenGui.AbsoluteSize;
local halfHeight = screenGuiSizePixels.Y/2-100;
local halfWidth = screenGuiSizePixels.X/2-100;
local maxlength = math.sqrt(halfWidth^2 + halfHeight^2); -- the diagonal of one screen quadrant

local screenPointOffsetDirection = Vector2.new(screenPoint.X-screenGuiSizePixels.X/2, screenPoint.Y-screenGuiSizePixels.Y/2).Unit

local screenPosition = clampUdim2InPixels(UDim2.new(0, screenPoint.X, 0, screenPoint.Y), screenGuiSizePixels.X, screenGuiSizePixels.Y);
self.imageLabel.Position = screenPosition;
if inViewport then
    self.pointImageLabel.ImageTransparency = 1; 
else
    
    local v = (worldPoint-camera.CFrame.Position).Unit;
    local n = camera.CFrame.LookVector;
    local vec_p = (v - (v:Dot(n)) * n).Unit
    
    local cameracframe = camera.CFrame
    
    local x = vec_p:Dot(cameracframe.RightVector)
    local y = -vec_p:Dot(cameracframe.UpVector)
    
    local viewportDirection = Vector2.new(x, y);
    local signs = Vector2.new(math.sign(x), math.sign(y));
    local angle = math.atan2(viewportDirection.Y, viewportDirection.X);
    local horizontal_intercept = halfHeight * math.abs(math.cos(angle)/math.sin(angle))*signs.X;--this is cotangent btw
    local vertical_intercept = halfWidth * math.abs(math.tan(angle))*signs.Y;
    
    local screen_edgepoint;
    if signs.X > 0 and signs.Y > 0 then -- bottom right
        screen_edgepoint = Vector2.new(math.min(halfWidth, horizontal_intercept), math.min(halfHeight, vertical_intercept));
    elseif signs.X > 0 and signs.Y <= 0 then -- top right
        screen_edgepoint = Vector2.new(math.min(halfWidth, horizontal_intercept), math.max(-halfHeight, vertical_intercept));
    elseif signs.X <= 0 and signs.Y > 0 then -- bottom left
        screen_edgepoint = Vector2.new(math.max(-halfWidth, horizontal_intercept), math.min(halfHeight, vertical_intercept));
    else -- top left (also covers signs.X == 0, which previously fell through and left screen_edgepoint nil)
        screen_edgepoint = Vector2.new(math.max(-halfWidth, horizontal_intercept), math.max(-halfHeight, vertical_intercept));
    end
    
    screenPosition = UDim2.fromOffset(screenGuiSizePixels.X/2 + screen_edgepoint.X, screenGuiSizePixels.Y/2 + screen_edgepoint.Y);
    
    self.imageLabel.Position = screenPosition;
    
    self.pointImageLabel.ImageTransparency = 0;
    local offsetDirection = Vector2.new(screenPosition.X.Offset-screenGuiSizePixels.X/2, screenPosition.Y.Offset-screenGuiSizePixels.Y/2).Unit
    self.pointImageLabel.Position = UDim2.fromOffset(screenPosition.X.Offset+offsetDirection.X*(waypointRadius+arrowRadius), screenPosition.Y.Offset+offsetDirection.Y*(waypointRadius+arrowRadius));
    local angle = math.atan2(offsetDirection.Y, offsetDirection.X);
    self.pointImageLabel.Rotation = math.deg(angle)+90;
end