
I'm currently running two Cloud Run services (static web + web server).

The web server is connected to an AlloyDB instance, all through a VPC. I have also set up Ingress Control to only allow traffic coming from the VPC.

However, when I set this Ingress Control, the static web Cloud Run service (I'll just call it the client) can't seem to connect to it. A 403 status code is returned, which ends up surfacing as just a CORS error (FastAPI CORS is set up and verified to work when Ingress Control is not set). The only odd error I've seen so far in Logs Explorer is `operationalError: SSL SysCall error: eof detected`, which I can't seem to decipher.
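For reference, the FastAPI CORS setup is roughly along these lines (the allowed origin below is a placeholder, not the real URL):

```python
# Rough sketch of the FastAPI CORS setup; the allowed origin is a placeholder.
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://my-static-web-xxxxx-uc.a.run.app"],  # placeholder client URL
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

@app.get("/health")
def health():
    return {"status": "ok"}
```

My understanding is that when Ingress Control rejects the request, Cloud Run returns the 403 before this middleware ever runs, so the response carries no CORS headers and the browser surfaces it as a CORS error.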

The VPC is a must, since the web server is connected to AlloyDB. (I'm sure there are other ways, but I'd like to follow the currently documented method of connecting within the same VPC network.)
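Concretely, the documented method here is reaching the AlloyDB instance on its private IP through the Serverless VPC Access connector. A rough sketch of what that connection looks like from the web server, assuming SQLAlchemy/psycopg2 (host, credentials and database name are placeholders):

```python
# Sketch of connecting to AlloyDB over its private IP through the VPC.
# Host, credentials and database name below are placeholders.
from sqlalchemy import create_engine, text
from sqlalchemy.engine import URL

url = URL.create(
    drivername="postgresql+psycopg2",
    username="app_user",       # placeholder
    password="app_password",   # placeholder; Secret Manager is preferable in practice
    host="10.0.0.5",           # AlloyDB instance private IP (placeholder)
    port=5432,
    database="app_db",         # placeholder
)

# pool_pre_ping re-checks pooled connections before use, which helps avoid stale
# connections surfacing as errors like "SSL SYSCALL error: EOF detected".
engine = create_engine(url, pool_pre_ping=True)

with engine.connect() as conn:
    print(conn.execute(text("SELECT 1")).scalar())
```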

What could be the issue here?

funtkungus

1 Answer


Try drawing a diagram of your architecture. On your backend, you require traffic to come from the VPC. I'm also sure you added a Serverless VPC Access connector to control the egress and to be able to reach AlloyDB.

Now, think about the frontend. First, if you set ingress = internal, only traffic coming from your VPC is allowed to reach your service. The clients (the browsers) that reach the Cloud Run service are, obviously, not connected to your VPC, so it does not work.

But, if you want to make the connection between the frontend and the backend work, you might think you just need to make the egress of the frontend compliant with the ingress of the backend. I mean: keep everything in the same VPC. Set a serverless VPC connector on the frontend (egress = all) and that's all!

But no, again, think about your diagram: the static frontend is served from Cloud Run, but it runs on the client side. And your clients (the browsers) can't use the VPC connector of your Cloud Run services, because they are not on Google Cloud but on their own computers.


Therefore, there aren't many options... You have to leave ingress = all on all your Cloud Run services, otherwise you won't be able to access them.

guillaume blaquiere
  • You mentioned the browsers run `on their own computer` and not on Google Cloud (possibly the focal point of the issue here?). Are there no other ways? We essentially want to limit access to both frontend and backend to only certain users, but give the frontend access to the backend. – funtkungus Mar 04 '23 at 00:36
  • If you have server-side rendering, you can imagine achieving what you want. But my preferred solution is to rely on the identity of the users and not "the network the requests are coming from" (see the token-verification sketch after these comments). Google itself says "don't trust the network". – guillaume blaquiere Mar 04 '23 at 13:02
  • What we've done is set up two load balancers, one for each service, and it seems to be working(?). We're just getting issues with Cloud Armor now (it seems our SQLi sensitivity is too high, since some requests get blocked??). A question for another day, though. Let me know if you think having two load balancers (one for each service) for distributing traffic is okay for what we want to achieve (if you have time). – funtkungus Mar 08 '23 at 11:14
  • Yes, that design is OK if each load balancer serves only one service. The bad pattern is to have two load balancers serving the same service; that does not make sense. – guillaume blaquiere Mar 08 '23 at 13:37
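To illustrate the identity-based approach mentioned in the comments, here is a minimal sketch of verifying a Google-signed ID token in the FastAPI backend using the `google-auth` library. The audience value is a placeholder, and this is only one possible way to check the caller's identity:

```python
# Sketch of identity-based access control in the FastAPI backend:
# the caller sends a Google-signed ID token, and we verify it instead of
# relying on the network the request came from. The audience is a placeholder.
from fastapi import FastAPI, Header, HTTPException
from google.auth.transport import requests as google_requests
from google.oauth2 import id_token

app = FastAPI()
EXPECTED_AUDIENCE = "https://my-backend-xxxxx-uc.a.run.app"  # placeholder

@app.get("/private")
def private(authorization: str = Header(default="")):
    if not authorization.startswith("Bearer "):
        raise HTTPException(status_code=401, detail="Missing bearer token")
    token = authorization[len("Bearer "):]
    try:
        claims = id_token.verify_oauth2_token(
            token, google_requests.Request(), audience=EXPECTED_AUDIENCE
        )
    except ValueError:
        raise HTTPException(status_code=401, detail="Invalid token")
    return {"caller": claims.get("email")}
```

With something like this in place, only callers presenting a valid ID token for the expected audience get through, regardless of which network the request came from.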