
I have configured my backend following this article: How to deploy multiple micro-services under one API domain with Serverless

I understand that running the services one by one locally is possible using the serverless-offline plugin. But the plugin creates multiple mini API Gateways on different ports.

My front end doesn't know that. It assumes that everything is deployed and ready to be used on one port.

If I want to, let's say, test a feature that requires a valid session, I can't do that locally, as my session and the feature are managed by two different services.

Any manual testing in this kind of situation is only possible once I've deployed all the changes, which takes a lot of time.

Is there a way to deploy all the services on the same port, behind a single API gateway?

Akshay Kumar

2 Answers


I can't say I understand the question completely, but I'll give it a shot. I assume you mean that each 'microservice' is a separate API with its own subdomain (e.g. service1.yourdomain.com, service2.yourdomain.com, etc.), and that you are trying to test this locally on your machine using serverless-offline.

While I don't know how that would work at the subdomain level, there seems to be a path-based option. As mentioned here, there is also a plugin that internally routes requests based on their path: https://github.com/edis/sls-multi-gateways. It essentially puts a proxy in front of the other APIs and forwards requests to the correct port. Full Medium article here: https://aws.plainenglish.io/run-multiple-serverless-applications-d8b38ef04f37.

Having said that, it's always possible to set up a proxy yourself, for example with Docker, that forwards requests to services running on different ports based on hostname or path.
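A minimal sketch of such a path-based reverse proxy, here using nginx (the ports and path prefixes are illustrative assumptions, not anything from the question):

```nginx
# nginx.conf (sketch): expose two local services behind one port
events {}
http {
  server {
    listen 8080;

    # Forward /service1/... to the first local service
    location /service1/ {
      proxy_pass http://127.0.0.1:4001/;
    }

    # Forward /service2/... to the second local service
    location /service2/ {
      proxy_pass http://127.0.0.1:4002/;
    }
  }
}
```

Note that a trailing slash on `proxy_pass` makes nginx strip the matched prefix before forwarding; drop it if your services expect the full original path.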

LRutten
  • The services do not have separate API gateways; they are configured to be deployed on separate paths. For example, service1 would be on yourdomain.com/service1 and service2 on yourdomain.com/service2. Also, although the services can be deployed separately, they are part of a single API gateway. – Akshay Kumar Aug 27 '21 at 09:52
  • Then it should actually be quite straightforward, right? A single API gateway should be mapped to a single hostname (e.g. localhost) and a single httpPort (e.g. 8080). Then accessing localhost/service1:8080 should hit your service1 lambda; same for localhost/service2:8080, which should invoke your service2 lambda. If your app cannot handle non-80 ports in general, you can always try to deploy on httpPort 80 (assuming no other process is running on that port). – LRutten Aug 27 '21 at 10:02
  • Sorry, that should be localhost:8080/serviceX of course, not localhost/serviceX:8080 :P – LRutten Aug 27 '21 at 10:38
  • I tried setting up sls-multi-gateways and it does help, running multiple services in parallel. But they are still running on different ports. I've also created an issue on the sls-multi-gateways repo. This has taken me closer to where I want to be; now all I need is another proxy that forwards requests to sls-multi-gateways. – Akshay Kumar Aug 27 '21 at 10:40

The sls-multi-gateways package runs multiple API gateways. If you have multiple services and want to run them locally at the same time, you can add the package. But that's not a complete solution, as ultimately you probably want the backend to be accessible on a single host.

That means you are adding a dependency, and it only gets you halfway.

When you try to run multiple gateways locally without this package, you get an error stating that port 3002 is already in use. That is because the serverless-offline plugin assigns 3002 as the default port for lambda functions.

Since we are trying to run multiple services, the first service will hog 3002 and the rest will fail to start. To fix this, you have to tell serverless-offline which ports it should use for the lambda functions of each service by specifying lambdaPort in each service's serverless.yml. This can be done like:

custom: 
  serverless-offline:
    httpPort: 4001   # the port on which the service will run
    lambdaPort: 4100 # the port assigned to the first lambda function in the service

So for service n, the HTTP port will be 400n and the lambda port will be 4n00. This pattern is safe as long as each service has fewer than 100 lambda functions. It looks like the lambda port only needs to be assigned to support manual lambda invocations.
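The numbering scheme above can be sketched as a small helper. To be clear, the 400n / 4n00 pattern is just a convention from this answer, not anything serverless-offline requires:

```javascript
// Compute the serverless-offline ports for the n-th service (1-based),
// following the 400n / 4n00 convention described above.
function portsForService(n) {
  return {
    httpPort: 4000 + n,         // service 1 -> 4001, service 2 -> 4002, ...
    lambdaPort: 4000 + n * 100, // service 1 -> 4100, service 2 -> 4200, ...
  };
}

console.log(portsForService(1)); // { httpPort: 4001, lambdaPort: 4100 }
```

Because serverless-offline increments the lambda port for each function in a service, the 100-port gap between services is what makes the pattern safe below 100 functions per service.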

Now you can run all the services in parallel using concurrently. At this point we are where we would be with sls-multi-gateways.
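For example, with concurrently installed as a dev dependency in a root folder above the services, a script could look like this (the service folder names here are assumptions):

```json
{
  "scripts": {
    "dev": "concurrently \"cd service1 && serverless offline\" \"cd service2 && serverless offline\""
  }
}
```

Running `npm run dev` then starts both serverless-offline instances side by side, each on the httpPort from its own serverless.yml.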

Next, what we need is a proxy. I used the createProxyMiddleware function from the http-proxy-middleware package with Express, but you can set up any proxy depending on your project.

Here is what the code for proxy service looks like:

const express = require("express");
const {
  createProxyMiddleware,
} = require("http-proxy-middleware");

const port = process.env.PORT || 5000; // the port on which you want to access the backend

// The local addresses of the individual services (the httpPorts from above)
const urlOfService1 = "http://localhost:4001";
const urlOfService2 = "http://localhost:4002";

const app = express();

// Requests to /serviceX/... are forwarded to the matching service. Note that
// http-proxy-middleware forwards the full original path (including the
// /serviceX prefix) to the target; use its pathRewrite option to strip it.
app.use(
  "/service2",
  createProxyMiddleware({
    target: urlOfService2,
  })
);
app.use(
  "/service1",
  createProxyMiddleware({
    target: urlOfService1,
  })
);

app.listen(port, () => {
  console.log(`Proxy service up on ${port}.`);
});

Akshay Kumar