
I have a single back-end running Node/Express that provides API endpoints, and two static (React) front-ends. The front-ends interact with the users and communicate with the back-end.

I need to use HTTPS throughout once we enter the production stage. The front-ends will require their own domain names.

I’ve been thinking about the simplest way to configure this and have come up with Option 1 (see diagram): the Node.js API server runs on one VPS, and since the front-ends are static sites they can be uploaded to separate hosting providers (UPDATE: I originally wrote "servers" but meant hosting providers) and so get their own domains. Optionally, and I’m unsure whether it’s needed, Cloudflare could be added in front of the front-ends to provide a layer of security. This setup lets the front-ends have separate domain names.

As this is a start-up project and I doubt it will see a large number of visitors, I’m wondering if the above is over-engineered and unnecessarily complicated.

So I’m considering Option 2: hosting the back-end API app and the two front-ends on the same Linux VPS. As the front-ends are static, I added them to the Node.js public folder, which let me access them as http://serverIP:8080/siteA.
Since I want to access each front-end as http://siteA.com, I’m assuming I need a reverse proxy (nginx). The questions to help me decide between the two options are:

  1. For a start-up operation, and given the above scenario, which option is best?
  2. I understand that Node.js requires a port number to work regardless; for the API I don’t mind having a port number (it isn’t something end users see, i.e. http://10.20.30.40:3000), but the two front-ends require their own domain names (www.siteA.com, www.siteB.com), so will I need to employ a reverse proxy (nginx) regardless of whether they are static sites or not? (A rough sketch of what I have in mind follows the questions.)
  3. I’m concerned that someone could attack the API endpoints directly (http://10.20.30.40:3000). Is it true that Option 2 is safer than Option 1 in this respect, i.e. that because all the sites are hosted on the same VPS I could block malicious direct API calls and keep the API from being exposed to the outside world?
  4. My developer once told me that Option 1 is best because nginx adds unneeded complication, but I’m not sure what he meant, hence my confusion; to be honest I don’t think he wanted to add nginx to the server.
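
For reference, here is roughly the nginx server block I imagine Option 2 would need for one of the front-ends. The file paths and the /api prefix are just my assumptions, not something I have running:

```nginx
# /etc/nginx/sites-available/siteA.com -- sketch only; paths and the /api prefix are assumed
server {
    listen 80;
    server_name siteA.com www.siteA.com;

    # static React build for site A
    root /var/www/siteA;
    index index.html;

    location / {
        # fall back to index.html so client-side routing keeps working
        try_files $uri /index.html;
    }

    # pass API calls through to the Node/Express app, which listens on localhost only
    location /api/ {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```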

I’m looking for high-level guidance to get me on the right track. Thanks.


1 Answer


This is, as you have also suspected, unnecessarily complex, and incorrect in some places. Here's a better (and widely used across the industry) design. I strongly recommend dropping the whole VM approach and going for a shared computing unit, unless you are using that machine for some other computation and utilizing it that way saves your company a lot of money, which I strongly doubt. Otherwise you're just creating problems for yourself.

  1. One of the most common mistakes you can make with Node.js (in serious projects) is to serve static content out of the public folder. Don't. Use a CDN instead: depending on the CDN you'll get better telemetry, redundancy, faster delivery, etc. If you aren't expecting high volumes of traffic and the performance of delivering that static content isn't outrageously important at the moment, you can even go for a regular hosting provider; I've done this with Namecheap and GoDaddy before.

  2. Use Node.js shared hosting (or dedicated, depending on size) for your app and use CI/CD to deploy it (see the sketch after this list). You can use a CNAME record to map whatever domain name you want your app on (e.g. https://something.com) to the URL the cloud-hosting provider gives your app. I've used multiple providers (Azure, Heroku, Namecheap) for the apps and primarily Azure DevOps to manage the CI/CD pipeline, although Jenkins is super popular as well. I'd recommend Heroku, since it provides a super easy setup.

  3. When designing any API over HTTP, you should assume people will call your API directly. See this answer for more details: How to prevent non-browser clients from sending requests to my server. I'm not suggesting you put something like Cloudflare in front of it; you may be overthinking it. Look at your traffic first and add it when you need it. As long as you have the right authentication/authorization mechanism in place (a sketch follows this list), security of the API shouldn't be a big problem on these platforms. If you deploy on one of these platforms you won't have to deal with ports either. Unless you reach absolutely massive scale, it will definitely be cheaper for you to operate with high reliability this way.

  4. You won't need to deal with nginx anymore.
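
To make point 2 concrete, here is a minimal sketch of the kind of entry point such a platform expects. The file name and route are illustrative assumptions, not something from your current code:

```javascript
// server.js - minimal Express entry point for a platform host such as Heroku (sketch).
const express = require('express');

const app = express();
app.use(express.json());

// Placeholder route just to show the shape of an API endpoint.
app.get('/api/health', (req, res) => {
  res.json({ status: 'ok' });
});

// The platform injects the port through an environment variable,
// so nothing is hard-coded and no port ever appears in user-facing URLs.
const port = process.env.PORT || 3000;
app.listen(port, () => console.log(`API listening on port ${port}`));
```

On Heroku, a one-line Procfile (`web: node server.js`) tells the platform how to start the process, and the CNAME record then points your custom domain at the hostname the platform assigns to the app.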
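And for point 3, a minimal sketch of the kind of gate I mean. The header name, environment variable and file name are assumptions for illustration; a real app would use proper user authentication (sessions, JWTs, etc.):

```javascript
// auth.js - sketch of a shared-token check applied to every API route.
// API_TOKEN is an assumed environment variable holding a secret shared with the front-ends.
module.exports = function requireApiToken(req, res, next) {
  const auth = req.headers.authorization || '';
  if (auth !== `Bearer ${process.env.API_TOKEN}`) {
    // Reject direct calls that don't carry the expected token.
    return res.status(401).json({ error: 'unauthorized' });
  }
  return next();
};

// In server.js it would be mounted in front of the API routes, e.g.:
//   app.use('/api', require('./auth'));
```

Since the front-ends live on their own domains you'd also allow those origins with the cors package, but CORS only restricts well-behaved browsers; it doesn't stop direct calls, which is exactly why a check like the one above has to run on every request no matter where the API is hosted.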

Mavi Domates
  • Hi, thanks - I've updated my question as I made a mistake: I meant to say that in Option 1 the front-ends could be uploaded to a hosting provider. I think this is in alignment with your answer? I didn't understand your comment on CDNs, as a CDN doesn't provide hosting? About the back-end Node.js hosting: I'm using the MERN stack and a VPS (a Linux droplet) which I've already configured - I agree it's a pain, with a steep learning curve. Are you suggesting that in my scenario I could use a shared service instead, as per your point 2 (Azure, Heroku, Namecheap...)? Sorry about all the questions. – Orange Juice Jones Sep 19 '20 at 21:13
  • Yeah - for static content (JS, CSS, pure HTML files, images, etc.) you should use a CDN. I'd suggest dropping the Linux droplet and configuring a shared unit from somewhere like Heroku; you'll set up what you need in literally 20 minutes. – Mavi Domates Sep 21 '20 at 08:24
  • @Jet I'd appreciate the green tick if this solves your problem – Mavi Domates Sep 21 '20 at 20:32
  • I've never delved into 'shared units', so forgive my ignorance; I'm assuming this refers to things like Heroku Dynos, Elastic Beanstalk and Azure Web Apps. Previously I would FTP my files to the server, and could then inspect the individual remote files over FTP if needed. Is this still possible with the above products? Also, I need to use Mongo and WebSockets - do these platforms allow that? Just trying to compare and understand VPS vs 'shared unit' before taking the plunge. – Orange Juice Jones Sep 22 '20 at 09:25
  • Those platforms allow WebSockets, and there's even a Mongo add-on you can onboard with Heroku. You shouldn't need to FTP the files - just set up a GitHub account and connect GitHub to Heroku. Then you can tell the PaaS: when I commit something new to master, deploy it to my unit. – Mavi Domates Sep 22 '20 at 10:20
  • A VPS is a whole VM, which is why you're having to do so much configuration: you have to handle the routing, configure localhost, etc. With the units I'm suggesting, a lot of that functionality comes by default, so you wouldn't have to deal with nginx, as an example. – Mavi Domates Sep 22 '20 at 10:21
  • Check these out to give you an idea: https://www.youtube.com/results?search_query=deploying+nodejs+app+to+heroku https://www.youtube.com/watch?v=QUvxrzINj5Q – Mavi Domates Sep 22 '20 at 10:22