
I have a problem: people are cloning my website front end and imitating calls to my API from their own domains to abuse my service. The solution I came up with is for the Angular client to check the URL it is running on, encrypt it, and add it as a header to the API call, and to obfuscate the JS code to prevent reverse engineering. This way the API will receive an encrypted header and can make sure that the domain is the proper one.

So on the client side

headers.append(`CustomHeader`, this.encryptDomain());

and on the server side

var domainEncrypted = Request.Content?.Headers?.GetValues("CustomHeader").FirstOrDefault();
var domainPlain = Decrypt(domainEncrypted);
if (domainPlain != myDomain)
{
  return BadRequest();
}

Can you please help me with code samples for matching JS and C# encrypt and decrypt algorithms, so that encryptDomain works on the JS side and Decrypt works on the C# side? I am aware that this is not a perfect solution, but I want to try. And if anyone has a better idea, you are welcome.

Edit: apparently what I want to do is similar to JScrambler's domain lock feature

Toolkit
    Have you set up CORS on your server side application? And if so, have you restricted the origin to your domain? – Plabbee May 22 '22 at 12:34
  • Wouldn't they still be able to use a proxy and programmatically imitate the `Origin` header? That's what's actually happening: they call my API from their API, not from their client – Toolkit May 23 '22 at 05:51
  • There's an even simpler way: https://github.com/Shivam010/bypass-cors. So CORS is not a solution for me – Toolkit May 23 '22 at 05:58
  • If I understood the described solution correctly: if an attacker can override the CORS header, then they will also be able to override your `secret url` header with the correct precalculated value (this value can be grabbed from your site just by inspecting some requests in devtools, for example). You can add full request signing (including not only the domain, but the full request payload + a timestamp), but it will still be possible to extract the signing keys from the client and reuse them in the attacker's code. – Serg May 23 '22 at 07:19
  • @Serg yes, you got it correct. But my solution is to obfuscate the JS so that its function remains unknown and its output is encrypted (but contains the domain, a timestamp and maybe a request hash). Since the encryption algorithm and signing key will be obfuscated, I expect the attacker won't be able to replicate my web client, and thus my API will return data only if the signature is valid and the domain is proper. It all depends, of course, on whether the JS obfuscation can be reversed, but it seems there are strong techniques, including some commercial ones – Toolkit May 23 '22 at 07:48
  • 2
    Security by obscurity will only work temporarily. You would need to implement proper authentication and authorisation. Only allowing users with proper accounts access to the resources. – phuzi May 23 '22 at 09:16
  • @phuzi you are confusing two topics. No one is trying to replace authentication with code obfuscation, lol – Toolkit May 23 '22 at 09:20
  • 1
    True but why couldn't authentication provide you a solution in this instance? – phuzi May 23 '22 at 09:33
  • @phuzi please read the question carefully. Attackers clone the web client, then use their own proxy backend to replicate calls from the original web client including authentication headers to get commercial data that they resell cheaper on their website. No one is bypassing authentication me or attackers – Toolkit May 23 '22 at 09:38
  • 2
    Pulling the thread that @phuzi was going down... "Attackers clone the web client...including authentication headers." So does that mean the attackers are registered users of your web site, or are they able to leverage stale authentication credentials? If the "attackers" are registered users, then it seems there's a need for some identity management (ie, is the user a legitimate consumer of your services) or maybe there's a need for two levels of authentication / authorizations, one for browser access and another for API access, with the latter requiring more vetting... – Trentium May 25 '22 at 22:10
  • 1
    Another potential option is to limit or block the API calls at the server side based on the rate and scope of the calls, and not unblock until you vet the user. (Ie, in effect, apply SLAs to the use of the API to align with the expected use by your average user...) – Trentium May 25 '22 at 22:11
  • 1
    @Trentium attacker uses automated requests to login to my website and obtain my auth token which they then use to get the data from their proxy backend. So their users log in on their website but when they search for something there, attacker issues a request to my API imitating a request from the browser. That's why I need both web client and API to work on my domain only. Rate limiting is a valid option actually. Another one is SSL pinning, but I am still researching – Toolkit May 27 '22 at 11:04
  • Another option seems to be signing up with the offender's website, and then using it to trigger some API calls that you can trace to their originating IP address. Once you identify their backend "client" IP address, simply block it. Heck, you might be able to build something like that into your API, such that if the API sees a certain request come through (e.g., a query for product "xyz123"), then a random few minutes later your server stops responding to that incoming IP address... This means you have to keep on top of whoever is abusing your site, but that's the nature of identity management... – Trentium May 28 '22 at 17:48
  • 1
    @Trentiume we do that, but there are too many of them and they use proxies. I am surprised that there is no legit solution for this tbh, something with SSL – Toolkit May 30 '22 at 12:34

3 Answers


TLDR

There is no guaranteed way to prevent cloned clients from communicating with your API in cases where white-lists of IP addresses can't or shouldn't be used.

Why

Think about it this way. You have a server with some identification rule: the client should present some identifier that marks it as trusted. In your question, that identifier is a domain.

A domain is public information that can be passed in an HTTP header or in the body of your request. That is easy, but it is just as easy for clients to replace this information on their side.

And if you use any kind of cryptography to provide a more secure identification mechanism, you are only making it harder to hack and, again, to pretend to be a trusted client, because every mechanism you use on the client side can be reverse-engineered by a hacker. Just look at this question.

One thing you can use for guaranteed access restriction is a white-list of IP addresses on the server side, because the IP address is part of the TCP/IP transport-level protocol, which has a handshake process that identifies the communicating endpoints to each other, and that is quite hard to fake. Check this question for details.

So what can you do?

CORS

Setting up a CORS policy is the first step toward trusted client-server communication. Most browsers support CORS policies, but of course the client may not be a browser. The question was not about browser-server communication, but I mention it because a browser is a client too.

Client-side encryption

You can use encryption, but I don't see much reason to, because any request to the server can be read through your legitimate client (the website). So even if you encrypt it, anyone has the key and the crypto algorithm on their side and can pretend to be a trusted client. But if you want to...

You need to create a unique key for every request to make the pretenders' lives a little harder. To do that you need a few ingredients:

  • Public key for key generation (encrypted) on the client side
  • Obfuscated key generation JS code
  • Private key for decrypt generated key on the server side

JS-side RSA crypto libraries are easy to find (for example)

Obfuscation libraries are easy to find too (like this)

Server-side decryption can be done with the System.Security.Cryptography namespace if you use a C# backend.

Basically, the more complex your key-generation algorithm and the more obfuscated your code, the harder it is for a hacker to pass themselves off as a trusted client. But as I said, there is no guaranteed way to completely identify a trusted client.

picolino
  • The front-end is cloned. So the JS-side logic is cloned. This requires a back-end solution geared toward traffic monitoring and crafting rules to properly isolate the bad actors and pruning them from the system. – Allen Clark Copeland Jr Jun 07 '22 at 03:57
  • @AllenClarkCopelandJr basically yes; as I said, there is no way to completely guarantee security for this problem. But if browsers enforce a CORS policy, then the frontend can be cloned as a site, but it will have to execute outside a browser engine (where the CORS policy is fully enforced). – picolino Jun 07 '22 at 19:39
  • Yes there is, and it's called a domain lock feature. Commercial JS obfuscation and domain lock – Toolkit Jun 09 '22 at 09:15

You cannot prevent people from copying your website's FE assets... they are supposed to be publicly available. You could try to make it a little harder by splitting your built app into more chunks (with Angular's lazy loading or by manipulating webpack's config). Still, browsers require code in plain text, so although this makes copying a little harder, it does not prevent it.

When you build Angular for production, it already obfuscates the code through its optimizations (minification, tree shaking and so on).

To mitigate the problem of people misusing your server resources, you need robust back-end request authorization and some misuse detection.

Configuring CORS will not work since, as you reported, the attackers are using BE proxies.

Make sure your request authentication is solid. A market-standard approach is a JWT payload embedded in the Authorization header of each request. It is simple, reliable and resource-inexpensive.

I would also recommend implementing request throttling, but that is a separate question.
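For illustration, a fixed-window throttle can be as small as this (an in-memory sketch; a real deployment would typically keep the counters in shared storage such as Redis):

```javascript
// Minimal in-memory fixed-window rate limiter.
class RateLimiter {
  constructor(limit, windowMs) {
    this.limit = limit;       // max requests allowed per window
    this.windowMs = windowMs; // window length in milliseconds
    this.hits = new Map();    // key -> { windowStart, count }
  }

  // Returns true if the request identified by `key` (e.g. user id or IP)
  // is allowed, false if it should be throttled.
  allow(key, now = Date.now()) {
    const entry = this.hits.get(key);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.hits.set(key, { windowStart: now, count: 1 }); // new window
      return true;
    }
    entry.count += 1;
    return entry.count <= this.limit;
  }
}
```

Usage might look like `const rl = new RateLimiter(100, 60000); if (!rl.allow(clientIp)) return respond429();`. Keying by authenticated user rather than IP sidesteps the proxy problem mentioned in the comments.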

If your authentication is already solid, you would then need to detect when your real users are misusing your system. There are many tools for monitoring traffic (like Azure's), but there is no umbrella definition of "unusual traffic"; detecting it is something you would need to custom-build for the specifics of your system. Once you have a network traffic tool in place, that should help you get started.

The Fabio

A couple of solutions for you. Firstly, you can block by applying a CORS policy on the server. If you still want to do it in code, you can block based on the hostname in C# like this.

var hostname = requestContext.HttpContext.Request.Url.Host;
if (hostname != myDomain)
{
  return BadRequest();
}
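For completeness, the equivalent check in a Node/Express-style middleware might look like the sketch below. Note that, as the comments on the question point out, a proxying attacker can set the Host header to anything, so this only filters naive clients; the domain value is a placeholder.

```javascript
// Hypothetical allowed domain; same idea as the C# check above.
const MY_DOMAIN = 'example.com';

// Express-style middleware: reject requests whose Host header does not
// match the expected domain. A proxying attacker can forge this header,
// so treat it as a speed bump, not a security boundary.
function requireDomain(req, res, next) {
  const host = (req.headers.host || '').split(':')[0]; // strip any port
  if (host !== MY_DOMAIN) {
    return res.status(400).send('Bad Request');
  }
  next();
}
```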