
I'm working on a webapp that lets you build tools in the browser to be used by others.

So, for example, someone might make a calculator app with custom buttons that take user input, process it in a certain way (currently using `eval()`), and display the output. They type their script into an input and we store it. When the button is pressed, we call `eval()` on the script and display the result. Beyond that, we want to let them manipulate the DOM, add elements to the screen, etc. - just as though we let them upload a JavaScript file and run it. We don't vouch for the apps' safety, just like Squarespace doesn't vouch for its users' websites' safety.

In other words, we actually want to let users execute code in the browser arbitrarily, similarly to how Squarespace lets people who build websites execute arbitrary javascript to customize their website.

This can't be moved to a sandboxed server because it needs to happen in many places in real-time (some buttons trigger multiple scripts in sequence with our native functionality in between them) and interact with the DOM.

It appears we can use the Function constructor to prevent access to local variables, but otherwise it's not much different from eval (see: eval vs Function constructor).
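To illustrate the difference, here is a minimal sketch (variable names like `secret` and `apiToken` are made up for the example): a function built with the `Function` constructor can't see enclosing local variables the way `eval` can, but it still runs with full access to globals, so it is not a sandbox.

```javascript
function runUserScript(src) {
  const secret = "local value"; // eval(src) could read this...
  const fn = new Function(src); // ...but a Function-constructed body cannot
  return fn();
}

// Locals are hidden from the constructed function:
console.log(runUserScript("return typeof secret;")); // "undefined"

// Globals are not hidden, so overall this is no safer than eval:
globalThis.apiToken = "abc123"; // stand-in for any sensitive global
console.log(runUserScript("return globalThis.apiToken;")); // "abc123"
```

The same holds for code imported as a .js file: all three approaches ultimately execute arbitrary script with full access to the page's globals and DOM.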

Perhaps an alternative to directly executing their functions using eval is to generate a javascript file with each script defined as a function and then to import it and call those functions? But is that any better?

We effectively want to let them create a custom javascript file and call the functions on button press (this is what Squarespace allows basically), we're just using eval because it's simpler right now.

If there's some scoping/spoofing risk with evaluating a javascript string vs importing it as a file then that's obviously worth changing to a file, but we want to let "developers" write scripts and we want to run the scripts on their behalf just like Squarespace does.

Is what we're currently doing as secure as what platforms like Squarespace are (letting you write code to a .js file and importing it on the user-created website)? Or should we be generating a javascript file with the code in it and importing it/using some other method?

Max Hudson
  • If the user is executing their own provided code in their own browser, you don't really need to worry about using `eval()`. Anything they do this way they can easily do by hand in DevTools. – Barmar Jul 11 '23 at 01:11
  • You only need to worry if user 1 can create functions that would be executed by user 2. – Barmar Jul 11 '23 at 01:12
  • The tools would be publishable just like squarespace sites are publishable - visitors 1 & 2 both run the code written by the website author, but visitor 1 can't impact visitor 2 - so it's being run by other users, but it's one-directional - the author of the tool controls what code runs - that's how visitors 1 & 2 actually get any functionality – Max Hudson Jul 11 '23 at 01:17
  • Then what's the problem? Just call `eval()`. – Barmar Jul 11 '23 at 01:21
  • There may be no problem - I updated my question to more clearly state what I'm asking at the end, but it sounds like your answer would be what we're doing is as secure as what Squarespace – Max Hudson Jul 11 '23 at 01:24
  • I'm not familiar with SquareSpace, so I couldn't tell how what you're doing compares. – Barmar Jul 11 '23 at 01:26
  • Squarespace is an arbitrary example of user-provided code being imported from a js file into a visitor's website, but if you're not sure how to answer the generic version of the question no worries! – Max Hudson Jul 11 '23 at 01:29
  • The general answer is that you don't need to run code in a sandbox if you trust the code. – Barmar Jul 11 '23 at 01:36
  • But is there some risk with using eval specifically that running it in the form of an imported js file doesn't have? – Max Hudson Jul 11 '23 at 01:51
  • No. They're all just executing arbitrary JS code that you supply. – Barmar Jul 11 '23 at 01:51

1 Answer


After reading the clarification and the discussion comments, the following is my $0.02 on the issue.

Assumptions about your application context: you are building a platform where one user (say, "x") can write arbitrary JS code that another user (say, "y") then executes. I assume "x"-type users are content creators and "y"-type users are consumers/customers/competitors of "x".

The problem. In general, allowing one user to run code injected by another user breaks one of the most fundamental security guarantees offered by the browser: the same-origin policy. Your "x"-type users will enjoy the same privileges as you (the website owner) from the perspective of "y"-type users. To the browsers of "y" users, your code and the code from "x"-type users are indistinguishable.

This means user "x" can read and alter any content/view of user "y". They can exfiltrate all activity and sensitive information (session tokens, usernames, passwords, credit card info, etc.) to a website they control.

Work-in-progress solution. The bad news is there is no working solution that sidesteps these issues without sandboxing/restricting the capabilities of the user-submitted code, and even sandboxing will not guarantee foolproof security, but that's the best we can do here (I think). The main idea is to block API calls that you think are problematic; `eval` is a great candidate to start with. I personally prefer an allow-list-based approach to sandboxing over a deny-list-based one. With an allow list, I know exactly what I am trading off, which is important when doing a postmortem on an issue. In contrast, it is hard to exhaustively cover everything you want to block with a deny list.
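As a rough sketch of the allow-list idea (the names `ALLOWED` and `runSandboxed` are hypothetical, not a real API), you can route every free identifier in the user's script through a `Proxy` and only let allow-listed globals through. Note this is exactly the kind of imperfect sandbox described above: determined code can still escape (e.g. via `({}).constructor.constructor`), so treat it as a mitigation, not a guarantee.

```javascript
// Globals the user script is allowed to touch (illustrative list).
const ALLOWED = new Set(["Math", "JSON", "console"]);

function runSandboxed(src) {
  const scope = new Proxy(Object.create(null), {
    has: () => true, // claim every identifier so lookups stop at the proxy
    get: (_, key) => {
      if (key === Symbol.unscopables) return undefined;
      if (ALLOWED.has(key)) return globalThis[key];
      throw new ReferenceError(`${String(key)} is not allowed`);
    },
  });
  // `with` makes free identifiers in src resolve through the proxy.
  // (Function bodies are sloppy-mode by default, so `with` is legal here.)
  const fn = new Function("scope", `with (scope) { return (${src}); }`);
  return fn(scope);
}

console.log(runSandboxed("Math.max(1, 2)")); // 2
// runSandboxed("fetch('https://x.test')")   // throws ReferenceError
```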

One approach to building an allow list: first create a corpus of "legitimate" scripts that you would normally allow and trust. Run them and log all the API calls they make, then analyze the usage context. Based on that analysis, you should be able to categorize the logged APIs into i) safe APIs and ii) risky APIs. For risky APIs, dig deeper for heuristics that help you identify the "legitimate" contexts and block the other kinds of use. Unfortunately, this is a rabbit hole; I don't know how to avoid it if you want to offer your users a safe space.
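The logging step can be sketched with a `Proxy` wrapper as well (the names `instrument` and `usageLog` are made up for this example): wrap a global, run your trusted corpus against the wrapped version, and record every property access to seed the allow list.

```javascript
// Records which APIs the trusted corpus actually touches.
const usageLog = new Set();

function instrument(name, target) {
  return new Proxy(target, {
    get(obj, key, receiver) {
      usageLog.add(`${name}.${String(key)}`); // log the access
      return Reflect.get(obj, key, receiver); // then behave normally
    },
  });
}

// Example: run corpus code against an instrumented Math.
const loggedMath = instrument("Math", Math);
loggedMath.max(1, 2);
loggedMath.floor(2.7);
console.log([...usageLog]); // ["Math.max", "Math.floor"]
```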

Sazzadur Rahaman
  • Separate users on their own domains would be sufficient to address this concern, right? Same origin policy only applies if they're running from the same domain I'd think. Presumably this can be done somehow with programmatically generated subdomains? – Max Hudson Jul 13 '23 at 01:56
  • Yes, that's right. That's a good line of thinking! – Sazzadur Rahaman Jul 13 '23 at 03:16