
I'm using RequireJS for my web application. I'm using EmberJS for the application framework. I've come to a point where, I think, I should start bundling my application into a single js file. That is where I get a little confused:

If I finally bundle everything into one file for deployment, then my whole application loads in one shot, instead of on demand. Isn't bundling contradictory to AMD in general and RequireJS in particular?

What further confuses me is what I found on the RequireJS website:

Once you are finished doing development and want to deploy your code for your end users, you can use the optimizer to combine the JavaScript files together and minify it. In the example above, it can combine main.js and helper/util.js into one file and minify the result.
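For reference, a minimal build profile for the optimizer looks something like the following (a sketch only; the file names come from the quoted example and the paths are my assumptions):

// build.js - minimal r.js build profile (sketch; file names from the quoted example)
({
    baseUrl: ".",
    name: "main",
    out: "main-built.js"
})

// run with: node r.js -o build.js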

I found this similar thread but it doesn't answer my question.

Code Poet

2 Answers


If I finally bundle everything into one file for deployment, then my whole application loads in one shot, instead of on demand. Isn't bundling contradictory to AMD in general and RequireJS in particular?

It is not contradictory. Loading modules on demand is only one benefit of RequireJS. A greater benefit in my book is that modularization helps to use a divide-and-conquer approach. We can look at it in this way: even though all the functions and classes we put in a single file do not benefit from loading on demand, we still write multiple functions and multiple classes because it helps break down the problem in a structured way.

However, the multiplicity of modules we create in development does not necessarily make sense when running the application in a browser. The greatest cost of on-demand loading is sending multiple HTTP requests over the wire. Let's say your application has 10 modules and you send 10 requests to load them because you load these modules individually. Your total cost is going to be the cost you have to pay to load the bytes from the 10 files (let's call it Pc, for payload cost), plus an overhead cost for each HTTP request (let's call it Oc, for overhead cost). The overhead has to do with the data and computations that have to occur to initiate and close these requests. They are not insignificant. So you are paying Pc + 10*Oc. If you send everything in one chunk you pay Pc + 1*Oc. You've saved 9*Oc. In fact the savings are probably greater, because compression (which is often used at both ends to reduce the size of the data transmitted) provides greater benefits if the entire payload is compressed together than if it is compressed as 10 separate chunks. (Note: the above analysis omits details that are not useful to cover.)
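To put rough numbers on it (purely illustrative assumptions, not measurements):

// Illustrative figures only - assumed, not measured.
var payloadCost = 200;   // Pc: ms spent transferring the module bytes themselves
var overheadCost = 50;   // Oc: ms of per-request overhead (setup, headers, teardown)
var requests = 10;

var separate = payloadCost + requests * overheadCost; // 200 + 10*50 = 700 ms
var bundled  = payloadCost + 1 * overheadCost;        // 200 +  1*50 = 250 ms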

Someone might object: "But you are comparing loading all the modules separately versus loading all the modules in one chunk. If we load on demand then we won't load all the modules." As a matter of fact, most applications have a core of modules that will always be loaded, no matter what. These are the modules without which the application won't work at all. For some small applications this means all modules, so it makes sense to bundle all of them together. For bigger applications, this means that a core set of modules will be used every single time the application runs, but a small set will be used only on occasion. In the latter case, the optimization should create multiple bundles.

I have an application like this. It is an editor with modes for various editing needs. A good 90% of the modules belong to the core. They are going to be loaded and used anyway, so it makes sense to bundle them. The code for the modes themselves is not always going to be used, but all the files for a given mode are going to be needed if the mode is loaded at all, so each mode should be its own bundle. So in this case a model with one core bundle and a series of mode bundles makes sense to a) optimize the deployed application but b) keep some of the benefits of loading on demand. That's the beauty of RequireJS: it does not force you to do one or the other exclusively.
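As a rough sketch of what such a build looks like (the module names below are invented for illustration, not taken from my actual editor), the r.js build profile lists one core bundle plus one bundle per mode, each mode excluding what the core already contains:

// build.js - hypothetical multi-bundle r.js profile
({
    baseUrl: "js",
    dir: "build",
    modules: [
        // the core bundle, loaded on every run
        { name: "main" },
        // one bundle per mode; exclude the core so its modules are not duplicated
        { name: "modes/outline", exclude: ["main"] },
        { name: "modes/table", exclude: ["main"] }
    ]
})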

Louis
  • Thanks for the detailed answer. This makes perfect sense. – Code Poet Dec 12 '13 at 03:50
  • Excellent response. How do you accomplish configuring and loading these different bundles using requirejs? – Johnathon Sanders Feb 02 '14 at 20:26
  • @Johnathon I've covered some of the details of how one does this in [this answer](https://stackoverflow.com/questions/20469611/r-js-understand-requirejss-r-js-optimizer/20469976#20469976). – Louis Feb 02 '14 at 20:47
  • It seems to me you are completely ignoring the benefits of loading separate modules asynchronously. Most browsers permit 6 simultaneous requests, and even more if you use your server in conjunction with a CDN. So even though you have the cost of opening and closing several separate http requests, all of these requests are happening in parallel, so the modules load faster being separated than if they were in one single file. This doesn't mean you shouldn't bundle, but when bundling you should take full advantage of asynchronous loading instead of just going for the minimum number of files. – wired_in Feb 13 '14 at 22:15
  • @wired_in "so the modules load faster being separated than if they were in one single file." No they don't. At multiple points in the system the communication pipe is a single lane. You get the parallelism by having the multiple transfers share the line in turn. It's like having a single-core system and having six CPU-bound tasks execute in parallel through time slicing vs executing them in sequence. The six tasks will be completed earlier if they execute in sequence. – Louis Feb 13 '14 at 23:10
  • @Louis So you're telling me that browsers cannot make asynchronous/concurrent requests and have more than one file downloading at the same time? "The six tasks will be completed earlier if they execute in sequence." So this 'asynchronous' facade that browsers claim to be able to do is actually slower than just downloading files sequentially, begging the question why would they do that? You're using a lot of fluff with nothing backing up these claims, and that last sentence just doesn't make any sense. – wired_in Feb 14 '14 at 02:49
  • @wired_in You completely misunderstood. – Louis Feb 14 '14 at 11:12
  • @Louis Well most of what you said seems purposefully vague using nothing but metaphors, but I'm not sure how I'm misunderstanding the last sentence, unless that's not what you meant to say. – wired_in Feb 14 '14 at 16:08
  • @Louis I would love to hear your explanation on what I'm not understanding here. You are clearly claiming that with asynchronous loading you can't download a 100k file faster if it were split into 5 20k files that download concurrently. This goes against everything I've ever read in books and online. I'd like some source on this. – wired_in Feb 14 '14 at 16:20
  • All calls on a browser share the same transfer pipe, as do all processes. There is only so much bandwidth available, so if each separate request generates `x` bytes of overhead, the total load on the pipe is increased with each successive call. You do not magically get more bandwidth by increasing the number of queues. https://www.npmjs.org/package/cjs-vs-amd-benchmark – Nathaniel Johnson Feb 19 '14 at 16:04

While developing, you want small, single-focused files, which causes their number to increase. When running in production, however, many HTTP requests really hurt performance. Then again, you do not want to load the entire application upfront; this is also not optimal.

To address this, I have created a small project on GitHub, require-lazy; you could call it a plugin for the builder (r.js). It can lazy load parts of your application with a simple syntax and then create separately downloadable bundles during the build process. So if your application consists of 2 views that need to be loaded independently, require-lazy will (ideally) build 3 js files: (1) the bootstrap code and common libraries, (2) view 1 with all its private scripts and (3) view 2 with all its private scripts.

Lazy loading is simply defined as:

define(["lazy!view1"], function(view1) { .... });

And `view1` must be accessed through a promise:

view1.get().done(function(realView1) {
    ...
});

The project is available through npm, the build process runs through grunt, and there is a Bower component.

Comments are more than welcome.

Nikos Paraskevopoulos