
We are running a build of our application using Dojo 1.9, and the build itself is taking an inordinate amount of time to complete: somewhere in the range of 10-15 minutes.

Our application is not huge by any means. Maybe 150K LOC. Nothing fancy. Furthermore, when running this build locally using Node, it takes less than a minute.

However, we run the build on a RHEL server with plenty of space and memory, using Rhino. In addition, the tasks are invoked through Ant.

We also use Shrinksafe as the compression mechanism, which could be part of the problem. It seems like Shrinksafe is compressing the entire Dojo library (which is enormous) on every build, which seems silly.

Is there anything we can do to speed this up? Or anything we're doing wrong?

sma
  • Node is much faster than Rhino for our build as well; is installing Node on your server an option? – Kryptic Oct 15 '13 at 20:13
  • Does your code compress the entire dojo library (dojo, dojox, and dijit) each time you build also? Seems ridiculous to me. Know of any way around that? – sma Oct 15 '13 at 22:35
  • It all gets bundled into a single layer file, and only the dojo modules that are used get included. So you should only need to reference that file in production. – Kryptic Oct 16 '13 at 17:31
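
For readers unfamiliar with how that bundling is set up: a Dojo 1.9 build profile declares layers roughly as in the sketch below. This is a minimal, hypothetical fragment; the package names, the app/main module id, and the paths are placeholders, not taken from the question.

```js
// app.profile.js (hypothetical): minimal layer declaration for a Dojo 1.9 build.
var profile = {
    basePath: "..",
    releaseDir: "release",
    packages: [
        { name: "dojo",  location: "dojo" },
        { name: "dijit", location: "dijit" },
        { name: "app",   location: "app" }
    ],
    layers: {
        // Everything reachable from app/main is concatenated into this one file;
        // in production, only release/dojo/dojo.js needs a script tag.
        "dojo/dojo": {
            include: [ "dojo/dojo", "app/main" ],
            boot: true,        // include the loader so the layer can bootstrap itself
            customBase: true   // don't pull the entire default dojo.js base into the layer
        }
    }
};
```

With a profile along these lines, the production page references only the built layer file, as Kryptic describes above.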

1 Answer


Yes, that is inordinate. I have never seen a build take so long, even on an Atom CPU.

In addition to the prior suggestion to use Node.js instead of Rhino (Rhino is by far the biggest killer of build performance): if all of your code has been correctly bundled into layers, you can set optimize to an empty string (don't optimize individual modules) and layerOptimize to "closure" (Closure Compiler) in your build profile, so that only the layers are run through the optimizer.
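
A rough sketch of what those two switches might look like in the profile (the property names are the documented builder options; everything else here is a placeholder):

```js
// app.profile.js (hypothetical): only the optimization switches are shown;
// they sit alongside the layers definition in the same profile object.
var profile = {
    // Skip per-module optimization entirely, so unbundled modules
    // are copied but not individually compressed.
    optimize: "",

    // Run only the concatenated layer files through the Closure Compiler
    // instead of ShrinkSafe.
    layerOptimize: "closure"
};
```

If switching the build runner is ever back on the table, the 1.9 builder can also be driven with Node as something like node dojo/dojo.js load=build --profile app.profile.js --release (paths depend on your source layout), which avoids Rhino entirely.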

Other than that, you should make sure that there isn’t something wrong with the system you are running the build on. (Build files are on NAS with a slow link? Busted CPU fan forcing CPUs to underclock? Ancient CPU with only a single core? Insufficient/bad RAM? Someone else decided to install a TF2 server on it and didn’t tell you?)

C Snover
  • Thanks for the response. If I were to do that with the optimization settings, then wouldn't that mean the entire Dojo library would be uncompressed at runtime? Further, is that necessarily a bad thing, too? Because theoretically any Dojo file I request should be in my layer, right? – sma Oct 16 '13 at 16:15
  • Modules that are not in layers would not be optimized. If the Dojo modules you use are in layers, then they will be optimized within those layers. The only semi-justifiable reason why all files are built by the build system to begin with is that you can conditionally require modules at runtime, so they need to exist to avoid the app failing. But as long as you know all your modules are built into layers, it doesn't matter. – C Snover Oct 16 '13 at 21:51
  • Thank you for the response. Makes sense and it has sped up my build. Node is out of the question, but I did switch to Closure for optimization (as opposed to Shrinksafe) and I am now only compressing my layer. – sma Oct 16 '13 at 23:51
  • 10-15 minutes is not uncommon using some of the smaller AWS instances (small/medium). They have neither the read/write speed nor the processing power to complete a build in a reasonable amount of time. – Andrew Dec 03 '13 at 06:18