
I have a large scala code base. (https://opensource.ncsa.illinois.edu/confluence/display/DFDL/Daffodil%3A+Open+Source+DFDL)

It's about 70K lines of Scala code. We are on Scala 2.11.7.

Development is getting difficult because the edit-compile-test-debug cycle is too long for small changes.

Incremental recompile times can be a minute or longer, and that's without optimization turned on and with only a few changes edited into files. Sometimes a very small change causes a huge recompilation.

So my question: What can I do by way of organizing the code, that will improve compilation time?

E.g., decomposing code into smaller files? Will this help?

E.g., more smaller libraries?

E.g., avoiding use of implicits? (we have very few)

E.g., avoiding use of traits? (we have tons)

E.g., avoiding lots of imports? (we have tons - package boundaries are pretty chaotic at this point)

Or is there really nothing much I can do about this?

I feel like this very long compilation is somehow due to an immense amount of recompiling caused by dependencies, and I am thinking about how to reduce false dependencies... but that's just a theory.

I'm hoping someone else can shed some light on something we might do which would improve compilation speed for incremental changes.

Mike Beckerle
    Why hasn't this been upvoted like a hundred times in the first five minutes? – som-snytt Nov 04 '16 at 01:03
    Maybe because compilation time is like the biggest issue with Scala and it has been discussed hundreds of times before, and a quick googling certainly would have helped. – rethab Nov 04 '16 at 06:19
    I did google this. Lots of things talk about why scala compilation is slow, but that doesn't mean there's nothing I can do to my code to help. (Well, that could be the case that nothing will help, but I'm hoping some people with large scala code have found some techniques that help from the source code organization side.) – Mike Beckerle Nov 04 '16 at 20:03
  • Essentially, a single sbt 'project' is a unit of compilation. The more source files you have in a single project, the longer it takes to compile. Break up your projects into root projects and subprojects. – Yawar Nov 11 '16 at 04:40
    There is a reply from Martin Odersky that gives clues about features to avoid if you want your code to compile faster: http://stackoverflow.com/questions/3490383/java-compile-speed-vs-scala-compile-speed/3612212 – Serg M Ten Nov 11 '16 at 09:30
  • great question. – user6035379 Nov 16 '16 at 18:50
    @MikeBeckerle You might find our Scala Days 2017 talk "Compile like a boss!" relevant to your quest https://www.youtube.com/watch?v=QKvzyHroKLA :) – Mirco Dotta Oct 11 '17 at 18:18

5 Answers

7

Here are the phases of the Scala compiler, along with slightly edited versions of their comments from the source code. Note that this compiler is unusual in being heavily weighted towards type checking and towards transformations that are more like desugarings. Other compilers include a lot of code for optimization, register allocation, and translation to an IR.

Some top-level points: there is a lot of tree rewriting. Each phase tends to read in a tree from the previous phase and transform it into a new tree. Symbols, in contrast, remain meaningful throughout the life of the compiler, so trees hold pointers to symbols, and not vice versa. Instead of rewriting symbols, new information gets attached to them as the phases progress.

Here is the list of phases from Global:

    analyzer.namerFactory: SubComponent,
    analyzer.typerFactory: SubComponent,
    superAccessors,  // add super accessors
    pickler,         // serializes symbol tables
    refchecks,       // perform reference and override checking, translate nested objects
    liftcode,        // generate reified trees
    uncurry,         // uncurry, translate function values to anonymous classes
    tailCalls,       // replace tail calls by jumps
    explicitOuter,   // replace C.this by explicit outer pointers, eliminate pattern matching
    erasure,         // erase generic types to Java 1.4 types, add interfaces for traits
    lambdaLift,      // move nested functions to top level
    constructors,    // move field definitions into constructors
    flatten,         // get rid of inner classes
    mixer,           // do mixin composition
    cleanup,         // some platform-specific cleanups
    genicode,        // generate portable intermediate code
    inliner,         // optimization: do inlining
    inlineExceptionHandlers, // optimization: inline exception handlers
    closureElimination, // optimization: get rid of uncalled closures
    deadCode,           // optimization: get rid of dead code
    if (forMSIL) genMSIL else genJVM, // generate .class files

Some notes on the Scala compiler's workload

Thus the Scala compiler has to do a lot more work than the Java compiler. In particular, some things make the Scala compiler drastically slower, including:

  • Implicit resolution. When scalac needs to find an implicit value for an implicit declaration, the search bubbles up through every enclosing scope, and this search time can be massive (particularly if you reference the same implicit value many times and it is declared in some library far down your dependency chain). Compile time gets even worse when you take into account implicit trait resolution and type classes, which are used heavily by libraries such as scalaz and shapeless. Using a huge number of anonymous classes (i.e. lambdas, blocks, anonymous functions) also adds up, and macros obviously add to compile time.
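As a small, hypothetical illustration (the names below are invented, not from any code base mentioned here): every call to the implicit variant triggers an implicit search through all enclosing scopes at compile time, while the explicit variant gives scalac nothing to search for:

```scala
object ImplicitCost {
  case class Ctx(name: String)

  // each call site makes scalac search every enclosing scope for a Ctx
  def greetImplicit(msg: String)(implicit ctx: Ctx): String =
    s"${ctx.name}: $msg"

  // plain parameter: resolved once by the programmer, no compile-time search
  def greetExplicit(msg: String, ctx: Ctx): String =
    s"${ctx.name}: $msg"

  def main(args: Array[String]): Unit = {
    implicit val ctx = Ctx("dev")
    println(greetImplicit("hello"))      // implicit search happens here
    println(greetExplicit("hello", ctx)) // no search
  }
}
```

One such search is cheap; the cost described above comes from thousands of them, especially when the candidate implicits live deep in a library dependency chain.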

    A very nice writeup by Martin Odersky

    Further, both the Java and Scala compilers convert source code into JVM bytecode and do very little optimization. On most modern JVMs, once the program bytecode is run, it is converted into machine code for the computer architecture on which it is being run. This is called just-in-time (JIT) compilation. The level of code optimization is low with just-in-time compilation, however, since it has to be fast. To avoid recompiling, the so-called HotSpot compiler only optimizes parts of the code which are executed frequently.

    A program might have different performance each time it is run. Executing the same piece of code (e.g. a method) multiple times in the same JVM instance might give very different performance results depending on whether the particular code was optimized in between the runs. Additionally, measuring the execution time of some piece of code may include the time during which the JIT compiler itself was performing the optimization, thus giving inconsistent results.

    One common cause of performance deterioration is the boxing and unboxing that happens implicitly when passing a primitive type as an argument to a generic method; frequent GC is another.
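A hedged sketch of the boxing point (names invented for illustration): the generic method below boxes each Int to java.lang.Integer because its type parameter erases to Object, while a @specialized type parameter makes scalac emit a dedicated Int version with no boxing:

```scala
object BoxingDemo {
  // T is erased to Object, so each Int element is boxed on the way in
  def sumGeneric[T](xs: Seq[T])(f: T => Int): Int = xs.map(f).sum

  // @specialized(Int) generates an extra Int-only copy of this method,
  // so calls with Int avoid boxing entirely
  def applyTwice[@specialized(Int) T](x: T, f: T => T): T = f(f(x))

  def main(args: Array[String]): Unit = {
    println(sumGeneric(Seq(1, 2, 3))(identity)) // 6
    println(applyTwice(10, (n: Int) => n + 1))  // 12
  }
}
```

Note that specialization trades compile time and code size for runtime speed: each specialized type parameter multiplies the bytecode scalac has to generate.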

    There are several approaches to avoiding the above effects during measurement. Benchmarks should be run using the server version of the HotSpot JVM, which does more aggressive optimizations. VisualVM is a great choice for profiling a JVM application: it is a visual tool integrating several command-line JDK tools with lightweight profiling capabilities, though Scala's abstractions are complex and unfortunately VisualVM does not yet display them well. Also watch for code that leans heavily on exists and forall, the predicate-taking methods of Scala collections, since they may traverse an entire sequence.

    Making the modules cohesive and less dependent on each other is also a viable solution. Note that intermediate code generation is sometimes machine dependent, and different architectures give varied results.

    An alternative: Typesafe has released Zinc, which separates the fast incremental compiler from sbt and lets Maven and other build tools use it. Using Zinc with the scala-maven-plugin has made compiling a lot faster.

    A simple problem: Given a list of integers, remove the greatest one. Ordering is not necessary.

Below is one version of the solution (an average one, I guess):

def removeMaxCool(xs: List[Int]) = {
  val maxIndex = xs.indexOf(xs.max);
  xs.take(maxIndex) ::: xs.drop(maxIndex+1)
}

It's Scala idiomatic, concise, and uses a few nice list functions. It's also very inefficient. It traverses the list at least 3 or 4 times.

Now consider this Java-like solution. It's also what a reasonable Java developer (or Scala novice) would write.

import scala.collection.mutable.ArrayBuffer

def removeMaxFast(xs: List[Int]) = {
    val res = ArrayBuffer[Int]()
    var max = xs.head
    var first = true
    for (x <- xs) {
        if (first) {
            first = false
        } else {
            if (x > max) {
                res.append(max)
                max = x
            } else {
                res.append(x)
            }
        }
    }
    res.toList
}

Totally non-Scala idiomatic, non-functional, non-concise, but it's very efficient. It traverses the list only once!

So trade-offs should also be weighed, and sometimes you may have to write code like a Java developer if nothing else works.
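For completeness, here is a hypothetical middle ground (the name removeMaxFold is made up): a single functional pass using foldLeft that stays immutable but still traverses the list only once. As the problem statement allows, the ordering of the result is not preserved.

```scala
object RemoveMax {
  // Single pass with foldLeft: carry the running max and the "losers" seen so far.
  // Removes one occurrence of the greatest element; result order may differ.
  def removeMaxFold(xs: List[Int]): List[Int] = xs match {
    case Nil => Nil
    case h :: t =>
      val (kept, _) = t.foldLeft((List.empty[Int], h)) {
        case ((acc, max), x) =>
          if (x > max) (max :: acc, x) // old max is dethroned: keep it
          else (x :: acc, max)        // x is not the max: keep it
      }
      kept.reverse
  }
}
```

This keeps the code functional and allocation-light without falling back to mutable buffers.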

khakishoiab
4

Some ideas that might help - depends on your case and style of development:

  • Use incremental compilation: ~compile in SBT, or the equivalent provided by your IDE.
  • Use sbt-revolver and maybe JRebel to reload your app faster. Better suited for web apps.
  • Use TDD - rather than running and debugging the whole app write tests and only run those.
  • Break your project down into libraries/JARs. Use them as dependencies via your build tool: SBT/Maven/etc. Or a variation of this next...
  • Break your project into subprojects (SBT). Compile separately what's needed or root project if you need everything. Incremental compilation is still available.
  • Break your project down to microservices.
  • Wait for Dotty to solve your problem to some degree.
  • If everything else fails, avoid the advanced Scala features that make compilation slower: implicits, metaprogramming, etc.
  • Don't forget to check that you are allocating enough memory and CPU for your Scala compiler. I haven't tried it, but maybe you can use RAM disk instead of HDD for your sources and compile artifacts (easy on Linux).
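A minimal sketch of the subproject idea in sbt (the module names core and runtime are invented, and the layout is an assumption, not the asker's actual structure): splitting the build lets sbt recompile only the subproject you touched plus anything downstream of it:

```scala
// build.sbt (sketch): two modules plus an aggregating root
lazy val core = (project in file("core"))
  .settings(scalaVersion := "2.11.7")

lazy val runtime = (project in file("runtime"))
  .dependsOn(core) // edits inside runtime never trigger a recompile of core
  .settings(scalaVersion := "2.11.7")

lazy val root = (project in file("."))
  .aggregate(core, runtime) // `compile` at the root still builds everything
```

From the sbt shell, runtime/compile compiles only that module (and core first if it is out of date), so a change in a leaf module no longer recompiles the world.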
yǝsʞǝla
1

You are touching on one of the main problems of object-oriented design (over-engineering). In my opinion you have to flatten your class-object-trait hierarchy and reduce the dependencies between classes. Break packages into different jar files and use them as mini-libraries which are "frozen", and concentrate on new code.

Also check out some videos from Brian Will, who makes a case against OO over-engineering,

e.g. https://www.youtube.com/watch?v=IRTfhkiAqPw (you can take the good points).

I don't agree with him 100%, but he makes a good case against over-engineering.

Hope that helps.

firephil
  • But will this even help? Other things I've read say that because Scala classes and files don't have to have matching names, many more files must be opened and searched for things. If that is the case, will reorganizing my code as suggested actually help the Scala compiler go faster? E.g., even if I have a bunch of jars I'm linking against, if Scala has to open and search lots of them... how does that help? – Mike Beckerle Nov 16 '16 at 20:17
  • I guess what I'm asking is "Have you actually done this class-object-trait hierarchy flattening, and did that work to improve scala compilation speed?" Or is this just "in principle" this is the issue? – Mike Beckerle Nov 16 '16 at 20:18
  • I have used a few big 3rd-party libraries both imported as source files and imported as jar files, and the second works better in practice. Theoretically it's the same thing after the initial compilation, but in practice the jar import forces you not to change the class hierarchy or the classes contained in the library. Basically you will be forced to modularise your code base and reduce your dependencies. – firephil Nov 16 '16 at 21:30
  • Dependencies between classes and objects are definitely the number one cause of your slow compilation speed. Regards – firephil Nov 16 '16 at 21:43
0

You can try using the Fast Scala Compiler (fsc).

jlncrnt
  • My compilations are taking minutes. FSC is about saving a few seconds of compiler startup time. That's in the noise for me. Also I think Eclipse already is using a persistent FSC-like approach. (I think.) – Mike Beckerle Nov 04 '16 at 20:04
0

Aside from minor code improvements (e.g. @tailrec annotations), depending on how brave you feel, you could also play around with Dotty, which boasts faster compile times among other things.

airudah
  • We are heavily dependent on built-in XML literals. I understand that is being moved out into a library, but will things like Dotty still support it as an extension? – Mike Beckerle Nov 10 '16 at 22:49
  • Yes. You can add scala-xml in your classpath if you need it. It just isn't included by default. See https://github.com/lampepfl/dotty/issues/73 – airudah Nov 14 '16 at 11:25