I've Googled and poked around the Go website, but I can't find an explanation for Go's extraordinary build times. Are they products of the language features (or lack thereof), a highly optimized compiler, or something else? I'm not trying to promote Go; I'm just curious.
-
@Support, I'm aware of that. I think that implementing a compiler in such a way that it compiles with noticeable quickness is anything but premature optimization. More than likely, it represents the outcome of good software design and development practices. Also, I can't stand to see Knuth's words taken out of context and applied incorrectly. – Adam Crossland Jun 04 '10 at 18:31
-
The pessimist's version of this question is "Why does C++ compile so slowly?" http://stackoverflow.com/questions/588884/why-do-compilations-take-so-long – dan04 Jul 03 '10 at 21:20
-
I voted to reopen this question as it is not opinion-based. One can give a good technical (non-opinionated) overview of language and/or compiler choices which facilitate compilation speed. – Martin Tournoij Mar 28 '16 at 19:43
-
For small projects, Go seems slow to me. This is because I remember Turbo Pascal being far, far faster on a computer that was probably thousands of times slower. http://prog21.dadgum.com/47.html?repost=true. Every time I type "go build" and nothing happens for several seconds, I think back to crusty old Fortran compilers and punched cards. YMMV. TLDR: "slow" and "fast" are relative terms. – RedGrittyBrick May 11 '17 at 09:43
-
Definitely recommend reading https://dave.cheney.net/2014/06/07/five-things-that-make-go-fast for more detailed insights – Karthik Mar 06 '18 at 05:23
11 Answers
Dependency analysis.
The Go FAQ used to contain the following sentence:
Go provides a model for software construction that makes dependency analysis easy and avoids much of the overhead of C-style include files and libraries.
While the phrase is not in the FAQ anymore, this topic is elaborated upon in the talk Go at Google, which compares the dependency analysis approach of C/C++ and Go.
That is the main reason for fast compilation. And this is by design.

-
This phrase is not in the Go FAQ anymore, but a more detailed explanation of the "dependency analysis" topic comparing the C/C++ and Pascal/Modula/Go approach is available in the talk [Go at Google](https://talks.golang.org/2012/splash.article#TOC_5.) – rob74 Dec 11 '19 at 14:15
I think it's not that Go compilers are fast, it's that other compilers are slow.
C and C++ compilers have to parse enormous amounts of headers - for example, compiling C++ "hello world" requires compiling 18k lines of code, which is almost half a megabyte of sources!
$ cpp hello.cpp | wc
18364 40513 433334
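For comparison, here is the equivalent Go program (a minimal illustration, not a benchmark): instead of textually re-parsing half a megabyte of headers, the Go compiler reads the compiled export data of the fmt package, which already summarizes fmt's own dependencies.

package main

import "fmt"

func main() {
    fmt.Println("hello, world")
}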
Java and C# compilers run in a VM, which means that before they can compile anything, the operating system has to load the whole VM, then they have to be JIT-compiled from bytecode to native code, all of which takes some time.
Speed of compilation depends on several factors.
Some languages are designed to be compiled fast. For example, Pascal was designed to be compiled using a single-pass compiler.
Compilers themselves can be optimized too. For example, the Turbo Pascal compiler was written in hand-optimized assembler, which, combined with the language design, resulted in a really fast compiler working on 286-class hardware. I think that even now, modern Pascal compilers (e.g. FreePascal) are faster than Go compilers.

-
Microsoft's C# compiler does not run in a VM. It is still written in C++, primarily for performance reasons. – blucz May 20 '11 at 07:04
-
Turbo Pascal and later Delphi are the best examples of blazingly fast compilers. After the architect of both migrated to Microsoft, we've seen vast improvements in both MS compilers and languages. That's not a random coincidence. – TheBlastOne Dec 29 '11 at 21:22
-
18k lines (18,364 to be exact) of code is 433,334 bytes (~0.5 MB) – el.pescado - нет войне Apr 28 '15 at 13:21
-
The C# compiler has been compiled with C# since 2011. Just an update in case anyone reads this later. – Kurt Koller Oct 07 '15 at 20:48
-
The C# compiler and the CLR that runs the generated MSIL are different things however. I'm fairly certain the CLR is not written in C#. – jocull May 12 '16 at 14:18
-
`modern Pascal compilers (e.g. FreePascal) are faster than Go compilers.` <- No, they are not. Especially not free pascal. And especially not anything made by embarcadero. (we use both at work) – nurettin Jan 27 '17 at 11:00
-
@nurettin Out of curiosity, I just tested: FreePascal can compile simple files in under 100 ms on my over-10-year-old Windows box. Go compilation takes about 700 ms. I don't see compilers from other languages come even close to Pascal compilation. – eis Feb 13 '22 at 18:46
There are multiple reasons why the Go compiler is much faster than most C/C++ compilers:
Top reason: Most C/C++ compilers exhibit exceptionally bad designs from a compilation-speed perspective. Also, some parts of the C/C++ ecosystem (such as the editors in which programmers write their code) aren't designed with speed of compilation in mind.
Top reason: Fast compilation was a conscious choice in the design of both the Go compiler and the Go language.
The Go compiler has a simpler optimizer than C/C++ compilers
Unlike C++, Go has no templates and no inline functions. This means that Go doesn't need to perform any template or function instantiation.
The Go compiler generates low-level assembly code sooner and the optimizer works on the assembly code, while in a typical C/C++ compiler the optimization passes work on an internal representation of the original source code. The extra overhead in the C/C++ compiler comes from the fact that the internal representation needs to be generated.
Final linking (5l/6l/8l) of a Go program can be slower than linking a C/C++ program, because the Go linker goes through all of the used assembly code, and it may also be doing other extra work that C/C++ linkers don't.
Some C/C++ compilers (GCC) generate instructions in text form (to be passed to the assembler), while the Go compiler generates instructions in binary form. Extra work (but not much) needs to be done in order to transform the text into binary.
The Go compiler targets only a small number of CPU architectures, while the GCC compiler targets a large number of CPUs
Compilers which were designed with the goal of high compilation speed, such as Jikes, are fast. On a 2GHz CPU, Jikes can compile 20000+ lines of Java code per second (and the incremental mode of compilation is even more efficient).

-
Go's compiler inlines small functions. I'm not sure how targeting a small number of CPUs makes you faster... I assume gcc isn't generating PPC code while I'm compiling for x86. – Brad Fitzpatrick Oct 14 '12 at 12:24
-
@BradFitzpatrick hate to resurrect an old comment but by targeting a smaller number of platforms developers of the compiler can spend more time optimizing it for each one. – ScottishTapWater Sep 16 '18 at 01:48
-
using an intermediate form allows you to support a lot more architectures since now you only have to write a new backend for each new architecture – phuclv Oct 01 '19 at 11:40
-
"the optimizer works on the assembly code" Assembly code sounds platform dependent, do they really have a separate optimizer for each supported platform? – Mark Jul 27 '20 at 17:50
-
@Mark my understanding is that they have a platform independent assembly language which they compile Go code into. Then they translate that into the architecture-specific instruction set. https://golang.org/doc/asm – Student Aug 25 '20 at 01:46
-
@Student That sounds a lot like an "internal representation", which this answer claims Go doesn't do. Perhaps they were making some arbitrary distinction between IR and what Go is doing? – Chinoto Vokro Oct 03 '20 at 17:08
Compilation efficiency was a major design goal:
Finally, it is intended to be fast: it should take at most a few seconds to build a large executable on a single computer. To meet these goals required addressing a number of linguistic issues: an expressive but lightweight type system; concurrency and garbage collection; rigid dependency specification; and so on. FAQ
The language FAQ is pretty interesting in regards to specific language features relating to parsing:
Second, the language has been designed to be easy to analyze and can be parsed without a symbol table.
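A minimal example (my own, echoing the identifier1(identifier2) case debated in the comments below) of what is at stake: a call and a conversion are syntactically identical, so a parser can build the same tree for both and leave the disambiguation to semantic analysis.

package main

// Both lines in main parse as identifier(identifier); only the meaning
// of the first identifier (function vs. type) tells them apart.

type celsius float64

func double(x float64) float64 { return x * 2 }

func main() {
    f := 3.14
    _ = double(f)  // a call: double names a function
    _ = celsius(f) // a conversion: celsius names a type
}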

-
That's not true. You cannot fully parse Go source code without a symbol table. – Dec 29 '11 at 20:02
-
I also don't see why garbage collection enhances compile times. It just doesn't. – TheBlastOne Dec 29 '11 at 21:19
-
These are quotes from the FAQ: http://golang.org/doc/go_faq.html I can't say if they failed to accomplish their goals (symbol table) or if their logic is faulty (GC). – Larry OBrien Dec 30 '11 at 01:49
-
@TheBlastOne concurrency and garbage collection are added to make development/code design faster, not compilation. While the previous paragraph helps provide a bit of context to make that clear, I agree that the wording is more than a little unclear. – matthias Jun 09 '12 at 23:57
-
@Atom Can you point me to a place in the grammar of Go where parsing without a symbol table is impossible? – fuz Mar 11 '13 at 06:31
-
@FUZxxl Go to http://golang.org/ref/spec#Primary_expressions and consider the two sequences [Operand, Call] and [Conversion]. Example Go source code: identifier1(identifier2). Without a symbol table it is impossible to decide whether this example is a call or conversion. | Any language can be to some extent parsed without a symbol table. It is true that most parts of Go source codes can be parsed without a symbol table, but it isn't true that it is possible to recognize all the grammar elements defined in golang spec. – Mar 11 '13 at 07:14
-
Is it possible that the compiler needs a "type table" (of all the defined types), but not a "symbol table" (all the variables)? i.e. If the code doesn't define any new types, you won't need to store anything. – BraveNewCurrency Jan 19 '14 at 02:17
-
@Atom: It may be possible that semantically type casts *are* function calls. A function that returns its argument, with its type tag changed. – ithisa Mar 06 '14 at 21:23
-
@Atom It's certainly possible to parse Go without a symbol table. There is no requirement that call and conversion be disambiguated during the parse. Simply parse both of them as a conversionOrCall and determine what the specific case is after parsing is complete. – Sam Harwell Mar 25 '14 at 14:33
-
@280Z28 How would you suggest handling the following invalid piece of code: x := aType? The parser cannot decide whether to print the error until it sees the symbol table. – Mar 25 '14 at 19:21
-
@Atom You work hard to prevent the parser from ever being the piece of code that reports an error. Parsers generally do a poor job of reporting coherent error messages. Here, you create a parse tree for the expression as though `aType` is a variable reference, and later in the semantic analysis phase when you find out it's not you print a meaningful error at that time. – Sam Harwell Mar 25 '14 at 20:38
While most of the above is true, there is one very important point that was not really mentioned: dependency management.
Go only needs to include the packages that you are importing directly (as those have already imported what they need). This is in stark contrast to C/C++, where every single file starts by including x headers, which include y headers, and so on. Bottom line: Go's compilation time is linear w.r.t. the number of imported packages, whereas C/C++'s can grow exponentially.
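A minimal sketch of that point, using a hypothetical module "myapp" (two files shown in one listing, split them to run): when compiling main.go, the compiler reads only the compiled export data of myapp/greet; greet's own import of strings has already been summarized there and is never re-read.

// File greet/greet.go (hypothetical package in module "myapp"):
package greet

import "strings" // an internal dependency that main.go never sees

// Hello returns a shouted greeting.
func Hello(name string) string {
    return "HELLO, " + strings.ToUpper(name) + "!"
}

// File main.go:
package main

import (
    "fmt"

    "myapp/greet" // only direct imports are read when compiling main
)

func main() {
    fmt.Println(greet.Hello("Go"))
}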

A good test for the translation efficiency of a compiler is self-compilation: how long does it take a given compiler to compile itself? For C++ it takes a very long time (hours?). By comparison, a Pascal/Modula-2/Oberon compiler would compile itself in less than one second on a modern machine [1].
Go has been inspired by these languages, but some of the main reasons for this efficiency include:
1. A clearly defined syntax that is mathematically sound, for efficient scanning and parsing.
2. A type-safe and statically-compiled language that uses separate compilation with dependency and type checking across module boundaries, to avoid unnecessary re-reading of header files and re-compiling of other modules - as opposed to independent compilation like in C/C++, where no such cross-module checks are performed by the compiler (hence the need to re-read all those header files over and over again, even for a simple one-line "hello world" program).
3. An efficient compiler implementation (e.g. single-pass, recursive-descent top-down parsing) - which of course is greatly helped by points 1 and 2 above.
These principles have already been known and fully implemented in the 1970s and 1980s in languages like Mesa, Ada, Modula-2/Oberon and several others, and are only now (in the 2010s) finding their way into modern languages like Go (Google), Swift (Apple), C# (Microsoft) and several others.
Let's hope that this will soon be the norm and not the exception. To get there, two things need to happen:
First, software platform providers such as Google, Microsoft and Apple should start by encouraging application developers to use the new compilation methodology, while enabling them to re-use their existing code base. This is what Apple is now trying to do with the Swift programming language, which can co-exist with Objective-C (since it uses the same runtime environment).
Second, the underlying software platforms themselves should eventually be re-written over time using these principles, while simultaneously redesigning the module hierarchy in the process to make them less monolithic. This is of course a mammoth task and may well take the better part of a decade (if they are courageous enough to actually do it - which I am not at all sure in the case of Google).
In any case, it's the platform that drives language adoption, and not the other way around.
References:
[1] http://www.inf.ethz.ch/personal/wirth/ProjectOberon/PO.System.pdf, page 6: "The compiler compiles itself in about 3 seconds". This quote is for a low cost Xilinx Spartan-3 FPGA development board running at a clock frequency of 25 MHz and featuring 1 MByte of main memory. From this one can easily extrapolate to "less than 1 second" for a modern processor running at a clock frequency well above 1 GHz and several GBytes of main memory (i.e. several orders of magnitude more powerful than the Xilinx Spartan-3 FPGA board), even when taking I/O speeds into account.

Already back in 1990, when Oberon was run on a 25 MHz NS32X32 processor with 2-4 MBytes of main memory, the compiler compiled itself in just a few seconds. The notion of actually waiting for the compiler to finish a compilation cycle was completely unknown to Oberon programmers even back then. For typical programs, it always took more time to remove the finger from the mouse button that triggered the compile command than to wait for the compiler to complete the compilation just triggered. It was truly instant gratification, with near-zero wait times. And the quality of the produced code, even though not always completely on par with the best compilers available back then, was remarkably good for most tasks and quite acceptable in general.

-
_A Pascal/Modula-2/Oberon/Oberon-2 compiler would compile itself in less than one second on a modern machine_ [citation needed] – RamblingMad Jun 22 '14 at 09:45
-
1"...principles ... finding their way into modern languages like Go (Google), Swift (Apple)" Not sure how Swift made into that list: the Swift compiler is *glacial*. At a recent CocoaHeads Berlin meetup, someone provided some numbers for a mid-size framework, they came to 16 LOC per second. – mpw Dec 19 '17 at 16:57
Go was designed to be fast, and it shows.
- Dependency Management: no header files, you just need to look at the packages that are directly imported (no need to worry about what they import), so you have linear dependencies.
- Grammar: the grammar of the language is simple, thus easily parsed. The feature set is also reduced, so the compiler code itself is tight (few paths).
- No overloading allowed: when you see a symbol, you know which method it refers to.
- It's trivially possible to compile Go in parallel because each package can be compiled independently (see the sketch after this list).
Note that Go isn't the only language with such features (modules are the norm in modern languages), but they did it well.
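A toy sketch of the parallel-compilation point (this is not the real toolchain, just the scheduling idea, with a made-up dependency map): each package waits for its imports to finish, so packages with no dependency on each other compile concurrently.

package main

// "Compile" packages in dependency order, running independent
// packages in parallel via one goroutine per package.

import (
    "fmt"
    "sync"
)

func main() {
    deps := map[string][]string{
        "main":    {"net", "fmt"},
        "net":     {"strings"},
        "fmt":     {"strings"},
        "strings": {},
    }

    done := make(map[string]chan struct{})
    for pkg := range deps {
        done[pkg] = make(chan struct{})
    }

    var wg sync.WaitGroup
    for pkg, imports := range deps {
        wg.Add(1)
        go func(pkg string, imports []string) {
            defer wg.Done()
            for _, imp := range imports {
                <-done[imp] // block until each dependency is "compiled"
            }
            fmt.Println("compiling", pkg) // net and fmt can run in parallel
            close(done[pkg])
        }(pkg, imports)
    }
    wg.Wait()
}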

-
Point (4) is not entirely true. Modules that depend on each other should be compiled in order of dependency to allow for cross-module inlining and stuff. – fuz Mar 11 '13 at 06:34
-
@FUZxxl: This only concerns the optimization stage though, you can have perfect parallelism up to the backend IR generation; only cross-module optimization is thus concerned, which can be done at the link stage, and link is not parallel anyway. Of course, if you do not want to duplicate your work (re-parsing), you are better off compiling in a "lattice" way: 1/ modules with no dependency, 2/ modules depending only on (1), 3/ modules depending only on (1) and (2), ... – Matthieu M. Mar 11 '13 at 07:14
Quoting from the book "The Go Programming Language" by Alan Donovan and Brian Kernighan:
Go compilation is notably faster than most other compiled languages, even when building from scratch. There are three main reasons for the compiler’s speed. First, all imports must be explicitly listed at the beginning of each source file, so the compiler does not have to read and process an entire file to determine its dependencies. Second, the dependencies of a package form a directed acyclic graph, and because there are no cycles, packages can be compiled separately and perhaps in parallel. Finally, the object file for a compiled Go package records export information not just for the package itself, but for its dependencies too. When compiling a package, the compiler must read one object file for each import but need not look beyond these files.
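The first of those reasons is easy to demonstrate with the standard library itself: go/parser has an ImportsOnly mode that stops parsing right after the import declarations, which is all a build tool needs to discover a file's dependencies (the demo file below is made up).

package main

import (
    "fmt"
    "go/parser"
    "go/token"
    "log"
)

func main() {
    src := `package demo

import (
    "fmt"
    "strings"
)

func Greet() { fmt.Println(strings.ToUpper("hi")) }
`
    fset := token.NewFileSet()
    // ImportsOnly: stop after the import declarations; the rest of
    // the file (including all function bodies) is skipped entirely.
    f, err := parser.ParseFile(fset, "demo.go", src, parser.ImportsOnly)
    if err != nil {
        log.Fatal(err)
    }
    for _, imp := range f.Imports {
        fmt.Println("dependency:", imp.Path.Value)
    }
}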

The basic idea of compilation is actually very simple. A recursive-descent parser, in principle, can run at I/O bound speed. Code generation is basically a very simple process. A symbol table and basic type system is not something that requires a lot of computation.
However, it is not hard to slow down a compiler.
If there is a preprocessor phase, with multi-level include directives, macro definitions, and conditional compilation, as useful as those things are, it is not hard to load it down. (For one example, I'm thinking of the Windows and MFC header files.) That is why precompiled headers are necessary.
In terms of optimizing the generated code, there is no limit to how much processing can be added to that phase.
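To make the first paragraph concrete, here is a toy single-pass recursive-descent parser/evaluator (a sketch of the technique, not the Go compiler's actual code): one byte of lookahead, no backtracking, so the work is proportional to the input read.

package main

// Toy grammar (single-digit numbers, no whitespace, well-formed input
// assumed, for brevity):
//   expr = term { ('+'|'-') term }
//   term = digit { '*' digit }

import "fmt"

type parser struct {
    in  string
    pos int
}

func (p *parser) peek() byte {
    if p.pos < len(p.in) {
        return p.in[p.pos]
    }
    return 0 // end of input
}

func (p *parser) expr() int {
    v := p.term()
    for p.peek() == '+' || p.peek() == '-' {
        op := p.peek()
        p.pos++
        if op == '+' {
            v += p.term()
        } else {
            v -= p.term()
        }
    }
    return v
}

func (p *parser) term() int {
    v := p.digit()
    for p.peek() == '*' {
        p.pos++
        v *= p.digit()
    }
    return v
}

func (p *parser) digit() int {
    d := int(p.in[p.pos] - '0')
    p.pos++
    return d
}

func main() {
    p := &parser{in: "1+2*3"}
    fmt.Println(p.expr()) // prints 7
}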

Simply (in my own words): because the syntax is very easy to analyze and to parse.
For instance, no type inheritance means no problematic analysis to find out whether a new type follows the rules imposed by its base type.
For instance, with interfaces, the compiler doesn't check whether the intended type implements a given interface while analyzing that type. Only when it's used (and IF it is used) is the check performed.
Another example: the compiler tells you if you're declaring a variable and not using it (or if you are supposed to hold a return value and you're not).
The following doesn't compile:
package main
func main() {
    var a int
    a = 0
}
notused.go:3: a declared and not used
These kinds of enforcements and principles make the resulting code safer, and the compiler doesn't have to perform extra validations that the programmer can do.
By and large, all these details make the language easier to parse, which results in fast compilation.
Again, in my own words:
- Go imports dependencies once for all files, so import time doesn't increase exponentially with project size.
- Simpler linguistics means that interpreting the code takes less computing.
What else?
