Installation
`brew install sbt` or similar installs sbt, which technically speaking consists of three parts: the sbt launcher bash script, the launcher JAR (`sbt-launch.jar`), and sbt core itself. When you execute `sbt` from the terminal, it actually runs the sbt launcher bash script. Personally, I have never had to worry about this trinity, and just use sbt as if it were a single thing.
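A quick way to sanity-check the installation is `sbt about`, which prints the sbt version, the current project, and the enabled plugins:

sbt about   // prints sbt version, current project and enabled plugins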
Configuration
To configure sbt for a particular project, save an `.sbtopts` file at the root of the project. To configure sbt system-wide, modify `/usr/local/etc/sbtopts`. Executing `sbt -help` should tell you the exact location. For example, to give sbt more memory as a one-off, execute `sbt -mem 4096`, or save `-mem 4096` in `.sbtopts` or `sbtopts` for the memory increase to take effect permanently.
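For illustration, a project-level `.sbtopts` might look like this, one option per line; options prefixed with `-J` are handed straight to the underlying JVM (the GC flag is just an example of mine):

-mem 4096
-J-XX:+UseG1GC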
Project structure
`sbt new scala/scala-seed.g8` creates a minimal Hello World sbt project structure:
.
├── README.md // most important part of any software project
├── build.sbt // build definition of the project
├── project // build definition of the build (sbt is recursive - explained below)
├── src // test and main source code
└── target // compiled classes, deployment package
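The generated `build.sbt` is tiny, roughly along these lines (exact contents vary with the template version, and the ScalaTest version shown is illustrative):

lazy val root = (project in file("."))
  .settings(
    name := "Hello",
    libraryDependencies += "org.scalatest" %% "scalatest" % "3.2.19" % Test
  )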
Frequent commands
test                                                // run all tests
testQuick                                           // run only previously failed tests
testOnly -- -z "The Hello object should say hello"  // run one specific test
run                                                 // run default main
runMain example.Hello                               // run specific main
clean                                               // delete target/
package                                             // package skinny jar
assembly                                            // package fat jar (sbt-assembly plugin)
publishLocal                                        // publish library to local cache
release                                             // publish library to remote repository (sbt-release plugin)
reload                                              // reload after each change to the build definition
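All of the above run inside the sbt shell; they can also be invoked in batch mode from the terminal, quoting any command that takes arguments:

sbt clean test
sbt "runMain example.Hello"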
Myriad of shells
scala               // Scala REPL that executes Scala language (nothing to do with sbt)
sbt                 // sbt REPL that executes special sbt shell language (not a Scala REPL)
sbt console         // Scala REPL with dependencies loaded as per build.sbt
sbt consoleProject  // Scala REPL with project definition and sbt loaded for exploration with plain Scala language
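For example, with `scalaj-http` (used later in this article) in `libraryDependencies`, `sbt console` drops you into a REPL with the library already on the classpath; a session might look like this:

scala> import scalaj.http._
scala> Http("http://example.com").asString.code
res0: Int = 200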
Build definition is a proper Scala project
This is one of the key idiomatic sbt concepts. I will try to explain it with a question. Say you want to define an sbt task that will execute an HTTP request with scalaj-http. Intuitively, we might try the following inside `build.sbt`:
libraryDependencies += "org.scalaj" %% "scalaj-http" % "2.4.2"
val fooTask = taskKey[Unit]("Fetch meaning of life")
fooTask := {
  import scalaj.http._ // error: cannot resolve symbol
  val response = Http("http://example.com").asString
  ...
}
However, this will error, saying the import `scalaj.http._` cannot be resolved. How is this possible when, right above, we added `scalaj-http` to `libraryDependencies`? Furthermore, why does it work when, instead, we add the dependency to `project/build.sbt`?
// project/build.sbt
libraryDependencies += "org.scalaj" %% "scalaj-http" % "2.4.2"
The answer is that `fooTask` is actually part of a separate Scala project from your main project. This different Scala project lives under the `project/` directory, which has its own `target/` directory where its compiled classes reside. In fact, under `project/target/config-classes` there should be a class that decompiles to something like
object $9c2192aea3f1db3c251d extends scala.AnyRef {
  lazy val fooTask : sbt.TaskKey[scala.Unit] = { /* compiled code */ }
  lazy val root : sbt.Project = { /* compiled code */ }
}
We see that `fooTask` is simply a member of a regular Scala object named `$9c2192aea3f1db3c251d`. Clearly, `scalaj-http` should be a dependency of the project defining `$9c2192aea3f1db3c251d`, and not a dependency of the proper project. Hence it needs to be declared in `project/build.sbt` instead of `build.sbt`, because `project` is where the build definition Scala project resides.
To drive home the point that the build definition is just another Scala project, execute `sbt consoleProject`. This will load a Scala REPL with the build definition project on the classpath. You should see an import along the lines of
import $9c2192aea3f1db3c251d
So now we can interact directly with the build definition project by calling it with Scala proper instead of the `build.sbt` DSL. For example, the following executes `fooTask`:
$9c2192aea3f1db3c251d.fooTask.eval
`build.sbt` under the root project is a special DSL that helps define the build definition Scala project under `project/`.

And the build definition Scala project can itself have its own build definition Scala project under `project/project/`, and so on. We say sbt is recursive.
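To make the opening example work, the dependency therefore goes into `project/build.sbt` while the task stays in `build.sbt`; a sketch of the working pair:

// project/build.sbt — dependency of the build definition (meta) project
libraryDependencies += "org.scalaj" %% "scalaj-http" % "2.4.2"

// build.sbt — the import now resolves, since scalaj-http is on the meta-project classpath
val fooTask = taskKey[Unit]("Fetch meaning of life")
fooTask := {
  import scalaj.http._
  val response = Http("http://example.com").asString
  println(response.code)
}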
sbt is parallel by default
sbt builds a DAG out of tasks. This allows it to analyse the dependencies between tasks, execute them in parallel, and even perform deduplication. The `build.sbt` DSL is designed with this in mind, which might lead to initially surprising semantics. What do you think the order of execution is in the following snippet?
def a = Def.task { println("a") }
def b = Def.task { println("b") }
lazy val c = taskKey[Unit]("sbt is parallel by-default")
c := {
  println("hello")
  a.value
  b.value
}
Intuitively, one might think the flow here is to first print `hello`, then execute `a`, and then the `b` task. However, this actually means: execute `a` and `b` in parallel, and before `println("hello")`, so the output is
a
b
hello
or, because the order of `a` and `b` is not guaranteed:
b
a
hello
Perhaps paradoxically, in sbt it is easier to be parallel than serial. If you need serial ordering, you have to use special constructs like `Def.sequential` or `Def.taskDyn` to emulate a for-comprehension:
def a = Def.task { println("a") }
def b = Def.task { println("b") }
lazy val c = taskKey[Unit]("")
c := Def.sequential(
  Def.task(println("hello")),
  a,
  b
).value
is similar to
for {
  h <- Future(println("hello"))
  a <- Future(println("a"))
  b <- Future(println("b"))
} yield ()
where we see that there are no data dependencies between the steps, only ordering, whilst
def a = Def.task { println("a"); 1 }
def b(v: Int) = Def.task { println("b"); v + 40 }
def sum(x: Int, y: Int) = Def.task[Int] { println("sum"); x + y }
lazy val c = taskKey[Int]("")
c := (Def.taskDyn {
  val x = a.value
  val y = Def.task(b(x).value)
  Def.taskDyn(sum(x, y.value))
}).value
is similar to
def a = Future { println("a"); 1 }
def b(v: Int) = Future { println("b"); v + 40 }
def sum(x: Int, y: Int) = Future { println("sum"); x + y }
for {
  x <- a
  y <- b(x)
  c <- sum(x, y)
} yield c
where we see that `sum` depends on, and has to wait for, `a` and `b`.
In other words:

- for applicative semantics, use `.value`
- for monadic semantics, use `sequential` or `taskDyn`
Consider another semantically confusing snippet, a result of the dependency-building nature of `value`, where instead of
`value` can only be used within a task or setting macro, such as :=, +=, ++=, Def.task, or Def.setting.
val x = version.value
                ^
we have to write
val x = settingKey[String]("")
x := version.value
Note that the syntax `.value` is about relationships in the DAG and does not mean "give me the value right now"; instead it means something like "my caller depends on me first, and once I know how the whole DAG fits together, I will be able to provide my caller with the requested value". So now it might be a bit clearer why `x` cannot be assigned a value yet: no value is available during the relationship-building stage.
We can clearly see a difference in semantics between Scala proper and the DSL language in `build.sbt`. Here are a few rules of thumb that work for me:

- the DAG is made out of expressions of type `Setting[T]`
- in most cases we simply use the `.value` syntax and sbt takes care of establishing the relationships between `Setting[T]`s
- occasionally we have to manually tweak a part of the DAG, and for that we use `Def.sequential` or `Def.taskDyn`
- once these ordering/relationship syntactic oddities are taken care of, we can rely on the usual Scala semantics for building the rest of the business logic of tasks
Commands vs Tasks
Commands are a lazy way out of the DAG. Using commands, it is easy to mutate the build state and serialise tasks as you wish. The cost is that we lose the parallelisation and deduplication of tasks provided by the DAG, which is why tasks should be the preferred choice. You can think of commands as a kind of permanent recording of a session one might perform inside the sbt shell. For example, given
val x = settingKey[Int]("")
x := 13
lazy val f = taskKey[Int]("")
f := 1 + x.value
consider the output of the following session
sbt:root> x
[info] 13
sbt:root> show f
[info] 14
sbt:root> set x := 41
[info] Defining x
[info] The new value will be used by f
[info] Reapplying settings...
sbt:root> show f
[info] 42
In particular, note how we mutate the build state with `set x := 41`. Commands enable us to make a permanent recording of the above session, for example:
commands += Command.command("cmd") { state =>
  "x" :: "show f" :: "set x := 41" :: "show f" :: state
}
We can also make the command type-safe using `Project.extract` and `runTask`:
commands += Command.command("cmd") { state =>
  val log = state.log
  import Project._
  log.info(x.value.toString)                                 // .value here is evaluated when this setting is loaded
  val (_, resultBefore) = extract(state).runTask(f, state)   // run task f against the current state
  log.info(resultBefore.toString)
  val mutatedState = extract(state).appendWithSession(Seq(x := 41), state)  // mutate the build state
  val (_, resultAfter) = extract(mutatedState).runTask(f, mutatedState)     // run f against the mutated state
  log.info(resultAfter.toString)
  mutatedState                                               // return the new state to the shell
}
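Running `cmd` should then log something along these lines (13 and 14 from the original state, 42 after the in-command mutation):

sbt:root> cmd
[info] 13
[info] 14
[info] 42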
Scopes
Scopes come into play when we try to answer the following kinds of questions:

- How to define a task once and make it available to all the sub-projects in a multi-project build?
- How to avoid having test dependencies on the main classpath?

sbt has a multi-axis scoping space, which can be navigated using slash syntax, for example:
show root / Compile / compile / scalacOptions
     |      |         |         |
 project configuration task    key
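For instance, per-scope values let the same key differ between configurations, and scoping a dependency to `Test` keeps it off the main classpath; an illustrative sketch (key and version choices are mine):

// same key, different value per configuration
Compile / scalacOptions += "-Xfatal-warnings"        // applies only when compiling main sources
Test / scalacOptions += "-language:reflectiveCalls"  // applies only when compiling test sources
// test dependencies stay off the main classpath by scoping them to Test
libraryDependencies += "org.scalatest" %% "scalatest" % "3.2.19" % Test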
Personally, I rarely find myself having to worry about scopes. Sometimes I want to compile just the test sources:
Test/compile
or perhaps execute a particular task of a particular subproject without first having to navigate to that project with `project subprojB`:
subprojB/Test/compile
I think the following rules of thumb help avoid scoping complications:

- do not have multiple `build.sbt` files, but only a single master one under the root project that controls all the other sub-projects
- share tasks via auto plugins
- factor out common settings into a plain Scala `val` and explicitly add it to each sub-project
Multi-project build
Instead of multiple `build.sbt` files, one per subproject:
.
├── README.md
├── build.sbt // OK
├── multi1
│ ├── build.sbt // NOK
│ ├── src
│ └── target
├── multi2
│ ├── build.sbt // NOK
│ ├── src
│ └── target
├── project // this is the meta-project
│ ├── FooPlugin.scala // custom auto plugin
│ ├── build.properties // version of sbt and hence Scala for meta-project
│ ├── build.sbt // OK - this is actually for meta-project
│ ├── plugins.sbt // OK
│ ├── project
│ └── target
└── target
have a single master `build.sbt` to rule them all:
.
├── README.md
├── build.sbt // single build.sbt to rule them all
├── common
│ ├── src
│ └── target
├── multi1
│ ├── src
│ └── target
├── multi2
│ ├── src
│ └── target
├── project
│ ├── FooPlugin.scala
│ ├── build.properties
│ ├── build.sbt
│ ├── plugins.sbt
│ ├── project
│ └── target
└── target
There is a common practice of factoring out common settings in multi-project builds: define a sequence of common settings in a `val` and add it to each project. Fewer concepts to learn that way. For example:
lazy val commonSettings = Seq(
  scalacOptions := Seq(
    "-Xfatal-warnings",
    ...
  ),
  publishArtifact := true,
  ...
)
lazy val root = project
  .in(file("."))
  .settings(commonSettings)
  .aggregate(
    multi1,
    multi2
  )

lazy val multi1 = (project in file("multi1")).settings(commonSettings)
lazy val multi2 = (project in file("multi2")).settings(commonSettings)
Projects navigation
projects // list all projects
project multi1 // change to particular project
Plugins
Remember, the build definition is a proper Scala project that resides under `project/`. This is where we define a plugin by creating `.scala` files:
. // directory of the (main) proper project
├── project
│ ├── FooPlugin.scala // auto plugin
│ ├── build.properties // version of sbt library and indirectly Scala used for the plugin
│ ├── build.sbt // build definition of the plugin
│ ├── plugins.sbt // these are plugins for the main (proper) project, not the meta project
│ ├── project // the turtle supporting this turtle
│ └── target // compiled binaries of the plugin
Here is a minimal auto plugin, under `project/FooPlugin.scala`:
object FooPlugin extends AutoPlugin {

  // members of autoImport are brought into the scope of build.sbt automatically
  object autoImport {
    val barTask = taskKey[Unit]("")
  }
  import autoImport._

  override def requires = plugins.JvmPlugin // avoids having to call enablePlugins explicitly
  override def trigger = allRequirements

  override lazy val projectSettings = Seq(
    scalacOptions ++= Seq("-Xfatal-warnings"),
    barTask := { println("hello task") },
    commands += Command.command("cmd") { state =>
      """eval println("hello command")""" :: state
    }
  )
}
The override
override def requires = plugins.JvmPlugin
together with `trigger = allRequirements` should effectively enable the plugin for all sub-projects without having to call `enablePlugins` explicitly in `build.sbt`.
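Given the multi-project layout from earlier, the plugin's task is then available in every sub-project without further wiring; a session might look like this:

sbt:root> barTask
hello task
sbt:root> multi1/barTask
hello task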
IntelliJ and sbt
Please enable the following setting (which should really be enabled by default):
use sbt shell
under
Preferences | Build, Execution, Deployment | sbt | sbt projects
Key references