Before telling you how I would proceed, I will explain the issue you are encountering.
Your Dockerfile relies on the multi-stage build feature.
Here, stages are intermediary images whose content is not kept in the final image. To keep files/folders between stages, you have to copy them explicitly, as you did.
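To illustrate the mechanism, here is a minimal, hypothetical sketch (the alpine images and the /out.txt path are only illustrative): a file produced in one stage only reaches the final image through an explicit COPY --from:

# first stage: produce a file
FROM alpine AS builder
RUN echo "hello" > /out.txt

# final stage: only what is explicitly copied from "builder" ends up here
FROM alpine
COPY --from=builder /out.txt /out.txt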
So concretely, it means that in the instructions below, Maven resolves all dependencies specified in your pom.xml and stores them in the local repository that lives in the layers of that stage:
FROM maven:3.5-jdk-8 as mavenDeps
COPY pom.xml pom.xml
RUN mvn dependency:resolve
But as said, the stage content is not kept by default. So all dependencies downloaded into the local Maven repository are lost, since you never copy them into the next stage:
FROM mavenDeps as mavenBuild
RUN mvn install
Since the local repo of that image is empty, mvn install re-downloads all dependencies.
How to proceed?
There are really many ways.
The best choice depends on your requirements.
But whatever the way, the build strategy in terms of Docker layers looks like this:
Build stage (Maven image):
- copy the pom to the image
- download dependencies and plugins. About that, mvn dependency:resolve-plugins chained to mvn dependency:resolve may do the job, but not always. Why? Because those plugins and the package execution may rely on different artifacts/plugins, and even for the same artifact/plugin they may still pull a different version. So a safer, though potentially slower, approach is to resolve dependencies by executing exactly the mvn package command (which pulls exactly the dependencies you need), but skipping the source compilation and deleting the target folder, to make the processing faster and to prevent any undesirable layer change detection for that step (see the sketch after this list).
- copy the source code to the image
- package the application

Run stage (JDK or JRE image):
- copy the jar from the previous stage
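To make the dependency step concrete, here is a sketch of the two alternatives as Dockerfile RUN instructions (the flags are the same ones used in the examples below):

# Alternative 1: pre-resolve plugins and dependencies; may miss artifacts
# that the actual package execution pulls
RUN mvn dependency:resolve-plugins && mvn dependency:resolve

# Alternative 2 (safer, used below): run the real package goal but skip
# compilation and tests, then delete target/ so the layer stays stable
RUN mvn clean package -Dmaven.main.skip -Dmaven.test.skip && rm -r target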
1) No explicit cache for maven dependencies: straightforward, but annoying when the pom changes frequently
Use this if re-downloading all dependencies at every pom.xml change is acceptable.
Example, starting from your Dockerfile:
########build stage########
FROM maven:3.5-jdk-8 as maven_build
WORKDIR /app
COPY pom.xml .
# To resolve dependencies in a safe way (no re-download when the source code changes)
RUN mvn clean package -Dmaven.main.skip -Dmaven.test.skip && rm -r target
# To package the application
COPY src ./src
RUN mvn clean package -Dmaven.test.skip
########run stage########
FROM java:8
WORKDIR /app
COPY --from=maven_build /app/target/*.jar ./
#run the app
ENV JAVA_OPTS ""
CMD [ "bash", "-c", "java ${JAVA_OPTS} -jar *.jar -v"]
Drawback of that solution?
Any change in the pom.xml means re-creating the whole layer that downloads and stores the Maven dependencies.
That is generally not acceptable for applications with many dependencies, especially if you don't use a Maven repository manager during the image build.
2) Explicit cache for maven dependencies: requires more configuration and the use of BuildKit, but more efficient because only the required dependencies are downloaded
The only thing that changes here is that the Maven dependency downloads are cached in the Docker builder cache:
# syntax=docker/dockerfile:experimental
########build stage########
FROM maven:3.5-jdk-8 as maven_build
WORKDIR /app
COPY pom.xml .
COPY src ./src
RUN --mount=type=cache,target=/root/.m2 mvn clean package -Dmaven.test.skip
########run stage########
FROM java:8
WORKDIR /app
COPY --from=maven_build /app/target/*.jar ./
#run the app
ENV JAVA_OPTS ""
CMD [ "bash", "-c", "java ${JAVA_OPTS} -jar *.jar -v"]
To enable BuildKit, the environment variable DOCKER_BUILDKIT=1 has to be set (you can do that wherever you want: bashrc, the command line, the docker daemon JSON file...).
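For example, setting it just for the build command (the my-app tag is only an example):

DOCKER_BUILDKIT=1 docker build -t my-app .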