
I'm having some weird issues with my custom Dockerfile, compiling a .NET Core app in Alpine containers.

I've tried numerous different configurations to no avail - the cache is ALWAYS invalidated when I add the final FROM instruction (if I comment it and everything below it out, caching works fine). Here's the file:

FROM microsoft/dotnet:2.1-sdk-alpine3.7 AS build
ARG ASPNETCORE_ENVIRONMENT=development
ARG ASPNET_CONFIGURATION=Debug
ARG PROJECT_DIR=src/API/
ARG PROJECT_NAME=MyAPI
ARG SOLUTION_NAME=MySolution

RUN export

WORKDIR /source

COPY ./*.sln ./nuget.config ./

# Copy source project files
COPY src/*/*.csproj ./
RUN for file in $(ls *.csproj); do mkdir -p src/${file%.*}/ && mv $file src/${file%.*}/; done

# Copy test project files
COPY test/*/*.csproj ./
RUN for file in $(ls *.csproj); do mkdir -p test/${file%.*}/ && mv $file test/${file%.*}/; done

RUN dotnet restore

COPY . ./

RUN for dir in test/*.Tests/; do (cd "$dir" && dotnet test --filter TestType!=Integration); done

WORKDIR /source/${PROJECT_DIR}

RUN dotnet build ${PROJECT_NAME}.csproj -c $ASPNET_CONFIGURATION -o /app

RUN dotnet publish ${PROJECT_NAME}.csproj -c $ASPNET_CONFIGURATION -o /app --no-restore

FROM microsoft/dotnet:2.1-aspnetcore-runtime-alpine3.7
ARG ASPNETCORE_ENVIRONMENT=development

RUN export

COPY --from=build /app .

WORKDIR /app
EXPOSE 80
VOLUME /app/logs

ENTRYPOINT ["dotnet", "MyAssembly.dll"]

Any ideas? Hints? Tips? Blazingly obvious mistakes? I've checked each layer and the COPY . ./ instruction ONLY copies the files I expect it to - and none of them change between builds.
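For reference, this is roughly how I've been inspecting the layers between builds (image name is a placeholder):

```shell
# Show each layer of the last build with the instruction that created it,
# so the first cache miss is visible (placeholder image name)
docker history --no-trunc MyImageName:latest

# Just the layer IDs - comparing these across two consecutive builds shows
# exactly where the IDs start to diverge (i.e. where the cache broke)
docker history -q MyImageName:latest
```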

It's also worth noting that if I remove the last FROM instruction (and the other relevant lines) the cache works perfectly - but the final image is obviously considerably bigger (1.8GB vs the 172MB base microsoft/dotnet:2.1-aspnetcore-runtime-alpine3.7 image). I have tried just commenting out the COPY instruction after the FROM, but it doesn't affect the cache invalidation. The following works as expected:

FROM microsoft/dotnet:2.1-sdk-alpine3.7 AS build
ARG ASPNETCORE_ENVIRONMENT=development
ARG ASPNET_CONFIGURATION=Debug
ARG PROJECT_DIR=src/API/
ARG PROJECT_NAME=MyAPI
ARG SOLUTION_NAME=MySolution

RUN export

WORKDIR /source

COPY ./*.sln ./nuget.config ./

# Copy source project files
COPY src/*/*.csproj ./
RUN for file in $(ls *.csproj); do mkdir -p src/${file%.*}/ && mv $file src/${file%.*}/; done

# Copy test project files
COPY test/*/*.csproj ./
RUN for file in $(ls *.csproj); do mkdir -p test/${file%.*}/ && mv $file test/${file%.*}/; done

RUN dotnet restore

COPY . ./

RUN for dir in test/*.Tests/; do (cd "$dir" && dotnet test --filter TestType!=Integration); done

WORKDIR /source/${PROJECT_DIR}

RUN dotnet build ${PROJECT_NAME}.csproj -c $ASPNET_CONFIGURATION -o /app

RUN dotnet publish ${PROJECT_NAME}.csproj -c $ASPNET_CONFIGURATION -o /app --no-restore

WORKDIR /app
EXPOSE 80
VOLUME /app/logs

ENTRYPOINT ["dotnet", "MyAssembly.dll"]

.dockerignore below:

base-images/
docker-compose.yml
docker-compose.*.yml
VERSION

**/.*
**/*.ps1
**/*.DotSettings
**/*.csproj.user
**/*.md
**/*.log
**/*.sh
**/Dockerfile
**/bin
**/obj
**/node_modules
**/.vs
**/.vscode
**/dist
**/packages/
**/wwwroot/

Last bit of info: I'm building images using docker-compose - specifically by running docker-compose build myservicename - but building the image directly with docker build -f src/MyAssembly/Dockerfile -t MyImageName . yields the same results.
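For completeness, the relevant part of my compose file looks roughly like this (a sketch, not the exact file - service and image names are placeholders):

```yaml
# Sketch of the relevant docker-compose service (names are placeholders)
version: "3.4"
services:
  myservicename:
    image: myimagename
    build:
      context: .
      dockerfile: src/MyAssembly/Dockerfile
      cache_from:
        - myimagename
```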

Spikeh
    Actually, this seems to be related to the cache_from option in the docker-compose file. For some reason when I specify a cache image (and the cached image is present on the machine), multi-stage builds no longer respect the cache. – Spikeh Jul 20 '18 at 11:21
    I have the same problem - build cache is invalidated if I use `FROM` instruction. And I also use --cache-from option. I'm building image from command line, so it is not docker-compose problem. Have you found some solution? – hennadiy.verkh Dec 18 '19 at 08:34
  • I had and resolved a similar issue here https://stackoverflow.com/questions/67537792/docker-incremental-build-does-not-reuse-cache . Could you kindly post a complete build log in order to see which layers are removed and rebuilt? – Antonio Petricca May 18 '21 at 08:29

1 Answer


If you're building locally and the cache isn't working – then I don't know what the issue is :)

But if you're building as part of CI – then the issue may be that you need to pull, build, and push the intermediate stage explicitly:

> docker pull MyImageName:build || true
> docker pull MyImageName:latest || true
> docker build --target build --tag MyImageName:build .
> docker build --cache-from MyImageName:build --tag MyImageName:latest .
> docker push MyImageName:build
> docker push MyImageName:latest

The || true part is there because the images won't exist on the initial CI build. The "magic sauce" of this recipe is docker build --target <intermediate-stage-name>, which builds and tags the intermediate stage, and docker build --cache-from <intermediate-stage-image>, which lets the final build reuse that stage's layers.

I can't explain why building and pushing the intermediate stage explicitly is needed to get the cache to work – other than some handwaving about how only the final image gets pushed, and not the intermediate stage and its layers. But it worked for me – I learned this "trick" from here: https://pythonspeed.com/articles/faster-multi-stage-builds/
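If you're driving the build through docker-compose rather than docker build directly, the --cache-from flag has a rough equivalent in the compose file (untested sketch; image/tag names are placeholders, and cache_from requires compose file format 3.2 or later):

```yaml
services:
  myservicename:
    build:
      context: .
      cache_from:
        - MyImageName:build
        - MyImageName:latest
```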

qff
  • See also the last part of this blog post: https://hiep-le.com/2020/05/22/some-reasons-why-docker-build-does-not-use-cache/ (the "Do not respect stage dependency" section) – qff May 13 '21 at 11:25
  • I mean, I posted this nearly 3 years ago, but thanks for your response! :) I can't remember why this didn't work, but I have since worked out that docker caching is very finicky; images need to be named + tagged for each stage before caching will work. – Spikeh May 13 '21 at 17:04
  • I figured ;) – I answered mostly so that other people having this issue (including my future self) would have this as a guidepost. – qff May 16 '21 at 20:43