194

I have the following Dockerfile that uses the latest Ubuntu image pulled from Docker Hub:

FROM ubuntu:latest
RUN apt-get update && apt-get install -y g++ llvm lcov

When I launch the `docker build` command, the following errors occur:

Err:2 http://archive.ubuntu.com/ubuntu bionic InRelease
  At least one invalid signature was encountered.

Err:1 http://security.ubuntu.com/ubuntu bionic-security InRelease
  At least one invalid signature was encountered.

Err:3 http://archive.ubuntu.com/ubuntu bionic-updates InRelease
  At least one invalid signature was encountered.

Err:4 http://archive.ubuntu.com/ubuntu bionic-backports InRelease
  At least one invalid signature was encountered.

Reading package lists...

W: GPG error: http://archive.ubuntu.com/ubuntu bionic InRelease: At least one invalid signature was encountered.
E: The repository 'http://archive.ubuntu.com/ubuntu bionic InRelease' is not signed.

W: GPG error: http://security.ubuntu.com/ubuntu bionic-security InRelease: At least one invalid signature was encountered.
E: The repository 'http://security.ubuntu.com/ubuntu bionic-security InRelease' is not signed.

W: GPG error: http://archive.ubuntu.com/ubuntu bionic-updates InRelease: At least one invalid signature was encountered.
E: The repository 'http://archive.ubuntu.com/ubuntu bionic-updates InRelease' is not signed.

W: GPG error: http://archive.ubuntu.com/ubuntu bionic-backports InRelease: At least one invalid signature was encountered.
E: The repository 'http://archive.ubuntu.com/ubuntu bionic-backports InRelease' is not signed.

I read here https://superuser.com/questions/1331936/how-can-i-get-past-a-repository-is-not-signed-message-when-attempting-to-upgr that you can get past this error using --allow-unauthenticated or --allow-insecure-repositories, but both seem to me to be workarounds that may compromise the security of the container.
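
For completeness, this is roughly what those workarounds would look like in the Dockerfile (shown only for reference; it is exactly what I'd like to avoid, since both flags skip signature verification):

FROM ubuntu:latest
# NOT recommended: disables repository signature checks
RUN apt-get update --allow-insecure-repositories \
 && apt-get install -y --allow-unauthenticated g++ llvm lcov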

EDIT

Tried pulling ubuntu:18.04, ubuntu:19.04 and ubuntu:19.10; same error, just with a different distro name.

– Antonio La Marra

14 Answers

352

Apparently my root partition was full (maybe I had tried too many times to download packages through apt), and running sudo apt clean solved the issue.

In addition, the following commands should help clean up space:

docker system df # which can show disk usage and size of 'Build Cache'
docker image prune # add -f or --force to not prompt for confirmation
docker container prune # add -f or --force to not prompt for confirmation
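
If you want to check first whether the host is actually out of space, a rough sketch (note that apt clean runs on the host, not inside the container):

df -h /          # how full is the root partition?
sudo apt clean   # drop the host's downloaded-package cache
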
– Antonio La Marra
  • Using `docker image prune` and `docker container prune` resolved this for me. – Erik Schnetter Jan 25 '20 at 02:33
  • `docker image prune` saved 52GB on my disk and made my build run again, thank you Antonio and Erik! – drivenuts Mar 13 '20 at 10:21
  • Can someone explain why this failure can happen? This challenges my understanding of Docker: there seems to be state kept between runs that makes the runs non-deterministic. – David 天宇 Wong Mar 27 '20 at 20:21
  • @David天宇Wong I believe it's a disk space issue, rather than old build/run state being reused. – Salvioner Apr 24 '20 at 09:40
  • I tried all of the above but my issue was only resolved after nuking the Docker build cache. First I looked at my Docker usage with `docker system df`. This told me that my build cache was 50+GB. Then I nuked it using `docker builder prune`. – Raashid May 21 '20 at 21:47
  • Amazing... it was the solution, I was staring at everything but didn't realize my root partition had run out of space. – Antti Haapala -- Слава Україні Jun 01 '20 at 00:36
  • I had the same issue. I am running Docker Toolbox on VirtualBox on top of Windows 10 and the `/var` mount filled up: `E: You don't have enough free space in /var/cache/apt/archives/` – treehead Jul 29 '20 at 15:01
  • I came across this in a `docker buildx` build; `docker buildx prune` sorted things out. – psychemedia Nov 29 '20 at 17:43
  • For me, `docker image prune` was not helping and I discovered I had to use `docker image prune -a`. I reclaimed about 50GB and fixed my build! – Redtama Mar 03 '21 at 17:35
  • @Raashid following this advice, I ran `docker system df` and realized I needed to run `docker volume prune` to resolve the issue. – nikojpapa Jun 01 '21 at 02:39
  • Doing `docker system prune` wasn't enough for me. I needed to delete images and run `docker volume prune` as well. – Jonas Eicher Jan 12 '22 at 12:26
  • Running `docker system prune` only reclaimed a couple of gigabytes (out of a couple of hundred), but seems to have solved the problem nevertheless. I have no idea why; no disks were anywhere close to full, AFAICT. But thanks anyway, upvoted. – Ketil Malde Feb 01 '22 at 11:34
  • Using `docker image prune` and `docker container prune` resolved this for me as well. – Winnie x Mar 02 '22 at 20:35
  • Geez Docker, can you give a better error message please? Spent an hour just to realise this was the root cause. – gerrytan Jun 05 '23 at 03:56
129

Available since Docker API v1.25 (Docker Engine 1.13 and later).

Running the command below fixed the problem for me:

docker system prune --force

The --force flag makes the prune non-interactive (no confirmation prompt).

Additionally, you may want to give the volume prune command a try:

docker volume prune --force
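
If you prefer a single pass, recent Docker releases also let docker system prune include volumes (a sketch, assuming your Docker version supports the --volumes flag):

docker system prune --force --volumes
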
– Andriy Ivaneyko
58

Fixed by:

docker image prune -f

It looks like the build failed because Docker's cached images had used up the available disk space on the host system.

– Danila Plee
37

If you're using Docker Desktop, check the maximum disk image size you've specified in the settings. If it fills up during the build, it can cause this issue (source).

(screenshot: Docker Desktop settings showing the disk image size)

– Amirreza Nasiri
31

For Raspbian, upgrade libseccomp manually on the host system by using:

curl http://ftp.us.debian.org/debian/pool/main/libs/libseccomp/libseccomp2_2.5.1-1_armhf.deb --output libseccomp2_2.5.1-1_armhf.deb
sudo dpkg -i libseccomp2_2.5.1-1_armhf.deb

This resolved my issue.
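
To verify the upgrade took effect on the host, a small check (assuming dpkg is available, which it is on Raspbian; the .deb version above may have been superseded by a newer release):

dpkg -s libseccomp2 | grep -i '^version'   # should now report 2.5.1-1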

Original post is here.

– Emre Tapcı
14

As @Danila and @Andriy pointed out, this issue can easily be fixed by running:

docker image prune -f
docker container prune -f

but I'm posting this answer because running just one of them didn't work for me (on macOS); running both, however, does.

– Magnus
8

This helped me:

docker volume prune
– celcin
4

I had to run the container with --security-opt seccomp:unconfined.
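
A minimal sketch of what that looks like (newer Docker releases document the = separator; the image and command here are just placeholders):

docker run --security-opt seccomp=unconfined ubuntu:latest apt-get update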

– Rok Jarc
3

I had this problem on one of my two machines. Running ls -ld /tmp gave me

drwxrwxrwt 3 root root 4096 May 15 20:46 /tmp

for the working one and

drwxr-xr-t 1 root root 4096 May 26 05:44 /tmp

for the failing one. After I ran chmod 1777 /tmp, it worked!
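
In other words, roughly:

sudo chmod 1777 /tmp   # restore the standard sticky, world-writable permissions
ls -ld /tmp            # should now show drwxrwxrwt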

EDIT:

So, I dug a little deeper into this problem and realized something was fundamentally wrong. I described my problems in another question and later found the answer that solved this myself: https://stackoverflow.com/a/62088961/7387935

The key point is that the machine that worked correctly used aufs as the storage driver, while the faulty one used overlay2. After I changed that, all the permissions were correct.
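
To see which storage driver your daemon is currently using (a quick sketch):

docker info --format '{{.Driver}}'   # e.g. overlay2 or aufs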

– Florian Bachmann
  • This one worked for me. I had moved the tmp folder to another drive and placed a symlink in the root folder. That started to cause signature problems and many other errors. I did as you explained here, but then I moved the tmp folder back to the root directory and deleted the symlink. I might just leave the tmp folder where it should stay. – tavalendo Jan 20 '22 at 20:51
2

I tried again later and it worked.

From https://github.com/docker-library/php/issues/898#issuecomment-539234070:

That usually means the mirror is having issues (possibly partially out of date; i.e. not completely synced from other mirrors) and often clears itself up.
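
If it doesn't clear up on its own, one workaround (my own assumption, not something the linked issue prescribes) is to point apt at a different mirror inside the Dockerfile, assuming the image still ships a classic /etc/apt/sources.list:

# Hypothetical: switch to a specific country mirror if the default one is flaky
RUN sed -i 's|http://archive.ubuntu.com|http://us.archive.ubuntu.com|g' /etc/apt/sources.list \
 && apt-get update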

– tschumann
1

The existing answers all talk about creating more space for Docker by removing existing Docker items via docker container prune, docker image prune and docker volume prune. Alternatively, it is also possible to increase the disk limit that Docker has set for itself (provided that you have enough disk space).

In Docker Desktop go to Settings > Resources and increase the "Virtual disk limit".

This has helped me since I work with several large Docker images and don't want to be bothered with pruning every time.

– AV_Jeroen
0

I was able to resolve this issue by stopping all of my running containers (2, in my case).
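
For reference, a bulk way to do that (a sketch; `docker ps -q` lists the IDs of all running containers):

docker stop $(docker ps -q)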

– Joshua Schlichting
-1

I added --network=host to the build command.

docker build --network=host -t REPOSITORY:TAG ./
-1

This worked for me: docker system prune -af --volumes. These other ones help as well:

docker image prune 
docker container prune
docker builder prune
docker volume prune

Then run docker system df to see whether you need to free up space on one of your volumes.

– d1jhoni1b