
I used kompose.io to convert the docker-compose.yml (given at the end) into Kubernetes YAML manifests and then ran kubectl apply -f . to deploy them on my 4-node k8s cluster on CloudLab. However, most of the pods are stuck in Pending with the following error:

  Warning  FailedScheduling  24h   default-scheduler  0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims.

I would be grateful if anyone could give me some pointers on how to resolve this issue.

$ kubectl describe pod pgadmin-6897994987-q2kxj
Name:           pgadmin-6897994987-q2kxj
Namespace:      default
Priority:       0
Node:           <none>
Labels:         io.kompose.service=pgadmin
                pod-template-hash=6897994987
Annotations:    kompose.cmd: kompose convert -f docker-compose.yml
                kompose.version: 1.26.1 (a9d05d509)
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  ReplicaSet/pgadmin-6897994987
Containers:
  pgadmin-container:
    Image:      dpage/pgadmin4
    Port:       80/TCP
    Host Port:  0/TCP
    Environment:
      PGADMIN_DEFAULT_EMAIL:     
      PGADMIN_DEFAULT_PASSWORD:  
    Mounts:
      /root/.pgadmin from pgadmin-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-lt9r6 (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  pgadmin-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pgadmin-volume
    ReadOnly:   false
  default-token-lt9r6:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-lt9r6
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  24h   default-scheduler  0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims.

kubectl get pods:

NAME                                 READY   STATUS    RESTARTS   AGE
callback-755dd5d9cd-jnsbq            0/1     Pending   0          24h
create-bucket-7bd9cdd74f-hddcp       0/1     Pending   0          24h
distribution-8556bcd7d7-lzs9k        0/1     Pending   0          24h
download-6dd7fbf4d5-mpvxm            0/1     Pending   0          24h
efgs-fake-77777895d4-7kt86           1/1     Running   0          24h
objectstore-5bd9cfdc9-jpvfh          0/1     Pending   0          24h
pgadmin-6897994987-q2kxj             0/1     Pending   0          24h
postgres-978d8867b-czdpj             0/1     Pending   0          24h
submission-5db76bc69d-hckl7          0/1     Pending   0          24h
upload-5748b4857d-w8n5c              0/1     Pending   0          24h
verification-fake-6f4d75944f-ssttg   1/1     Running   0          24h

docker-compose.yml

version: '3'
services:
  callback:
    build:
      context: ./
      dockerfile: ./services/callback/Dockerfile
    depends_on:
      - postgres
      - efgs-fake
    ports:
      - "8010:8080"
    environment:
      SPRING_PROFILES_ACTIVE: debug,disable-ssl-client-postgres
      POSTGRESQL_SERVICE_PORT: '5432'
      POSTGRESQL_SERVICE_HOST: postgres
      POSTGRESQL_DATABASE: ${POSTGRES_DB}
      POSTGRESQL_PASSWORD_CALLBACK: ${POSTGRES_CALLBACK_PASSWORD}
      POSTGRESQL_USER_CALLBACK: ${POSTGRES_CALLBACK_USER}
      POSTGRESQL_PASSWORD_FLYWAY: ${POSTGRES_FLYWAY_PASSWORD}
      POSTGRESQL_USER_FLYWAY: ${POSTGRES_FLYWAY_USER}
      SSL_CALLBACK_KEYSTORE_PATH: file:/secrets/ssl.p12
      SSL_CALLBACK_KEYSTORE_PASSWORD: 123456
      SSL_FEDERATION_TRUSTSTORE_PATH: file:/secrets/contains_efgs_truststore.jks
      SSL_FEDERATION_TRUSTSTORE_PASSWORD: 123456
      FEDERATION_GATEWAY_KEYSTORE_PATH: file:/secrets/ssl.p12
      FEDERATION_GATEWAY_KEYSTORE_PASS: 123456
      FEDERATION_GATEWAY_BASE_URL: https://efgs-fake:8014
      # for local testing: FEDERATION_GATEWAY_BASE_URL: https://host.docker.internal:8014
    volumes:
      - ./docker-compose-test-secrets:/secrets
  submission:
    build:
      context: ./
      dockerfile: ./services/submission/Dockerfile
    depends_on:
      - postgres
      - verification-fake
    ports:
      - "8000:8080"
      - "8006:8081"
    environment:
      SPRING_PROFILES_ACTIVE: debug,disable-ssl-client-postgres
      POSTGRESQL_SERVICE_PORT: '5432'
      POSTGRESQL_SERVICE_HOST: postgres
      POSTGRESQL_DATABASE: ${POSTGRES_DB}
      POSTGRESQL_PASSWORD_SUBMISION: ${POSTGRES_SUBMISSION_PASSWORD}
      POSTGRESQL_USER_SUBMISION: ${POSTGRES_SUBMISSION_USER}
      POSTGRESQL_PASSWORD_FLYWAY: ${POSTGRES_FLYWAY_PASSWORD}
      POSTGRESQL_USER_FLYWAY: ${POSTGRES_FLYWAY_USER}
      VERIFICATION_BASE_URL: http://verification-fake:8004
      SUPPORTED_COUNTRIES: DE,FR
      SSL_SUBMISSION_KEYSTORE_PATH: file:/secrets/ssl.p12
      SSL_SUBMISSION_KEYSTORE_PASSWORD: 123456
      SSL_VERIFICATION_TRUSTSTORE_PATH: file:/secrets/contains_efgs_truststore.jks
      SSL_VERIFICATION_TRUSTSTORE_PASSWORD: 123456
    volumes:
      - ./docker-compose-test-secrets:/secrets
  distribution:
    build:
      context: ./
      dockerfile: ./services/distribution/Dockerfile
    depends_on:
     - postgres
     - objectstore
     - create-bucket
    environment:
      SUPPORTED_COUNTRIES: DE,FR
      SPRING_PROFILES_ACTIVE: debug,signature-dev,testdata,disable-ssl-client-postgres,local-json-stats
      POSTGRESQL_SERVICE_PORT: '5432'
      POSTGRESQL_SERVICE_HOST: postgres
      POSTGRESQL_DATABASE: ${POSTGRES_DB}
      POSTGRESQL_PASSWORD_DISTRIBUTION: ${POSTGRES_DISTRIBUTION_PASSWORD}
      POSTGRESQL_USER_DISTRIBUTION: ${POSTGRES_DISTRIBUTION_USER}
      POSTGRESQL_PASSWORD_FLYWAY: ${POSTGRES_FLYWAY_PASSWORD}
      POSTGRESQL_USER_FLYWAY: ${POSTGRES_FLYWAY_USER}
      # Settings for the S3 compatible objectstore
      CWA_OBJECTSTORE_ACCESSKEY: ${OBJECTSTORE_ACCESSKEY}
      CWA_OBJECTSTORE_SECRETKEY: ${OBJECTSTORE_SECRETKEY}
      CWA_OBJECTSTORE_ENDPOINT: http://objectstore
      CWA_OBJECTSTORE_BUCKET: cwa
      CWA_OBJECTSTORE_PORT: 8000
      services.distribution.paths.output: /tmp/distribution
      # Settings for cryptographic artifacts
      VAULT_FILESIGNING_SECRET: ${SECRET_PRIVATE}
      FORCE_UPDATE_KEYFILES: 'false'
      STATISTICS_FILE_ACCESS_KEY_ID: fakeAccessKey
      STATISTICS_FILE_SECRET_ACCESS_KEY: secretKey
      STATISTICS_FILE_S3_ENDPOINT: https://localhost
      DSC_TRUST_STORE: /secrets/dsc_truststore
      DCC_TRUST_STORE: /secrets/dcc_truststore
    volumes:
      - ./docker-compose-test-secrets:/secrets
  download:
    build:
      context: ./
      dockerfile: ./services/download/Dockerfile
    depends_on:
      - postgres
    ports:
      - "8011:8080"
    environment:
      SPRING_PROFILES_ACTIVE: debug,disable-ssl-server,disable-ssl-client-postgres,disable-ssl-client-verification,disable-ssl-client-verification-verify-hostname,disable-ssl-efgs-verification
      POSTGRESQL_SERVICE_PORT: '5432'
      POSTGRESQL_SERVICE_HOST: postgres
      POSTGRESQL_DATABASE: ${POSTGRES_DB}
      POSTGRESQL_PASSWORD_CALLBACK: ${POSTGRES_CALLBACK_PASSWORD}
      POSTGRESQL_USER_CALLBACK: ${POSTGRES_CALLBACK_USER}
      POSTGRESQL_PASSWORD_FLYWAY: ${POSTGRES_FLYWAY_PASSWORD}
      POSTGRESQL_USER_FLYWAY: ${POSTGRES_FLYWAY_USER}
      FEDERATION_GATEWAY_KEYSTORE_PATH: file:/secrets/ssl.p12
      FEDERATION_GATEWAY_KEYSTORE_PASS: 123456
      SSL_FEDERATION_TRUSTSTORE_PATH: file:/secrets/contains_efgs_truststore.jks
      SSL_FEDERATION_TRUSTSTORE_PASSWORD: 123456
    volumes:
      - ./docker-compose-test-secrets:/secrets
  upload:
    build:
      context: ./
      dockerfile: ./services/upload/Dockerfile
    depends_on:
      - postgres
    ports:
      - "8012:8080"
    environment:
      SPRING_PROFILES_ACTIVE: disable-ssl-client-postgres, connect-efgs
      POSTGRESQL_SERVICE_PORT: '5432'
      POSTGRESQL_SERVICE_HOST: postgres
      POSTGRESQL_DATABASE: ${POSTGRES_DB}
      POSTGRESQL_PASSWORD_FLYWAY: ${POSTGRES_FLYWAY_PASSWORD}
      POSTGRESQL_USER_FLYWAY: ${POSTGRES_FLYWAY_USER}
      VAULT_EFGS_BATCHIGNING_SECRET: ${SECRET_PRIVATE}
      VAULT_EFGS_BATCHIGNING_CERTIFICATE: file:/secrets/efgs_signing_cert.pem
      SSL_FEDERATION_TRUSTSTORE_PATH: file:/secrets/contains_efgs_truststore.jks
      SSL_FEDERATION_TRUSTSTORE_PASSWORD: 123456
      FEDERATION_GATEWAY_KEYSTORE_PATH: file:/secrets/ssl.p12
      FEDERATION_GATEWAY_KEYSTORE_PASS: 123456
    volumes:
      - ./docker-compose-test-secrets:/secrets
  postgres:
    image: postgres:11.8
    restart: always
    ports:
      - "8001:5432"
    environment:
      PGDATA: /data/postgres
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_volume:/data/postgres
      - ./setup/setup-roles.sql:/docker-entrypoint-initdb.d/1-roles.sql
      - ./local-setup/create-users.sql:/docker-entrypoint-initdb.d/2-users.sql
      - ./local-setup/enable-test-data-docker-compose.sql:/docker-entrypoint-initdb.d/3-enable-testdata.sql
  pgadmin:
    container_name: pgadmin_container
    image: dpage/pgadmin4
    volumes:
       - pgadmin_volume:/root/.pgadmin
    ports:
      - "8002:80"
    restart: unless-stopped
    depends_on:
      - postgres
    environment:
      PGADMIN_DEFAULT_EMAIL: ${PGADMIN_DEFAULT_EMAIL}
      PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_DEFAULT_PASSWORD}
  objectstore:
    image: "zenko/cloudserver"
    volumes:
      - objectstore_volume:/data
    ports:
      - "8003:8000"
    environment:
      ENDPOINT: objectstore
      REMOTE_MANAGEMENT_DISABLE: 1
      SCALITY_ACCESS_KEY_ID: ${OBJECTSTORE_ACCESSKEY}
      SCALITY_SECRET_ACCESS_KEY: ${OBJECTSTORE_SECRETKEY}
  create-bucket:
    image: amazon/aws-cli
    environment:
      - AWS_ACCESS_KEY_ID=${OBJECTSTORE_ACCESSKEY}
      - AWS_SECRET_ACCESS_KEY=${OBJECTSTORE_SECRETKEY}
    entrypoint: [ "/root/scripts/wait-for-it/wait-for-it.sh", "objectstore:8000", "-t", "30", "--" ]
    volumes:
      - ./scripts/wait-for-it:/root/scripts/wait-for-it
    command: aws s3api create-bucket --bucket cwa --endpoint-url http://objectstore:8000 --acl public-read
    depends_on:
      - objectstore
  verification-fake:
    image: roesslerj/cwa-verification-fake:0.0.5
    restart: always
    ports:
      - "8004:8004"
  efgs-fake:
    image: roesslerj/cwa-efgs-fake:0.0.5
    restart: always
    ports:
      - "8014:8014"
volumes:
  postgres_volume:
  pgadmin_volume:
  objectstore_volume:
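
If it helps, I believe kompose turned each named volume and bind mount above into a PersistentVolumeClaim roughly like the following (pgadmin-volume shown; the 100Mi request is, as far as I can tell, the kompose default, so the generated files may differ slightly):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pgadmin-volume
  labels:
    io.kompose.service: pgadmin-volume
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
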
akazad@node-0:~/cwa-server$ kubectl get pvc
NAME                   STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
callback-claim0        Pending                                                     26h
create-bucket-claim0   Pending                                                     26h
distribution-claim0    Pending                                                     26h
download-claim0        Pending                                                     26h
objectstore-volume     Pending                                                     26h
pgadmin-volume         Pending                                                     26h
postgres-claim1        Pending                                                     26h
postgres-claim2        Pending                                                     26h
postgres-claim3        Pending                                                     26h
postgres-volume        Pending                                                     26h
submission-claim0      Pending                                                     26h
upload-claim0          Pending                                                     26h
akazad@node-0:~/cwa-server$ kubectl describe pvc pgadmin-volume
Name:          pgadmin-volume
Namespace:     default
StorageClass:  
Status:        Pending
Volume:        
Labels:        io.kompose.service=pgadmin-volume
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Used By:       pgadmin-6897994987-q2kxj
Events:
  Type    Reason         Age                   From                         Message
  ----    ------         ----                  ----                         -------
  Normal  FailedBinding  69s (x6322 over 26h)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
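
If I read the FailedBinding event correctly, the claims have no storage class and there are no pre-provisioned PersistentVolumes for them to bind to (I assume kubectl get storageclass is how I would check whether the cluster has a provisioner?). Would statically creating one PV per claim, along the lines of the sketch below, be the right approach on a bare cluster like this? The name, path, and size here are hypothetical placeholders:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pgadmin-pv                  # hypothetical name; one PV per claim
  labels:
    io.kompose.service: pgadmin-volume
spec:
  capacity:
    storage: 1Gi                    # placeholder; must cover the claim's request
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/data/pgadmin         # hypothetical node-local path
  claimRef:                         # optionally pin this PV to the pgadmin-volume claim
    namespace: default
    name: pgadmin-volume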

Azad Md Abul Kalam
  • What did Kompose convert the various `volumes:` blocks into? Does your cluster have a working persistent volume provisioner? – David Maze Mar 24 '22 at 16:58
  • I am using a k8s cluster set up at https://www.cloudlab.us/. How do I know if it has a PV provisioner? Thanks a lot! – Azad Md Abul Kalam Mar 24 '22 at 17:12
  • [Here](https://stackoverflow.com/a/52669115/17126151) is a good answer for a similar issue. Let me know if this is helpful for you – RadekW Mar 25 '22 at 12:13
  • @RadekW Thank you yes it was helpful. But I had to create separate PVs for all claims. Now the error is gone. – Azad Md Abul Kalam Mar 30 '22 at 17:02
  • Glad to hear that problem has been resolved. Could you post an answer with explanation? As you can read [here](https://stackoverflow.com/help/self-answer) it is very good practice and it will be helpful in future for other people – RadekW Mar 31 '22 at 08:43
