
The following containers are not starting after installing IBM Cloud Private. I had previously installed ICP without a Management node and was doing a new install after running an 'uninstall'; I also restarted the Docker service on all nodes.

I installed a second time with a Management node defined, Master/Proxy on a single node, and two Worker nodes.

Selecting the menu option Platform / Monitoring returns a 502 Bad Gateway error.

Event messages from deployed containers

Deployment - monitoring-prometheus

TYPE      SOURCE              COUNT   REASON             MESSAGE
Warning   default-scheduler   2113    FailedScheduling   No nodes are available that match all of the following predicates:: MatchNodeSelector (3), NoVolumeNodeConflict (4).


Deployment - monitoring-grafana

TYPE      SOURCE              COUNT   REASON             MESSAGE
Warning   default-scheduler   2097    FailedScheduling   No nodes are available that match all of the following predicates:: MatchNodeSelector (3), NoVolumeNodeConflict (4).


Deployment - rootkit-annotator

TYPE      SOURCE                   COUNT   REASON       MESSAGE
Normal    kubelet 169.53.226.142   125     Pulled       Container image "ibmcom/rootkit-annotator:20171011" already present on machine
Normal    kubelet 169.53.226.142   125     Created      Created container
Normal    kubelet 169.53.226.142   125     Started      Started container
Warning   kubelet 169.53.226.142   2770    BackOff      Back-off restarting failed container
Warning   kubelet 169.53.226.142   2770    FailedSync   Error syncing pod
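
For reference, events like the above and the scheduling constraint behind the MatchNodeSelector failures can be inspected with commands along these lines; the kube-system namespace and the pod name placeholder are assumptions, not confirmed values from this cluster:

# Recent events for the monitoring and rootkit-annotator workloads (namespace assumed)
kubectl get events -n kube-system --sort-by=.lastTimestamp | grep -E 'monitoring-(prometheus|grafana)|rootkit'

# Compare the pods' Node-Selectors with the labels actually present on the nodes;
# MatchNodeSelector means no node carries the labels the pods request
kubectl get pods -n kube-system -o wide | grep monitoring
kubectl describe pod <monitoring-prometheus-pod-name> -n kube-system
kubectl get nodes --show-labels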

1 Answer


The management console sometimes displays a 502 Bad Gateway Error after installation or rebooting the master node. If you recently installed IBM Cloud Private, wait a few minutes and reload the page.

If you rebooted the master node, take the following steps:

  1. Configure the kubectl command line interface. See Accessing your IBM Cloud Private cluster by using the kubectl CLI.
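
The exact steps are in the linked documentation; a typical configuration sequence looks roughly like the following, where the cluster name, master address, and token are placeholders (assumptions, not values from your environment):

kubectl config set-cluster mycluster.icp --server=https://<master-ip>:8001 --insecure-skip-tls-verify=true
kubectl config set-credentials admin --token=<token-from-the-ICP-console>
kubectl config set-context mycluster.icp-context --cluster=mycluster.icp --user=admin --namespace=kube-system
kubectl config use-context mycluster.icp-context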

  2. Obtain the IP addresses of the icp-ds pods. Run the following command:

kubectl get pods -o wide -n kube-system | grep "icp-ds"

The output resembles the following text:

icp-ds-0 1/1 Running 0 1d 10.1.231.171 10.10.25.134

In this example, 10.1.231.171 is the IP address of the pod.

In high availability (HA) environments, an icp-ds pod exists for each master node.

  3. From the master node, ping each icp-ds pod. Run the following command for each pod IP address:

ping 10.1.231.171

If the output resembles the following text, you must delete the pod:

connect: Invalid argument

  4. Delete each pod that you cannot reach:

kubectl delete pods icp-ds-0 -n kube-system

In this example, icp-ds-0 is the name of the unresponsive pod.

In HA installations, you might have to delete the pod for each master node.
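
In that case the ping-and-delete steps can be combined in a small loop. This is only a sketch; it assumes the column layout shown in the kubectl output above (pod name in column 1, pod IP in column 6):

# Delete every icp-ds pod whose IP does not answer a single ping
kubectl get pods -o wide -n kube-system | grep "icp-ds" | while read -r name ready status restarts age ip node rest; do
  if ! ping -c 1 -W 2 "$ip" > /dev/null 2>&1; then
    kubectl delete pods "$name" -n kube-system
  fi
done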

  5. Obtain the IP address of the replacement pod or pods. Run the following command:

kubectl get pods -o wide -n kube-system | grep "icp-ds"

The output resembles the following text:

icp-ds-0 1/1 Running 0 1d 10.1.231.172 10.10.2

  6. Ping the pods again, running the following command for each new pod IP address:

ping 10.1.231.172

If you can reach all of the icp-ds pods, you can access the IBM Cloud Private management console once the pods enter the available state.
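
To confirm that, you can watch the replacement pods until they report Running and 1/1 ready (same namespace as above; the -w flag streams updates as the pod status changes):

kubectl get pods -n kube-system -w | grep "icp-ds"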
