I am able to access my JupyterHub domain but am unable to get a pod to spawn.

This is the output of kubectl logs <pod> -n <namespace>:

[W 2021-06-04 19:59:07.037 SingleUserNotebookApp configurable:190] Config option `open_browser` not recognized by `SingleUserNotebookApp`.  Did you mean `browser`?
[I 2021-06-04 19:59:07.052 SingleUserNotebookApp notebookapp:1593] Authentication of /metrics is OFF, since other authentication is disabled.
[W 2021-06-04 19:59:08.649 LabApp] 'ip' has moved from NotebookApp to ServerApp. This config will be passed to ServerApp. Be sure to update your config before our next release.
[W 2021-06-04 19:59:08.649 LabApp] 'port' has moved from NotebookApp to ServerApp. This config will be passed to ServerApp. Be sure to update your config before our next release.
[W 2021-06-04 19:59:08.649 LabApp] 'port' has moved from NotebookApp to ServerApp. This config will be passed to ServerApp. Be sure to update your config before our next release.
[W 2021-06-04 19:59:08.649 LabApp] 'port' has moved from NotebookApp to ServerApp. This config will be passed to ServerApp. Be sure to update your config before our next release.
[I 2021-06-04 19:59:08.656 LabApp] JupyterLab extension loaded from /opt/conda/lib/python3.9/site-packages/jupyterlab
[I 2021-06-04 19:59:08.657 LabApp] JupyterLab application directory is /opt/conda/share/jupyter/lab
Patching auth into jupyter_server.base.handlers.JupyterHandler(jupyter_server.base.handlers.AuthenticatedHandler) -> JupyterHandler(jupyterhub.singleuser.mixins.HubAuthenticatedHandler, jupyter_server.base.handlers.AuthenticatedHandler)
[I 2021-06-04 19:59:08.715 SingleUserNotebookApp mixins:576] Starting jupyterhub-singleuser server version 1.4.1
[E 2021-06-04 19:59:28.728 SingleUserNotebookApp mixins:449] Failed to connect to my Hub at http://hub:8081/phys101/hub/api (attempt 1/5). Is it running?
    Traceback (most recent call last):
      File "/opt/conda/lib/python3.9/site-packages/jupyterhub/singleuser/mixins.py", line 447, in check_hub_version
        resp = await client.fetch(self.hub_api_url)
    tornado.simple_httpclient.HTTPTimeoutError: Timeout while connecting
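
To narrow down whether this timeout is a DNS, service, or NetworkPolicy problem, a connectivity check along these lines can help (just a sketch; the namespace phys101 and the pod name jupyter-admin1 are taken from the event log below):

# Confirm the hub service exists and has endpoints
kubectl get svc hub -n phys101
kubectl get endpoints hub -n phys101

# Try to reach the Hub API from inside the user pod while it is still running
kubectl exec -n phys101 jupyter-admin1 -- python3 -c "import urllib.request; print(urllib.request.urlopen('http://hub:8081/phys101/hub/api', timeout=5).status)"

# Check whether a NetworkPolicy could be blocking pod-to-hub traffic
kubectl get networkpolicy -n phys101
kubectl describe networkpolicy -n phys101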

If I go to the spawn page in the browser, the loading bar is stuck (blue, at about 95%) and the last message just says 2021-06-04T19:59:05Z [Normal] Started container notebook

Here is the rest of the event log from that page.

Event log
Server requested
2021-06-04T19:58:47Z [Normal] Successfully assigned phys101/jupyter-admin1 to gke-tf-jh-cluster-user-pool-40bd378f-sjj4
2021-06-04T19:58:55Z [Normal] AttachVolume.Attach succeeded for volume "pvc-03c4e983-363e-401f-8616-e7df5c64fdac"
2021-06-04T19:59:04Z [Normal] Container image "jupyterhub/k8s-network-tools:0.11.1" already present on machine
2021-06-04T19:59:05Z [Normal] Created container block-cloud-metadata
2021-06-04T19:59:05Z [Normal] Started container block-cloud-metadata
2021-06-04T19:59:05Z [Normal] Pulling image "jupyter/datascience-notebook:latest"
2021-06-04T19:59:05Z [Normal] Successfully pulled image "jupyter/datascience-notebook:latest"
2021-06-04T19:59:05Z [Normal] Created container notebook
2021-06-04T19:59:05Z [Normal] Started container notebook

I see the pod come up and go into a Running 1/1 state, then it terminates.
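
To see why it terminates, commands like these can capture the pod's last state and previous logs (a sketch, again assuming namespace phys101 and pod jupyter-admin1):

# Watch the pod lifecycle as the spawn happens
kubectl get pod jupyter-admin1 -n phys101 -w

# Inspect the termination reason and exit code
kubectl describe pod jupyter-admin1 -n phys101

# Grab logs from the previous container instance if it restarted
kubectl logs jupyter-admin1 -n phys101 --previous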

At one point I saw the error below; commenting out scheduling.userPods.nodeAffinity seems to have made the message go away.

0/3 nodes are available: 1 pod has unbound immediate persistentvolumeclaims, 2 node(s) didn't match node selector.
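
When that message appears, checking the PVC binding and the node labels that the user-pod affinity selects on shows which half of it applies (a sketch; claim-admin1 is only the chart's default claim name for the admin1 user, so treat it as an assumption):

# Is the user's PVC bound, and is there a default StorageClass?
kubectl get pvc -n phys101
kubectl describe pvc claim-admin1 -n phys101
kubectl get storageclass

# Do any nodes carry the hub.jupyter.org/node-purpose label that matchNodePurpose: require selects on?
kubectl get nodes -L hub.jupyter.org/node-purpose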

Here's the rest of my config YAML.

proxy:
  secretToken:
  service:
    type: NodePort

ingress:
  enabled: true
  pathSuffix: "/*"
  hosts:
    - "<DOMAIN.COM>"
  annotations: 
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: "jh-lb-global-test"
    networking.gke.io/managed-certificates: "google-managed-ssl-certificate"

hub:
  baseUrl: /${class-section}
  config:
    Authenticator:
      admin_users:
        - admin1
        - admin2
      allowed_users:
        - user1
        - user2
    DummyAuthenticator:
      password: <PASSWORD>
    JupyterHub:
      authenticator_class: dummy
singleuser:
  startTimeout: 120
  profileList:
    - display_name: "Minimal environment"
      description: "To avoid too many bells and whistles: Python."
      default: true
    - display_name: "Datascience environment"
      description: "If you want the additional bells and whistles: Python, R, and Julia."
      kubespawner_override:
        image: jupyter/datascience-notebook:latest
    - display_name: "Spark environment"
      description: "The Jupyter Stacks spark image!"
      kubespawner_override:
        image: jupyter/all-spark-notebook:latest  
  memory:
    limit: 1G
    guarantee: 1G
  cpu:
    limit: .5
    guarantee: .5
  image:
    # You should replace the "latest" tag with a fixed version from:
    # https://hub.docker.com/r/jupyter/datascience-notebook/tags/
    # Inspect the Dockerfile at:
    # https://github.com/jupyter/docker-stacks/tree/master/datascience-notebook/Dockerfile
    name: jupyter/datascience-notebook
    pullPolicy: Always
    tag: latest
  defaultUrl: "/lab"

scheduling:
  userScheduler:
    enabled: false
  podPriority:
    enabled: false
  # userPlaceholder:
  #   enabled: true
  #   replicas: 2
  userPods:
    nodeAffinity:
      matchNodePurpose: require
  # corePods:
  #   nodeAffinity:
  #     matchNodePurpose: require

cull:
  enabled: true
  timeout: 3600
  every: 3600

# prePuller:
#   hook:
#     enabled: false
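
For completeness, the config above is applied with the standard Helm upgrade and checked afterwards with kubectl (a sketch; the release name jhub and the values file name config.yaml are assumptions, and chart version 0.11.1 is inferred from the k8s-network-tools:0.11.1 image in the event log):

helm upgrade --cleanup-on-fail --install jhub jupyterhub/jupyterhub \
  --namespace phys101 --version 0.11.1 --values config.yaml

# Verify the core pods, services, and ingress after the upgrade
kubectl get pods,svc,ingress -n phys101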