2025-02-20T14:52:34Z [Warning] MountVolume.SetUp failed for volume "files" : secret "singleuser" not found

I was trying to pass the secret from JupyterHub into the singleuser image. I am not sure where I am going wrong: I keep getting an error that the singleuser secret is missing in the new namespace where the pod (the Jupyter server start process) is trying to start.

hub:
  baseUrl: /notebooks
  existingSecret: hub-secret-overrides
  config:
    JupyterHub:
      authenticator_class: generic-oauth
      hub_connect_url: http://hub.jupyter:8081
    Authenticator:
      auto_login: true
      enable_auth_state: true
      allow_all: true
    GenericOAuthenticator:
      client_id: eodh-workspaces
      username_claim: preferred_username
      scope:
        - openid
    KubeSpawner:
      enable_user_namespaces: true
      user_namespace_template: ws-{username}
      volumes:
        - name: files
          secret:
            secretName: singleuser
            optional: false
      volume_mounts:
        - name: files
          mountPath: /mnt/secrets

  extraConfig:
    load_workspace.py: |
      print("loading workspace...")

  nodeSelector:
    role: general

singleuser:
  image:
    name: "public.ecr.aws/eodh/jupyter/notebooks"
    tag: "0.1.0"

  extraFiles:
    jupyter_server_config.py:
      mountPath: /etc/jupyter/jupyter_server_config.py
      stringData: |
        from s3contents import S3ContentsManager
        from hybridcontents import HybridContentsManager
        from jupyter_server.services.contents.filemanager import FileContentsManager
        from traitlets.config import get_config
        import os
        import boto3
        import logging
        import sys
        import stat

        c = get_config()

        EFS_HOME = "/home/jovyan"
        S3_STORAGE_PATH = "/home/jovyan/s3"

        # ✅ Set up HybridContentsManager for Local (EFS) + S3
        c.ServerApp.contents_manager_class = HybridContentsManager

        c.HybridContentsManager.manager_classes = {
            "": FileContentsManager,  # EFS as default storage
            "s3": S3ContentsManager 
        }

        c.HybridContentsManager.manager_kwargs = {
            "": {
                "root_dir": "/home/jovyan"  # Default to EFS
            },
            "s3": {
                "bucket": "workspaces-jupyter-test-bucket",
                "prefix": "workspaces",
                "region_name": "eu-west-2",
                "endpoint_url": "https://s3.eu-west-2.amazonaws.com",
                "sse": "AES256",
                "signature_version": "s3v4"
            }
        }

        # ✅ Set Jupyter's working directory to HybridContentsManager
        c.ServerApp.root_dir = "/home/jovyan"

        # ✅ Set up Logging
        logging.basicConfig(
            level=logging.INFO,
            format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
            stream=sys.stdout
        )
        logger = logging.getLogger('hybridcontents')
        logger.info("HybridContentsManager initialized: EFS + S3.")

        # ✅ If AWS credentials are available, pass them to the S3ContentsManager
        session = boto3.Session()
        credentials = session.get_credentials()
        if credentials:
            c.S3ContentsManager.access_key_id = credentials.access_key
            c.S3ContentsManager.secret_access_key = credentials.secret_key
            c.S3ContentsManager.session_token = credentials.token

  storage:
    type: static
    static:
      pvcName: pvc-{username}
      subPath: ""
    homeMountPath: /home/jovyan
    extraVolumes: [] # ✅ Removed duplicate volume here

  serviceAccountName: default
  profileList:
    - display_name: "Data Science (Python)"
      description: "Standard Python DS environment."
      default: true
    - display_name: "Jupyter/Base-notebook"
      description: "Base Notebook environment."
      kubespawner_override:
        image: "quay.io/jupyter/base-notebook:lab-4.3.5"

  nodeSelector:
    role: processing

proxy:
  service:
    type: ClusterIP
  chp:
    nodeSelector:
      role: general

scheduling:
  userScheduler:
    nodeSelector:
      role: general

prePuller:
  hook:
    nodeSelector:
      role: processing

debug:
  enabled: true

Are you sure the singleuser secret exists in the namespace? How did you create it? What’s the full error message?

volumes:
  - name: files
    secret:
      secretName: singleuser
      optional: false

The above code snippet did not work and kept throwing an error:
" 2025-02-20T14:52:34Z [Warning] MountVolume.SetUp failed for volume “files” : secret “singleuser” not found"

When I try this:

(⎈|eodhp-dev:jupyter)]$ k get secret -n jupyter
NAME                   TYPE     DATA   AGE
hub                    Opaque   4      5d23h
hub-secret-overrides   Opaque   1      5d23h
singleuser             Opaque   1      5d23h

I can see that the singleuser secret is definitely present in the jupyter namespace. However, because I have enabled KubeSpawner to create per-user namespaces, the new container runs inside one of those namespaces and apparently cannot pull the singleuser secret that was generated in the jupyter namespace. I am unsure of the right way to give the single-user server running in a new namespace access to the singleuser secret, and I don't understand why a copy of the secret is not being created in the new namespace even though it is clearly present in the jupyter namespace.
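
For example, running the same check against one of the spawned namespaces (ws-alice here is a placeholder for an actual user namespace) shows the secret is absent there, which matches the mount error:

(⎈|eodhp-dev:jupyter)]$ k get secret singleuser -n ws-alice
Error from server (NotFound): secrets "singleuser" not found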

I have another secrets YAML which overrides the hub secret like this, as I am using Keycloak:

apiVersion: v1
kind: Secret
metadata:
  name: hub-secret-overrides
  annotations:
    argocd.argoproj.io/sync-options: ServerSideApply=true
data: {}
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: hub-secret-overrides
spec:
  refreshInterval: 1m
  secretStoreRef:
    name: secret-store
    kind: ClusterSecretStore
  target:
    name: hub-secret-overrides
    creationPolicy: Merge
    template:
      data:
        values.yaml: |
          hub:
            config:
              GenericOAuthenticator:
                client_secret: {{.client_secret}}
                authorize_url: https://${[.vars.platform.domain]}/keycloak/realms/eodhp/protocol/openid-connect/auth
                token_url: https://${[.vars.platform.domain]}/keycloak/realms/eodhp/protocol/openid-connect/token
                oauth_callback_url: https://${[.vars.workspaces.domain]}/notebooks/hub/oauth_callback
                userdata_url: https://${[.vars.platform.domain]}/keycloak/realms/eodhp/protocol/openid-connect/userinfo
  data:
    - secretKey: client_secret
      remoteRef:
        key: ${[.vars.aws.secretStores.eodhp.name]}
        property: eodh-workspaces.oidc.clientSecret

And here is the kustomization.yaml that deploys everything:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: jupyter

resources:
  - ingress.yaml
  - namespace.yaml
  - roles.yaml
  - secrets.yaml
  - jupyter-user-sa.yaml
  - jupyter-user-workspace-role.yaml

helmCharts:
  - name: jupyterhub
    repo: https://hub.jupyter.org/helm-chart/
    releaseName: jupyterhub
    namespace: jupyter
    version: 4.1.0
    valuesFile: values.yaml

Let me know @manics if you need any more details and I can provide them.
Thank you

Kubernetes Secrets can only be accessed from pods in the same namespace. How are you creating the secret?
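
As a quick sanity check you could copy the secret across manually. This is a one-off sketch (it assumes jq is installed, and ws-alice stands in for an actual user namespace):

kubectl get secret singleuser -n jupyter -o json \
  | jq 'del(.metadata.namespace, .metadata.uid, .metadata.resourceVersion, .metadata.creationTimestamp, .metadata.managedFields)' \
  | kubectl apply -n ws-alice -f -

If the pod starts after that, it confirms the only problem is which namespace the secret lives in.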

Thank you for your response. @manics

I believe that the singleuser secret is generated by JupyterHub itself, as I am not creating it manually. The key point is that I have multiple namespaces. The primary namespace is jupyter, and various users use the JupyterHub there to spawn their notebooks. When they start a notebook, KubeSpawner creates a new namespace for that user's workspace.

The error I am encountering states that the secret named singleuser is not mounting properly. I can see that this secret is created by JupyterHub in the jupyter namespace, but I cannot work out why the mount fails. My main concern is that my notebook should spawn correctly, yet it keeps failing with the error that the singleuser secret is not present. I checked the new namespace and confirmed that the singleuser secret is indeed missing there.

What are your thoughts on this issue? I believe the secret is created by the JupyterHub Helm chart. Do you have any suggestions on how to fix it? I am experiencing this problem in every new namespace that JupyterHub creates for a user.

I suspect the Jupyter configuration file:

jupyter_server_config.py:
  mountPath: /etc/jupyter/jupyter_server_config.py

which is placed within the singleuser block to enable S3 bucket access and the hybrid contents manager, is not being applied correctly. My guess is that when a new namespace is created and Jupyter runs as its own pod there, it looks for the secret it needs to function but cannot find it.

If you refer to my code earlier in this thread you will find the values.yaml and secrets.yaml (the latter only overrides the auth secret for the user).

Sorry, you’re correct! The singleuser secret is created statically by the Helm chart, so it will only exist in a single namespace.

You could try a third-party extension to automatically mirror objects into other namespaces, for example a secret-replication controller (see the sketch below).

I haven’t used that so do your own due diligence.
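
To illustrate the pattern only (not an endorsement): with a push-based replicator such as mittwald/kubernetes-replicator, you annotate the source secret and the controller copies it into matching namespaces. Verify the annotation key and value syntax against the version you install, and note that since the Helm chart creates the singleuser secret you would need to add the annotation via a kustomize patch or similar rather than editing it by hand:

apiVersion: v1
kind: Secret
metadata:
  name: singleuser
  namespace: jupyter
  annotations:
    # Ask the replicator to copy this secret into every namespace
    # matching the user-namespace pattern (a regex, per that project's docs).
    replicator.v1.mittwald.de/replicate-to: "ws-.*"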

Another option is to fetch the credentials dynamically, e.g. from a custom webservice. Since jupyter_server_config.py is a Python file you can make web requests (or run an arbitrary command) to fetch configuration from somewhere. A potential advantage of this is each user can have their own credentials, instead of sharing a single credential for everyone.
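
A minimal sketch of that idea inside jupyter_server_config.py (the credentials endpoint and the response fields are hypothetical placeholders; JUPYTERHUB_API_TOKEN is a real variable set by the spawner, though how your service authenticates users is entirely up to you):

import json
import os
import urllib.request

# Hypothetical internal service returning per-user S3 credentials.
creds_url = os.environ.get(
    "S3_CREDENTIALS_URL", "http://creds.jupyter.svc:8000/s3-credentials"
)
request = urllib.request.Request(
    creds_url,
    headers={"Authorization": f"token {os.environ.get('JUPYTERHUB_API_TOKEN', '')}"},
)
with urllib.request.urlopen(request, timeout=10) as response:
    creds = json.load(response)

# Hand the fetched credentials to the contents manager.
c.S3ContentsManager.access_key_id = creds["access_key_id"]
c.S3ContentsManager.secret_access_key = creds["secret_access_key"]
c.S3ContentsManager.session_token = creds.get("session_token")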

Another option (probably easiest!) is to build the configuration file into your singleuser image, and pass the credentials as environment variables.
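
For example, the baked-in config file could read environment variables (the variable names here are placeholders; you would inject the values with c.KubeSpawner.environment or singleuser.extraEnv, both of which are rendered by the hub, so they don't depend on any secret existing in the user namespace):

import os

from traitlets.config import get_config

c = get_config()

# Credentials injected by the spawner instead of a mounted secret.
c.S3ContentsManager.access_key_id = os.environ["S3_ACCESS_KEY_ID"]
c.S3ContentsManager.secret_access_key = os.environ["S3_SECRET_ACCESS_KEY"]
# Optional session token, for temporary/STS credentials.
c.S3ContentsManager.session_token = os.environ.get("S3_SESSION_TOKEN")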

Let me give it a try and I will let you know. Thanks!