
So I have configured an OpenShift 3.9 build configuration such that environment variables are populated from an OpenShift secret at build time. I am using these environment variables to set up passwords for PostgreSQL roles in the image's ENTRYPOINT script.
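For reference, the relevant fragment of such a BuildConfig might look as follows (the secret name db-passwords, its key app-password, and the variable name are hypothetical, just for illustration):

apiVersion: v1
kind: BuildConfig
metadata:
  name: my-db-build
spec:
  strategy:
    type: Docker
    dockerStrategy:
      env:
        # Populated from the secret at build time; as described below, the
        # value also ends up in the output image's environment.
        - name: POSTGRES_APP_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-passwords
              key: app-password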

Apparently these environment variables are baked into the image: not just into the build image, but also into the resulting database image. (I can see their values when issuing set inside the running container.) On one hand this seems necessary, because the ENTRYPOINT script needs access to them, and it executes only at image run time (not build time). On the other hand, this is somewhat disconcerting, because as far as I know anyone who obtained the image could then extract those passwords. Unsetting the environment variables after use would not change that.

So is there a better way (or even a best practice) for handling such situations more securely?

UPDATE: At this stage I see two possible ways forward (better choice first):

  1. Configure the DeploymentConfig so that it mounts the secret as a volume (rather than having the BuildConfig populate environment variables from it); a sketch follows this list.

  2. Store PostgreSQL password hashes (rather than verbatim passwords) in the secret.
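
The relevant fragment of a DeploymentConfig for option 1 might look as follows (secret name and mount path are hypothetical; the ENTRYPOINT script would then read the passwords from files under the mount path rather than from environment variables):

apiVersion: v1
kind: DeploymentConfig
metadata:
  name: my-db
spec:
  template:
    spec:
      containers:
        - name: postgresql
          image: my-db:latest
          volumeMounts:
            # Each key of the secret appears as a file under this path.
            - name: db-passwords
              mountPath: /run/secrets/db-passwords
              readOnly: true
      volumes:
        - name: db-passwords
          secret:
            secretName: db-passwords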

rookie099
  • Can't you define them only in the deployment config instead, if you only need them at run time? – Graham Dumpleton Oct 03 '18 at 21:15
  • @GrahamDumpleton I'm doing this at run time mainly because the database resides on a mounted/persistent volume, so PostgreSQL's `initdb` also happens at run time. Maybe I'm missing something. – rookie099 Oct 04 '18 at 07:12
  • Possibly. You can consume the contents of the secret as environment variables in the `DeploymentConfig` instead of the `BuildConfig`. You don't need to switch to mounting it as a file using a volume. – Graham Dumpleton Oct 04 '18 at 08:03
  • @GrahamDumpleton "... in `DeploymentConfig`": excellent, I'll try that next. – rookie099 Oct 04 '18 at 08:18
  • @GrahamDumpleton So this seemed to work out fine. Again, thank you very much! I'm curious about one more point though: I tried to sanity-check by `oc exec ... bash`-ing into the build container to confirm with `set` that the environment variables did not (yet) exist. OpenShift guards against this and reports: "exec operation is not allowed because the pod's security context exceeds your permissions". Is there any other sanity check that I could perform as a non-admin on this OpenShift cluster? – rookie099 Oct 04 '18 at 08:47
  • It isn't possible to get into a build container. You could probably pull down the built image from the internal registry and then locally look at the history of the image and see that there are no env settings for them in any layer. – Graham Dumpleton Oct 04 '18 at 08:50
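
For instance, such a check might look as follows (registry host and image name are hypothetical; this assumes a local docker client and that the cluster's internal registry is exposed):

# Log in to the exposed internal registry with the OpenShift token.
docker login -u "$(oc whoami)" -p "$(oc whoami -t)" docker-registry.example.com
docker pull docker-registry.example.com/myproject/my-db:latest
# Show the instructions recorded in each layer; the secret-derived
# variables should not appear in any ENV entry.
docker history --no-trunc docker-registry.example.com/myproject/my-db:latest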

1 Answer


As was suggested in a comment, what made sense was to shift provisioning of the environment variables from the secret out of the BuildConfig and into the DeploymentConfig. For reference:

oc explain bc.spec.strategy.dockerStrategy.env.valueFrom.secretKeyRef
oc explain dc.spec.template.spec.containers.env.valueFrom.secretKeyRef
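
The relevant fragment of the resulting DeploymentConfig might look as follows (secret name, key, and variable name are hypothetical):

apiVersion: v1
kind: DeploymentConfig
metadata:
  name: my-db
spec:
  template:
    spec:
      containers:
        - name: postgresql
          image: my-db:latest
          env:
            # Injected only at run time, so the value is not recorded in
            # any image layer.
            - name: POSTGRES_APP_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-passwords
                  key: app-password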
rookie099
  • @GrahamDumpleton Unless I am mistaken, this approach has the significant downside that `oc describe pods/my-deployed-pod` lists verbatim password values under `Environment:`. Should security rest in this case on only authorized users being able to run this command, or would it not be better to access these secrets via a mounted volume? – rookie099 Oct 18 '18 at 07:59