
I'm trying to update the tls-cipher-suites for daemonset.apps/node-exporter in the openshift-monitoring namespace using oc edit daemonset.apps/node-exporter -n openshift-monitoring:

.
.
.
      - args:
        - --secure-listen-address=:9100
        - --upstream=http://127.0.0.1:9101/
        - --tls-cert-file=/etc/tls/private/tls.crt
        - --tls-private-key-file=/etc/tls/private/tls.key
        - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
        image: quay.io/coreos/kube-rbac-proxy:v0.3.1
        imagePullPolicy: IfNotPresent
        name: kube-rbac-proxy
        ports:
.
.
.

Once the tls-cipher-suites are updated, I see that the node-exporter pods get re-deployed. But when I check daemonset.apps/node-exporter again using oc get -o yaml daemonset.apps/node-exporter -n openshift-monitoring, I see that my change to tls-cipher-suites is lost and the value has been reset to the old one. How can I set this value permanently?

Note: The reason for updating tls-cipher-suites is that a Nessus scan has reported the SWEET32 vulnerability on port 9100 for the medium-strength ciphers ECDHE-RSA-DES-CBC3-SHA and DES-CBC3-SHA.
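
A quick way to re-check whether those weak ciphers are still offered on port 9100 (the hostname is a placeholder; this assumes nmap with the ssl-enum-ciphers script, or openssl, is available on a machine that can reach the node):

    # List the ciphers the kube-rbac-proxy on :9100 is willing to negotiate
    nmap --script ssl-enum-ciphers -p 9100 <node-hostname>

    # Or probe one weak cipher directly; a successful handshake means it is still enabled
    openssl s_client -connect <node-hostname>:9100 -cipher 'DES-CBC3-SHA' < /dev/null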

Rakesh Kotian
  • Did you try deleting the daemonset, correcting the yaml and deploying it again? Without live editing? – Matt Oct 01 '20 at 11:22
  • I have tried that now, but observed that the daemonset gets re-created automatically after deletion. – Rakesh Kotian Oct 01 '20 at 11:24
  • Did you mean the pod? The pod gets recreated? Because otherwise it doesn't make sense. A daemonset should not get recreated when deleted. Are you using some kind of custom controller? – Matt Oct 01 '20 at 11:27
  • The daemonset is getting re-created, which is really weird behavior. I'm not using any custom controller. I installed OKD 3.11 using https://github.com/openshift/openshift-ansible/tree/release-3.11. Not sure if this sets up some functionality that causes this behavior. – Rakesh Kotian Oct 01 '20 at 12:10
  • How did you install the node-exporter? – Matt Oct 01 '20 at 12:12
  • node-exporter gets installed with the OKD 3.11 installation using the above link. – Rakesh Kotian Oct 01 '20 at 12:13

1 Answer


OpenShift 3.11 does indeed seem to use openshift_cluster_monitoring_operator. This is why, when you delete or change anything it manages, the operator reverts it to its defaults.
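
You can see this in action by re-checking the flag shortly after an oc edit; the operator's reconcile loop puts the default value back. A minimal check, assuming the same namespace and object name as in the question:

    # Re-check the flag a moment after editing; it comes back with the default value
    oc get daemonset node-exporter -n openshift-monitoring -o yaml | grep tls-cipher-suites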

It manages the node-exporter installation and does not seem to allow customizing it. Take a look at the cluster-monitoring-operator docs.

My recommendation would be to uninstall the OpenShift monitoring operator and install node-exporter yourself, either from the official node-exporter repository or with a Helm chart, where you actually have full control over the deployment.
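
If you go that route, below is a rough sketch of what a self-managed daemonset could look like, reusing the kube-rbac-proxy sidecar and flags from your question with your restricted cipher list. The namespace, service account, node-exporter image tag and secret name are placeholders (not the operator's exact manifest), and the RBAC objects and TLS secret that kube-rbac-proxy needs have to be created separately.

# Illustrative sketch only: a self-managed node-exporter daemonset with a
# kube-rbac-proxy sidecar terminating TLS on :9100 with a restricted cipher list.
# Assumes a "monitoring" namespace, a "node-exporter" service account with the
# RBAC kube-rbac-proxy needs, and a TLS secret named "node-exporter-tls".
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      serviceAccountName: node-exporter
      hostNetwork: true
      hostPID: true
      containers:
      - name: node-exporter
        image: quay.io/prometheus/node-exporter:v1.0.1
        args:
        # Listen only on localhost; kube-rbac-proxy fronts it on :9100
        - --web.listen-address=127.0.0.1:9101
        - --path.procfs=/host/proc
        - --path.sysfs=/host/sys
        volumeMounts:
        - name: proc
          mountPath: /host/proc
          readOnly: true
        - name: sys
          mountPath: /host/sys
          readOnly: true
      - name: kube-rbac-proxy
        image: quay.io/coreos/kube-rbac-proxy:v0.3.1
        args:
        - --secure-listen-address=:9100
        - --upstream=http://127.0.0.1:9101/
        - --tls-cert-file=/etc/tls/private/tls.crt
        - --tls-private-key-file=/etc/tls/private/tls.key
        # The restricted cipher list from the question, with no 3DES suites
        - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
        ports:
        - name: https
          containerPort: 9100
          hostPort: 9100
        volumeMounts:
        - name: tls
          mountPath: /etc/tls/private
          readOnly: true
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: sys
        hostPath:
          path: /sys
      - name: tls
        secret:
          secretName: node-exporter-tls

The important part is that nothing reconciles an object you own, so whatever cipher list you apply here stays put.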

Matt