
I'm currently trying to configure a Logstash cluster on Kubernetes, and I would like each of the Logstash nodes to mount a volume containing the pipeline configs as read-only. That same volume would then be mounted read/write on a single management instance where I could edit the configs.

Is this possible with K8s and GCEPersistentDisk?
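Roughly what I'm picturing, with the disk and resource names as placeholders:

```yaml
# One pre-provisioned GCE disk holding the pipeline configs
# (names and size are placeholders).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: logstash-pipelines
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce   # management instance edits the configs
    - ReadOnlyMany    # Logstash nodes only read them
  gcePersistentDisk:
    pdName: logstash-pipelines-disk
    fsType: ext4
```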

  • It seems to me like this shouldn't run into any access-mode conflicts, as there is only one writer and many readers. – CodeCorrupt Nov 30 '18 at 23:14
  • I found [this article](https://medium.com/platformer-blog/nfs-persistent-volumes-with-kubernetes-a-case-study-ce1ed6e2c266) while struggling to find a viable solution. Although I feel it shouldn't be this messy, it works. – CodeCorrupt Dec 03 '18 at 17:55
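The NFS approach from that article boils down to putting the configs on an NFS-backed PersistentVolume, since NFS is one of the few volume plugins that supports ReadWriteMany. A rough sketch, with the server address and export path as placeholders:

```yaml
# NFS-backed volume: supports ReadWriteMany, so one writer and
# many readers can mount it simultaneously.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pipelines-nfs
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.2   # placeholder: NFS server address
    path: /pipelines   # placeholder: exported directory
```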

1 Answer


By Logstash I believe you mean an ELK cluster; Logstash itself is just a log forwarder, not an endpoint for storage.

Not really. It's not possible with a GCEPersistentDisk. This is more of a GCE limitation: a persistent disk can only be attached in read/write mode to one instance at a time.

Also, as you can see in the docs, GCEPersistentDisk supports the ReadWriteOnce and ReadOnlyMany access modes, but not both at the same time:

> Important! A volume can only be mounted using one access mode at a time, even if it supports many. For example, a GCEPersistentDisk can be mounted as ReadWriteOnce by a single node or ReadOnlyMany by many nodes, but not at the same time.
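So the read-only side works on its own: several pods can mount the same GCE disk read-only, as long as nothing has it attached read/write. A minimal sketch, assuming a pre-existing disk and a stock Logstash image (the disk name and image tag are placeholders):

```yaml
# Many pods like this can share the disk, but only while no pod
# or instance has it attached read/write.
apiVersion: v1
kind: Pod
metadata:
  name: logstash-reader
spec:
  containers:
    - name: logstash
      image: docker.elastic.co/logstash/logstash:6.5.0  # assumed tag
      volumeMounts:
        - name: pipelines
          mountPath: /usr/share/logstash/pipeline
          readOnly: true
  volumes:
    - name: pipelines
      gcePersistentDisk:
        pdName: logstash-pipelines-disk  # placeholder disk name
        readOnly: true
        fsType: ext4
```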

You could achieve this by using a single volume on a single K8s node and partitioning it among different Elasticsearch pods on that node, but this would defeat the purpose of having a distributed cluster.

Elasticsearch works fine if your Elasticsearch nodes run on different Kubernetes nodes and each of them has a separate volume.
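The usual pattern for that is a StatefulSet with volumeClaimTemplates, so that each pod gets its own ReadWriteOnce disk. A rough sketch, with the image tag and storage size as assumptions:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:6.5.0  # assumed tag
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:   # one ReadWriteOnce disk provisioned per pod
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```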

Rico
  • Just for clarity, I am talking about a volume that only holds the pipeline configs for Logstash. That way I can have a custom service manage the pipeline configs while all Logstash instances just read from that same volume and run with the `--config.reload.automatic` flag. For now I found a workaround using a `ConfigMap` and having that custom service push updates to the ConfigMap, though this has its own limitations given the [byte cap](https://github.com/kubernetes/kubernetes/issues/19781) on ConfigMaps – CodeCorrupt Dec 03 '18 at 17:46
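For reference, that ConfigMap workaround looks roughly like this; the pipeline contents, names, and image tag are placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-pipelines
data:
  main.conf: |
    input  { beats { port => 5044 } }
    output { elasticsearch { hosts => ["elasticsearch:9200"] } }
---
# The ConfigMap is mounted into each Logstash pod;
# --config.reload.automatic picks up updates without a restart.
apiVersion: v1
kind: Pod
metadata:
  name: logstash
spec:
  containers:
    - name: logstash
      image: docker.elastic.co/logstash/logstash:6.5.0  # assumed tag
      args: ["--config.reload.automatic"]
      volumeMounts:
        - name: pipelines
          mountPath: /usr/share/logstash/pipeline
  volumes:
    - name: pipelines
      configMap:
        name: logstash-pipelines
```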