97

I am evaluating Kubernetes as a platform for our new application. For now, it all looks very exciting! However, I’m running into a problem: I’m hosting my cluster on GCE and I need some mechanism to share storage between two pods - the continuous integration server and my application server. What’s the best way of doing this with Kubernetes? None of the volume types seem to fit my needs, since GCE disks can’t be shared if one pod needs to write to the disk. NFS would be perfect, but seems to require special build options for the Kubernetes cluster?

EDIT: Sharing storage seems to be a problem that I have encountered multiple times now using Kubernetes. There are multiple use cases where I'd just like to have one volume and hook it up to multiple pods (with write access). I can only assume that this would be a common use case, no?

EDIT2: For example, this page describes how to set up an Elasticsearch cluster, but wiring it up with persistent storage is impossible (as described here), which kind of renders it pointless :(

Marco Lamina
  • What would you be writing to disk? Logs? – Christian Grabowski Jul 29 '15 at 18:58
  • This specifically asks about GCE, but coming from Google I expected a general answer. Here's what eventually answered the question in the title: https://stackoverflow.com/questions/37649541/kubernetes-persistent-volume-accessmode That is, use accessMode: ReadWriteMany – Vanuan Sep 28 '18 at 23:05

10 Answers

88

Firstly, do you really need multiple readers / writers?

From my experience of Kubernetes / micro-service architecture (MSA), the issue is often more related to your design pattern. One of the fundamental design patterns with MSA is the proper encapsulation of services, and this includes the data owned by each service.

In much the same way as OOP, your service should look after the data that is related to its area of concern and should allow access to this data to other services via an interface. This interface could be an API, messages handled directly or via a broker service, or using protocol buffers and gRPC. Generally, multi-service access to data is an anti-pattern akin to global variables in OOP and most programming languages.

As an example, if you were looking to write logs, you should have a log service which each service can call with the relevant data it needs to log. Writing directly to a shared disk means that you'd need to update every container if you changed your log directory structure, or decided to add extra functionality like sending emails on certain types of errors.

In the majority of cases, you should be using some form of minimal interface before resorting to a file system, avoiding the unintended side effects of Hyrum's law that you are exposed to when using a file system. Without proper interfaces / contracts between your services, you heavily reduce your ability to build maintainable and resilient services.

OK, so your situation is best solved using a file system. There are a number of options...

There are obviously times when a file system that can handle multiple concurrent writers provides a superior solution over more 'traditional' forms of MSA communication. Kubernetes supports a large number of volume types, which can be found here. While this list is quite long, many of these volume types don't support multiple writers (also known as ReadWriteMany in Kubernetes).

Those volume types that do support ReadWriteMany can be found in this table; at the time of writing these are AzureFile, CephFS, Glusterfs, Quobyte, NFS and PortworxVolume.

There are also operators such as the popular rook.io which are powerful and provide some great features, but the learning curve for such systems can be a difficult climb when you just want a simple solution and keep moving forward.

The simplest approach.

In my experience, the best initial option is NFS. This is a great way to learn the basic ideas around ReadWriteMany Kubernetes storage, will serve most use cases and is the easiest to implement. After you've built a working knowledge of multi-service persistence, you can then make more informed decisions to use more feature rich offerings which will often require more work to implement.

The specifics of setting up NFS differ based on how and where your cluster is running and on the specifics of your NFS service. I've previously written two articles on how to set up NFS for on-prem clusters and how to use AWS's NFS equivalent, EFS, on EKS clusters. These two articles give a good contrast for just how different implementations can be given your particular situation.

For a bare-minimum example, you will first need an NFS service. If you're looking to do a quick test or have low SLO requirements, following this DO article is a great quick primer for setting up NFS on Ubuntu. If you have an existing NAS which provides NFS and is accessible from your cluster, this will work as well.

Once you have an NFS service, you can create a persistent volume similar to the following:

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-name
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  nfs:
    server: 255.0.255.0 # IP address of your NFS service
    path: "/desired/path/in/nfs"

A caveat here is that your nodes will need binaries installed to use NFS, and I've discussed this more in my on-prem cluster article. This is also the reason you need to use EFS when running on EKS as your nodes don't have the ability to connect to NFS.

Once you have the persistent volume set up, it is a simple case of using it like you would any other volume.

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-name
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: p-name
          volumeMounts:
            - mountPath: /data
              name: v-name
      volumes:
        - name: v-name
          persistentVolumeClaim:
            claimName: pvc-name
Ian Belcher
  • This is not a direct answer to the question, but I would consider it the best answer! Not sure about the SO policies; is it OK to accept this as the correct answer? – Marco Lamina Apr 11 '16 at 05:49
  • IMHO, I believe it is the only solution to the problem. When I first started with Kubernetes, the shared storage issue was something I struggled with for a while until I realised that it was the incorrect way of doing things. I'm now working on a system with 30+ services and it scares me to think how hard it would have been if each service had the ability to reach inside other services' data. Hope this helps! – Ian Belcher Apr 11 '16 at 06:07
  • It does! Thanks for your great answer, better late than never :) – Marco Lamina Apr 11 '16 at 06:54
  • The problem is you try to run **other people's** applications which have not held this mantra from the beginning. (I'm looking at you *Magento*) – quickshiftin May 03 '18 at 19:27
  • I can see the validity of this argument, but I wonder how the Kubernetes gods would like us to run Rails, Laravel, Drupal, Wordpress and a bunch of others? They all like to save files in some form or other, which have to be shared over all "servers" (pods in this case). As for GKE, it's a shame you can't use an external NFS disk, such as from Filestore :-( – Ralph Bolton Jan 28 '21 at 09:34
  • I have the same trouble with airflow, that shares DAG's and plugins using filesystem. – minus one Jun 02 '21 at 08:44
  • "Generally, multi-service access to data is an anti-pattern akin to global variables in OOP and most programming languages.", for better or for worse, the Tekton framework was built on this and uses this as its primary communication mechanism, as far as I can tell. Personally I'm inclined to agree that an API or even vintage UNIX message passing might be better; filesystems do take a lot of work and care to use as IPC (they are one of the deceptively harder options, one of those things that sounds easy on paper but with a million edge cases) but I'm not aware of an "easy" IPC method. – jrh Jan 26 '22 at 19:46
  • It's not very helpful to say "you should re-architect your software to follow MSA best practices" when somebody runs into a stumbling block which is actually straightforward to get past. These best practices are generally a lot more complex (and thus error-prone) than just using a filesystem. – Leopd Mar 06 '22 at 18:14
  • On the other hand, I think it is quite helpful giving others some insight into the potential failure paths that exist when giving advice that goes against best practices? This answer gives a pretty good explanation for how to get past this common stumbling block, but also gives some background as to why it might not be the best option. It's not very helpful to say "just use the filesystem because it's easier" without giving some indication as to when and why that approach is problematic? – Ian Belcher Mar 07 '22 at 19:27
64

First of all, Kubernetes doesn't have integrated functionality to share storage between hosts. There are several options below. But first, here's how to share storage if you already have some volumes set up.

To share a volume between multiple pods you'd need to create a PVC with access mode ReadWriteMany

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: myvolume
  resources:
    requests:
      storage: 1Gi

After that you can mount it to multiple pods:

apiVersion: v1
kind: Pod
metadata:
  name: myapp1
spec:
  containers:
...
      volumeMounts:
        - mountPath: /data
          name: data
          subPath: app1
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: 'my-pvc'
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp2
spec:
  containers:
...
      volumeMounts:
        - mountPath: /data
          name: data
          subPath: app2
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: 'my-pvc'

Of course, persistent volume must be accessible via network. Otherwise you'd need to make sure that all the pods are scheduled to the node with that volume.

There are several volume types that are suitable for that and not tied to any cloud provider:

  • NFS
  • RBD (Ceph Block Device)
  • CephFS
  • Glusterfs
  • Portworx Volumes

Of course, to use a volume you need to have it first. That is, if you want to consume NFS you need to set up an NFS server (and the NFS client binaries on all nodes in the K8s cluster). If you want to consume Ceph, you need to set up a Ceph cluster, and so on.

The only volume type on that list that works with Kubernetes out of the box is Portworx. There are instructions on how to set it up in GKE.

To set up a Ceph cluster in K8s, there's a project in development called Rook.

But this is all overkill if you just want a folder from one node to be available on another node. In this case, just set up an NFS server. It wouldn't be harder than provisioning other volume types, and it will consume far fewer CPU/memory/disk resources.

Vanuan
  • This will work only if the pods are created on the same node, but not if the pods are created on different nodes, as GKE doesn't support that – Tushar Seth Oct 13 '19 at 12:18
  • @TusharSeth this answer doesn't cover how to set up a persistent volume, only how to use it. Kubernetes doesn't have the functionality needed to share volumes, only to consume pre-shared volumes. – Vanuan Oct 15 '19 at 10:52
  • Mh, the loaded yaml will become a list, but podTemplate does not accept a list? – Nikolai Ehrhardt Mar 28 '22 at 17:41
39

NFS is a built-in volume plugin and supports multiple pod writers. There are no special build options to get NFS working in Kube.

I work at Red Hat on Kubernetes, focused mainly on storage.

Mark Turansky
  • AzureFile, CephFS, Glusterfs, Quobyte, (VsphereVolume,) and PortworxVolume also support multiple writers, see the table under "Access modes" at https://kubernetes.io/docs/concepts/storage/persistent-volumes – Yngvar Kristiansen Nov 23 '17 at 12:19
7

Update: The best choice is probably Cloud Filestore, a managed NFS system. This gives you full random read/write access to files, unlike GCS which only supports upload/download. See docs here.
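Since Filestore speaks NFS, it can be consumed through the built-in nfs volume type. A minimal sketch of such a PersistentVolume (the server IP and share name below are hypothetical placeholders, not values from this answer - take them from your own Filestore instance):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: filestore-pv
spec:
  capacity:
    storage: 1Ti          # size this to your Filestore instance
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.2      # hypothetical: the IP of your Filestore instance
    path: /vol1           # hypothetical: the instance's file share name
```

Pods then bind to it through a ReadWriteMany PVC, as shown in the other answers.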

Original: Have you tried Google Cloud Storage? You might even be able to use the FUSE adapter to map it like a network disk.

Sandeep Dinesh
  • I've just recently started to use GCE, so I'm not quite familiar with Google Cloud Storage. How would I connect this to Kubernetes (as a volume)? As I understand it, my only options are GCE disks (which don't support parallel write access) and GlusterFS (which seems too complicated). I assumed that shared storage would be a common use case for Kubernetes clusters? – Marco Lamina Jul 30 '15 at 09:21
5

If it is logs that you are looking to write to disk, I suggest you look at logspout https://github.com/gliderlabs/logspout. This will collect each pod's logs, and then you can use Google Cloud Platform's fairly new logging service that uses fluentd. That way, all logs from each pod are collected into a single place.

If it is data that would normally be written to a database or something of that nature, I recommend having a separate server outside of the Kubernetes cluster that runs the database.

EDIT

For sharing files amongst pods, I recommend mounting a Google Cloud Storage drive to each node in your Kubernetes cluster, then setting that up as a volume in each pod that mounts to that mounted directory on the node, and not directly to the drive. Having it mounted on each node is good because pods do not run on designated nodes, so it's best to centralize it in that case.
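A hedged sketch of the pod side of that setup, assuming the bucket has already been FUSE-mounted at /mnt/gcs-bucket on every node (the path, names, and image are illustrative assumptions, not from this answer):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: myapp
      image: my-image           # hypothetical application image
      volumeMounts:
        - mountPath: /shared    # where the app sees the shared files
          name: shared-bucket
  volumes:
    - name: shared-bucket
      hostPath:
        path: /mnt/gcs-bucket   # hypothetical: directory where the bucket is mounted on each node
```

The hostPath indirection is what lets every pod see the same bucket regardless of which node it lands on.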

Christian Grabowski
  • Thanks for the hint! logspout looks very interesting :) My intention was not to write logs, but to share a persistent maven repository between my pods! – Marco Lamina Jul 30 '15 at 09:08
  • How would I do that? I can mount a GCE PersistentDisk to multiple nodes, but only in read mode (I need read/write). The Kubernetes volumes docs don't say anything about Google Cloud Storage drives: https://github.com/GoogleCloudPlatform/kubernetes/blob/release-1.0/docs/user-guide/volumes.md – Marco Lamina Jul 31 '15 at 06:34
  • You would do just that, the persistent disk to each node, then the kubernetes volumes mount to that on the node, not directly to the drive. – Christian Grabowski Jul 31 '15 at 13:47
  • I see what you mean, but isn't that the same problem, just on a different level of abstraction? The GCE docs say "It is not possible to attach the persistent disk to multiple instances in read-write mode.", so pods would not be allowed to write to that disk! – Marco Lamina Aug 01 '15 at 11:16
  • Not GCE, the Google cloud storage, similar to S3, this allows multiple mounts to the same bucket https://cloud.google.com/storage/ – Christian Grabowski Aug 03 '15 at 02:18
  • Current documentation on this solution: https://cloud.google.com/storage/docs/gcs-fuse#using_feat_name – Mike S. Oct 25 '17 at 14:58
3

Have you looked at Kubernetes Volumes? You are probably looking at creating a gcePersistentDisk:

A gcePersistentDisk volume mounts a Google Compute Engine (GCE) Persistent Disk into your pod. Unlike emptyDir, which is erased when a Pod is removed, the contents of a PD are preserved and the volume is merely unmounted. This means that a PD can be pre-populated with data, and that data can be “handed off” between pods.

Important: You must create a PD using gcloud or the GCE API or UI before you can use it.

There are some restrictions when using a gcePersistentDisk:

  • the nodes on which pods are running must be GCE VMs
  • those VMs need to be in the same GCE project and zone as the PD

A feature of PD is that they can be mounted as read-only by multiple consumers simultaneously. This means that you can pre-populate a PD with your dataset and then serve it in parallel from as many pods as you need. Unfortunately, PDs can only be mounted by a single consumer in read-write mode - no simultaneous writers allowed. Using a PD on a pod controlled by a ReplicationController will fail unless the PD is read-only or the replica count is 0 or 1.

To support multiple writes from various pods, you will probably need to create one beefy pod which exposes a Thrift or socket-type service with readFromDisk and writeToDisk methods.

hardkoded
varun
  • That beefy pod with read/write methods is basically Google Storage or Amazon S3 ;-) Right idea but probably best to use FUSE and one of those solutions instead of rolling your own. – Mike S. Oct 25 '17 at 14:56
  • @MikeS. Google Storage/S3 are immutable buckets, not good for, say, exposing LevelDB/SQLite etc. as a microservice. This is also a rare use case; one is better off using Kubernetes volumes and an RDBMS, or an external document store like Cassandra etc., which will still be installed in your Kubernetes cluster. S3/FUSE are API accessible and are good for blob storage. – varun Oct 25 '17 at 15:05
  • I've had several legacy apps use S3 for image/document storage and add a CNAME alias to bucket for serving (like CDN) and use fuse on servers for app read/write. I wouldn't use as backend for a database, but for docs and images it's perfectly suitable. – Mike S. Oct 25 '17 at 20:19
3

Google recently released cloud filestore, with a tutorial here: https://cloud.google.com/filestore/docs/accessing-fileshares

Might be a good alternative to cloud storage/buckets for some scenarios.

Geige V
2

Helm: if you use Helm to deploy

If you have a PVC that only supports RWO and you want many pods to be able to read from the same PVC and share that storage, then you can install the helm chart stable/nfs-server-provisioner if your cloud provider does not support RWX access mode.

This chart provisions "out-of-tree" storage PVCs with RWX access mode, backed by an underlying PVC from a cloud provider that only supports RWO, like DigitalOcean.

In your pods, you mount the PVC provisioned by the NFS server, and you can scale them while they read and write from the same PVC.

Important!

You have to modify the values file to add configuration suited to your deployment like your storage class.

For more information on the chart: https://github.com/helm/charts/tree/master/stable/nfs-server-provisioner
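As a sketch of what consuming the chart looks like, a claim against the storage class it creates might be written like this (the class name "nfs" is the chart's default, but confirm it against your own values file):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  storageClassName: nfs     # chart default; may differ in your values file
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```

Any number of pods can then mount shared-data read-write, even though the backing cloud volume is RWO.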

Margach Chris
2

I just achieved this with an application made of 3 containerized micro-services. One of them is responsible for storing and sharing files: the application stores files in, and retrieves them from, a folder, and this folder is passed in via an application property. There is a secured REST entry point that allows submission and retrieval of files (basically, at every submission it creates a unique ID that is returned and can be used to scan the folder for a file).

Moving this application from docker-compose to Kubernetes, I had your same problem: I need a global disk so I can have multiple replicas of the container, so that when the other micro-services send a request to one of the replicas, they will always be able to serve any submitted file, not only the files managed by the replica that handled the submission.

I solved it by creating a persistent volume, associated with a persistent volume claim; this volume claim is associated with a deployment (note: not a StatefulSet, which would create a disk for every pod). At this point, you have to associate the mounted volume path with the container's storage folder path.

So what is important is just the persistent volume claim name, the fact that the PV has at least as much capacity as the PVC requests, and obviously that the deployment matches it with the right labels. Then in the deployment you can pass, in the spec:

volumes:
  - name: store-folder
    persistentVolumeClaim:
      claimName: [pvc_name]

into the container settings:

volumeMounts:
  - name: store-folder
    mountPath: "/stored-files"

and in env. block:

containers:
  ...
  - env:
      - name: any-property-used-inside-the-application-for-saving-files
        value: /stored-files

So, from volumes, you bind the PVC to the deployment; then from volumeMounts, you bind the disk to a directory; then via an environment variable you are able to pass the persistent disk directory to the application. It is fundamental that you declare both the PVC and the PV; without the PV, it will work as if each pod has its own folder.

1

@Marco - in regard to the Maven-related question, my advice would be to stop looking at this as a centralized storage problem and perhaps think of it as a service issue.

I've run Maven repositories under HTTP in the past (read-only). I would simply create a Maven repo and expose it over Apache/Nginx in its own pod (Docker container) with whatever dedicated storage you need for just that pod, and then use service discovery to link it to your application and build systems.
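A minimal sketch of that idea: a single-replica Nginx deployment serving the repo from its own dedicated claim, plus a Service for discovery (all names, the image, and the claim are illustrative assumptions, not from the original answer):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: maven-repo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: maven-repo
  template:
    metadata:
      labels:
        app: maven-repo
    spec:
      containers:
        - name: nginx
          image: nginx                    # serves the repo directory read-only over HTTP
          volumeMounts:
            - mountPath: /usr/share/nginx/html
              name: repo
      volumes:
        - name: repo
          persistentVolumeClaim:
            claimName: maven-repo-pvc     # hypothetical RWO claim owned by this service alone
---
apiVersion: v1
kind: Service
metadata:
  name: maven-repo
spec:
  selector:
    app: maven-repo
  ports:
    - port: 80
```

Build and application pods then resolve the repo at http://maven-repo via cluster DNS instead of sharing a disk.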

Adrnalnrsh
  • This does not provide an answer to the question. To critique or request clarification from an author, leave a comment below their post - you can always comment on your own posts, and once you have sufficient [reputation](http://stackoverflow.com/help/whats-reputation) you will be able to [comment on any post](http://stackoverflow.com/help/privileges/comment). – aschipfl Oct 28 '15 at 23:47
  • I don't have 50 reputation points so I can't comment. However, from my experience with Docker and Kubernetes I feel it's a valid answer, as it gives him an alternative method to help solve his problem with sharing Maven between pods (make Maven its own pod). – Adrnalnrsh Nov 20 '15 at 04:27