
I'm trying to set up an app on OpenShift Online Next Gen and I need to store a small file at runtime and read it again during startup. The content of the file changes, so I cannot simply add it to my source code.

My project is already up and running; all I need is persistent storage. So I open the Web Console, click Browse->Storage, and it says there are no volumes available. The same thing happens if I go to Browse->Deployments and try to attach a volume.

So I logged in via the CLI and issued the following command:

oc volume dc/mypingbot --add --type=pvc --claim-name=data1 --claim-size=1Gi

Now my volume appears both in the Storage section and in the Deployments section. I attach it to my deployment config using the Web Console and set its mount point to /data1.
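For completeness, I believe the whole attach-and-mount step could also have been done from the CLI in one command (flag names as I read them in oc volume --help; data1 as the volume name is just my choice):

oc volume dc/mypingbot --add --type=pvc --claim-name=data1 --claim-size=1Gi --mount-path=/data1 --name=data1

oc volume dc/mypingbot --list    # what the deployment config actually mounts
oc get pvc data1                 # status of the claim itself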

The deployment process now takes a while and then fails with the following two errors:

Error syncing pod, skipping: Could not attach EBS Disk "aws://us-east-1c/vol-ba78501e": Error attaching EBS volume: VolumeInUse: vol-ba78501e is already attached to an instance status code: 400, request id: 

Unable to mount volumes for pod "mypingbot-18-ilklx_mypingbot(0d22f712-58a3-11e6-a1a5-0e3d364e19a5)": Could not attach EBS Disk "aws://us-east-1c/vol-ba78501e": Error attaching EBS volume: VolumeInUse: vol-ba78501e is already attached to an instance status code: 400, request id: 
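In case it helps, this is how I check where the volume is being held (standard oc commands as far as I know; the pod name is the one from the error above):

oc get pods -o wide                  # shows which node each pod is scheduled on
oc describe pvc data1                # claim status and the bound volume
oc describe pod mypingbot-18-ilklx   # the Events section repeats the attach error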

What am I missing?

– Zagorax
  • EBS volumes can only be attached to one node at a time. AWS thinks it is currently attached elsewhere. Are you using it in any other pod or deployment? Or is your deployment doing a build (in a pod) that also requires the volume? – Mark Turansky Aug 02 '16 at 14:20
  • The current restriction around mounting on one node means that you can't use a volume on an application which you have scaled up to more than one replica. This is because a second instance could land on a different node. Have you scaled your application? – Graham Dumpleton Aug 03 '16 at 02:43
  • @GrahamDumpleton I did execute the operations above and the application uses only one replica. However, I noticed that if I kill the running pod, the deployment works properly, so there might be some mistake in the way I create the persistentVolumeClaim (as I can't see my volume mounted if I ssh into the currently running pod). Still, even after I manage to deploy the pod with the volume, every time I push new code to GitHub, the system pulls the code, builds it and makes a new deployment... which fails until I manually shut down the previously running pod. Is this behaviour intentional? – Zagorax Aug 03 '16 at 08:42
  • 2
    You might be hitting a problem with the fact that the default deployment strategy is 'Rolling'. This will create a new pod with new code before shutting down the old. That would trigger multi node issue. You would need to change the deployment strategy to 'Recreate'. https://docs.openshift.com/enterprise/latest/dev_guide/deployments.html#recreate-strategy – Graham Dumpleton Aug 03 '16 at 09:44
  • @GrahamDumpleton This seems to be only a partial solution, because if you use lifecycle hooks, they run in another pod, and the error persists. – Stan Sep 29 '17 at 21:56
  • If you use the Recreate deployment strategy and a mid lifecycle hook, you would be fine, as the mid hook runs when all instances are shut down. – Graham Dumpleton Sep 29 '17 at 21:58
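Following up on the comments above, a minimal sketch of the suggested fix from the CLI (assuming oc patch accepts a strategic merge patch on the deployment config; the container name mypingbot and the hook command are placeholders):

oc patch dc/mypingbot -p '{"spec": {"strategy": {"type": "Recreate"}}}'

With Recreate, the old pod is stopped before the new one starts, so only one pod needs the EBS volume at any time. If a one-off task has to run while all replicas are down, a mid lifecycle hook can be added to the same strategy, e.g.:

oc patch dc/mypingbot -p '{"spec": {"strategy": {"recreateParams": {"mid": {"failurePolicy": "Abort", "execNewPod": {"containerName": "mypingbot", "command": ["/bin/sh", "-c", "echo run-one-off-task-here"]}}}}}'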

0 Answers