
I have an application that makes requests to AWS to start Batch jobs. The jobs vary, so the resource requirements change from job to job.

It is clear how to change CPUs and memory; however, I cannot figure out how to specify the root volume size, or whether it is even possible.

Here is an example of the code I am running:

import boto3

client = boto3.client('batch')

JOB_QUEUE = "job-queue"
JOB_DEFINITION = "job-definition"



container_overrides = {
    'vcpus': 1,
    'memory': 1024,
    'command': ['echo', 'Hello World'],
    # 'volume_size': 50 # this is not valid
    'environment': [ # this just creates env variables
        {
            'name': 'volume_size',
            'value': '50'
        }
    ]
}


response = client.submit_job(
    jobName="volume-size-test",
    jobQueue=JOB_QUEUE,
    jobDefinition=JOB_DEFINITION,
    containerOverrides=container_overrides)

My question is similar to this one. However, I am specifically asking whether this is possible at runtime. I can change the launch template, but that doesn't solve the issue of specifying the required resources when making the request, unless the solution is to create multiple launch templates and then select one at runtime, though that seems unnecessarily complicated.
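For reference, that multi-launch-template workaround would look roughly like the sketch below. The queue names are hypothetical; each queue would be backed by a compute environment whose launch template sets a different root volume size.

import boto3

client = boto3.client('batch')

# Hypothetical job queues; each is backed by a compute environment whose
# launch template provisions a different root volume size.
QUEUES_BY_VOLUME_SIZE = {
    50: 'job-queue-50gb',
    200: 'job-queue-200gb',
}


def submit_with_volume_size(volume_size_gb, command):
    # Choose the smallest queue whose launch template provides enough disk.
    job_queue = next(queue for size, queue in sorted(QUEUES_BY_VOLUME_SIZE.items())
                     if size >= volume_size_gb)
    return client.submit_job(
        jobName='volume-size-test',
        jobQueue=job_queue,
        jobDefinition='job-definition',
        containerOverrides={'command': command})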

johnchase
  • Not familiar with `BATCH`, but according to your description it runs as an `ECS` task under the hood. AFAIK it is not possible to change the volume size of a **running** ECS container. Instead, you could run the batch task with the smallest possible disk size (say 10 GB), create a larger [NFS](https://en.wikipedia.org/wiki/Network_File_System)-ready (or [EFS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/tutorial-efs-volumes.html)) volume inside the batch task script, attach it, and remove it at the end of the task. – rzlvmp Jan 05 '23 at 05:45
  • @rzlvmp This case is using EC2 compute resources. I can create launch templates that specify the root volume size; I just can't figure out how to do it at runtime. – johnchase Jan 05 '23 at 15:54

1 Answer


You can use Amazon Elastic File System (EFS) for this. EFS volumes can be mounted into the containers created for your job definition. EFS doesn't require you to specify a volume size because it automatically grows and shrinks with usage.

You need to specify an Amazon EFS file system in your job definition through the `efsVolumeConfiguration` property:

{
  "containerProperties": [
    {
      "image": "amazonlinux:2",
      "command": [
        "ls",
        "-la",
        "/mount/efs"
      ],
      "mountPoints": [
        {
          "sourceVolume": "myEfsVolume",
          "containerPath": "/mount/efs",
          "readOnly": true
        }
      ],
      "volumes": [
        {
          "name": "myEfsVolume",
          "efsVolumeConfiguration": {
            "fileSystemId": "fs-12345678",
            "rootDirectory": "/path/to/my/data",
            "transitEncryption": "ENABLED",
            "transitEncryptionPort": integer,
            "authorizationConfig": {
              "accessPointId": "fsap-1234567890abcdef1",
              "iam": "ENABLED"
            }
          }
        }
      ]
    }
  ]
}

Reference: https://docs.aws.amazon.com/batch/latest/userguide/efs-volumes.html
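If you register job definitions from Python, the same configuration can be set up with boto3 along the lines of the sketch below. The job definition name, file system ID, and directory paths are placeholders.

import boto3

client = boto3.client('batch')

# Register a job definition whose container mounts an EFS volume.
# fs-12345678 and the paths below are placeholders.
response = client.register_job_definition(
    jobDefinitionName='volume-size-test-efs',
    type='container',
    containerProperties={
        'image': 'amazonlinux:2',
        'vcpus': 1,
        'memory': 1024,
        'command': ['ls', '-la', '/mount/efs'],
        'volumes': [
            {
                'name': 'myEfsVolume',
                'efsVolumeConfiguration': {
                    'fileSystemId': 'fs-12345678',
                    'rootDirectory': '/path/to/my/data',
                    'transitEncryption': 'ENABLED',
                },
            },
        ],
        'mountPoints': [
            {
                'sourceVolume': 'myEfsVolume',
                'containerPath': '/mount/efs',
                'readOnly': True,
            },
        ],
    },
)

Jobs submitted against this definition can then write large scratch data under the mounted path instead of relying on the root volume size.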

Brian