
I am trying to load a simple CSV file from GCS to BQ using the Google Data Fusion Free version. The pipeline is failing with an error that reads:

    com.google.api.gax.rpc.InvalidArgumentException: io.grpc.StatusRuntimeException: INVALID_ARGUMENT: Insufficient 'DISKS_TOTAL_GB' quota. Requested 3000.0, available 2048.0.
    at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:49) ~[na:na]
    at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72) ~[na:na]
    at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60) ~[na:na]
    at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97) ~[na:na]
    at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:68) ~[na:na]

The same error is repeated for both the MapReduce and Spark execution engines. I would appreciate any help in fixing this issue. Thanks.

Regards, KA


2 Answers


It means that the total compute disk capacity requested would put the project over its Google Compute Engine (GCE) quota. There are both project-wide and regional quotas. You can see the documentation here: https://cloud.google.com/compute/quotas

To resolve this, request a quota increase for your GCP project.
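
Not part of the original answer, but as a rough sketch: assuming the google-cloud-compute Python client and placeholder project/region values, you could check the current usage and limit for DISKS_TOTAL_GB like this before filing the increase request:

    # Sketch (my addition): check the DISKS_TOTAL_GB quota with the
    # google-cloud-compute client (pip install google-cloud-compute).
    # PROJECT_ID and REGION are placeholders to replace with your own values.
    from google.cloud import compute_v1

    PROJECT_ID = "my-gcp-project"   # placeholder
    REGION = "us-central1"          # placeholder: region where the Dataproc cluster is provisioned

    def show_disk_quota(quotas, scope):
        # Print usage vs. limit for the DISKS_TOTAL_GB metric, if present.
        for quota in quotas:
            if quota.metric == "DISKS_TOTAL_GB":
                print(f"{scope}: usage={quota.usage} GB, limit={quota.limit} GB")

    # Project-wide quotas
    project = compute_v1.ProjectsClient().get(project=PROJECT_ID)
    show_disk_quota(project.quotas, f"project {PROJECT_ID}")

    # Regional quotas (the 2048 GB limit in the error above is typically the regional one)
    region = compute_v1.RegionsClient().get(project=PROJECT_ID, region=REGION)
    show_disk_quota(region.quotas, f"region {REGION}")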

Ali Anwar

@Ksign provided the following answer to a similar question, which can be seen here.

The specific quota related to DISKS_TOTAL_GB is the Persistent disk standard (GB) quota, as you can see in the Disk quotas documentation.

You can edit this quota per region in your project's Cloud Console by going to the IAM & admin page => Quotas and selecting only the Persistent Disk Standard (GB) metric.
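
A small sketch to go with this (my addition, not @Ksign's): assuming the google-cloud-compute Python client and a placeholder project ID, you can list each region's DISKS_TOTAL_GB usage and limit to see which regional Persistent Disk Standard (GB) entry on the Quotas page needs to be raised:

    # Sketch (assumes google-cloud-compute is installed): list DISKS_TOTAL_GB
    # usage vs. limit for every region, to find which regional
    # "Persistent disk standard (GB)" quota to edit under IAM & admin => Quotas.
    from google.cloud import compute_v1

    PROJECT_ID = "my-gcp-project"  # placeholder

    regions_client = compute_v1.RegionsClient()
    for region in regions_client.list(project=PROJECT_ID):
        for quota in region.quotas:
            if quota.metric == "DISKS_TOTAL_GB":
                print(f"{region.name}: usage={quota.usage} GB, limit={quota.limit} GB")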

Blake Enyart