
We are currently evaluating an upgrade from Spark 1.6 to Spark 2.0, but we have hit one very strange bug that is preventing us from completing the migration.

One of our requirements is to read multiple datasets from S3 and union them together. Loading 50 datasets works fine, but on the 51st load everything hangs while fetching object metadata. This is not intermittent; it happens consistently.

The data is stored as Avro container files, and we are using spark-avro 3.0.0. A simplified sketch of the load-and-union pattern is shown below.
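For reference, this is roughly what the loading code looks like (a minimal sketch; the bucket and prefix names are placeholders, not our real paths):

    import org.apache.spark.sql.{DataFrame, SparkSession}

    val spark = SparkSession.builder().appName("avro-union").getOrCreate()

    // Hypothetical S3 prefixes standing in for the real datasets
    val paths = (1 to 60).map(i => s"s3://my-bucket/dataset-$i/")

    // Load each Avro dataset with spark-avro 3.0.0 and union them.
    // The hang appears when the 51st dataset is loaded.
    val combined: DataFrame = paths
      .map(p => spark.read.format("com.databricks.spark.avro").load(p))
      .reduce(_ union _)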

Is there any explanation for this?

  • This is not related to a socket timeout issue; the socket threads are not blocked.

<<main thread dump>>
java.lang.Thread.sleep(Native Method)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.doPauseBeforeRetry(AmazonHttpClient.java:1475)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.pauseBeforeRetry(AmazonHttpClient.java:1439)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:794)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:607)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:376)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:338)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:287)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3826)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1015)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:991)
com.amazon.ws.emr.hadoop.fs.s3n.Jets3tNativeFileSystemStore.retrieveMetadata(Jets3tNativeFileSystemStore.java:212)
sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
com.sun.proxy.$Proxy36.retrieveMetadata(Unknown Source)
com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.getFileStatus(S3NativeFileSystem.java:780)
org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1428)
com.amazon.ws.emr.hadoop.fs.EmrFileSystem.exists(EmrFileSystem.java:313)
org.apache.spark.sql.execution.datasources.DataSource.hasMetadata(DataSource.scala:289)
org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:324)
org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:149)
org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:132)

1 Answer


It seems that spark-avro exhausts the S3 connection pool by not releasing connections after reading, so the 51st metadata request waits forever for a free connection.

https://github.com/databricks/spark-avro/issues/156
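One possible mitigation, assuming the cluster uses EMRFS and that its connection pool defaults to 50 (which would match the hang on the 51st load), is to raise the pool size via `fs.s3.maxConnections` until a spark-avro build that closes its input streams is available. This is a sketch, not a confirmed fix:

    import org.apache.spark.sql.SparkSession

    // Assumption: EMRFS honours fs.s3.maxConnections; raising it only delays
    // pool exhaustion if connections are never released, but it can unblock
    // jobs that load a bounded number of datasets.
    val spark = SparkSession.builder()
      .appName("avro-union")
      .config("spark.hadoop.fs.s3.maxConnections", "200")
      .getOrCreate()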
