Some parameters in the query are actually ignored by design, such as from, size, fields, etc. They are used internally by the elasticsearch-spark connector.
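To see what that means in practice, here is a minimal sketch (the index name, mapping type, and field values are placeholders): even a size embedded in the query DSL passed through the es.query option has no effect, because the connector manages scrolling itself.

val ignored = sqlContext.read.format("org.elasticsearch.spark.sql")
  .option("es.query", """{"query": {"match_all": {}}, "size": 10}""")
  .load("index_name/doc_type")

// The "size": 10 above is ignored: the count below still reflects
// every matching document, not just 10 of them.
println(ignored.count())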
Unfortunately this list of unsupported parameters isn't documented. But if you wish to use the size parameter, you can always rely on the pushdown predicate and use the DataFrame/Dataset limit method. So you ought to use the Spark SQL DSL instead, e.g.:
val df = sqlContext.read.format("org.elasticsearch.spark.sql")
  .option("pushdown", "true")
  .load("index_name/doc_type")
  .limit(10) // instead of size: 10
This query will return the first 10 documents matched by the match_all query that the connector uses by default.
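If you need something other than the default match_all, the connector also lets you pass your own query DSL through the es.query option; a minimal sketch (the status field and its value are made up for illustration):

val filtered = sqlContext.read.format("org.elasticsearch.spark.sql")
  .option("pushdown", "true")
  // es.query accepts raw query DSL; the term filter here is a placeholder
  .option("es.query", """{"query": {"term": {"status": "active"}}}""")
  .load("index_name/doc_type")
  .limit(10)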
Note: The following claim isn't correct on any level:

"This is actually on purpose. Since the connector does a parallel query, it also looks at the number of documents being returned, so if the user specifies a parameter, it will overwrite it according to the es.scroll.limit setting (see the configuration option)."

When you query Elasticsearch directly, it also runs the query in parallel on all the index shards, without overwriting the parameters you set.
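If you want to observe that parallelism from the Spark side, one quick check (assuming the usual es-hadoop behaviour of mapping one Spark partition to each Elasticsearch shard) is the partition count of the DataFrame defined above:

// es-hadoop typically creates one Spark partition per Elasticsearch shard,
// so an index with 5 primary shards yields 5 partitions here.
println(df.rdd.getNumPartitions)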