The default number of buckets is 113. Why 113 and not, say, 110? Does the bucketing logic perform better when the bucket count has certain divisibility properties, such as being prime?
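To make the question concrete, here is a small sketch (not SnappyData's actual hash function, just plain modulo hashing) of why I suspect primality matters: when keys share a common factor with a composite bucket count, many buckets go unused, while a prime count like 113 spreads the same keys across every bucket.

```python
# Hypothetical illustration, NOT SnappyData's real hashing:
# simple modulo bucketing with keys that are all multiples of 10.
from collections import Counter

keys = range(0, 11000, 10)  # 1100 keys, every one a multiple of 10

def bucket_counts(num_buckets):
    """Count how many keys land in each bucket under k % num_buckets."""
    return Counter(k % num_buckets for k in keys)

composite = bucket_counts(110)  # 110 shares the factor 10 with every key
prime = bucket_counts(113)      # 113 is prime, shares no factor with 10

# With 110 buckets, only residues 0, 10, 20, ... 100 ever occur.
print(len(composite), "of 110 buckets used")  # 11 of 110 buckets used
print(len(prime), "of 113 buckets used")      # 113 of 113 buckets used
```

Is this the kind of skew the choice of 113 is guarding against, or does SnappyData's hashing make the bucket count irrelevant?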
Many of the SnappyData examples use fewer buckets than the default 113. Why is that? What reasoning went into choosing a smaller number for those examples?
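For reference, this is the kind of DDL I mean, where the examples override the default via the `BUCKETS` option (table and column names here are placeholders, not from any specific example):

```sql
-- Placeholder table: BUCKETS overrides the default of 113
CREATE TABLE orders (order_id INT, customer_id INT, amount DECIMAL)
USING column
OPTIONS (PARTITION_BY 'customer_id', BUCKETS '32');
```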
What are the implications of choosing fewer buckets? What about more? In my Spark SQL queries I see a lot of log output as data is looked up in each bucket. Does a larger bucket count hurt query performance?