Questions tagged [rocksdb-java]

45 questions
6
votes
0 answers

Is it possible to create an in-memory-only RocksDB in Java?

I have the RocksDB Java implementation and I want the possibility of storing data in memory only, without writing it to the HDD. I think I can do this using options while creating the RocksDB. I tried to do the same as what is written here:…
Gleb
  • 61
  • 4
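The usual route for this is RocksMemEnv, which keeps the database files in memory instead of on disk. A minimal sketch, assuming a recent rocksdbjni (the path is only a namespace under the memory env, and the key/value and class name are illustrative):

```java
import org.rocksdb.Env;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksMemEnv;

public class InMemoryExample {
    static String roundTrip() throws Exception {
        RocksDB.loadLibrary();
        // RocksMemEnv keeps SST, MANIFEST and log files in memory;
        // the path below is only a namespace, nothing lands on disk.
        try (Env env = new RocksMemEnv(Env.getDefault());
             Options options = new Options()
                     .setCreateIfMissing(true)
                     .setEnv(env);
             RocksDB db = RocksDB.open(options, "/tmp/mem-only-db")) {
            db.put("k".getBytes(), "v".getBytes());
            return new String(db.get("k".getBytes()));
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip());
    }
}
```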
3
votes
1 answer

Can RocksDB settings be changed with the java library while the database is open?

Using the Java library, can any configuration changes take effect without requiring a reopen of the database? For example, the level0SlowdownWritesTrigger. More context: I'm trying to flip between a bulk-load mode and a regular mode, e.g. Disable…
Dan Tanner
  • 2,229
  • 2
  • 26
  • 39
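Some column-family options are mutable at runtime via RocksDB.setOptions, which is the usual way to flip between a bulk-load and a regular profile without reopening. A sketch, assuming a recent rocksdbjni; the trigger values and path are illustrative, not recommendations:

```java
import org.rocksdb.MutableColumnFamilyOptions;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;

public class LiveTuning {
    public static void main(String[] args) throws Exception {
        RocksDB.loadLibrary();
        try (Options options = new Options().setCreateIfMissing(true);
             RocksDB db = RocksDB.open(options, "/tmp/tuning-db")) {
            // Switch the default column family to a bulk-load-friendly
            // profile without reopening the database:
            db.setOptions(MutableColumnFamilyOptions.builder()
                    .setDisableAutoCompactions(true)
                    .setLevel0SlowdownWritesTrigger(1_000_000)
                    .setLevel0StopWritesTrigger(1_000_000)
                    .build());
            // ... load data here ...
            // Restore regular behavior afterwards:
            db.setOptions(MutableColumnFamilyOptions.builder()
                    .setDisableAutoCompactions(false)
                    .build());
        }
    }
}
```

Only options that RocksDB marks as mutable can be changed this way; immutable ones (block size, comparator, etc.) still require a reopen.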
3
votes
1 answer

Combine multiple RocksDB databases

There is a use case for which I have to read a huge Parquet file and convert it into RocksDB binary, so I decided to use Spark (because everybody on my team is familiar with it). And on the RocksDB side, I know it's not distributed and you can not…
Kaushal
  • 3,237
  • 3
  • 29
  • 48
2
votes
1 answer

Could not create class Caused by: com.esotericsoftware.kryo.KryoException: java.io.EOFException: No more bytes left

I'm finding issues running a job on a specific small cluster and on my local machine. The job runs smoothly on larger machines. I'm using: com.twitter "chill-protobuf" 0.7.6 .excluding com.esotericsoftware.kryo "kryo" com.google.protobuf…
Juancki
  • 1,793
  • 1
  • 14
  • 21
2
votes
1 answer

RocksDB is not freeing up space after a delete

Most of our services are using a Kafka store, which, as you know, uses RocksDB under the hood. We are trying to delete outdated and wrongly formatted records every 6 hours in order to free up space. Even though the record gets deleted from…
iliev951
  • 33
  • 4
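Worth knowing here: a RocksDB delete only writes a tombstone marker, and the old value stays on disk until a compaction rewrites the affected SST files. A standalone sketch (path and key are illustrative; inside Kafka Streams you don't call RocksDB directly like this):

```java
import org.rocksdb.Options;
import org.rocksdb.RocksDB;

public class ReclaimSpace {
    public static void main(String[] args) throws Exception {
        RocksDB.loadLibrary();
        try (Options options = new Options().setCreateIfMissing(true);
             RocksDB db = RocksDB.open(options, "/tmp/reclaim-db")) {
            db.put("stale".getBytes(), "value".getBytes());
            // delete() only writes a tombstone; the dead value remains
            // in the SST files until compaction drops it.
            db.delete("stale".getBytes());
            // Force a full manual compaction so tombstones and dead
            // data are removed now instead of at some later point.
            db.compactRange();
        }
    }
}
```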
2
votes
1 answer

Having consumer issues with RocksDB in Flink

I have a job which consumes from RabbitMQ. I was using the FS state backend, but it seems the sizes of the states became bigger, and so I decided to move my states to RocksDB. The issue is that during the first hours of running the job is fine, even after…
Alter
  • 903
  • 1
  • 11
  • 27
2
votes
1 answer

How to use RocksDB tailing iterator?

I am using the RocksDB Java JNI bindings and would like to get new entries as they are added to RocksDB. Thread t = new Thread(() -> { for (int i = 0; i < 1000; i++) { try { System.out.println("Putting " + i); …
JavaTechnical
  • 8,846
  • 8
  • 61
  • 97
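For reference, a tailing iterator is created through ReadOptions.setTailing(true); unlike a normal iterator, which is pinned to its creation-time snapshot, it can observe writes made after it was created when you re-seek it. A sketch, with illustrative path and keys:

```java
import org.rocksdb.Options;
import org.rocksdb.ReadOptions;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksIterator;

public class TailingReader {
    static String tailKey() throws Exception {
        RocksDB.loadLibrary();
        try (Options options = new Options().setCreateIfMissing(true);
             RocksDB db = RocksDB.open(options, "/tmp/tail-db");
             ReadOptions ro = new ReadOptions().setTailing(true);
             RocksIterator it = db.newIterator(ro)) {
            db.put("a".getBytes(), "1".getBytes());
            it.seekToFirst();            // positions on "a"
            // This write happens AFTER the iterator was created...
            db.put("b".getBytes(), "2".getBytes());
            // ...but a tailing iterator picks it up on the next seek:
            it.seek("b".getBytes());
            return it.isValid() ? new String(it.key()) : "none";
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(tailKey());
    }
}
```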
2
votes
1 answer

Whenever I put a value in RocksDB for the same key, the value gets updated but the count also increases

Whenever I put a value in RocksDB for the same key, the value gets updated. But the count returned by the following method, db.getLongProperty(columnFamily, "rocksdb.estimate-num-keys"), still gets incremented. Why am I getting this weird behavior?
Venkatesh R
  • 21
  • 1
  • 3
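The short explanation: rocksdb.estimate-num-keys is exactly that, an estimate. Each overwrite of a key is a separate entry in the memtable and SST files until compaction merges them, so the estimate can exceed the true key count. A sketch demonstrating the effect (path and key illustrative):

```java
import org.rocksdb.Options;
import org.rocksdb.RocksDB;

public class EstimateKeys {
    static long estimate() throws Exception {
        RocksDB.loadLibrary();
        try (Options options = new Options().setCreateIfMissing(true);
             RocksDB db = RocksDB.open(options, "/tmp/estimate-db")) {
            byte[] key = "k".getBytes();
            // 100 overwrites of the SAME key: still only one live key.
            for (int i = 0; i < 100; i++) {
                db.put(key, String.valueOf(i).getBytes());
            }
            // Each overwrite is a distinct entry until compaction, so
            // the estimate may be far above the true count of 1.
            return db.getLongProperty("rocksdb.estimate-num-keys");
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(estimate());
    }
}
```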
2
votes
3 answers

Creating RocksDB SST file in Java for bulk loading

I am new to RocksDB and trying to create an SST file in Java for bulk loading. The eventual use case is to create this in Apache Spark. I am using rocksdbjni 6.3.6 on Ubuntu 18.04.03. I keep getting this error: org.rocksdb.RocksDBException: Keys must be…
Saba
  • 41
  • 4
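That error is SstFileWriter insisting that keys arrive in strictly ascending comparator order (bytewise by default). A sketch of writing an SST file and ingesting it, with illustrative paths and keys:

```java
import java.util.Arrays;
import org.rocksdb.EnvOptions;
import org.rocksdb.IngestExternalFileOptions;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.SstFileWriter;

public class SstBulkLoad {
    static String readBack() throws Exception {
        RocksDB.loadLibrary();
        String sstPath = "/tmp/bulk.sst";
        try (EnvOptions envOptions = new EnvOptions();
             Options options = new Options();
             SstFileWriter writer = new SstFileWriter(envOptions, options)) {
            writer.open(sstPath);
            // Keys MUST be added in strictly ascending order of the
            // comparator, otherwise RocksDB throws "Keys must be
            // added in order".
            for (String k : Arrays.asList("a", "b", "c")) {
                writer.put(k.getBytes(), ("v-" + k).getBytes());
            }
            writer.finish();
        }
        try (Options options = new Options().setCreateIfMissing(true);
             RocksDB db = RocksDB.open(options, "/tmp/bulk-db");
             IngestExternalFileOptions ingest = new IngestExternalFileOptions()) {
            db.ingestExternalFile(Arrays.asList(sstPath), ingest);
            return new String(db.get("a".getBytes()));
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(readBack());
    }
}
```

In Spark, the common pattern is to sort each partition by key first, write one SST file per partition, and ingest them all at the end.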
2
votes
0 answers

kafka streams - number of open file descriptors keeps going up

Our Kafka Streams app keeps opening new file descriptors as long as there are new incoming messages, without ever closing old ones. It eventually leads to an exception. We've raised the limit of open fds to 65k, but it doesn't seem to help. Both Kafka…
sumek
  • 26,495
  • 13
  • 56
  • 75
2
votes
1 answer

Kafka KStream-KStream joins with sliding window: memory usage grows over time until OOM

I'm having a problem using KStream joins. What I do is, from one topic I separate 3 different types of messages into new streams. Then I do one inner join with two of the streams, which creates another stream; finally I do a last left join with the new…
kambo
  • 129
  • 2
  • 11
1
vote
1 answer

Flink RocksDB custom options factory config error disable block cache

I am running Flink 1.15.2 and am trying to define a custom options factory in RocksDB to disable the block cache, following the example from this blog post: https://shopify.engineering/optimizing-apache-flink-applications-tips However, my Flink…
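For context, a sketch of what such a factory usually looks like against Flink's RocksDBOptionsFactory interface (class name illustrative; this mirrors the pattern from the linked post, not the asker's actual code):

```java
import java.util.Collection;
import org.apache.flink.contrib.streaming.state.RocksDBOptionsFactory;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.DBOptions;

public class NoBlockCacheFactory implements RocksDBOptionsFactory {
    @Override
    public DBOptions createDBOptions(DBOptions currentOptions,
                                     Collection<AutoCloseable> handlesToClose) {
        return currentOptions;
    }

    @Override
    public ColumnFamilyOptions createColumnOptions(ColumnFamilyOptions currentOptions,
                                                   Collection<AutoCloseable> handlesToClose) {
        // Swap in a table config that bypasses the block cache entirely.
        BlockBasedTableConfig table = new BlockBasedTableConfig()
                .setNoBlockCache(true)
                .setCacheIndexAndFilterBlocks(false);
        return currentOptions.setTableFormatConfig(table);
    }
}
```

Note that Flink's managed memory normally installs its own shared block cache, which can conflict with setNoBlockCache(true); the linked post disables managed memory (state.backend.rocksdb.memory.managed: false) for exactly this reason. The factory is registered on the state backend, e.g. via EmbeddedRocksDBStateBackend.setRocksDBOptions(...).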
1
vote
1 answer

Tuning RocksDB to handle a lot of missing keys

I'm trying to configure the RocksDB instance I'm using as a backend for my Flink job. The state RocksDB needs to hold is not too big (around 5G), but it needs to deal with a lot of missing keys. I mean that 80% of the get requests will not find the key in the…
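The standard tool for miss-heavy workloads is a Bloom filter: most point lookups for absent keys can then return without touching the data blocks at all (roughly 1% false positives at 10 bits per key). A standalone sketch, assuming a recent rocksdbjni (older versions use the deprecated setFilter instead of setFilterPolicy); path and key are illustrative:

```java
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.BloomFilter;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;

public class BloomForMisses {
    static boolean missingIsNull() throws Exception {
        RocksDB.loadLibrary();
        // 10 bits per key gives ~1% false-positive rate; lookups for
        // keys the filter rejects skip the data blocks entirely.
        BlockBasedTableConfig table = new BlockBasedTableConfig()
                .setFilterPolicy(new BloomFilter(10))
                .setCacheIndexAndFilterBlocks(true);
        try (Options options = new Options()
                     .setCreateIfMissing(true)
                     .setTableFormatConfig(table);
             RocksDB db = RocksDB.open(options, "/tmp/bloom-db")) {
            return db.get("missing".getBytes()) == null;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(missingIsNull());
    }
}
```

In Flink, the equivalent knob is wired through a custom RocksDBOptionsFactory or the state.backend.rocksdb.* configuration rather than direct Options construction.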
1
vote
2 answers

Do I need to wait for background compaction to finish after creating test data to do a good read benchmark?

I am doing some benchmarks with RocksDB Java for my own application data and would like to be sure the created data is stored as optimally as possible before starting to measure read performance (i.e. if any background compaction etc. is going on…
Tristpost
  • 19
  • 4
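One common recipe: flush the memtable, then run a manual full compaction, which blocks until it completes, so the data is in its steady-state layout before the read benchmark starts. A sketch with illustrative path, key format, and data volume:

```java
import org.rocksdb.FlushOptions;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;

public class CompactBeforeBench {
    static long pendingAfterCompact() throws Exception {
        RocksDB.loadLibrary();
        try (Options options = new Options().setCreateIfMissing(true);
             RocksDB db = RocksDB.open(options, "/tmp/bench-db");
             FlushOptions flush = new FlushOptions().setWaitForFlush(true)) {
            for (int i = 0; i < 10_000; i++) {
                db.put(String.format("key-%08d", i).getBytes(),
                       ("value-" + i).getBytes());
            }
            db.flush(flush);    // persist the memtable, waiting for it
            db.compactRange();  // blocks until manual compaction is done
            // 1 would mean a compaction is still queued; 0 means quiet.
            return db.getLongProperty("rocksdb.compaction-pending");
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(pendingAfterCompact());
    }
}
```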
1
vote
0 answers

Getting org.rocksdb.RocksDBException: bad entry in block

I'm using RocksDB to store data where the key is a string and the value is an integer. Recently my application threw the following exception while writing into RocksDB: java.lang.Exception: org.rocksdb.RocksDBException: bad entry in block at…
Rafat
  • 78
  • 7