I have a connector (Kafka Connect) streaming data from Kafka to another system. After processing 60,000+ records, it slows down dramatically, to the point that I actually end up killing my connector.
I looked at the GC heap with the jmap command, and it seems that only my Survivor Space is full (100% used).
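For reference, the heap summary further down was captured with something like this (JDK 8 syntax; `<connect-pid>` is a placeholder for the worker's process id):

```
jmap -heap <connect-pid>
```

(On JDK 9+ the equivalent would be `jhsdb jmap --heap --pid <connect-pid>`.)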
How can this be? Isn't the Survivor Space only a temporary place, as I understood from this post?
What I cannot understand is that it processes 60,000 records before the Survivor Space fills up completely. Why doesn't this happen earlier? Shouldn't the GC free this space by promoting part of it to the Old Gen?
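To check whether objects actually age out of the Survivor Space, I suppose I could enable GC logging when starting the worker, along these lines (JDK 8 flags; the log path is just one I picked, and `KAFKA_OPTS` is the generic JVM-options variable the Kafka scripts read):

```
export KAFKA_OPTS="-XX:+PrintGCDetails -XX:+PrintTenuringDistribution -Xloggc:/tmp/connect-gc.log"
```

The tenuring distribution should show whether survivors get promoted to the Old Gen or just keep piling up.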
Btw: I am running this connector in standalone mode, with 256 MB for the heap (107 MB for Eden + 1 MB for Survivor + 95 MB for Old Gen).
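For completeness, this is roughly how I launch it; the property file names are placeholders for my actual configs, and `KAFKA_HEAP_OPTS` is the standard variable the Kafka scripts read for heap settings:

```
KAFKA_HEAP_OPTS="-Xms256M -Xmx256M" \
  bin/connect-standalone.sh config/connect-standalone.properties config/my-connector.properties
```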
Here is a sample of the jmap output:
```
Heap Configuration:
   MinHeapFreeRatio         = 40
   MaxHeapFreeRatio         = 70
   MaxHeapSize              = 268435456 (256.0MB)
   NewSize                  = 1363144 (1.2999954223632812MB)
   MaxNewSize               = 160432128 (153.0MB)
   OldSize                  = 5452592 (5.1999969482421875MB)
   NewRatio                 = 2
   SurvivorRatio            = 8
   MetaspaceSize            = 21807104 (20.796875MB)
   CompressedClassSpaceSize = 1073741824 (1024.0MB)
   MaxMetaspaceSize         = 17592186044415 MB
   G1HeapRegionSize         = 1048576 (1.0MB)

Heap Usage:
G1 Heap:
   regions  = 256
   capacity = 268435456 (256.0MB)
   used     = 130770392 (124.71236419677734MB)
   free     = 137665064 (131.28763580322266MB)
   48.71576726436615% used
G1 Young Generation:
Eden Space:
   regions  = 107
   capacity = 167772160 (160.0MB)
   used     = 112197632 (107.0MB)
   free     = 55574528 (53.0MB)
   66.875% used
Survivor Space:
   regions  = 1
   capacity = 1048576 (1.0MB)
   used     = 1048576 (1.0MB)
   free     = 0 (0.0MB)
   100.0% used
G1 Old Generation:
   regions  = 18
   capacity = 99614720 (95.0MB)
   used     = 17524184 (16.712364196777344MB)
   free     = 82090536 (78.28763580322266MB)
   17.591962312397204% used
```