
I have a java.nio.ByteBuffer in my code:

ByteBuffer bb = ByteBuffer.allocateDirect(1024);
... 

I want to be able to swap it in place with a new implementation of ByteBuffer (i.e. it must extend java.nio.ByteBuffer) that re-allocates, copies, and discards the previous, smaller ByteBuffer under the hood, to allow for seamless dynamic growing.

I can't just have a wrapper because it must be a java.nio.ByteBuffer.

It would have to be something like this:

ByteBuffer bb = new MyImplementationOfByteBufferThatExtendsByteBuffer(1024);

Has anyone seen or done that? Is it possible?

Just want to note that if java.nio.ByteBuffer were an interface instead of an abstract class, this would be trivial to implement. As the old saying goes, favor interfaces over abstract classes if you want flexibility.

  • Possible duplicate of [Growing ByteBuffer](https://stackoverflow.com/questions/1774651/growing-bytebuffer) – Nolequen Jun 16 '19 at 15:35
  • I don't think it is a duplicate because that question does not require the buffer to extend java.nio.ByteBuffer. All the answers given there are wrappers. – Eddie Bravo Jun 16 '19 at 15:37
  • All constructors of `ByteBuffer` are package-private. So there will likely be no "clean" solution for that... – Marco13 Jun 16 '19 at 16:54
  • or implement your own https://github.com/wjtxyz/VarSizedByteBuffer – Yessy Apr 13 '20 at 11:13

2 Answers


No, this does not exist, and cannot exist without violating the contract of ByteBuffer's superclass Buffer:

A buffer is a linear, finite sequence of elements of a specific primitive type. Aside from its content, the essential properties of a buffer are its capacity, limit, and position:

  • A buffer's capacity is the number of elements it contains. The capacity of a buffer is never negative and never changes.

Therefore, if you were to allow "seamless dynamic growing", ByteBuffer would no longer act as a Buffer. This holds regardless of interfaces versus abstract classes: neither subclasses nor interface implementations should break the invariants defined by the types they extend. After all, consumers of your HypotheticalGrowingByteBuffer might rely on Buffer's fixed capacity and cache a single call to capacity(), whose result is defined never to change.
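To make that concrete, here is a minimal sketch (the class below is hypothetical, purely for illustration) of consumer code that legitimately caches capacity() once, exactly as the Buffer contract permits; a buffer that silently grew would make that cached value wrong:

```java
import java.nio.ByteBuffer;

// Hypothetical consumer that caches capacity() once, which the Buffer contract allows
// because a buffer's capacity is defined never to change.
class FixedChunkWriter {
    private final ByteBuffer buffer;
    private final int capacity; // cached exactly once

    FixedChunkWriter(ByteBuffer buffer) {
        this.buffer = buffer;
        this.capacity = buffer.capacity();
    }

    boolean hasRoomFor(int bytes) {
        // If the buffer could silently grow, this cached value would be stale and the
        // check would wrongly reject writes that actually fit in the grown buffer.
        return buffer.position() + bytes <= capacity;
    }
}
```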

Another motivating factor behind making ByteBuffer an abstract class rather than an interface: the memory backing a buffer is required to be fixed and contiguous, which allows for higher performance than a more flexible definition would.

That said, Buffer does define a mutable limit, which lets you artificially constrain how much of a large, originally-allocated buffer is available. You can use this to make a large buffer start small and then grow, though the growing will not be "seamless": it is capped by the capacity you allocate up front. If the goal is simply to concatenate smaller fixed-size buffers whose overall total size is predictable, it may be worthwhile to generate them as pieces of one larger buffer using ByteBuffer.duplicate and setting the position and limit to constrain the writable area, as in the sketch below.
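Here is a rough illustration of that approach (the sizes are arbitrary and the class is just a demo, not an API):

```java
import java.nio.ByteBuffer;

public class LimitAsGrowthDemo {
    public static void main(String[] args) {
        // Pay for the largest size you might ever need up front (1 MiB here, purely illustrative)...
        ByteBuffer bb = ByteBuffer.allocateDirect(1 << 20);

        // ...but expose only a small window of it to begin with.
        bb.limit(1024);

        // When the window fills up, "grow" by raising the limit, never past the fixed capacity.
        if (!bb.hasRemaining() && bb.limit() < bb.capacity()) {
            bb.limit(Math.min(bb.limit() * 2, bb.capacity()));
        }

        // Carving a fixed-size piece out of the big buffer with duplicate():
        // the duplicate shares the same memory but has its own position and limit.
        ByteBuffer piece = bb.duplicate();
        piece.position(0).limit(1024); // this view can only access bytes [0, 1024)

        System.out.println("limit=" + bb.limit() + " capacity=" + bb.capacity());
    }
}
```

The trade-off is that the ceiling is still fixed at allocation time; you are only deferring how much of it you expose.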

– Jeff Bowman

I'm pretty sure it's intentional that ByteBuffers have a fixed capacity. I encourage you not to try to work around it as if it were a design flaw. Accept it and embrace it as a purposeful limitation.

If you allow ByteBuffers to grow, then compact() no longer has a predictable maximum runtime: the larger your buffer grows, the longer it takes to compact. I encountered exactly this problem in my NIO-based socket library. I had internal buffers growing to accommodate large injections of data, and the consequence was that performance slowed down in proportion to how much data was buffered.

For example, somebody might try to send 100MB of data in one shot, so I would save that data in a single 100MB ByteBuffer. The processing code could only handle about 32KB at a time, so it would pull 32KB of data from the buffer and then compact it. Then another 32KB and another compaction. It would continue reading and compacting until the buffer was drained.

Each compaction is an O(n) operation, and I was performing O(n) of them, leading to an overall runtime of O(n²). Really bad news. I noticed the problem when the buffers grew so large that I/O requests started timing out because the I/O thread was spending all of its time compacting ByteBuffers.
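Here's a rough sketch of that drain-and-compact pattern (the 32 KB chunk size and the process method are simplified stand-ins, not the actual library code):

```java
import java.nio.ByteBuffer;

public class CompactLoop {
    static final int CHUNK = 32 * 1024; // roughly what the handler can process at once

    // Drain the buffer one chunk at a time, compacting after every chunk.
    // Each compact() copies every byte still buffered to the front, so draining
    // n buffered bytes this way costs O(n^2) in total.
    static void drain(ByteBuffer big) {
        big.flip(); // switch from writing to reading the buffered data
        byte[] chunk = new byte[CHUNK];
        while (big.hasRemaining()) {
            int len = Math.min(CHUNK, big.remaining());
            big.get(chunk, 0, len);
            process(chunk, len);
            big.compact(); // O(remaining) copy on every iteration
            big.flip();
        }
        big.compact(); // leave the (now empty) buffer ready for writing again
    }

    static void process(byte[] data, int len) {
        // stand-in for the real handler
    }
}
```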

Solution: If you want dynamic buffers, create a Queue<ByteBuffer>. The queue can grow to accommodate an unlimited number of ByteBuffers while each individual buffer remains a fixed size. This will let your application scale properly without running into the O(n²) problem.
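Here's a rough sketch of what that can look like (the class, its methods, and the chunk size are illustrative, not a prescribed design):

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative "growable" buffer built from fixed-size ByteBuffers held in a queue.
public class ChunkedBuffer {
    private static final int CHUNK_SIZE = 32 * 1024; // fixed size per buffer (arbitrary choice)
    private final Queue<ByteBuffer> chunks = new ArrayDeque<>();
    private ByteBuffer current;

    // Append data, allocating a new fixed-size chunk whenever the current one fills up.
    public void write(byte[] data) {
        int offset = 0;
        while (offset < data.length) {
            if (current == null || !current.hasRemaining()) {
                current = ByteBuffer.allocateDirect(CHUNK_SIZE);
                chunks.add(current);
            }
            int len = Math.min(current.remaining(), data.length - offset);
            current.put(data, offset, len);
            offset += len;
        }
    }

    // Hand back the oldest chunk, flipped for reading, or null if nothing is buffered.
    public ByteBuffer poll() {
        ByteBuffer head = chunks.poll();
        if (head == null) {
            return null;
        }
        if (head == current) {
            current = null; // don't keep appending to a chunk that has been handed out
        }
        head.flip();
        return head;
    }
}
```

Each drained chunk is simply discarded by the caller, so there is never a compact() call and the cost per chunk stays constant no matter how much data is queued.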

– John Kugelman