Memory management-1 (Reference counting)

Tue 25 July 2017

Regardless of whether you are on a managed or unmanaged platform, a broad understanding of basic memory management algorithms can pay off when working on performance-critical applications. In this series of posts I will provide an overview of the most fundamental garbage collection algorithms, as most garbage collection strategies are based on one of these fundamental algorithms or a combination of them. To compare the various algorithms, we will look at the effect each one has on throughput, latency (pause time) and space usage. We get started by looking at reference counting.

Reference Counting

Any memory management algorithm needs a way to decide whether an object is live or garbage. Reference counting maintains a simple invariant: an object is considered live if the number of references to it is greater than zero. A reference to an object is created when a pointer to the object is copied from one location to another via an assignment. Each time a reference to an object is created, the object's reference count is incremented; similarly, when an existing reference to an object is deleted, the reference count is decremented. Once the count drops to zero, the object is garbage and can be reclaimed immediately.
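As a concrete sketch, a manually reference-counted object might look like the following. The RefCounted class and its retain/release names are hypothetical, chosen for illustration rather than taken from any particular library:

```java
// A minimal sketch of manual reference counting (hypothetical class,
// not from any real library).
class RefCounted {
    private int refCount = 1; // the reference held by the creator
    private boolean freed = false;

    // Call whenever a new reference to this object is created
    // (e.g. a pointer to it is copied via an assignment).
    void retain() {
        refCount++;
    }

    // Call whenever an existing reference is deleted; the object is
    // reclaimed as soon as no references remain.
    void release() {
        refCount--;
        if (refCount == 0) {
            free();
        }
    }

    int refCount() {
        return refCount;
    }

    boolean isFreed() {
        return freed;
    }

    // A real implementation would release the underlying memory or
    // resource here.
    void free() {
        freed = true;
    }
}
```

Note that the increment and decrement happen on every pointer copy, which is where reference counting pays its throughput cost.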

Advantages

- Memory is reclaimed as soon as an object's count drops to zero, so reclamation cost is spread smoothly across the program rather than concentrated in long pauses.
- No whole-heap tracing or separate collector is required, and the technique can be implemented as a library, independent of the language runtime.

Disadvantages

- Every operation that copies a reference incurs a count update, adding overhead to pointer-heavy code.
- Cyclic structures are never reclaimed on their own, since objects in a cycle keep each other's counts above zero; a backup tracing collector or explicit cycle detection is needed.
- Every object needs extra space to store its reference count.

Final thoughts

In my opinion reference counting is not the best choice for a general-purpose memory manager when there are small, reference-rich structures with high rates of pointer reads or writes, as reference counting incurs a write even when a pointer is only read. However, given that it can be implemented as part of a library, independent of the language runtime, it can be an attractive option when restricted to managing a smaller set of resources.

Netty ByteBuffer

A good example of using reference counting to manage a subset of resources is Netty. Java's NIO APIs use ByteBuffers when performing I/O operations. During an I/O operation the data needs to be passed to the OS kernel, but byte buffers allocated on the JVM heap cannot be used by the kernel directly (the garbage collector could move the buffer), so under the covers, JVM heap ByteBuffers are copied to temporary 'native' memory buffers on each I/O call. Java also has DirectByteBuffer, a wrapper around native memory (memory that is not managed by the JVM); when DirectByteBuffers are used, the buffer can be passed directly to the kernel, avoiding the copy. However, allocating and deallocating a DirectByteBuffer is much more expensive than allocating an ordinary JVM byte buffer. To avoid this cost the Netty framework pre-allocates and manages a pool of direct byte buffers, and reference counting is used to manage the life cycle of these pooled buffers. Reference counting is ideal in this case because pooled buffers are not reclaimed by the garbage collector: each buffer must be returned to the pool explicitly, and the reference count tells the pool exactly when the last user of a buffer is done with it.
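The idea can be sketched as a reference-counted pool of direct buffers. The class and method names below are illustrative only and do not match Netty's actual API (Netty uses ByteBuf with retain()/release() and a far more sophisticated allocator):

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;

// Illustrative sketch of a reference-counted buffer pool, in the
// spirit of Netty's pooled direct buffers; not Netty's real API.
class BufferPool {
    private final ArrayDeque<PooledBuffer> free = new ArrayDeque<>();
    private final int bufferSize;

    BufferPool(int bufferSize) {
        this.bufferSize = bufferSize;
    }

    // Reuse a pooled buffer if one is available; otherwise pay the
    // expensive allocateDirect() cost once.
    PooledBuffer acquire() {
        PooledBuffer buf = free.poll();
        if (buf == null) {
            buf = new PooledBuffer(ByteBuffer.allocateDirect(bufferSize), this);
        }
        buf.resetRefCount();
        return buf;
    }

    void giveBack(PooledBuffer buf) {
        free.push(buf);
    }

    int freeCount() {
        return free.size();
    }
}

class PooledBuffer {
    private final ByteBuffer buffer;
    private final BufferPool pool;
    private int refCount;

    PooledBuffer(ByteBuffer buffer, BufferPool pool) {
        this.buffer = buffer;
        this.pool = pool;
    }

    void resetRefCount() {
        refCount = 1; // the reference held by whoever acquired it
    }

    // Call when another component takes a reference to this buffer.
    void retain() {
        refCount++;
    }

    // Releasing the last reference returns the buffer to the pool
    // instead of deallocating the native memory behind it.
    void release() {
        if (--refCount == 0) {
            buffer.clear();
            pool.giveBack(this);
        }
    }

    ByteBuffer nioBuffer() {
        return buffer;
    }
}
```

The pool, rather than the garbage collector, decides when a buffer becomes reusable, which is exactly the deterministic reclamation that reference counting provides.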