Tag Archives: garbage collection

Hypothesis: Reference stealing should be default (POLA) behaviour.

Those of you who have done programming in C++ are likely familiar with the auto_ptr smart pointer, or at least with coding standards that caution against its use. With more and more parts of the new C++ standard making their way into C++ compilers, a new smart pointer type, unique_ptr, is now available on the latest and hippest versions of some of the popular C++ compilers as part of the standard library.

This new smart pointer type is said to deprecate the old auto_ptr. If we combine the numerous C++ coding standards warning us against the use of auto_ptr with the C++ standards committee deprecating it, it is hard not to jump to the conclusion that there must be something very wrong with this type of smart pointer.

In C++ we have multiple ways to allocate objects and other data types. We have two ‘no-worries’ ways of allocating and one that requires programmers to think about what they are doing, as the language provides no default resource management for that type of allocation. An object or other data type can be allocated:

  1. On the stack
  2. As part of a composite object
  3. On the heap.

When an object is allocated on the heap using new, the new invocation returns a raw pointer to the newly allocated object. To any seasoned C++ programmer, a bare raw pointer in a random place in the code is an abomination, something that needs fixing. You know when you go to the beach and run into your mom’s old friend wearing a bathing suit that is way too tiny for someone of her age who doesn’t go to the gym every second day, and you think: “damn, please put some clothes on”?

Well, that’s how any C++ programmer (that is, any who is good at his/her job) looks at raw heap pointers. In C++ there are many clothes to choose from for raw heap pointers. The most elementary clothes are simple RAII objects that wrap a heap object pointer inside a stack object and thus control the heap object’s lifetime. On the other side of the spectrum there is the shared_ptr, which, as its name betrays, is a shared reference counted smart pointer, the closest you will get to garbage collection in C++. Somewhere between those two we find a strange little creature called auto_ptr.

The auto_ptr is probably the simplest of all smart pointers. It basically works like a RAII object with pointer semantics most of the time, but it has one funny little property that makes it behave rather oddly: copy and assignment will, under the hood, result in a move instead of a copy. There are situations where it’s clear that this is desired behavior, for example when implementing the factory method or abstract factory design patterns. These patterns can return an auto_ptr instead of a raw pointer, and the move will be exactly what anyone would want and expect.

The problem however arises when an auto_ptr is used as an argument to a method. A method might steal our pointer, for example by assigning it to a temporary or to a member variable of the invoked object. I believe everyone can see there is a problem in there somewhere, but maybe, just maybe, there is actually a glimpse of a solution to a problem hidden in there as well. A problem, by the way, that most programmers, especially C++ programmers, don’t realize exists.

So let’s have a look at the problem. When trying to build high integrity systems, there is one important principle that we must apply to every aspect of the system design and development. This principle is called the Principle Of Least Authority, or POLA for short. According to this principle we should, to state it simply, reduce the dependencies and authority of each piece of code to the minimum possible. So what type of dependencies do we see in our pointer stealing example? We see an object Bob with a method foo, and we see that this foo method takes some kind of reference to a Carol object. We see some object Alice with a reference to Bob and a reference to Carol that invokes:

  • bob->foo(somewrapper(carol))

So what does bob->foo do with carol, and more importantly, if we treat bob and its foo method like a black box, what would we want bob->foo to be able to do with carol? Basic usable security doctrine states that the default action should be the most secure one. So what would be the sane, most secure default option here? Basically this depends on the concurrency properties of bob with respect to alice. First let’s assume that bob is a regular (non-active) object. Then the least authority option would be for somewrapper to be a RAII style temporary object that, when going out of scope, revokes access to carol. That way bob’s access to carol would be scoped to the foo invocation.

Basically, when building software using a multi-threading or multi-process architecture, there are two modes of concurrency we could use.

  1. Shared state concurrency, using raw threads, mutexes, semaphores, etc.
  2. Message passing concurrency, using communicating event loops, lock-free FIFOs, active objects, etc.

When looking for high system-integrity guarantees, shared state solutions tend to be the ones we should steer away from, so basically message passing concurrency should be the way to go here.

Now let’s assume that we are using a message passing style of concurrency where alice and bob (but not necessarily carol) are active objects. What would be the sane least authority action for this invocation? The foo invocation scoping would make absolutely no sense in this scenario. The foo method will be handled asynchronously by bob. This means we do need to give bob a reference to carol that will not be automatically revoked. Given that carol is not necessarily an active object, this reference would thus make carol an object that is shared between two concurrently running active objects. Now, given that sharing between two threads of operation implies quite a bit more authority and thus less implicit integrity for the overall system, the safest default operation may be exactly what we considered to be wrong with auto_ptr in C++. With single core computers becoming hard to find and the normal number of cores in a computer still going up, I believe it’s safe to say that we should assume modern programs will use some form of concurrent processing, which according to POLA should probably be message passing concurrency. That is, the foo method should by default steal the reference to carol, and alice should be explicit if she wants to retain her own copy of the reference to carol. This line of reasoning brings me to the following hypothesis with respect to least authority.

  • Reference stealing should be default (POLA) behavior for method invocation.

If this hypothesis is right, then the C++ auto_ptr flaw that the new C++ standard claims to fix by deprecating auto_ptr might actually not be a flaw that needs fixing, but a starting point for, in line with usable security doctrine, making high integrity concerns into default behavior.


Why garbage collection is anti-productive.

While the non-GC language C(++) is still the most popular language in the world (at least if we count C and C++ as the same language; if we look at them as separate languages, Java is the most popular), GC languages as a whole currently make up a larger share than non-GC languages, and new languages tend to build on virtual machine environments that mandate GC. It thus would be good to look at what GC has gotten us so far.

Let’s start by looking at what GC is for. GC is basically a form of automated resource management for one specific type of resource: Random Access Memory, or RAM. In computer programming there are multiple types of resources: RAM, open file handles, locks, threads, etc. GC automates resource management for just one of these resource types: memory.

This means that in a GC language there are two types of resources: memory, a resource you should no longer worry about, and everything else, for which you still have to do manual resource management.

Looking at this, it would seem that GC is a pretty good deal. You no longer have to worry about resource management for memory, so you gain productivity there, and for other resources nothing changes. But it is this last assumption that turns out to be a giant mistake.

To understand why thinking nothing changes for non-memory resource management in GC languages is a mistake, let’s have a look at how a C++ programmer would do resource management on raw memory.

#include <cstddef>

class RawMemoryWrapper {
    char *mRaw;
    // Forbid copying: two wrappers owning one buffer would double-delete.
    RawMemoryWrapper(const RawMemoryWrapper &);
    RawMemoryWrapper &operator=(const RawMemoryWrapper &);
public:
    // Resource Acquisition Is Initialization: the constructor acquires.
    RawMemoryWrapper(size_t bytes) : mRaw(new char[bytes]) {}
    // Going out of scope releases the resource automatically.
    ~RawMemoryWrapper() { delete[] mRaw; }
    // Cast operator: lets the wrapper stand in for the raw resource.
    operator char *() { return mRaw; }
};

We see a special class (referred to in C++ as a RAII class) that wraps the resource. When a RAII object is created, its constructor acquires the resource; hence the name ‘Resource Acquisition Is Initialization’, or RAII. When the RAII object goes out of scope, its destructor is called and the resource is released auto-magically. A third construct that is not actually part of the RAII idiom but is often used together with RAII is a C++ cast operator that allows RAII classes to be used as drop-in replacements for the resource they wrap. RAII lets us wrap any resource we might want to use in a fully encapsulated way. You can make composite classes with or without RAII member variables; there is nothing transitive about resources in RAII, which is a good thing from an OO perspective.

Although a RAII class isn’t that big, it is clear that for memory there IS overhead in having to write RAII classes when compared to GC. From this we might conclude that this alone makes GC a winner; after all, if on one front we win and on another front we don’t lose, we would have won. This assumption however is wrong, because of one elementary property of GC. GC works with the abstraction of eternal objects. If an object goes out of scope, basically nothing happens. No destructor is called, no memory is freed. In most GC implementations, even when the memory the object held is freed, no destructor is called. This property basically means that GC excludes the possibility of using the RAII idiom on non-memory resources.

So what is the cost of not being able to use RAII for non-memory resources in GC languages? This cost can be expressed in one sentence that shocked me when I grasped its implications:

  • Without RAII, ‘being a resource’ becomes transitive to composition.

Once you grasp what this means for object orientation, you will be as shocked as I was by the popularity of GC and by the fact that the productivity gain GC offers is so clearly a myth. To me it has become absolutely clear that GC not only fails to improve resource management related productivity, it actually decreases it in such a dramatic way that the persistence of the productivity myth is absolutely shocking.

Imagine that you have a relatively large program. It will contain many resources in need of manual management. With RAII classes these resources could be managed once, deep down in some library code. In a large program, compositional patterns easily go 10 or 20 layers deep, and without RAII, given the transitive nature of being a resource, each of those layers becomes a resource itself: there would hardly be any non-resource classes left in the higher abstraction layers of such a project.

So we started off trying to reduce the amount of manual resource management, but what happened? We exploded the amount of manual resource management, and in the process broke the encapsulation principle of good OO programming practice. A user of a class should not need to know whether the class being used happens to be composed using non-memory resources. We can clearly state that:

  • GC breaks OO encapsulation.

But it doesn’t end there. It gets worse. Given that we are using OO design and polymorphism, the interface of a class should not change depending on the class’s implementation. So if we can conceive of a class that could be implemented either without a composition that uses non-memory resources or with one that does, then, given the nature of being a resource in GC languages, we MUST when designing an interface assume there might be an implementation that is composed using non-memory resources. This means that we need to treat any polymorphic interface as a resource, even if its implementation isn’t composed using any actual non-memory resources. We can thus come to the following conclusion:

  • GC breaks OO polymorphism.

This in turn means that our already large need for manual resource management expands once more, to even larger dimensions.

So lets look at all the facts once more:

  • Without RAII, ‘being a resource’ becomes transitive to composition.
  • GC breaks OO encapsulation.
  • GC breaks OO polymorphism.
  • As a result GC ‘explodes’ the need for manual resource management and thus reduces productivity.

With GC having started out as a way to reduce manual resource management and increase productivity, it’s pretty clear that GC has ended up being one big mistake.