2014-10-21

Why the 'final' (Java) or 'readonly' (C#) keyword is a bad idea

A quick look at the source code that I have written over the past couple of decades, in various work projects and hobby projects of mine, shows that the percentage of class member variables that I declare as 'final' in Java or as 'readonly' in C# is in excess of 90%; I declare only about 10% of them as non-final. A similar ratio seems to apply to parameters and locals: the vast majority of them are effectively final, meaning that even though I do not explicitly declare them as final, the only time I ever write to them is when I initialize them. I would have been declaring them as final, if doing so were not so tedious.
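
To make the pattern concrete, here is a minimal, hypothetical Java class (the class, its fields, and its method are my own invention, not taken from any of the projects mentioned above) in which every member variable is final, and in which a local is explicitly declared final even though leaving it effectively final would have sufficed:

    // Hypothetical example: a class whose fields are assigned exactly once.
    public final class Rectangle
    {
        private final int width;   // final: assigned only in the constructor
        private final int height;  // final: assigned only in the constructor

        public Rectangle( int width, int height )
        {
            this.width = width;
            this.height = height;
        }

        public int area()
        {
            // Effectively final either way; declaring it 'final' merely makes that explicit.
            final int result = width * height;
            return result;
        }
    }

Nothing about the behavior of this class would change if the 'final' on the local variable were omitted; the keyword merely documents, at the cost of some tedium, what is already true.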

My percentages may be higher than those of the average programmer out there, but I shall be bold enough to claim that this is probably because I pay more attention to code quality than the average programmer does.

I will even be so bold as to say that the above is an understatement.

In my book, there is a simple rule: if it can be made final, it absolutely ought to be made final. If there is even a remote chance of making it final, that chance should be pursued tenaciously.

To put it in other words: it is my firm conviction that good code uses 'final' a lot, and bad code uses it sparingly.

So, in light of the fact that immutability is a most excellent quality, and that actual usage shows that values in well-written code are in fact immutable far more often than not, it seems to me that the 'final' keyword is a bad idea. Not in the sense that things should not be final, of course, but in the sense that 'final' should be the default nature of all values, and therefore unnecessary. A keyword like 'mutable' should be used to explicitly indicate that something is non-final and therefore allowed (and actually expected) to be modified.

I hope that one day we will see a language that implements this realization.

UPDATE 2015-05-15: It turns out that Rust does this with a 'mut' keyword.

2014-09-21

Assertions and Testing

So, since we do software testing, we should quit placing assert statements in production code, right? Let me count the ways in which this is wrong:

(TL;DR: skip to the paragraph containing a red sentence and read only that.)

1. Assertions are optional.


Each programming language has its own mechanism for enabling or disabling assertions. In languages like C++ and C#, there is a distinction between a release build and a debug build, and assertions are generally only enabled in the debug build. Java has a simpler mechanism: there is only one build, but assertions do not execute unless the -enableassertions option (-ea for short) is specified on the command line that starts the virtual machine. Therefore, if someone absolutely cannot stand the idea that assertions may be executing in a production environment, they can simply refrain from supplying the -ea option; problem solved.
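
For illustration, here is a minimal, hypothetical Java program (the class name and values are of my own invention) showing the mechanism: the assert statement is a no-op when the program is launched plainly, and it fires only when the virtual machine is launched with -ea.

    // AssertDemo.java (hypothetical)
    //
    //     javac AssertDemo.java
    //     java AssertDemo        <-- assertions disabled: prints "|x| = 5" and exits normally
    //     java -ea AssertDemo    <-- assertions enabled: throws AssertionError before printing
    public class AssertDemo
    {
        public static void main( String[] args )
        {
            int x = -5;
            assert x >= 0 : "x must be non-negative, but was " + x; // checked only under -ea
            System.out.println( "|x| = " + Math.abs( x ) );
        }
    }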