2014-12-30

Movie: Dawn of the Planet of the Apes

This post does not contain any spoilers, unless you would consider as a spoiler my opinion on how the quality of the movie varies as the movie progresses.  (Or the image below.)

[Image from the movie. Picture source: cgmeetup.com]
So, I watched Dawn of the Planet of the Apes yesterday, and what can I say, wow, it blew my mind. I started watching it with very low expectations, and I was very pleasantly surprised for about one hour and fifty minutes of its total two hour and ten minute duration, which includes the end titles. Then, starting with the "I am saving the human race" incident, it transformed into the crap that I had expected from the beginning, perhaps even worse, but that does not annul the fact that the first one hour and fifty minutes were one of the most pleasant movie watching experiences I have had in quite some time.

2014-11-27

The transaction pattern and the feature badly missing from exceptions.

Exceptions are the best thing since sliced bread.  If you use them properly, you can write code of much higher quality than without them.  I think of the old days before exceptions, and I wonder how we managed to get anything done back then.  There is, however, one small but very important thing missing from the implementations of exceptions in all languages that I know of, and it has to do with transactions.

At a high level, exception handling looks structurally similar to transactional processing.  In both cases we have a block of guarded code, during the execution of which we acknowledge the possibility that things may go wrong, in which case we are given the opportunity to leave things exactly as we found them. So, given this similarity, it is no wonder that one can nicely facilitate the other, as this sample code shows:
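
The sample code is not reproduced in this excerpt, so here is a minimal sketch of mine of the pattern being described, in Java; the Transaction class and its begin, commit and rollback operations are hypothetical placeholders for whatever transactional resource is actually being guarded:

    // A minimal sketch of the idea (mine, not the post's original listing); the
    // Transaction class and its operations are hypothetical placeholders.
    final class Transaction
    {
        void commit()   { /* make the changes permanent */ }
        void rollback() { /* undo everything, leaving things exactly as we found them */ }
    }

    final class TransactionalRunner
    {
        static Transaction begin() { return new Transaction(); }

        // The guarded block: if the operation completes, commit; if it throws,
        // roll back and let the exception propagate to whoever can handle it.
        static void runInTransaction( Runnable operation )
        {
            Transaction transaction = begin();
            boolean ok = false;
            try
            {
                operation.run();
                ok = true;
            }
            finally
            {
                if( ok )
                    transaction.commit();
                else
                    transaction.rollback();
            }
        }
    }

A caller would then write something like runInTransaction( () -> doWork() ) and rely on the finally block to either commit or roll back, depending on whether the guarded code completed or threw.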

2014-11-09

IntelliJ IDEA feature request: editor actions for moving the caret left & right with Column Selection.

I just submitted a feature request for IntelliJ IDEA.

It can be found here: https://youtrack.jetbrains.com/issue/IDEA-132626

Feature request: editor actions for moving the caret left & right with Column Selection.


It is a fundamental axiom of user interface design that modes kill usability. Having to enter a special mode in order to accomplish something and then having to remember to exit that mode in order to accomplish anything else is bad, bad, bad user interface design, at least when there is even a slight chance that the same thing could be achieved without a special mode. (Think of VI for example: it is the lamest editor ever, and almost all of its lameness is due to the fact that it relies so heavily on modes.)

Unfortunately, programmers tend to think a lot in terms of modes, so the first time the user of an editor asked the programmer of that editor for the ability to do block selection ("column selection" in IntelliJ IDEA parlance) the programmer said "sure, I will add a new mode for this." That's how problems start.

2014-10-21

Why the 'final' (Java) or 'readonly' (C#) keyword is a bad idea

A quick look at the source code that I have written over the past couple of decades in various work projects and hobby projects of mine shows that the percentage of class member variables that I declare as 'final' in Java or as 'readonly' in C# is in excess of 90%. I declare only about 10% of them as non-final. A similar ratio seems to apply to parameters and locals: the vast majority of them are effectively final, meaning that even though I do not explicitly declare them as final, the only time I ever write to them is when I initialize them. I would have been declaring them as final if doing so were not so tedious.

My percentages may be higher than the percentages of the average programmer out there, but I shall be bold enough to claim that this is probably because I pay more attention to quality of code than the average programmer out there.

I will even be as bold as to say that the above was an understatement.

In my book, there is a simple rule: if it can be made final, it absolutely ought to be made final. If there is even a remote chance of making it final, that chance should be pursued tenaciously.

To put it in other words, it is my firm conviction that good code uses 'final' a lot, and bad code uses 'final' sparingly.

So, in light of the fact that immutability is a most excellent quality, and the fact that actual usage shows that values in well written code are in fact immutable far more often than not, it seems to me that the 'final' keyword is a bad idea. Not in the sense that things should not be final, of course, but in the sense that 'final' should be the default nature of all values, and therefore unnecessary. A keyword like 'mutable' should be used to explicitly indicate that something is non-final and therefore allowed (and actually expected) to be modified.
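
To make the inversion concrete, here is an illustrative snippet of mine (not from the original argument): in today's Java the overwhelmingly common case has to be spelled out everywhere, while under the proposed default only the one genuinely mutable field would need a keyword such as 'mutable'. The Account class and its fields are hypothetical.

    // Illustrative only: today's Java, with 'final' spelled out everywhere it applies.
    // Under the proposed default, all these 'final' keywords would disappear, and only
    // 'balance' would need to be marked with something like 'mutable'.
    final class Account
    {
        private final String id;     // the ~90% case: never changes after construction
        private final String owner;  // the ~90% case: never changes after construction
        private long balance;        // the ~10% case: genuinely mutable

        Account( final String id, final String owner, final long initialBalance )
        {
            this.id = id;
            this.owner = owner;
            this.balance = initialBalance;
        }

        void deposit( final long amount )
        {
            final long newBalance = balance + amount; // locals, too, are almost always effectively final
            balance = newBalance;
        }
    }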

I hope one day we will see a language which implements this realization.

UPDATE 2015-05-15: It turns out that Rust does this with a 'mut' keyword.

2014-09-21

Assertions and Testing

So, since we do software testing, we should quit placing assert statements in production code, right? Let me count the ways in which this is wrong:

(TL;DR: skip to the paragraph containing a red sentence and read only that.)

1. Assertions are optional.


Each programming language has its own mechanism for enabling or disabling assertions. In languages like C++ and C# there is a distinction between a release build and a debug build, and assertions are generally only enabled in the debug build. Java has a simpler mechanism: there is only one build, but assertions do not execute unless the -enableassertions (-ea for short) option is specified in the command line which started the virtual machine. Therefore, if someone absolutely cannot stand the idea that assertions may be executing in a production environment, they can simply refrain from supplying the -ea option; problem solved.
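
For example (an illustrative snippet of mine, not from the original post), the assert statement below is not even evaluated under a plain 'java AssertionDemo' invocation, and only fires when the virtual machine is started with 'java -ea AssertionDemo':

    public final class AssertionDemo
    {
        public static void main( String[] args )
        {
            int balance = withdraw( 100, 150 );
            // Executed only if the VM was started with -enableassertions (-ea);
            // otherwise the expression is not even evaluated.
            assert balance >= 0 : "balance went negative: " + balance;
            System.out.println( balance );
        }

        private static int withdraw( int balance, int amount )
        {
            return balance - amount;
        }
    }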

2014-09-19

My notes on "Spring in Action" (Manning)

My notes on the "Spring in Action" book by Craig Walls and Ryan Breidenbach from Manning Publications Co.



2014-08-25

On Electronic Cigarettes

I have been vaping for about two and a half years now, and it has been one of the best things that have ever happened to me.  Here are some of my thoughts on the subject, written in the form of a "how-to" guide. It may change as I gain more knowledge.

Like most people, I started with various odd contraptions of the kind that you receive as presents, and I quickly realized that the way to go is a specific more-or-less-standard type of device which, rather unsurprisingly, is the type of device that you most often see carried by people who have picked up the habit. It consists of a USB-rechargeable battery, a replaceable bit called the vaporizer, and a tank with a mouthpiece.  These parts fit together by screwing one into the other (the mouthpiece snaps onto the tank), and the dimensions of all the junctions are standard, so you can replace each part as needed, and you can even mix and match components from different brands, since they adhere to the same standard.

Standard versus non-standard


There exists a variety of other types of devices which either require their own special charger, or they store the fluid in a sponge instead of a tank, or they are different in this or that or the other respect which makes them incompatible with standard components. My experience says that it is best to stay as far away from them as possible. Sure, some of them look sleek and exclusive, but lack of interoperability results in an unreasonably high extra cost, for benefits which are usually only aesthetic. You might even find a one-of-a-kind system for a price which might seem comparable to the cost of a bulky and motley system put together out of standard components, but in reality the one-of-a-kind system is far more expensive, because if one aspect of it turns out to not suit you, or if one part of it gets lost or broken, the entire system must usually be tossed, while with standard components you only replace the part that needs replacement. If, in addition to all this, you consider the fact that certain components of electronic cigarettes (namely, the batteries) are known beforehand to have a limited lifetime, buying a special system which is guaranteed to have to be thrown away after a few months makes no sense at all, in my opinion.

2014-07-18

Benchmarking Java 8 lambdas

Now that Java 8 is out, I was toying in my mind with the concept of a new assertion mechanism which uses lambdas. The idea is to have a central assertion method that works as follows: if assertions are enabled, a supplied method gets invoked to evaluate the assertion expression, and if it returns false, then another supplied method gets invoked to throw an exception. If assertions are not enabled, the assertion method returns without invoking either of the supplied methods. This would provide more control over whether assertions are enabled or not for individual pieces of code, as well as over the type of exception thrown if the assertion fails. It would also have the nice-to-have side effect of making 100% code coverage achievable, albeit only apparently so.

Naturally, I wondered whether the performance of such a construct would be comparable to the performance of existing constructs, namely, the 'assert expression' construct and the 'if( checking && expression ) throw ...' construct. I was not hoping for equal performance, not even ballpark equal, just within the same order of magnitude.

Well, the result of the benchmark blew my mind.

Congratulations to the guys that made Java 8, because it turns out that all three constructs take roughly the same amount of time to execute!

Here is my code:
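
The original listing is not included in this excerpt; what follows is a minimal sketch of mine of the mechanism described above, with hypothetical names, showing the central assertion method alongside the two existing constructs it was compared against:

    import java.util.function.BooleanSupplier;
    import java.util.function.Supplier;

    public final class LambdaAssertions
    {
        // Hypothetical flag standing in for whatever controls this group of assertions.
        public static boolean enabled = true;

        // The central assertion method: evaluates the supplied expression only if
        // assertions are enabled, and throws the supplied exception if it is false.
        public static void assertTrue( BooleanSupplier expression, Supplier<RuntimeException> exceptionFactory )
        {
            if( !enabled )
                return;
            if( !expression.getAsBoolean() )
                throw exceptionFactory.get();
        }

        public static void main( String[] args )
        {
            int index = 3;

            // 1. The lambda-based construct.
            assertTrue( () -> index >= 0, () -> new IllegalArgumentException( "index: " + index ) );

            // 2. The 'assert expression' construct (executes only with -ea).
            assert index >= 0 : "index: " + index;

            // 3. The 'if( checking && expression ) throw ...' construct.
            if( enabled && !(index >= 0) )
                throw new IllegalArgumentException( "index: " + index );
        }
    }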

Benchmarking code written in Java or C# (or any GCed, JITted, VM-based language)

Sometimes we need to measure the time it takes for various pieces of code to execute in order to determine whether a certain construct takes significantly less time to execute than another. It sounds like a pretty simple task, but anyone who has ever attempted to do it knows that simplistic approaches are highly inaccurate, and achieving any accuracy at all is not trivial.

Back in the days of C and MS-DOS things were pretty straightforward: you would read the value of the system clock, run your code, read the value of the clock again, subtract the two, and that was how much time it took to run your code. The rather coarse resolution of the system clock would skew things a bit, so one trick you would at the very least employ was to loop, waiting for the value of the system clock to change, then start running your code, and stop at another transition of the clock value. Another popular hack was to run benchmarks with interrupts disabled. Yes, back in those days the entire machine was yours, so you could actually do such a thing.
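
For illustration, here is a small sketch of mine of that clock-edge trick, expressed in Java terms rather than in C; the class and method names are hypothetical:

    final class ClockEdge
    {
        // Busy-wait until the coarse clock ticks over, so that timing starts exactly
        // at a tick boundary instead of somewhere in the middle of a tick.
        static long waitForNextTick()
        {
            final long previous = System.currentTimeMillis();
            long now;
            do
            {
                now = System.currentTimeMillis();
            }
            while( now == previous );
            return now;
        }
    }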

Nowadays, things are far more complicated. For one thing, the entire machine tends to never be yours, so you cannot disable interrupts. Other threads will pre-empt your thread, and there is nothing you can do about it; you just have to accept some inaccuracy from it. Luckily, with modern multi-core CPUs this is not as much of an issue as it used to be, but in modern VM-based languages like Java and C# we have additional and far more severe inaccuracies introduced by garbage collection and jitting. Fortunately, their impact can be reduced.

In order to avoid inaccuracies due to jitting, we always perform one run of the code under measurement before the measurements begin. This gives the JIT compiler a chance to do its job, so it will not be getting in the way later, during the actual benchmark.
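
For instance (an illustrative sketch of mine, not the actual benchmarking code), a measurement harness along these lines performs one un-timed warm-up invocation before any timing takes place:

    final class Benchmark
    {
        // A minimal sketch of the warm-up idea: run the code under measurement once,
        // un-timed, so that the JIT compiler has had a chance to do its job before
        // any timing begins.
        static long averageNanosecondsPerRun( Runnable codeUnderMeasurement, int iterations )
        {
            codeUnderMeasurement.run(); // warm-up run; not timed
            final long start = System.nanoTime();
            for( int i = 0; i < iterations; i++ )
                codeUnderMeasurement.run();
            return (System.nanoTime() - start) / iterations;
        }
    }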