2015-03-03

Minimal MySQL Memory Footprint

The following image shows the mind-boggling amount of memory occupied by the MySQL 5.6 server on Windows 7 64-bit.


(This is despite the fact that during installation I explicitly specified that this MySQL server is going to be used for development, not for production.)

A quick search on the web shows that this preposterous amount of memory can be reduced to something less preposterous by editing my.ini (usually found in some place like ProgramData\MySQL\MySQL Server 5.6\) and replacing the following line:
table_definition_cache=1400

with this line:
table_definition_cache=400
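
(For reference, this setting lives in the [mysqld] section of my.ini, so after the edit the relevant part of the file should look roughly as follows; the neighboring settings will of course vary from installation to installation.)

[mysqld]
# ... other settings left as they were ...
table_definition_cache=400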
Unfortunately, even though the savings are huge, the memory footprint of MySQL is still nothing short of gargantuan:

2014-12-30

Movie: Dawn of the Planet of the Apes

This post does not contain any spoilers, unless you would consider as a spoiler my opinion on how the quality of the movie varies as the movie progresses.  (Or the image below.)

Picture source: cgmeetup.com
So, I watched Dawn of the Planet of the Apes yesterday, and what can I say, wow, it blew my mind. I started watching it with very low expectations, and I was very pleasantly surprised for about one hour and fifty minutes of its total two-hour-and-ten-minute duration, which includes the end titles. Then, starting with the "I am saving the human race" incident, it transformed into the crap that I had expected from the beginning, perhaps even worse, but that does not annul the fact that the first one hour and fifty minutes were one of the most pleasant movie-watching experiences I have had in quite some time.

2014-11-27

The transaction pattern and the feature badly missing from exceptions.

Exceptions are the best thing since sliced bread.  If you use them properly, you can write code of much higher quality than without them.  I think of the old days before exceptions, and I wonder how we managed to get anything done back then.  There is, however, one small but very important thing missing from the implementations of exceptions in all languages that I know of, and it has to do with transactions.

At a high level, exception handling looks structurally similar to transactional processing.  In both cases we have a block of guarded code, during the execution of which we acknowledge the possibility that things may go wrong, in which case we are given the opportunity to leave things exactly as we found them. So, given this similarity, it is no wonder that one can nicely facilitate the other, as this sample code shows:
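
A minimal Java sketch of what such code might look like follows. (The Transaction interface and the doInTransaction() method are hypothetical stand-ins for whatever transactional resource is actually in use.)

public final class TransactionExample
{
    public interface Transaction
    {
        void commit();
        void rollback();
    }

    public static void doInTransaction( Transaction transaction, Runnable work )
    {
        boolean success = false;
        try
        {
            work.run();                 // the guarded code; may throw
            success = true;             // reached only if no exception was thrown
        }
        finally
        {
            if( success )
                transaction.commit();   // all went well: make the changes permanent
            else
                transaction.rollback(); // an exception is in flight: leave things exactly as we found them
        }
    }
}

Note that the boolean flag is needed precisely because a 'finally' block has no way of knowing whether it is executing due to normal completion or due to an exception in flight.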

2014-11-09

IntelliJ IDEA feature request: editor actions for moving the caret left & right with Column Selection.

I just submitted a feature request for IntelliJ IDEA.

It can be found here: https://youtrack.jetbrains.com/issue/IDEA-132626

Feature request: editor actions for moving the caret left & right with Column Selection.


It is a fundamental axiom of user interface design that modes kill usability. Having to enter a special mode in order to accomplish something and then having to remember to exit that mode in order to accomplish anything else is bad, bad, bad user interface design, at least when there is even a slight chance that the same thing could be achieved without a special mode. (Think of VI for example: it is the lamest editor ever, and almost all of its lameness is due to the fact that it relies so heavily on modes.)

Unfortunately, programmers tend to think a lot in terms of modes, so the first time the user of an editor asked the programmer of that editor for the ability to do block selection ("column selection" in IntelliJ IDEA parlance) the programmer said "sure, I will add a new mode for this." That's how problems start.

2014-10-21

Why the 'final' (Java) or 'readonly' (C#) keyword is a bad idea

A quick look at the source code that I have written over the past couple of decades in various work projects and hobby projects of mine shows that the percentage of class member variables that I declare as 'final' in Java or as 'readonly' in C# is in excess of 90%. I declare only about 10% of them as non-final. Parameters and locals show a similar ratio: the vast majority of them are effectively final, meaning that even though I do not explicitly declare them as final, the only time I ever write to them is when I initialize them. I would have been declaring them as final, too, if doing so was not so tedious.
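
By way of illustration, here is a small, made-up Java class in this style: its members are final, and the only non-final member is a lazily computed cache.

public final class Point
{
    private final double x;            // final: assigned exactly once, in the constructor
    private final double y;            // final: assigned exactly once, in the constructor
    private double cachedLength = -1;  // one of the rare non-final members: a lazily computed cache

    public Point( double x, double y )
    {
        this.x = x;
        this.y = y;
    }

    public double length()
    {
        if( cachedLength < 0 )
            cachedLength = Math.sqrt( x * x + y * y ); // the only write outside of initialization
        return cachedLength;
    }
}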

My percentages may be higher than the percentages of the average programmer out there, but I shall be bold enough to claim that this is probably because I pay more attention to quality of code than the average programmer out there.

I will even be as bold as to say that the above was an understatement.

In my book, there is a simple rule: if it can be made final, it absolutely ought to be made final. If there is even a remote chance of making it final, that chance should be pursued tenaciously.

To put it in other words, it is my firm conviction that good code uses 'final' a lot, and bad code uses 'final' sparingly.

So, in light of the fact that immutability is a most excellent quality, and the fact that actual usage shows that values in well written code are in fact immutable far more often than not, it seems to me that the 'final' keyword is a bad idea. Not in the sense that things should not be final, of course, but in the sense that 'final' should be the default nature of all values, and therefore unnecessary. A keyword like 'mutable' should be used to explicitly indicate that something is non-final and therefore allowed (and actually expected) to be modified.

I hope that one day we will see a language that implements this realization.

UPDATE 2015-05-15: It turns out that Rust does this with a 'mut' keyword.