Mike Nakis on Code Craftsmanship

In a recent job interview I was asked about my favorite means of ensuring the quality of the code that I write. Off the top of my head I could give a few answers, but it occurred to me afterwards that I could of course have said a lot more. I will try to make a list here.

Please note that in this list I try to avoid repeating things that are common practice, or common knowledge from well-read books.  So, for example, I will not mention "Use inversion of control" here, it goes without saying.  I will try to say things that might not be common knowledge, or that might even be controversial.

When I work for an employer I follow the practices of the house, but when I write software for myself, I tend to do the following:
  • Assert everything.  When I look at code, I don't ask myself "should I assert that?" Instead, I ask myself "is there anything that I forgot to assert?"  The idea is to assert everything that could possibly be asserted, leave nothing unasserted. Assertions take care of white-box testing your code, so software testing can then be confined to the realm of strictly black-box testing, as it should.
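    For example, here is a minimal sketch of what this looks like in practice. (The `Account` class and its invariants are hypothetical, made up purely for illustration; run with `java -ea` so that assertions are enabled.)

```java
// A hypothetical money-transfer routine that asserts every invariant it
// relies on, not just the ones that "seem worth checking".
public class Account {
    private int balanceInCents;

    public Account(int initialBalanceInCents) {
        assert initialBalanceInCents >= 0 : "initial balance must be non-negative";
        this.balanceInCents = initialBalanceInCents;
    }

    public void transferTo(Account target, int amountInCents) {
        assert target != null;
        assert target != this : "cannot transfer to self";
        assert amountInCents > 0 : "amount must be positive";
        assert amountInCents <= balanceInCents : "insufficient funds";
        int totalBefore = balanceInCents + target.balanceInCents;
        balanceInCents -= amountInCents;
        target.balanceInCents += amountInCents;
        // Even the "obviously true" post-condition gets asserted.
        assert balanceInCents + target.balanceInCents == totalBefore;
    }

    public int balanceInCents() { return balanceInCents; }

    public static void main(String[] args) {
        Account a = new Account(1000);
        Account b = new Account(0);
        a.transferTo(b, 250);
        assert a.balanceInCents() == 750;
        assert b.balanceInCents() == 250;
        System.out.println("ok");
    }
}
```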
  • Do black-box testing, not white-box testing. Heed the advice that says test against the interface, not the implementation. Unit Testing tests the implementation, so it should be avoided. Do Incremental Integration Testing instead, which only tests interfaces. With code that is chock-full of assertions, it works really well. Incidentally, this means that mocking, despite being an admirably nifty trick, should for the most part be unnecessary: if you have to resort to using mocks in your tests, then chances are that a) you have not designed something well, or b) you are doing white-box testing. (And if you have to do ungodly hacks like mocking final/non-virtual or static methods, then you clearly have a wrong design in your hands.)
  • Minimize state, maximize finality / readonlyness / immutability. Design so that as much code as possible is dealing with data that is immutable. Eschew technologies, frameworks, and techniques that prevent or hinder immutability.  If you are using auto-wiring, use constructor injection and store in final/readonly members.
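    For example, here is a minimal sketch of constructor injection into final fields. (The `GreetingService` class is hypothetical, invented for illustration; requires Java 16+ for `Stream.toList()`.)

```java
import java.util.List;

// A hypothetical service wired via constructor injection: every dependency
// and every piece of data lands in a final field, so the object is
// effectively immutable after construction.
public final class GreetingService {
    private final String salutation;
    private final List<String> names;

    public GreetingService(String salutation, List<String> names) {
        this.salutation = salutation;
        this.names = List.copyOf(names); // defensive, immutable copy
    }

    public List<String> greetings() {
        return names.stream().map(n -> salutation + ", " + n + "!").toList();
    }

    public static void main(String[] args) {
        GreetingService service = new GreetingService("Hello", List.of("Alice", "Bob"));
        System.out.println(service.greetings());
    }
}
```

    Note that `List.copyOf()` guards against the caller mutating the list after handing it over; with all fields final and the collection immutable, an instance can be shared freely across threads.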
  • Minimize flow control statements, especially the "if" statement. If there is any opportunity to design something so as to save some "if" statements, the opportunity should be pursued tenaciously.
  • Move the complexity to the design, not the code. If the code does not look so simple that even an idiot can understand it, this usually means that shortcuts have been taken in the design, which have to be compensated for with overly complex code. Make the design as elaborate as necessary so that the code can be as simple as possible. Overly complex code is usually the result of violations of the Single Responsibility Principle. Often, what you think of as a single responsibility can in fact be further sub-divided into more fundamental responsibilities. Almost all of the code that we write performs, or can be thought of as performing, some kind of transformation. Most transformations are of the simplest kind, converting just one type of entity into another, meaning that they involve only two participants. In some cases we have transformations that involve three participants, for example converting one kind of entity into another by consulting yet a third kind of entity, and they tend to be quite complex. Four or more participants in a single transformation invariably belong to the realm of the grotesquely complex and generally need to be broken down into multiple successive transformations of fewer participants each, introducing intermediate kinds of participants if necessary.
  • Refactor at the slightest suspicion that refactoring is due; do not allow technical debt to accumulate. Avoid the situation of being too busy mopping the floor to turn off the faucet.  Allow a percentage of sprints to explicitly handle nothing but technical debt elimination. Do not try to spread the task of refactoring over feature development sprints, because a) doing so will not make the refactoring effort magically disappear, b) you will not do a good enough job at it, and c) the time estimation of the features will suffer. If you are dealing with a project manager who fails to see where the "customer value" in refactoring is, quit that job, find another one.
  • Strive for abstraction and generalization. Reusability is often mistaken as being the sole aim and benefit of the urge to abstract and generalize, and so it is often met with the YAGNI objection: "You Ain't Gonna Need It". The objection is useful to keep in mind so as to avoid over-engineering, but at the same time it must not be followed blindly, because abstraction and generalization have inherent benefits regardless of the promise of reusability. Every problem of a certain complexity and above, no matter how application-specific it seems to be, can benefit from being divided into an abstract, general-purpose part, and a specialized, application-specific part. Strive to look for such divisions and realize them in the design. The general purpose code will be easier to understand because it will be implementing an abstraction. The application code will be easier to understand because it will be free from incidental complexity.
  • Use domain-specific interfaces. Encapsulate third-party libraries behind interfaces of your own devising, tailored to your specific application domain. Strive to make it so that any third-party library can be swapped with another product without you having to rewrite application logic to achieve this. Conventional wisdom says the opposite: we have all heard arguments like "the best code is the code you don't write" (makes me want to invest in the business of not writing software) or that "a third-party library will be better documented than your stuff" (presumably because documentation is a skill your developers have not mastered) or that "if you run into trouble with a library, you can ask for help on stackoverflow, while if you run into trouble with something you have developed in-house, you are stuck" (presumably because your developers know nothing of it, despite working with it every day.) The truth with application development is that the more you isolate the application logic from peripheral technologies, the more resistant your application logic becomes to the ever changing technological landscape, a considerable part of which is nothing but ephemeral fashions, the use of which is dictated by C.V. Driven Development (https://martinjeeblog.com/2015/03/11/cv-driven-development-cdd/) rather than by technological merit.
  • Strive for what is simple, not for what looks easy.  The simple often coincides with the easy, but sometimes the two are at odds with each other. Eschew languages and frameworks that provide the illusion of easiness at the expense of simplicity. The fact that a particular framework makes "hello, world!" an easy one-liner probably means that the ten-thousand-liner that you are aiming for will be both unnecessarily complicated, and unnecessarily hard to write.
    Watch this: https://www.infoq.com/presentations/Simple-Made-Easy
  • Avoid binding by name like the plague. Avoid as much as possible mechanisms whose modus operandi is binding by name: use them only for interfacing with external entities, never for communication between your own modules. Yes, this includes the use of REST. QQ.
  • Always use strong typing. Avoid weak typing and avoid languages and frameworks that require it. Yes, this includes pretty much all scripting languages. QQ.
  • Strive for debuggability. For example, do not overdo it with the so-called "fluent" style of invocations, because they are not particularly debuggable.
  • Strive for testability.  Design interfaces that expose all functionality that makes sense to expose, not only functionality that is known to be needed by the code that will invoke them. For example, the application may only need an interface to expose a `register()` and `unregister()` pair of methods, but `isRegistered()` also makes sense to expose, and it will incidentally facilitate (black-box) testing. (This is a trivial example, hopefully you see the bigger picture.)
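    The trivial example above can be sketched as follows. (The `Registry` interface and its hash-based implementation are hypothetical, made up for illustration; run with `java -ea`.)

```java
import java.util.HashSet;
import java.util.Set;

// The application may only ever call register() and unregister(), but
// exposing isRegistered() makes the interface black-box testable.
interface Registry<T> {
    void register(T item);
    void unregister(T item);
    boolean isRegistered(T item);
}

final class HashRegistry<T> implements Registry<T> {
    private final Set<T> items = new HashSet<>();

    @Override public void register(T item) {
        boolean added = items.add(item);
        assert added : "already registered: " + item;
    }

    @Override public void unregister(T item) {
        boolean removed = items.remove(item);
        assert removed : "not registered: " + item;
    }

    @Override public boolean isRegistered(T item) { return items.contains(item); }
}

public class RegistryDemo {
    public static void main(String[] args) {
        Registry<String> registry = new HashRegistry<>();
        registry.register("widget");
        assert registry.isRegistered("widget");
        registry.unregister("widget");
        assert !registry.isRegistered("widget");
        System.out.println("ok");
    }
}
```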
  • Enable all warnings that can possibly be enabled. The fact that a certain warning may, on rare occasions, be issued on legitimate code, is no reason to disable the warning. The warning should be enabled, and selectively suppressed on a case-by-case basis. Some warnings, like "unused identifier", occur on legitimate code too often for selective suppression to be practical. For those warnings, consider using an IDE that supports a "weak warning" level, which is highlighted inconspicuously, so the visual clue is there for you to see in case it points to something unexpected, but it can also be easily filtered out by your eyes.  And of course some silly warnings occur on legitimate code all the time, so it goes without saying that they need to be disabled.
  • Strive for readability. Code is generally write-once, read many. We tend to read our code several times as we write it, and then many more times throughout its lifetime as we tweak it, as we write nearby code, as we browse through code to understand how things work, as we perform troubleshooting, etc. Therefore, choices that make code easier to read are preferable even if they make code a bit harder to write. This also means that certain languages whose grotesquely arcane syntax has earned them the "write-only language" designation (I am looking at you, perl) are not to be touched even with a 10 ft. pole.
  • Use an IDE with a spell checker.  Avoid acronyms and abbreviations, and anything that fails to pass the spell check.  Modern IDEs have formidable auto-completion features, so using long identifiers does not mean that you have to type more. But even if it did, typing is not one of the major problems that our profession is faced with; unreadable code is.
  • Pay attention to naming. Strive for good identifier names and for a variety of names that reflect the variety of the concepts. A thesaurus is an indispensable programming tool. Spend the necessary time to find the right word to name something, and dare to use names that you may have never heard anyone using before. For example, if you are wondering what to call a Collection of Factories, why not call it Industry?
  • Code offensively, not defensively.  This means never fail silently, never allow any slack or leeway, keep tolerances down to absolute zero. Fail fast, fail hard, fail eagerly and enthusiastically. Avoid things like a `Map.put()` method which will either add or replace, and instead design for `add()` methods which assert that the item being added does not already exist, and `replace()` methods which assert that the item being replaced does in fact already exist. If an add-or-replace operation is useful, (and it rarely is,) give it a name that clearly indicates the weirdness in what it does: call it `addOrReplace()`. (Duh!) Similarly, avoid things like a `close()` method which may be invoked more than once with no penalty: assert that your `close()` methods are invoked exactly once. If you are unsure just how many times your code might invoke your `close()` method, you have far greater problems than an assertion failing inside your `close()` method.
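    The `add()` / `replace()` / `addOrReplace()` idea sketched in code (the `StrictMap` wrapper is hypothetical, made up for illustration; run with `java -ea`):

```java
import java.util.HashMap;
import java.util.Map;

// A zero-tolerance map wrapper: add() refuses to replace, replace()
// refuses to add, and the rare add-or-replace case must be spelled
// out by name.
public final class StrictMap<K, V> {
    private final Map<K, V> map = new HashMap<>();

    public void add(K key, V value) {
        V previous = map.put(key, value);
        assert previous == null : "key already present: " + key;
    }

    public void replace(K key, V value) {
        V previous = map.put(key, value);
        assert previous != null : "key not present: " + key;
    }

    public void addOrReplace(K key, V value) { map.put(key, value); }

    public V get(K key) {
        V value = map.get(key);
        assert value != null : "key not present: " + key;
        return value;
    }

    public static void main(String[] args) {
        StrictMap<String, Integer> map = new StrictMap<>();
        map.add("answer", 41);      // would fail if "answer" already existed
        map.replace("answer", 42);  // would fail if "answer" did not exist
        assert map.get("answer") == 42;
        System.out.println("ok");
    }
}
```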
  • Use inheritance when it is clearly the right choice. The advice that composition should be favored over inheritance was very good advice during the nineties, because back then people were overdoing it with inheritance: the general practice was to not even consider composition unless all attempts to first get things to work with inheritance failed. That practice was bad, and multiple inheritance being supported by the predominant language at that time (C++) made things even worse. So the advice against that practice was very much needed. However, the advice is still being religiously followed to this day, as if inheritance had always been a bad thing. This is leading to unnecessarily convoluted designs and weeping and gnashing of teeth. The original advice suggested favoring one over the other, it did not prescribe the complete abolition of the other. So, today it is about time we reword the advice to read "know when to use inheritance and when to use composition".
  • Favor early exits over deep nesting. This means liberal use of the `break` and `continue` keywords, as well as early returns. The code ends up being a lot simpler this way. Yes, this means multiple return statements in a function, and it directly contradicts the ancient "one return statement per function" dogma.  It is nice to contradict ancient dogma.
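    To illustrate, here is the same hypothetical lookup written twice, once with deep nesting and once with early exits; both behave identically. (Run with `java -ea`.)

```java
public class EarlyExit {
    // The deeply nested version: one return statement, four levels of nesting.
    static String describeNested(int[] values) {
        String result;
        if (values != null) {
            if (values.length > 0) {
                if (values[0] >= 0) {
                    result = "first value: " + values[0];
                } else {
                    result = "negative";
                }
            } else {
                result = "empty";
            }
        } else {
            result = "null";
        }
        return result;
    }

    // The early-exit version: each precondition is handled and dismissed,
    // and the happy path reads straight down at a single level of nesting.
    static String describeEarly(int[] values) {
        if (values == null) return "null";
        if (values.length == 0) return "empty";
        if (values[0] < 0) return "negative";
        return "first value: " + values[0];
    }

    public static void main(String[] args) {
        assert describeEarly(null).equals(describeNested(null));
        assert describeEarly(new int[0]).equals(describeNested(new int[0]));
        assert describeEarly(new int[] {7}).equals(describeNested(new int[] {7}));
        System.out.println("ok");
    }
}
```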
  • Avoid public static mutable state as much as possible. Yes, this also includes stateful singletons. The fact that it only makes logical sense to have a single instance of a certain one-of-a-kind object in your world is no reason to design that object so that only one instance of it can ever be. You see, the need will arise in the future, unbeknownst to you now, to multiply instantiate your world, with that one-of-a-kind object in it.
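    For example, instead of a singleton clock baked into static state, the hypothetical "world" below receives its one-of-a-kind collaborator as a constructor parameter, so a test (or a second world) can instantiate its own. (The classes here are made up for illustration; run with `java -ea`.)

```java
public class World {
    // The one-of-a-kind collaborator, as an interface rather than a singleton.
    interface Clock { long now(); }

    private final Clock clock;

    public World(Clock clock) { this.clock = clock; }

    public boolean isExpired(long deadline) { return clock.now() > deadline; }

    public static void main(String[] args) {
        World production = new World(System::currentTimeMillis);
        World frozen = new World(() -> 1000L); // a second "world", with time frozen
        assert frozen.isExpired(999);
        assert !frozen.isExpired(1000);
        assert !production.isExpired(Long.MAX_VALUE);
        System.out.println("ok");
    }
}
```

    The logical "there is only one clock" constraint still holds within each world; it is simply no longer enforced by making a second instance impossible to create.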
  • Put the tools of the trade into use.  Armies of very good developers have worked hard to build these tools, don't you dare make their efforts go in vain. 
    • The debugger should be your first choice for troubleshooting anything, not the last resort after all other options have been exhausted.  Configure your IDE so that the debugger pops up when an exception is thrown, instead of relying on examination of postmortem stack traces in the logs. Stack traces are for troubleshooting problems in production, and it is best if it never comes to that. 
    • Do not optimize anything unless 
      • you know beyond doubt that there is in fact a performance problem, and 
      • the profiler has shown precisely where the problem is. 
    • Do not even think that you are done with testing unless the code coverage tool gives you sufficient reason to believe so. 
    • Have your IDE perform code analysis on commit, and incorporate even more code analysis in the nightly or continuous build.
  • Design with reliability as a foundation, not as an afterthought.  For example, sharing data in a multi-threaded environment by means of traditional locking ("synchronization") techniques is error-prone and untestable. (You cannot test for race conditions.)  Therefore, these techniques of sharing data should be abandoned. Instead, design for a lock-free, share-nothing approach that works by passing immutable messages, thus eliminating the very possibility of race conditions.
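    A minimal sketch of the share-nothing approach: the worker thread exclusively owns its state, and everyone else talks to it by queueing immutable messages. (The classes are hypothetical, made up for illustration; run with `java -ea`.)

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class MessagePassing {
    // An immutable message: all fields final, no setters.
    static final class Deposit {
        final int amountInCents;
        Deposit(int amountInCents) { this.amountInCents = amountInCents; }
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Deposit> inbox = new ArrayBlockingQueue<>(16);
        final int[] total = {0}; // owned exclusively by the worker thread

        Thread worker = new Thread(() -> {
            try {
                // Only the worker ever touches `total`, so no locking is needed.
                for (int i = 0; i < 3; i++) total[0] += inbox.take().amountInCents;
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        worker.start();

        inbox.put(new Deposit(100));
        inbox.put(new Deposit(200));
        inbox.put(new Deposit(300));
        worker.join();

        // No locks around `total`: join() established the happens-before edge.
        assert total[0] == 600;
        System.out.println("total = " + total[0]);
    }
}
```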
  • Design with security as a foundation, not as an afterthought.  Security is not something that you can add on top of an insecure foundation, because there is no amount of carefulness on behalf of the developers that is careful enough, and no kind of automated testing that can guarantee the absence of security hazards. So, what is necessary is architectural choices that eliminate the very possibility of entire classes of security hazards. (Do not worry, there will always be other classes of security hazards to deal with.) If a certain architectural choice is prone to vulnerabilities, do not make that choice. An example of a vulnerability-prone architectural choice which should be avoided like Anthrax is putting application code on the web browser, otherwise known as "full-stack web development". QQ.
  • Keep the logs clean.  It goes without saying that an error-level message in the logs is tantamount to a severe bug which should not even be committed, while a warning-level message in the logs is tantamount to a bug which should prevent the build from being released to production. However, it goes further than that: do not vex your colleagues, and do not make your own life harder, with torrential info-level or debug-level log spam. Keep the info-level messages down to an absolute minimum, and once debugging is done, completely remove all the debug-level log statements. Regularly use the "blame" feature of the version control system to remind developers of logging statements that they should remove.
  • Take maxims with a grain of salt. When someone says "no function should ever accept more than 4 parameters" or "no class should ever be longer than 250 lines" they are usually talking nonsense. A function should accept as many parameters as necessary to do its job, and if that is 15 parameters, so be it. A class should be as long as necessary to do its job, and if that is 2000 lines, so be it. Breaking things down to smaller units should be done because there is some merit in doing so, not because some prophecy said so.
  • Private static methods are fine. Really. Instance methods have the entire state of their object at their disposal to read and manipulate, and this state may be altered by every single method in the entire object. Static methods, on the other hand, obviously are not in a position to read nor alter any of the object's state, and instead rely exclusively on parameters and return values, which are all clearly visible at each call site. Thus, static methods are magnificently less complex than instance methods. I am not saying that you should strive to put as much of your code as possible in private static methods, (although a case could be made even for that,) but what I am saying is that private static methods are not the slightest bit evil as some folks think they are.
  • Do not fix it unless there is a test for it. So far I have not tried test-driven development, so I do not have an opinion about it yet, (I am not under the impression that I have learned everything there is to learn yet,) but what I have tried so far, and I have found to be extremely useful, is test-driven maintenance. So, if a software tester (or worse yet, an end-user) discovers a bug, which obviously passed whatever automated tests you had in place, do not hurry to fix the bug. First, write a test that tests for the bug and fails. Then, fix the bug and watch the test pass. And hopefully you have enough tests in place for every other part of your software system so as to have reasonable guarantees that in fixing this bug you did not break something else.
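    Test-driven maintenance in miniature: suppose a bug report says that leap years divisible by 100 are misclassified. First the failing regression test is written, then the fix, then the test passes. (The `LeapYear` class is hypothetical; the buggy and fixed versions are shown side by side purely for illustration. Run with `java -ea`.)

```java
public class LeapYear {
    static boolean isLeapBuggy(int year) {
        return year % 4 == 0; // the reported bug: ignores the century rule
    }

    static boolean isLeapFixed(int year) {
        return year % 4 == 0 && (year % 100 != 0 || year % 400 == 0);
    }

    public static void main(String[] args) {
        // The regression test written from the bug report, before fixing:
        assert !isLeapFixed(1900) : "1900 is not a leap year";
        assert isLeapFixed(2000) : "2000 is a leap year";
        // It would indeed have failed against the buggy version:
        assert isLeapBuggy(1900) != isLeapFixed(1900);
        System.out.println("ok");
    }
}
```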
I will be extending this list as I go along.



A short high-tech sci-fi horror story by Mike Nakis
written in the evening of January 25, 2018.

There was a guy who got in a quarrel with his girlfriend, and she kicked him out of her apartment without even throwing his clothes out the window to him. So there he was, naked on the street, not knowing what to do. Out of necessity, he grabbed a tablecloth from a restaurant, draped himself with it, and started to go home, trying to look as if everything was normal and under control.

People saw him walking on the street, draped with a tablecloth, and the only explanation that they could come up with was that he must be making some sort of fashion statement. Some of them decided to imitate him, by also wearing tablecloth while minding their everyday business, and lo and behold, before you knew it, there was a tablecloth-wearing movement that was gaining ground like wildfire.

In true The Life of Brian™ fashion.

Imagine that this is all forgotten in the past, and you are now living in a society in which a large part of the population is regularly wearing tablecloth, and a multitude of explanations have been invented after the fact, to try and explain why tablecloth is better than conventional clothing. People who like to wear tablecloth will try to convince you to also wear tablecloth with disarming statements like the following:
  • Tablecloth is easy: you don't have to learn how to use complicated buttons and zippers and belts and buckles and what not; just hold it with your hand.
  • Tablecloth is simple: sheets of tablecloth come out of the machine; you just cut one and use it; no need for cloth designers and tailors, no need for cutting and sewing parts together, etc.  The best thing of all? no seam lines!
  • Tablecloth is convenient: when putting it on, you don't have to make your hands fit through sleeves and your legs fit through trousers; you just throw the tablecloth over you, and you are good to go. Want to take it off? no need for complicated motions, just let it fall off of you.
  • Tablecloth is fashionable. Tablecloth is hip. Tablecloth is cool. Who can argue with that?
And that, ladies and gentlemen, was my JavaScript analogy.


Tablecloth is a source of innovation. Every six months or so, someone comes up with a new pattern for printing on tablecloth, thus revolutionizing the way we dress.



Simplicity is the art of hiding complexity
Rob Pike, "Simplicity is Complicated", dotGo 2015


Disabling the Group Policy Client Service in Windows

  • You are an administrator on your machine.
  • Your machine is either:
    • In a Windows Domain, and you don't want the domain admins messing with it.
    • Not in a Windows Domain, and you just don't want useless services running.
In this case, what you probably want to do is prevent the Group Policy Client Service from running on your machine.  Unfortunately, that's not a straightforward task to accomplish, because if you go to "services" and try to stop or disable this service, Windows doesn't let you.

Here is how to do it.

These instructions have worked for me on Windows 7; they might also work on other versions of Windows.  If there is anything in these instructions that you don't quite understand, what it means is that these instructions are not for you;  don't try to follow them, you are going to wreck things.  Ignore this post, move on.
  1. Using regedit go to HKLM\SYSTEM\CurrentControlSet\services\gpsvc and:
    1. Change the owner to yourself.
    2. Grant Administrators (not just you) full control.
    3. Change the value of “Start” from “2” to “4”.
  2. Now go to HKLM\SYSTEM\CurrentControlSet\Control\Winlogon\Notifications\Components\GPClient and:
    1. Change the owner to yourself.
    2. Grant Administrators (not just you) full control.
    3. Delete the entire key. (Possibly after exporting it so as to have a backup.)
  3. Restart your machine.


My notes on "Greg Young - The Long Sad History of Microservices"

Greg Young - The Long Sad History of Microservices
From the "Build Stuff" event of April 2017.

Talk begins at 9:45.

Highlights of the talk:
27:00 Placing a network between modules simply to enforce programmer discipline 
29:05 There is other levels of isolation I can go to. I can run a docker container per service. That's the coolest stuff right? What that means is I can make it work on my machine so I send my machine to production. 
29:52 Now, one thing that's very useful is I don't necessarily want to make this decision up front. And I don't necessarily want to make the same decision in dev as in production. I may want in dev to have a different way that we run things, why? because bringing up 19 docker containers on your laptop is not very much fun. I may prefer to host everything inside a single process to make debugging and such a lot easier when I am running on dev in my laptop. Whereas in production we may go off to multiple nodes. 
34:16 If you have maintenance windows, why are you working towards getting rid of your maintenance windows? Is this a business drive or is this you just being like C.V. driven development? 
My notes:

Unfortunately his shrieky voice makes him sound like he is bitching about things, which in a sense he is, but it would help his cause to deliver his criticism in a more palatable tone. Also, in order to make his point about microservices being nothing new he seems to disregard the one characteristic of microservices which I think defines them, which is their statelessness.

Resources referenced in the talk:

Leslie Lamport - Time, Clocks, and the Ordering of Events in a Distributed System
(available on the interwebz)

C.A.R. Hoare - Communicating Sequential Processes
(available on the interwebz)


Migrating a project from java 8 to java 9

Now that Java 9 is out, I decided to migrate my pet project, which is around 120K lines of Java, to it.

The first step is to just start compiling and running against jdk9, without using any of its features yet.

This is an account of the surprisingly few issues that I encountered during this first step and how I resolved them.

Issue #1: Object.finalize() has been deprecated.


A Hacker's Tale (With a Human Side)

This is a hacking story from my University years. It ends with a nice bit about human qualities.

The University had several computer labs, most of them equipped with Unix workstations, a few with PCs. I would often be found in the PC lab, since I was already quite familiar with that kind of machine and operating system. It was the early nineties, and PCs back then were running MS-DOS. Networking was done by connecting them to Novell™ servers via coaxial Ethernet cable, which delivered a (decent, for that time) 10 megabits per second.

Each PC in the lab was running a network driver, which was making parts of the server's filesystem visible locally as DOS drives. These drives were available only at the filesystem level: if you bypassed the OS and invoked the BIOS to enumerate the physical hard disks on the system, they did not show up, because they did not physically exist.

Filesystem access to these drives was subject to security checks performed by the Novell™ server, which was running some proprietary Novell™ operating system, so the whole setup was fairly secure, and for even higher security, the server was kept locked in a cabinet, so nobody but the administrator had physical access to it. The administrator of the lab was Dr. "A", and he had appointed as co-administrator a fellow student and friend of mine, Bashir.

Back in those days, if you were a power user, (let alone a computer lab administrator,) you absolutely had to be using the Norton Utilities.

Screen capture of the main menu of the Norton Utilities; found on the interwebz.
Of course, most of these utilities required physical access to the disk, so it was impossible to use them on the server, but they could be used on workstations.  And they were indispensable, so Bashir had stored them on the server, in order to be able to access them from any workstation.