2012-11-13

Why the Microsoft C# compiler lacks many useful warnings

As my C# Bloopers series of articles shows, the Microsoft C# compiler fails to issue many useful warnings which one would reasonably expect from a decent compiler of any language, and which are in fact readily and lavishly issued by Java compilers.

After all these years that the Microsoft C# compiler has been maturing, one cannot help but suspect that there are ulterior motives behind this continued state of misery with respect to warnings. For lo and behold, it just so happens that the "Ultimate" (most feature-packed and most outrageously expensive) edition of Microsoft Visual Studio contains a "Code Analysis" feature, which is capable of issuing hundreds of different types of warnings, ranging from the pedantic to the arcane, and including most, if not all, of the missing warnings that I am discussing here.

Now, besides the fact that the Code Analysis feature comes at a considerable additional cost, it is also very cumbersome to use on a frequent basis, since it has been built as a separate product feature instead of being integrated into the compiler. For one thing, it is very slow. For another, it is very spammy: we are talking about multiple warnings for every single line of code, most of which are useless, so the first thing you need to do about them is to turn them off. And there are several dozen warnings to turn off.
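
For the record, suppressing one of these Code Analysis warnings project-wide typically means adding an assembly-level attribute like the following to a GlobalSuppressions.cs file. (The rule shown here is just an example.)

    [assembly: System.Diagnostics.CodeAnalysis.SuppressMessage(
            "Microsoft.Design", "CA2210:AssembliesShouldHaveValidStrongNames" )]

Multiply that by several dozen rules, and you can see why nobody wants this in their edit-compile-run loop.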

This cumbersomeness makes the Code Analysis feature unsuitable for the instant builds which developers tend to perform every few minutes, and more suitable for use as a separate code-quality-assurance step to be performed once or twice during the entire development process of a product, or at best on nightly builds. Unfortunately, it is precisely on instant builds that the warnings I am talking about in these articles are most useful. Better yet, most of these warnings are useful in real time, while typing the code, in the form of yellow squiggly underlines. For example, as a developer, I want to know that a parameter of a method I have just written goes unused before I even proceed to start coding the next method, because it most probably means that I forgot something or did something wrong. This information is useful not tomorrow, not in a couple of months, but right now.
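
To make this concrete, here is the sort of thing I mean. The (made-up) method below compiles without a peep from the Microsoft C# compiler, even though the unused parameter is almost certainly a bug; you only find out if you run Code Analysis, which reports it as CA1801, "Review unused parameters".

    static int Add( int a, int b, int extra ) //'extra' goes unused: no warning whatsoever.
    {
        return a + b;
    }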

So, what has probably happened here is that the Microsoft C# compiler team was told by Microsoft's marketing department to intentionally cripple the C# compiler and make it less useful to all of us, by moving some of the functionality which rightfully belongs to it into some other module, so that their premium "Ultimate" product can have a raison d'ĂȘtre.

I bet you that Ballmer is behind this.

2012-11-12

C# Blooper №5: Lame/annoying variable scoping rules, Part 2

Before reading any further, please read the disclaimer.

In light of the previous blooper, this one is more of a confusing error message than an actual new blooper. In the code below, I declare a field within a class, I access that field from within a method, and further down within the same method I declare a new local variable with the same name as the field. Now, C# will not allow me to declare that local variable because it has the same name as the field, but that's not where I receive the error. Instead, the error is given where the field is accessed. If you only read the first sentence of the error message, it makes no sense at all. If you bother to also read the second sentence, it gives you a hint as to the real problem. Now, that's not very cool.

using System;

namespace Test5
{
    public class Test
    {
        public int a;

        void foo()
        {
            for( int i = 0;  i < 10;  i++ )
            {
                a = 10; //error CS0844: Cannot use local variable 'a' before it is declared. The declaration of the local variable hides the field 'Test5.Test.a'.
            }
            string a = "";
            Console.WriteLine( a );
        }
    }
}

See also: C# Blooper №4: Lame/annoying variable scoping rules, Part 1

-

2012-11-09

How to get a raise

Once upon a time I was dissatisfied with my salary at my workplace, and I let it show. The boss, fearing that he was about to lose me, placed an ad in the newspaper for my exact job description. Since I was looking for a job, I saw the ad in the newspaper. What I did was to reply to that ad, sending my boss my resume, which of course included precisely those qualifications that the job required. The boss got the message.

2012-11-08

C# Blooper №4: Lame/annoying variable scoping rules, Part 1

Before reading any further, please read the disclaimer.

A variable identifier is, of course, only visible within the scope in which it is declared. This includes nested (child) scopes, but it does not include enclosing (parent) scopes. In C#, however, once a variable identifier has been used in a scope, its name is "poisoned", so it cannot be declared in enclosing scopes. Take this example:

namespace Test4
{
    class Test
    {
        void test()
        {
            if( this != null )
            {
                object o;
                o = null;
                if( o == this )
                    return;
            }
            object o;  //error CS0136: A local variable named 'o' cannot be declared in this scope because it would give a different meaning to 'o', which is already used in a 'child' scope to denote something else
        }
    }
}

Well, I am sorry, but in the above case the new variable named 'o' would most definitely *not* give a different meaning to the 'o' which was used in the child scope. It would if it had been declared before the 'if' statement, but it wasn't. Luckily, if this "feature" were to be removed from the language, it would not break any existing code. So, can we please have this fixed? Pretty please?

See also: C# Blooper №5: Lame/annoying variable scoping rules, Part 2

-

2012-10-25

How to copy multiple contacts from Outlook to Lync

It is kind of amazing how crippled the co-operation is between Microsoft Outlook and Microsoft Lync. (Note:  I am talking about Microsoft Office Professional Plus 2010, with Outlook version 14.0 32-bit and Lync 2010.) All I wanted to do was to copy contacts from the Outlook contacts list to the Lync contacts list. Never mind the fact that these two programs should be sharing the exact same contact list and I should not have to do anything of that sort; it sounds like a simple task, right?  Copy contacts from one program to another. These two programs belong to the same Office suite and are supposedly seamlessly integrated with each other. Well, seamlessly my @$$. You can copy single contacts from Lync to Outlook. But if you want to copy a contact from Outlook to Lync, or copy multiple contacts, then you are completely out of luck.  No-can-do, apparently.

Well, I found a trick to do it.  Here is how:

1. Select all the contacts within Outlook that you want to copy to Lync.
2. Hit "Send email", and a new email will be prepared, with all the selected contacts in the "To:" field.
3. Select all the contacts in the "To:" field.
4. Drag and drop the contacts from the "To:" field into a group (not contact) in Lync.
5. Discard the new email.

Voila!

If you know of a simpler way, please let me know.

Also, if you know of any way to perform the incredibly advanced operation of... (drum roll please) PRINTING YOUR OUTLOOK CONTACTS WITH PICTURES, (duh!) please let me know.

Cheers!

2012-10-08

Scanning printed photos

I was looking around for advice on what settings are best for scanning printed photos and I was amazed by the number of answers floating around on the great interwebz which are misguided, or are technically correct but miss the point. So, here is my advice.

First of all, let us define the goal: For a home user to scan printed photos so as to retain as much as possible of the visual information contained in the print, within reasonable limits, and without wasting too much space.

The answer, in a nutshell: Scan at 600 dpi, save as 24-bit-color compressed PNG or compressed TIFF. Do not use any of the fancy options for noise reduction, color correction, contrast enhancement, etc. that might be provided by your scanner. If you need to improve something, edit the scanned picture later using your favorite image processing software, but never touch the original scanned files: always work on a copy of the original.

If the nutshell is good enough for you, then off you go, and happy scanning.

If you are interested to know why, read on.

Please note that, the way the goal was expressed, we do not have to take into consideration what we are planning to do with the pictures later. When you scan, scan for posterity. Scan so that you can later crop, rotate, retouch, print, etc. If the photos need to be communicated through a low-resolution medium such as the web, then you can later make scaled-down copies for that purpose. But at the time of scanning, the point is to not limit yourself in what you will be able to do later with the scanned photos.

The resolution of photo prints ranges between 150 and 300 dpi. When we scan at 600 dpi, we are scanning at twice the resolution of the print, (or better,) which, per the sampling theorem, is the theoretical maximum required so as to capture all the information that there is on the photo print. Every bit of it. Not an iota of information left out. Sure, some color fidelity will inevitably get lost, but that's a different ball game altogether. As far as resolution is concerned, anything above 600 dpi is a pure waste of space.
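
To put some numbers on it: a standard 4×6-inch print scanned at 600 dpi yields 2400×3600 = 8,640,000 pixels; at 24 bits (3 bytes) per pixel, that is about 25 MB uncompressed, which is perfectly manageable, and the lossless compression discussed below will shrink it further.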

24 bits per pixel is also perfectly adequate to fully match the capacity of film to convey color. There will be some reduction in color fidelity stemming from the fact that the sRGB color space into which you will most likely be scanning will in all likelihood not exactly match the color space of the photo, but the loss in fidelity will be imperceptible, and the measures required to correct this problem are disproportionately painstaking, falling outside the "within reasonable limits" requirement stated as the goal in the beginning of this article. Furthermore, when you are working with differences in quality that are not perceptible, it is very easy to make a mistake which reduces, rather than enhances, the fidelity of the digitization, even if only by an imperceptible amount, and you will never know. So, color is best left at 24 bits per pixel sRGB and never messed with.

Use compressed PNG or TIFF because these file formats offer LOSSLESS compression. Lossless compression means that 100% of the information in the image is retained. Not very close to 100%, not imperceptibly different from 100%, but precisely 100%. Time and again I hear about people who save their pictures in uncompressed TIFF format; obviously, they do not understand squat about compression. It really is not rocket science: there are two kinds of compression, lossy and lossless. Lossy compression (JPEG) achieves huge savings, at a slight (usually imperceptible) expense to quality. Lossless compression (PNG, TIFF) achieves great, but not huge, savings, at NO expense to quality. Using no compression serves no purpose and is just plain stupid. It just spreads your picture over more sectors of your hard drive, increasing the chance that it will one day be lost due to a sector going bad.

Since lossy compression usually represents an imperceptible loss of quality, we could be scanning into JPEG, but the problem here is that if we ever retouch the picture, and then save it again as JPEG, the lossy compression will be re-applied, thus compounding the loss of quality. Keep repeating this cycle, and at some point the deterioration will start becoming perceptible. That's why we always use lossless compression on originals and on working copies, and lossy compression when publishing.
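
If you want to see the compounding effect for yourself, here is a minimal C# sketch which keeps re-saving the same JPEG. It uses the Windows-only System.Drawing API; the file name, the quality setting of 75, and the 50 iterations are all made up, so adjust to taste. Run it on a copy of a photo and watch the artifacts pile up:

    using System.Drawing;
    using System.Drawing.Imaging;
    using System.IO;
    using System.Linq;

    class JpegGenerationLoss
    {
        static void Main()
        {
            //Find the built-in JPEG encoder and request quality 75.
            ImageCodecInfo jpegCodec = ImageCodecInfo.GetImageEncoders()
                    .First( codec => codec.FormatID == ImageFormat.Jpeg.Guid );
            var options = new EncoderParameters( 1 );
            options.Param[0] = new EncoderParameter( Encoder.Quality, 75L );

            //Decode and re-encode the same photo 50 times; each pass
            //re-applies the lossy compression, so the damage compounds.
            for( int i = 0;  i < 50;  i++ )
            {
                using( var image = new Bitmap( "scan.jpg" ) )
                    image.Save( "scan.tmp.jpg", jpegCodec, options );
                File.Delete( "scan.jpg" );
                File.Move( "scan.tmp.jpg", "scan.jpg" );
            }
        }
    }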

2012-07-05

Canon Pixma MX700 printer dead and resurrected (Not!)

My Canon Pixma MX700 printer died the other day as it was printing. It just went completely dead, as if the power cord was unplugged. Today I decided to troubleshoot it. First, I tried a different power outlet. That was not it. Then, I checked the power cord. No problem there. Then, I took out the power supply box and disconnected it from the printer, then I reconnected it and put it back in. No improvement. Then I opened up the power supply box and examined the circuit board in it.

[Image: Canon Pixma MX700]

I am not terribly familiar with hardware, so I could not tell if the smell was of burned electronic components or if it was just the regular smell of electronics that have been sitting inside a closed box for a few years. Nothing looked burned though, and all the electrolytic capacitors seemed intact. I located 3 fuses on the board, and I checked each one of them for continuity. They were all fine.

So, I decided to give up, and I started putting things back together. I placed the circuit board back in the power supply box, I closed the box, and then for some reason I did something backwards: first I connected the power cord to the power supply box, and then I connected the power supply box to the printer. As I was attaching the connector, I noticed that one of the pins momentarily made a little spark. And then, lo and behold, the printer came back to life! The printer had fixed itself!

And that, ladies and gentlemen, is why I hate hardware.

UPDATE 2012/07/13: nope, the resurrection was only temporary. The printer is dead. Dead as a doornail. And that, ladies and gentlemen, is why I hate hardware even more.

2012-05-02

Solved: Local resources unavailable on remote desktop

I experienced this problem today: drive C: of my local computer "Pegasus" was not appearing on the remote computer as "C on PEGASUS" when I connected to the remote computer via Remote Desktop (Terminal Services). All other drives of Pegasus were showing fine on the remote computer, but the one I actually needed (C:) was not. The drive did not even appear under "\\tsclient" in "Network Places".

Judging by the problems reported by people from all over the world who have this problem and are searching for solutions on the interwebz, it may happen with any local resource, like printers, the clipboard, etc.

Luckily, I found a solution to the problem:

Terminate the RDP session not by closing the RDP window, but by actually logging off. Then, start a new RDP session, and the problem will have most likely gone away.

It is unclear why local resources sometimes fail to show on the remote computer during an RDP session; it is one of those things that "just happen", and that tend to go away if you just "close and reopen it". (Or get out of the car and get back in again, as the joke goes.) The reason for the frustration with this particular problem is that more often than not we do not really "close and reopen it", because we tend to just close the RDP window, which does not terminate our logon session with the server. By logging off and connecting again, the logon session gets restarted, and that's the "close and reopen" needed to fix the problem.

2012-04-27

.Net code running faster under the profiler?

So, today it occurred to me that the C# application that I am developing is a bit too slow on startup, and I decided to throw the Visual Studio profiler at it to see if I had goofed up somewhere. To my astonishment, under the profiler my app ran 10 times faster. The slowness I wanted to troubleshoot was nowhere to be found.

I also tried running the release version, and as I expected it performed better than the debug version under the profiler, so the universe was still in its place; but still, I would very much like to know what the profiler did that made the debug version of my app run so much faster. For one thing, it would be a great convenience to be able to enjoy this speedup while developing; waiting 2 seconds instead of 20 for my app to start every time I want to check something would be very good for productivity.

I tried my luck with various Google searches, and I found a couple of articles on StackOverflow, but none pointed at the exact cause of the problem.

Luckily, after quite a bit of hard thinking, troubleshooting, and browsing through the myriad of potentially relevant settings in Visual Studio, I found the answer: 

It is the "Enable unmanaged code debugging" feature.

In Visual Studio this feature is not under "Tools / Options / Debugging", (because that would make too much sense,) it is under "Project / Properties / Debug".  Enabling that feature makes everything slow as molasses. The profiler disables the debugger, and that feature with it, so the application appears to run lightning fast.

Here is a StackOverflow question to which I added my newly acquired wisdom:

stackoverflow.com: Launching VS Profiler boosts Application Performance x20?

2012-01-13

The "Handoff" Pattern

I had been thinking about posting this for quite some time now, and then, by sheer coincidence, I got a chance to mention it just the other day in an answer that I wrote to a question on Programmers-StackExchange. So, here it is, in a more formal way:

If class M stores, manipulates, or in any other way works with instances of a destructible (disposable) class D, it may not assume the responsibility to destruct these instances, unless it is explicitly told that ownership of these instances is transferred to it. Therefore, class M must accept a boolean called 'handoff' as a construction-time parameter, stating whether instances of D are being handed off to it, so that it may destruct them when it is done with them.

Example:
    //Note: the IReader interface extends IDisposable
    IReader reader = new BinaryStreamReader( ... );
    reader = new BufferedStreamReader( reader, handoff:true );
    try
    {
        /* use the reader interface */
    }
    finally
    {
        reader.Dispose(); //this destructs the buffered stream reader, and 
                          //destruction cascades to the binary stream
                          //reader because handoff was specified.
    }

Example:
    var collection = new CollectionOfDestructibles( handoff:true );
    collection.Add( new Destructible( 1 ) );
    collection.Add( new Destructible( 2 ) );
    collection.Add( new Destructible( 3 ) );
    collection.Dispose(); //this destructs the collection and every single
                          //one of its contents, since handoff was specified.

In languages which support optional parameters, the 'handoff' parameter should default to false.
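
For completeness, here is a minimal sketch (mine, not canonical) of what a class receiving the 'handoff' parameter might look like; for the purposes of the illustration, the IReader interface is assumed to consist of nothing but IDisposable:

    public class BufferedStreamReader : IReader
    {
        private readonly IReader decoree;
        private readonly bool handoff;

        public BufferedStreamReader( IReader decoree, bool handoff = false )
        {
            this.decoree = decoree;
            this.handoff = handoff;
        }

        public void Dispose()
        {
            //destruction cascades to the underlying reader only if it
            //was handed off to us.
            if( handoff )
                decoree.Dispose();
        }

        /* ...buffering members delegating to decoree omitted... */
    }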