Parkinson's law says that work expands so as to fill the time available for its completion.
This has been proven statistically to be very true, so nobody can doubt it. However, some of the consequences of this law often go unnoticed, as its wording and meaning get twisted and turned to suit one's needs.
Hell, I am furious! I come to work on a bloody Saturday to review some code so that we can release our new product on time, and I stumble twice upon one of the most basic coding crimes of error handling, each instance committed by a different developer.
I know it's natural to make errors; that's why I do the code review in the first place, right? Absolutely! The reason I am furious is that one of the two instances of this crime was committed to our product (an SDK) long ago, and thus I cannot correct it... because... I would break backwards compatibility. Damn it!
The other one is not released yet, so it's just a human error; we'll just have to update about 70 calls to some internal function and we are done.
So what exactly is this crime I am talking about?
Simply put, it is a CAPITAL OFFENSE to convert an error code to a boolean Success_or_Failure flag.
A variation of this crime is to convert a rich error code into something like E_FAIL (in the COM world).
In real life, most programs cannot do much in the face of errors, other than (a) not crash (b) let the user know what went wrong.
In real life, when something goes wrong the user will immediately contact technical support. So imagine being the tech support person, with a frustrated and often angry user telling you: "Hey, I tried to do such and such and it fails with the error E_FAIL (Unspecified Error). What the hell is going on?"
Exactly my point! What the hell is going on???? How is our tech support team supposed to provide decent technical support if we don't know exactly what error occurred?
Most developers I have encountered in my career so far don't have a proper understanding of and respect for error handling code. They want to write the REAL code, not the error handling crap (or, heaven forbid... the documentation).
They don't understand, or just don't want to realize, that in solid systems error handling can be up to 50% of the code. Yep, that's FIFTY percent. And if you don't write it properly, then you are NOT doing 50% of your job. Think about that for a bit.
Moreover, in most cases you don't write the error handling code for your end user; you write it FOR YOUR OWN SAKE, to make YOUR LIFE easier when something goes wrong for the user. Because, as sure as death and taxes, things going wrong for the user is a certainty.
OK, now that I've had my fair share of whining I am not furious any more... Back to business :-)
Today I came across a very interesting article about why it is hard to program, with which I wholeheartedly agree.
Joel talked some time ago, in his article about interviewing, about students' inability to understand C pointers. Let alone recursion or concurrency.
Here's what I think:
I don't think it's arrogant to claim the above.
It holds for programming in the same way it holds for many other demanding professions.
Could I ever become a weight lifter just by trying?
I wasn't built for that. No matter how many steroids I inject, I won't ever be able to bench press 150kg.
Could I ever become a successful composer? I can't recall any five-minute song from memory; how could I ever come to compose music?
Could I ever become a mathematics professor? Although I like math, theorems and equations don't ring in my ears, they are not my native instruction set.
This is the best analogy I can come up with. Each of us has a native instruction set, embedded in our DNA. Activities that are compatible with this instruction set can be carried out with the highest degree of efficiency. Other activities less so. Some activities might be seriously incompatible, causing serious degradation in overall performance.
Back to programming. If programmers had professional licenses like doctors do, then I know a couple of people whose license should be immediately revoked. Irrevocably revoked. Programming is not their brain's native instruction set; not in their DNA. They don't really understand what they are designing and what they are implementing. How it will behave, how it will perform, how it will be tested, how it will need to evolve and expand, etc.
No, I am not being harsh here. Some say there is an extremely high demand for software developers, so not everyone has to be that good.
WRONG. DEAD WRONG.
In my previous job I spent around 40% of my time fixing bugs in, or completely redesigning and rewriting, code written by people whose programming license should be revoked. What good did it do that these people worked for the company? The company paid for their time plus 40% of mine, for work that could have been done in less than 40% of my time had I done it myself from the beginning.
That's why I insist on having top programmers on my team.
Yes indeed I am! The last few weeks I have been starring in an exciting action series that will rock the software community. Here is a first draft of the logo (please don't mock me for my poor image editing abilities; I am only a soon-to-be-world-famous movie star):
Releasing software is tough.
Releasing software that may render your clients' applications unusable is tougher.
Releasing software that may also crash the operating system is even tougher.
If you try to do it without proper testing procedures then you become an action movie star ;-)
I just came across a very interesting article that has to do with how people think and how they operate. It talks about "Simplifiers", people that like simple solutions, that like simplifying structures and processes, and "Complexifiers" that are more or less the exact opposite.
Myself, I am clearly a simplifier. That goes without saying with the BruteForce nickname. In my work I like developers that are simplifiers. Writing software is a very complex task in the first place, so if you design unnecessarily complex hierarchies and structures you only make things worse, much worse. The larger the project the worse you make it.
Here is a one-liner example I recently found in some code. The programmer had to copy one buffer to another. Plain old totally naked bytes, nothing fancy, no objects or hot stuff. Instead of making a single call to CopyMemory, which is part of the Win32 API (actually a macro), or to memcpy from the CRT, this programmer chose to include <algorithm> and use the copy algorithm!
So instead of the simple call:
CopyMemory(TargetBuffer, SourceBuffer, Length);
the guy wrote:
copy(&SourceBuffer[0], &SourceBuffer[Length], &TargetBuffer[0]);
And this call to STL 'copy' was the sole reason for including <algorithm>.
Making things simple doesn't necessarily mean that the actual solution you design or specify is not complex. The point is not to make it more complex than it has to be. Simplifiers will choose the simplest complex solution. Oversimplifying should also be watched closely, as it can be just as bad as overcomplicating. Or it might just produce a genius design.
The above is closely related to the so-called KISS principle (Keep It Simple, Stupid). So if you are a developer, don't just smile and nod, saying KISS sounds cool; take it as a personal goal to become a software KISSer ;-) a digital simplifier. For one thing, you will make your life simpler ;-)
I am a device driver developer at heart, although I worked for several years in userland (= normal applications) using C++ and Win32. Seeing all these .NET-related technologies and tools popping up every day like spam, I can't help but wonder at the utmost frustration experienced by all those developers trying to use immature tools to program completely immature technologies that don't let you shoot yourself in the foot, but do let you whip yourself into managed ecstatic nirvana.
.NET came as the salvation of programmers and users, and what do we have?
You invested time fooling around with .NET 1.0? Thanks for the alpha testing dude.
You worked seriously with .NET 2.0? Thanks for the beta testing dude.
Now you have .NET 3.0 in your lap, and pretty soon you will have .NET 4.0.
Yes a year or two is SOON. Damn SOON.
Hey man, don't worry about that. There is a simple solution. Build your own framework and wrap .NET! Gosh! It seems that nowadays everyone is building their own framework. Everybody calls their little toy library a framework. Give me a framework, ehhh I mean a break.
Get this clear: .NET is NOT a framework. It's a library and a runtime. The "framework" part is pure marketing bull.
If .NET is a framework, then COM is a framework too. But no one says "the COM framework". What? Have you already forgotten the despised COM with its sweet REGSVR32? What? DLL-hell? I see. I guess you'd rather have .NET-hell, where you can accidentally (and silently) load the wrong version of the runtime and have things fail at random, or even worse, work in slightly different ways. Where everyone (p)raises exceptions, but no one actually handles them correctly or does anything other than display the stack trace to the frightened creature occupying the seat in front of the monitor. Policy-hell, anyone? Any 3rd-party tools that rename their DLLs with every version because the versioning support of .NET is so impeccable?
I am sorry to break your hearts, but just be honest with yourselves: .NET simply replaced a known and stable set of evils with a new, unknown and limitless set of evils.
And you know what's the biggest joke? Some .NET fanatics inside MS evangelize writing drivers in C#. Yeah, why not? Make drivers easy! So that everyone can program drivers. So that everyone can crash the OS with their ignorance and carelessness. Give me another break(point)!
For those of you that are interested in the Business of Software, I posted an article on my personal web site that discusses a set of metrics for evaluating developer experience.
Any comments would be greatly appreciated!