Sunday, January 29, 2012

10 Things I do to code better

After more than a year in the software development career, I feel I understand what is expected of a "Software Developer". There are a lot of things that need to be done, and they need to be done with care. I see every day that a lot of developers around me treat software development as a job-to-be-done and not as a passion. Anyway, I'm here to talk about what I, as a software developer, do to write better code every day. Here's the list:
  
Keeping my code clean
Clean code can be described as code that is easily readable and, at the same time, easily understandable. Things like proper indentation and well-placed comments can make your code look delicious.
 
Writing comments
Clean code and good comments make everyone's life simpler. Good comments describe the 'WHW' (what, how and why) of a particular piece of code in the simplest possible way. I call it "The expected WHW". Explaining this shouldn't turn into writing an essay; comments should be precise.
 
Writing Pre-conditions
Your code doesn't always stay with you. Everyone on your team should understand the pre-conditions and context for invoking a particular method that you might have written. The method being called should clearly describe its pre-conditions. A pre-condition can be something like "The Person object should be initialized by calling the Person.Init() method". Here, if the 'Person' object wasn't initialized, other developers may hit errors when they use methods that rely on it. This step helps other developers use your code effectively.
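Just as a sketch of how such a pre-condition can be both documented and enforced, here's a hypothetical Person class in C++ (the class layout, the IsInitialized() helper and the assert are my own illustration, not from any real codebase):

```cpp
#include <cassert>
#include <string>

// Hypothetical Person class used only to illustrate pre-conditions.
class Person {
public:
    void Init(const std::string& name) {
        name_ = name;
        initialized_ = true;
    }

    bool IsInitialized() const { return initialized_; }

    // Pre-condition: Init() must have been called on this object.
    std::string Greeting() const {
        assert(IsInitialized() && "Call Person::Init() before Greeting()");
        return "Hello, " + name_;
    }

private:
    std::string name_;
    bool initialized_ = false;
};
```

The comment states the pre-condition for human readers, and the assert catches violations early in debug builds instead of letting them surface as mysterious errors later.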
 
Loose coupling
I always use this design principle to stay on the safer side with regards to scalability and maintainability. Loose coupling lets you define boundaries within your code structure, and makes the code easier to test and later changes easier to handle. This is one thing the architect or designer might have the final say in, but as far as I am concerned, I do have the liberty to suggest better coupling structures; they are happily trashed if we foresee any problem.
 
Knowing your programming language well
I'm always stuck in this quest of mastering the programming language I use for coding. I keep switching between C++ and C#, and I don't really know which one to choose as they both have their own supernatural powers. I love C# for its simplicity, and it's always a pleasure to work with it in Visual Studio. I recently came across a guy (who had always worked in C++) who was using C# .Net to do some string processing. He had written around 10 lines of code to split strings (the C++ way, character by character) based on a few characters, never knowing that a method called String.Split() was readily available for use. It is difficult to know everything in the very huge .Net Framework, but the more you know, the better your life becomes.
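For comparison, the manual split he reimplemented might look roughly like this in C++ (the function name and the exact delimiter handling are my assumptions; I'm just sketching the kind of code String.Split() replaces):

```cpp
#include <string>
#include <vector>

// Split 'input' on any character in 'delims', skipping empty pieces.
std::vector<std::string> Split(const std::string& input,
                               const std::string& delims) {
    std::vector<std::string> parts;
    std::string::size_type start = input.find_first_not_of(delims);
    while (start != std::string::npos) {
        // Find the end of the current token.
        std::string::size_type end = input.find_first_of(delims, start);
        parts.push_back(input.substr(start, end - start));
        // Skip over the delimiters to the next token.
        start = input.find_first_not_of(delims, end);
    }
    return parts;
}
```

In C#, all of that collapses into a single call: `"a,b;c".Split(',', ';')`. That's the kind of thing you only find out by knowing your language and its library well.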
 
Learning Tips and Tricks
I always try to learn new tips and tricks to use my development IDE (Visual Studio) in the best possible manner. These tips and tricks help you perform faster. Consider using keyboard shortcuts rather than moving your hand to the mouse and then hunting for the right button in the IDE. This reduces the time spent interacting with the IDE and gives you more time to think about your code. I also learn tips and tricks in coding itself, i.e. how something can be done in the best possible way.
 
Using Source and Version control software effectively
Source and version control software sounds like an unnecessary nightmare to a low-on-experience software developer. I've had great times with Rational ClearCase. Its annoying ways help me learn more about it. Although a merge always screws up my code, I love the concept of source and version control because it lets me code without the fear of losing my previously written work by mistake. I'm still in the process of taming this wild animal called "ClearCase".
 
Following coding guidelines
I always try my best not to be cursed by my fellow developers for not following the coding guidelines. Adherence to these guidelines helps developers understand "The expected WHW" of your code. This in turn helps you keep the code clean.
 
Reviewing my code
I always perform a self-review of my code before sending it for review to the other team members. Reviews help you identify defects in the code at an early stage. A defect might escape your attention but get caught by someone who looks at the code in a different way, and it may be easier for someone who has worked on the same thing for a long time to point out possible defects.
 
Knowing the context
Last, but never the least, it is a must to understand the entire context in which your code is used, especially when you are adding functionality to existing code. There's always a chance that your changes could affect and break others' code. Always consult the original developers who wrote the earlier code and get your changes reviewed by them to avoid unexpected and vexing results.
 
This is a compilation of some things that I think help me in learning to code better. Opinions may vary, and if you’re still reading, I would like to hear your opinion on this. What do you think are other things that developers need to take care of?

Dynamic bitsets not supported in C++ STL

It was annoying and frustrating to come across the fact that an essential feature was missing in the C++ STL library that I was using. I was writing code that would create a bit array of length ranging from 1 to anything like a thousand or say ten thousand. This needed to be dynamically allocated. So, if I said:
bitset* b;
...    
b = new bitset(number_of_bits); // somewhere in the program
This would create a bitset of "number_of_bits" bits. This would have been cool, but this isn't the way bitsets are supposed to be used. Bitset usage is quite rigid: they are templates that need the size of the bit array at compile time, which looks something like this:
bitset<1000> b;   
or
bitset<1000> *b = new bitset<1000>(); 
Now, how does that help me? What if I need more than 1000 bits someday? I can't just give it the highest possible constant value; everything would fail the day the question of scalability arises. I don't understand why the folks who developed the STL didn't have this in mind. They did it for all the other data structures but couldn't do the same for bitsets? Why? The question might sound strange, but the answer might be even stranger.

So, what's the solution?
Use the Boost libraries, which have their own implementation of dynamic_bitset. Damn! I can't use Boost, as the firm that I work for doesn't want to.
Use vector<bool> and overload the bitwise operators to act on it. Well, that's as good as writing my own dynamic bitset implementation.
 
So, I've decided that I'll create my own dynamic version of <bitset> as the guys at MSDN forums told me to.
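As a starting point, a minimal sketch of such a home-grown dynamic bitset over vector<bool> could look like this (the class name and the single overloaded operator are mine; a real implementation obviously needs the full operator set):

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

// Minimal dynamic bitset: the size is chosen at runtime, not compile time.
class DynamicBitset {
public:
    explicit DynamicBitset(std::size_t num_bits) : bits_(num_bits, false) {}

    std::size_t size() const { return bits_.size(); }
    void set(std::size_t pos, bool value = true) { bits_.at(pos) = value; }
    bool test(std::size_t pos) const { return bits_.at(pos); }

    // Bitwise AND; both operands must have the same size.
    DynamicBitset operator&(const DynamicBitset& other) const {
        if (size() != other.size())
            throw std::invalid_argument("DynamicBitset: size mismatch");
        DynamicBitset result(size());
        for (std::size_t i = 0; i < size(); ++i)
            result.bits_[i] = bits_[i] && other.bits_[i];
        return result;
    }

private:
    std::vector<bool> bits_;
};
```

With this, `DynamicBitset b(number_of_bits);` works for any runtime size, which is exactly what the templated std::bitset refuses to do.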

Phishing in the name of Midwest Airlines

What happens when you receive a very polite email from an airline company telling you that you have booked a ticket somewhere across the globe and your credit card has been charged $690? This doesn't sound strange if you've really bought the ticket on your credit card. What happens when you know that you haven't?

This happened to my colleague recently. She received a mail from a phisher pretending to be the Midwest Airlines web service, which thanked her for purchasing the ticket and informed her that her credit card account had been charged $690. Gosh! You should have seen the look on her face. I definitely can't describe it. It was a mixture of fear (the fear of losing $690, which is quite a large amount), confusion (about what should be done next) and curiosity (all said and done, she too is a techie, and knows and is curious about this stuff). But it's kind of cool to study the behavior of people becoming victims (or in this case, potential victims) of phishing.

She gave me a shout across the desk and asked what she should do next. I told her not to delete the mail (I needed it as a real phishing example for posting on my blog, cruel thinking!) and to inform the information security folks about the problem. And I shouldn't have trusted her on that. She deleted the mail, and my dreams of including snapshots of it and its attachments were destroyed. Anyway, you can find the pattern of the mail and the attachment in this article on CyberInsecure.com.

The best part of it was when I asked her to forward the mail to me. She looked at me as if I was planning to learn phishing by using that Trojan as my tool. But, by the time I asked for it, the mail was long gone (the mail was a victim of the Shift+Del disaster).

The attachment contains an exe file named E-ticket_[number].doc.exe, a Trojan horse that steals information, including keystrokes, from the infected Windows PC and transmits that data to a server hosted in Russia. Now, that is something to take note of. Almost a year ago, this Trojan ripped off more than 1.6 million customer records from Monster Worldwide Inc., the company that operates the popular Monster.com recruiting Web site.

Have you ever been phished?

ClearCase and My Uncontrolled Source

I had a really funny time working with the Rational ClearCase source control software yesterday.

I'm not a regular ClearCase guy. In fact, I hate source control software. It's always a pain until you realize its power. I've been working with ClearCase for five months or so, but I still don't feel comfortable with it, especially when it's integrated with Visual Studio 2005.

Yesterday, I tried renaming a file from "abc.cpp" to "xyz.cpp". Some crap happened in there and BOOM!! The file was gone, nowhere to be seen, neither in the ClearCase Explorer nor in Windows Explorer. My mouth was left open and my lungs deflated at the very thought of writing 1000 lines of code again. Where did the file go?? I didn't know!

The only thing I could think of then was to search for it. But how? Not manually through each directory, of course. Pop! I opened up Windows Explorer Search (which was unbelievably slow, considering that my files were stored on a "high-speed" processing server connected by a "high-speed" network). Was it an attack by some freaky terrorist trying to destroy my valuable work? Windows search disagreed with my thoughts. The results showed a file named "xyz.cpp.04ac136e421d4108b617d79bf2aec045" in a directory called "lost+found". Now, what does that mean? Was my file lost?? Probably it was, which is in turn very, very strange, and no one likes such surprises.

Thanks anyway to ClearCase for preserving a copy of the file before losing it. Folks, remember to take care of this when renaming ClearCase-managed files from Visual Studio. Have you had any such crazy experience?

Efficient XML processing

Nowadays, many developers deal with a lot of XML files every day. These files serve purposes ranging from configuration and documentation to data sharing, data transport and simplifying platform changes. They can grow to a very large size and need to be processed in an optimized way.
For example, while reading a configuration file, the module that reads the XML iteratively reads the tag of the current XML element and decides what processing it then has to do. A relatively large XML file can contain many different XML elements (differentiated by their tags) that need to be checked each time you encounter an element.
A brute-force way to achieve this is to check each XML tag with a string comparison in an if-else ladder. Say, for now, your XML file contains just three tags - Config1, Config2 and Config3. Your code would look something like this:
  
    class Caller 
    { 
        public void Call(string inputValue) 
        { 
            // Using the if-else ladder 
            if(inputValue.Equals("Config1")) 
            { 
                Method1(); 
            } 
            else if(inputValue.Equals("Config2")) 
            { 
                Method2(); 
            } 
            else if(inputValue.Equals("Config3")) 
            { 
                Method3(); 
            } 
        } 
    } 
 
All works well. But what if the number of XML tags that need to be handled grows each day? You will be handling "Config1" to "ConfigN" in the same way as before - using the if-else ladder. And what if you have no control over the value of 'N'? That is when the processing time for each file increases and a need arises to check the efficiency of the code. String comparisons take a lot of time, and having so many of them can ruin your code in terms of efficiency, maintainability and scalability.

If you try to visualize the above code in terms of a map, you would see that:

"Config1" maps to Method1()
"Config2" maps to Method2()
and so on...

Here's when you know it would be useful to modify your code to use a dictionary of handlers. Initialize it to store <key, value> pairs as <string, MethodHandler>, where the string key is the XML tag such as "Config1", "Config2" and so on, and MethodHandler is a delegate type that references the method to be called. This can be done using an initializer method such as:

        private delegate void MethodHandler(); 
        private SortedDictionary<string, MethodHandler> stringToDelegateDict; 
  
        public Caller() 
        { 
            stringToDelegateDict = new SortedDictionary<string, MethodHandler>(); 
        } 
  
        public void Initialize() 
        { 
            MethodHandler handler1 = new MethodHandler(Method1); 
            MethodHandler handler2 = new MethodHandler(Method2); 
            MethodHandler handler3 = new MethodHandler(Method3); 
            AddHandler ("Config1", handler1); 
            AddHandler ("Config2", handler2); 
            AddHandler ("Config3", handler3); 
            // All the handler methods are initialized here in the SortedDictionary. 
        } 

Note that we use a SortedDictionary here because we know the exact types of the keys and values that will be inserted. This saves us from the unnecessary downcasting we would need if we used a Hashtable.

Using this approach, you can also decide at runtime which handlers should be present in the dictionary by using the following methods that add or remove a handler.

Now you can subscribe handlers only when they are needed.

       // Adding a new handler 
       public void AddHandler(string inputValue, MethodHandler handler) 
       { 
           stringToDelegateDict.Add(inputValue, handler); 
       } 

       // Removing an existing handler 
       public void RemoveHandler(string inputValue) 
       { 
           stringToDelegateDict.Remove(inputValue); 
       } 
 
When you encounter an XML tag now, just make the same kind of call that you did earlier:

       Caller c = new Caller(); 
       c.Initialize(); 
       c.Call("Config1"); 
       c.Call("Config2"); 
  
However, you change your Call method to the following implementation:
 
       public void Call(string inputValue) 
       { 
           // Get the delegate for this tag; ignore tags with no handler. 
           MethodHandler mh; 
           if (stringToDelegateDict.TryGetValue(inputValue, out mh)) 
           { 
               // Make the call. 
               mh(); 
           } 
       } 

This definitely makes life simple when dealing with changes to the XML tags: adding handlers for new tags, removing handlers for existing ones, and doing both at runtime. Any change to the XML format now only requires changes in the Caller.Initialize method. Here, the method handlers act as subscribers that can dynamically subscribe/unsubscribe to a particular event.
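Since I keep switching between C++ and C#, here's a rough sketch of the same tag-to-handler dispatch in C++, using std::map and std::function in place of the dictionary and delegate (the class and member names are mine, not from any library):

```cpp
#include <functional>
#include <map>
#include <string>

// Tag-to-handler dispatch: the C++ analogue of the C# dictionary of delegates.
class Caller {
public:
    using Handler = std::function<void()>;

    // Register (or replace) the handler for a tag.
    void AddHandler(const std::string& tag, Handler handler) {
        handlers_[tag] = std::move(handler);
    }

    void RemoveHandler(const std::string& tag) { handlers_.erase(tag); }

    // Dispatch to the handler registered for this tag, if any.
    void Call(const std::string& tag) const {
        auto it = handlers_.find(tag);
        if (it != handlers_.end()) it->second();
    }

private:
    std::map<std::string, Handler> handlers_;
};
```

Lambdas make registration painless: `c.AddHandler("Config1", []{ /* ... */ });` replaces one rung of the if-else ladder per line.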

Now, what if your design requires calling both Method1 and Method2 to handle the tag “Config1”? With the original if-else ladder (call it the OrdinaryCaller design), you would handle it something like this:

       // The OrdinaryCaller.Call method needs to be changed like this. 
       public void CallModified(string inputValue) 
       { 
           // Using the if-else ladder 
           if (inputValue.Equals("Config1")) 
           { 
               Handler.Method1(); 
               Handler.Method2(); 
           } 
           else if (inputValue.Equals("Config2")) 
           { 
               Handler.Method2(); 
           } 
           else if (inputValue.Equals("Config3")) 
           { 
               Handler.Method3(); 
           } 
       } 

The problem with this is that you need to keep changing the code of the Call() method, and the main disadvantage is that you cannot change its behaviour at runtime.

Here is where the concept of multicast delegates comes into the picture. You slightly modify the AddHandler() method to support multicast delegates. This lets you have any number of delegate methods as handlers for each tag.

       // Adding a new handler 
       public void AddHandler(string inputValue, MethodHandler handler) 
       { 
           // Check if the string key is already present... 
           if (stringToDelegateDict.ContainsKey(inputValue)) 
           { 
               // If yes, combine the existing delegate and the new one 
               // into a multicast delegate under the same key. 
               stringToDelegateDict[inputValue] += handler; 
           } 
           else 
           { 
               // Otherwise, simply add the handler. 
               stringToDelegateDict.Add(inputValue, handler); 
           } 
       } 

Just call the AddHandler() method to subscribe a new handler and voila! You’ve got another handler working right where you wanted it.
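C++ doesn't have multicast delegates built in, but the same effect can be sketched by storing a vector of handlers per tag (again, the names here are my own invention):

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// "Multicast" dispatch in C++: each tag owns a list of handlers,
// and Call() invokes every handler registered for that tag in order.
class MulticastCaller {
public:
    using Handler = std::function<void()>;

    // Append another handler for the tag; all of them run on Call().
    void AddHandler(const std::string& tag, Handler handler) {
        handlers_[tag].push_back(std::move(handler));
    }

    void Call(const std::string& tag) const {
        auto it = handlers_.find(tag);
        if (it == handlers_.end()) return;
        for (const Handler& h : it->second) h();
    }

private:
    std::map<std::string, std::vector<Handler>> handlers_;
};
```

Registering two handlers for "Config1" then gives exactly the Method1-plus-Method2 behaviour, without ever touching the Call() code.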

I feel that the technique mentioned above is just a better way of handling large XML files. I'm sure cleaner options exist that I'm not aware of. What do you think? What would be a better and more efficient technique for this?

Saturday, February 19, 2011

Mosher’s Law of Software Engineering == Bullcrap

Mosher’s Law of Software Engineering – “Don’t worry if it doesn’t work right. If everything did, you’d be out of  a job.”

You have probably come across this one and perhaps even nodded in agreement. But what you fail to see is that this kind of thinking has reduced software development in India to a laughing stock. Not caring whether it works right is just an incentive to stay lazy. It's sad, but true. Managers are just happy if the number of billable hours is high, even if you were doing crap all day.

Frankly speaking, Indian software developers have just become brats. Until now, there has been far less competition for India from the rest of the world when it comes to delivering software at a low price. But now things are changing. Developing nations are coming up that can not only deliver the same software at a low price but also maintain far higher levels of quality, and they are doing it far better than we could possibly imagine. We seriously need to rethink and redesign the way we work if we want to stay in the race for long.

Delivering working software will not make you lose your job. That’s like saying Superman would be hanged for saving the world.

Mosher was a jerk.

Monday, November 22, 2010

Solving problems and making decisions

I've noticed that I usually go through a lot of trainings each year, have fun in them and then forget whatever happened. Trainings are definitely a cool way to interact with people outside our project staff, and if you happen to learn something new, that's a bonus. But most trainings are just crap. C. R. A. P. I'm not here to rant about some crappy training I've been through. It's just that we have a lot of fun in these trainings, but when you look back after a couple of years, the only thing you remember is whether there was a hot chick in the training class or not. So I'm writing down what happened in this one - what we learnt, what we could have learnt, and whether it has affected my normal routine in any way.

Lately, I was in a training called 'Problem Solving & Decision Making'. Sounds like bullshit. Kinda was, too. I don't understand why my manager nominates me for these trainings. But I'm also glad I went, as I met and socialized with a lot of interesting people, learnt a few things about myself and found out the difference between the 2007 me and the 2010 me.

The day started off in panic mode. We were ready for our trainer (Mrs. Pallavi Mandrawadkar) to begin at 0815 hrs, and look who comes in - another trainer for the same course. It was funny, incredibly funny. For a moment we thought the L&D department had screwed it all up, but things were resolved soon enough: another training for the same course had been organized in some other training room. Wow! It could have been a lot more awkward.

One of the things I realized during the second day of the training was that I was now a lot more comfortable portraying who I was and what I could do. I had a strong opinion in most of our group discussions, and I could see how different people thought about the same problem in different ways. That's the best thing about working in groups; the group's dynamic is more important than anything else. Another super-important thing I realized was how easily people are embarrassed by going wrong. We did an exercise on a tool called 'Appreciative Enquiry' and, the morons that we were, we understood the technique incorrectly and ended up doing half of it wrong. Embarrassing? Yes. But we're here to learn, and mistakes happen.

I'm glad to have worked in an environment where, although mistakes were frowned upon, it was never that bad. Everyone is given a chance to make things right. The fear of making mistakes makes you more and more paranoid, and you cannot be at your best when you're paranoid (no gyan, simple funda).

We also did another exercise where we had to build paper airplanes, market the product with a creative jingle and present/sing the jingle as an advertisement. After wasting some time convincing the team members of what a 'jingle' means, I finally started writing a simple one while the other members of my group made the paper planes. They looked cool. The jingle was done, and I needed a scapegoat who could sing it! Guess what? Nobody was willing. Well yeah, nobody wants to look like a fucking moron singing a nursery-rhyme-like jingle. But c'mon, it's just a frickin' training room with 15 people! And no offence to the ladies, but there were no hot chicks present either! (Most guys prefer not to act like douchebags in front of hot chicks. True fact.) I didn't want to do it because I had already done my share of presentations earlier and did not want to be termed a "stage hogger".

In all that commotion, someone finally decided to get up there and do the presentation. I asked him to sing the jingle nicely and do the regular advertising stuff (I've heard/seen a fair amount of jingles and I know the pattern quite well). But this guy (the jackass that he is) went up there, started talking some crap we hadn't even written down, and stole Cadbury Dairy Milk's tag line for the ad. Obvious disappointment! He was so embarrassed he just wanted to get out of there. No biggie. But then I thought to myself: would I let my group's work go to waste by staying quiet and not giving our original jingle even a single try? No can dosville, baby doll! I decided to give it a try myself and said, "Pallavi, we actually have an original jingle that the team has made. But maybe he's just a wee bit embarrassed to sing it, as it's a little funny. I'd like to give it a try."

I went up to the center of the room and blurted it out! To my surprise, everyone liked it. Wow! It's such a great feeling that you get when your audience approves of your crappy material. Coming up with a jingle in 10 minutes is hard. Very hard. But, we did it! That made my day!

The rest of the training went fine. Learned a lot, maybe, I can't remember.