Because of the naming-conventions article I wrote some time ago, I tend to get a lot of guestbook comments on both sides of the issue. This latest comment illustrates one of the reasons I am so dead-set against Hungarian notation (or, more precisely, its common [mis]use).
Puneet Mehresh wrote:
intEIN = objCustomer.Name("xyz"); by reading this, one can easly depict that an object is returning and integer value. rather if I write
EIN = Customer.Name("xyz");
In this example, he is showing that an EIN (presumably the Employer Identification Number, which is like an SSN, and in the case of a sole proprietorship often is an SSN) should be prefixed so you can tell it is an integer. Ignore for now that the code itself is difficult to parse just by looking at it (why does Customer.Name return an EIN?), and concentrate on the broader issue this illustrates.
I counter that treating the EIN as an integer is both wrong and meaningless. EINs can have leading zeros, so you end up writing formatting code to restore them: converting the value to a string, checking its length, and padding the missing zeros back onto the front.
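To make the round trip concrete, here is a minimal sketch (in Python, purely for illustration; the nine-digit length is the standard EIN format) of what storing an EIN as an integer costs you:

```python
# EINs are nine digits and may begin with zeros, e.g. "041234567".
ein_text = "041234567"

# Storing it as an integer silently drops the leading zero...
ein_as_int = int(ein_text)
assert str(ein_as_int) == "41234567"  # eight digits: the zero is gone

# ...so every display or comparison needs repair code to pad it back in.
def format_ein(value: int) -> str:
    return str(value).zfill(9)

assert format_ein(ein_as_int) == "041234567"
```

The repair function exists only because the variable's "real" type was never an integer in the first place.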
More importantly, I don't think it is useful to tell that the EIN is an integer. If the better forms of Hungarian were used, an EIN would have a classification all of its own, so someone would know that it is an EIN and should conform to the rules and norms of an EIN, not those of an arbitrary integer value. It would indicate a type in the problem domain, not a CLR type. To know something about the real type you're modeling -- that is useful. The fact that it started as an integer is useless trivia.
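One way to read "a classification all of its own" is to give the EIN an actual type in the problem domain rather than a prefix. A hypothetical sketch (class name and validation rule are my own assumptions; the nine-digit, XX-XXXXXXX display form is the standard EIN convention):

```python
import re

class Ein:
    """An Employer Identification Number: a nine-digit identifier, not an integer."""

    _PATTERN = re.compile(r"^\d{2}-?\d{7}$")

    def __init__(self, text: str):
        if not self._PATTERN.match(text):
            raise ValueError(f"not a valid EIN: {text!r}")
        # Keep the bare digits; leading zeros survive because we never pass through int.
        self.digits = text.replace("-", "")

    def __str__(self) -> str:
        # Conventional XX-XXXXXXX display form.
        return f"{self.digits[:2]}-{self.digits[2:]}"

ein = Ein("04-1234567")
assert str(ein) == "04-1234567"  # the leading zero is preserved
```

Now the name can simply be "ein", and both the compiler and the reader know it conforms to the rules of an EIN, with no prefix to drift out of date.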
When this variable gets changed to a string in the future, will the developer remember to rename it with "str" (ugh) or whatever string prefix they plan to use? More often than not, the old prefix is kept, because the maintenance developer doesn't want to take the trouble or risk of doing a search and replace throughout the code. I can't blame them: that can be messy, and depending on the contexts in which that particular set of characters, "intEIN", appears, a replace could introduce subtle or not-so-subtle errors. (Think about the field name "xyz" actually being "intEIN", as I've seen in some poor code, and you can appreciate where the errors will crop up.) Once the prefix falls out of line with the type, you not only have the original noise of the type indicator, you now have the added problem of it being inconsistent with the underlying type.
As I wrote in my article, I think the day for Hungarian Notation has long since passed. If you want to use Hungarian to provide true and useful meaning in your code by indicating types that exist in the problem domain as opposed to the programming language, I can almost see a case for that. However, seeing code with 50,000 "obj" or "str" or "int" prefixes in it simply contributes to the noise of the code and to problems visually parsing the logic.
If I really want to know the CLR type we're working with (and can't immediately tell because I'm using loosely typed collections as above, or because I wrote a procedure that spans (ugh!) more than one screen), I'll hover over it with my mouse in the IDE.
Then again, maybe I'll just prefix every variable with "mob" for "meaningfully ordered bits" [:P]