Allow me a brief rant. Consider the program:
a + b
This is a very failure-prone program because it fails silently in many useful cases.
Root cause analysis would suggest that one should modify + to simply compute the right answer in all useful cases. This has been done by programmers over and over, but these implementations of + are judged "odd ball" by management, who would rather run a strict Java or C# shop out of some weird distortion of the "prudent man" rule. (The prudent man rule says that you can't be accused of being reckless with your investors' money if you are doing the same thing everyone else does.)
I wrote financial software once and traced the root cause of difficult bugs to the implementation of + for data of types Date and Money. Fortunately I was programming in Smalltalk where the implementation of + was accessible. I took several days to correct these deficiencies after which my life, and that of my customers, became much better.
You might be thinking that there is nothing wrong with + and that Ward is just being cranky. I then say to you, you have not gone far enough in your root cause analysis.
And I say to business, who has selected and paid for most of today's computing infrastructure, you are fools for having funded 50 years of software and not yet gotten a + that works well for time or money.
I once advised an international company on how to implement a useful + for money. The developers loved it. The customers loved it. But the database didn't like it much at all. I met one of the developers a few years later. He asked, "Do you have any idea how hard it is to persist that money abstraction to the database?" Yes, I had to admit that I knew it would be trouble, but I also knew that they would get through it somehow once they got hooked on getting right answers in all of their money calculations.
(Aside: The database problem comes from the fact that the sum a + b can take twice the storage of either a or b when a and b are amounts in different international currencies. This requires either a variable-size storage mechanism or preallocation of space for hundreds of currencies. Databases favor neither solution. Again business has been fooled by the database vendors, not the programmers.)
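Ward doesn't name the abstraction he built, but one common reading of the aside is a "money bag" that keeps an exact per-currency total. The sketch below is my own minimal illustration of that idea (the names Money and of are mine, not his), showing why the sum of two single-currency values can need twice the storage:

```python
from fractions import Fraction

class Money:
    """A money bag: exact per-currency totals, no rounding, no fixed width."""

    def __init__(self, amounts=None):
        # currency code -> exact amount
        self.amounts = dict(amounts or {})

    @classmethod
    def of(cls, amount, currency):
        return cls({currency: Fraction(amount)})

    def __add__(self, other):
        # Merge per-currency totals; a cross-currency sum keeps both entries.
        total = dict(self.amounts)
        for cur, amt in other.amounts.items():
            total[cur] = total.get(cur, Fraction(0)) + amt
        return Money(total)

    def __repr__(self):
        return " + ".join(f"{cur} {amt}" for cur, amt in sorted(self.amounts.items()))

a = Money.of("19.99", "USD")
b = Money.of("0.01", "USD")
c = Money.of("5.00", "EUR")
print(a + b)      # USD 20
print(a + b + c)  # EUR 5 + USD 20 -- the sum now needs room for two currencies
```

Note how a + b + c cannot be squeezed back into one fixed-width column: that is the persistence trouble the developer complained about.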
I saw recently that the IEEE was proposing a standard for "decimal floating point" under the misguided belief that this would alleviate rounding errors. They are fools, for they attempted to do this within fixed-size storage, the one "feature" of floating a "point".
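Ward's objection can be seen in miniature with Python's decimal module (my illustration, not the IEEE proposal itself): give decimal arithmetic a fixed number of significant digits, as any fixed-size format must, and rounding error comes right back. Here I assume 16 digits, roughly what a decimal64 format carries:

```python
from decimal import Decimal, Context

# Emulate a fixed-size decimal format with a 16-significant-digit context.
ctx = Context(prec=16)
big = Decimal("1E+16")
# 1E+16 + 1 needs 17 digits; in 16 digits the added 1 rounds away.
print(ctx.add(big, Decimal(1)) == big)  # True
```

Decimal digits change which values round, not whether rounding happens; only storage that grows with the answer avoids it.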
(Aside: The program a / b introduces additional potential errors which are again more correctly solved by variable allocation of storage than by "floating" a "point", decimal or otherwise.)
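One hedged sketch of what "variable allocation of storage" buys for a / b, using Python's exact rationals (my choice of illustration, not Ward's withheld solution): the numerator and denominator grow as needed instead of being rounded to fit.

```python
from fractions import Fraction

# Binary floating point rounds; exact rationals do not.
print(0.1 + 0.2 == 0.3)                                      # False
print(Fraction("0.1") + Fraction("0.2") == Fraction("0.3"))  # True

third = Fraction(1) / Fraction(3)
print(third)            # 1/3 -- held exactly as a ratio, not a rounded digit string
print(third * 3 == 1)   # True
```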
I have intentionally stopped one sentence short of offering a complete solution in each of these cases. This is so that you can print this email and give it to the next Six Sigma guy that comes around as a test. Ask him to explain what I am talking about. If he gets it, ask him why he isn't hounding the vendors into fixing + instead of bothering you. If he doesn't get it, ask him how his methods are going to work on large programs when they can't find the bug in a + b.
(Aside: I heard that my financial software written for DOS is still in use today managing trillions of dollars. I also heard that the current owner/vendor was trying to meet customer demand by porting it to an industry standard database rather than the "odd ball" one I wrote for them. This porting effort was not going well. No surprise.)
You may need to read this post several times to get all the good advice that I've hidden between the sentences. I thank you for your attention. Best regards. -- Ward
This was posted to email@example.com by Ward Cunningham, 09/07/2007.