Tuesday, December 11, 2007

Need to take a brisk walk!

Some of you have remarked on my weight, noticing, as I introduced photos from different periods of my life, how much slimmer I look these days. However, the photo you see here is from last weekend as I returned from Chicago – and you will notice how I am keeping clear of the moving walkways. I had known for some time now that my weight wasn’t what I wanted it to be and, during my physical last year, my doctor suggested I exercise regularly.

Back then, I wasn’t exercising at all and my family began to have a little fun at my expense, suggesting, at one point, that I resembled a beach ball! Wicked woman. So I have taken up walking here in Simi Valley, as well as in my Boulder neighborhood, and my colleagues will often be surprised by the roadside noise when I answer the phone. As many of you have been reading, I don’t spend all that much time in either city, so I have begun to map out walking routes at different airports. Chicago, Terminal 1 Concourse C, is a good one-and-a-half-lap option, as is Denver, Terminal B, from one end to the other and back. And at London’s Heathrow, Terminal 3 – a brisk walk to the end gates provides another workout.

It’s not only the walking I’ve been doing; I have also cut down on the quantity of food I have been eating. I have begun to be selective as well, and I am finding that I enjoy smaller, quality servings a lot more than fast food. It was against this background that I overheard our eldest daughter, Anna, complaining bitterly about her experience with migrating to Microsoft’s Vista operating system. She was particularly upset with the way Vista supported, or failed to support, other applications and hardware interfaces, as well as how big it had become.

Anna is the Technology Teacher at Platt Middle School, here in Boulder County, and is very familiar with today’s PC technology. Not only did she object to Vista having packed on the extra pounds, but she found it terribly slow as well. She then went on to tell me: “There are issues with printer drivers – HP claims that the driver was bundled with Vista when they shipped it, which it wasn’t. Microsoft pointed at HP and, of course, recommended buying a new printer. It’s also VERY slow to boot up, even though I don’t seem to have a lot of programs loading on start-up. My main frustration, however, has been that software from other companies is not yet designed to run on Vista seamlessly – kind of putting the cart before the horse. And then there’s MS Office Suite 2007 – and it too has a lot of issues!”

When I was putting material together for another blog posting on architects and artisans a few months back, Kim Brebach, a former colleague of mine at Nixdorf Computers, pointed out that there were very “few people who still write elegant code, instead of ‘bloat-ware’. Look at Vista – 15GB of lumbering code for a PC Operating System?”

Isn’t it time we put Operating Systems and Infrastructure Middleware on a diet? Isn’t it time they were all made to exercise and get back into shape? Shouldn’t we be going back to our favorite vendors and saying “enough is enough”? Is it just my observation, or are we being provided with more bad code than ever before, and aren’t we seeing rapidly deteriorating product and feature quality as a result?

Palmer King, a regular attendee at SUNTUG, as well as a reader of this blog, supported my view and said “I agree with your point, but alas, I’m not sure why. I’m guessing, but I think we have fallen back into the same trap we saw in the late ’80s / early ’90s, where management pushed for timelines regardless of the concerns of engineers. It’s the same trap NASA fell into with the Shuttle program.”

There is a very visible rush to produce applications quickly. There is enormous pressure on all of us to deliver new solutions in support of our business endeavors. We have traded quality for quantity. Palmer reminded me of the Dilbert comic strip where Dilbert asks his managers “Is it OK to do it wrong as long as we do it really, really, fast?”

Peter Shell, former President of OzTUG and someone with many years of experience developing infrastructure middleware, thought I might be overreacting when I suggested the quality of code was deteriorating. “I don’t necessarily agree that we are seeing a rapid deterioration in code quality, but I would agree that code is being released that is not as good as it once used to be,” he said, adding “and there are a few factors that are causing this – compressed timeframes, budget restrictions, backwards compatibility, and environment complexity.”

Some of this is beginning to creep into NonStop code, and this is a real concern for me. It’s not quite to the same degree as with other consumer or commercial software, but it’s becoming a lot more noticeable. When I put the issue of the quality of code on NonStop back to Peter, he remarked that he “hasn’t a whole lot of info on the NonStop. Personally, I think the hardware is much more reliable, however, and it’s the available software that seems to create most problems.”

At the European ITUG user event, a number of questions were raised during the Business Continuity SIG and Panel Sessions about the quality of NS SQL/MX. Mostly anecdotal, and often unsupported in terms of what had been reported to HP, but nevertheless enough to suggest that NS SQL/MX wasn’t of the quality of earlier versions of NS SQL. As I recall, new releases coming out in 2008 are aimed at improving quality.

Like everything these days, there’s no escaping the cost of quality, and the trade-offs we are forced to make between the quantity of code we produce and its quality. Let’s face it, good code is expensive to produce and, as Peter noted above, budget restrictions are playing their part, and the standard of Quality Assurance (QA) groups has begun to suffer. “Quality of anything costs money,” observed Neil Coleman, who worked with me at Insession, adding, “in software development you can cut development funding only up to a point – beyond that, nothing gets developed. But you can cut costs significantly in post-development activity, namely in QA.” Neil went on to suggest that we won’t see the impact of reduced QA efforts immediately, as it only becomes obvious that quality has suffered much later and, by then, it is often someone else’s problem.

We are living and working in an ever-changing environment, with continually shifting user requirements. And a lot of the problems I am seeing start with this – poor requirements-gathering processes that lead to poorly documented requirements. As Neil reminded me, “I tend to think that ‘rapidly changing environment’ is a bit of an excuse used after the event when in fact the real culprit is the lack of agreed-upon requirements before development projects even start!”

I don’t want to appear reckless here, or wrongfully suggest that the quality of code being developed in support of NonStop is in decline. In particular, I don't want to appear to be focusing solely on SQL/MX, even though I believe good database product offerings are crucial to the long-term viability of the NonStop platform. Furthermore, while I have no insight into the internal processes within HP NonStop development, I am very concerned about all the headcount cuts that have been made public, and I am looking for assurances that it will not get any worse.

I know the management team very well and have a lot of confidence that they will manage through this. I have seen an up-tick in the investment dollars being directed at NonStop development, and have to report that this is very encouraging. But have any of the headcount reductions over the past few years contributed to a declining requirements-gathering process? Are there simply fewer eyes looking at users’ requirements these days?

I don’t see anything similar to the public questioning that’s going on in the press over Vista happening anytime soon over the NonStop Operating System and its associated infrastructure middleware. And we are a long way from developing frustration levels like those visible among Microsoft’s user community. Our daughter Anna’s last comment to me was “honestly, I feel like perhaps I should have listened to my friends and bought a Mac!”

Switching platforms is not a likely outcome for any of us, but nevertheless, diligence across the ITUG community is very much needed. We have to engage our peers more routinely at ITUG meetings and events, for instance, and be wary of any signs of further erosion of quality. And we really do need to keep an eye out for any signs of our software becoming bloated, no matter what the excuse! The real problem, though, is that you can’t just take your software on a brisk walk down Denver’s Terminal B. You have to design the “slim lines” into it and make it fit, lean and mean, as you create it!

4 comments:

Steve Guendert said...

I would tend to agree that Windows has become extremely bloated, and most of the bloat is for absolutely useless features that the typical everyday user does not need, nor will ever use. IMHO it is an example of how arrogant Microsoft has become. It would appear that they do not listen to what the customer needs; they just decide what we need and base that decision on what a few whiz kids in Redmond need.

This underlying love of bloat shows in Mr. Gates' presentations. Compare his presentation slides sometime to Steve Jobs'. BTW, I am not an Apple user or fan.

Maybe Vista would have been more appropriately named Guernsey, Holstein, or Jersey after some of the different breeds of cow.

Anonymous said...

I found this to be very nice reading. My name is Mike Haggard, and I'm VP of MSTUG. We have not had a meeting in a while, and I need some new ideas on all the new tech that is out there; maybe you could come and give us a presentation. One of the last projects I was on was the Virtual Tape product provided by Crossroads. This product has saved tons of money for the company I work at. Thanks again, mike.haggard@morgankeegan.com

bhonaker said...

Interesting topic, and I think (in the NonStop/Neoview realm at least) I see mixed results. For example, NEPE (the Eclipse Plug-ins for building NonStop apps) seems to have gone to market slightly prematurely, and some issues I reported are 'to be fixed later' – in other words, they won't fix the product and make an interim release to keep you running until then; you'll need to wait until the next release. I've seen the same thing on a number of products. However, with many of the core products, you will get an SPR if you're in a 'system down' condition.

It is much worse in the Windows space, not only in single products but in products sold together. For example, my new laptop came loaded with Windows Vista 64 and Norton Internet Security 2007. The two have problems working with each other; though they were SOLD as a bundle, they aren't SUPPORTED as one. That probably adds to the perception (if not the reality) you wrote about.

Here's an interesting current 'CIO Insight' article that explores a similar view on how this applies to the 'custom' project side of the house:

Business Execs Expect IT Projects to Fail

Bill Honaker
XID Ltd.

R. Beauchamp said...

This is an extremely relevant topic in the industry today. I think your comments, along with those from Neil and Peter, accurately reflect many of the reasons for what we are seeing with "super sized" code in today's environment. I would also offer the perspective that many applications built in the '70s, '80s, and early '90s were purpose-built for very specific problems. They could be tested against specific, known requirements. Today we are turning those applications, and new applications, into "enterprise-wide" applications. These "enterprise" applications must now integrate with other applications across the company that, when built, had absolutely nothing in common. In many cases, we've left it to the engineers to figure out what it means to be "enterprise" compatible.

To do this, engineers add layers of code to existing software rather than re-engineer. This makes sense as long as Moore's Law continues to give us more and more processor power to "hide" our lack of attention to efficiency. But now that Moore's Law is starting to tail off, will multi-core allow us to continue this practice? The slowing (ending?) of Moore's Law is probably another topic for another day, but I think it has allowed the engineering community to ignore efficiency and contribute to code bloat by creating more lines of code, which in turn allows more errors to be introduced.

We don't have a choice regarding the movement to enterprise and open applications. But will we be able to drive the notion of efficient coding back into the engineering discipline? Not sure, given the move towards commoditizing the software developer function.