Friday, October 29, 2010

Reliable as the clock!

Anyone who has faced the prospect of driving on California’s Interstate 405 (I405), which bypasses downtown Los Angeles but serves as a major arterial highway in and out of Los Angeles International Airport (LAX), cannot escape the nervous twitches that develop or the sense of dread that begins to overwhelm them.

Ignore the big green signs that direct you to the popular tourist destinations of Santa Monica and San Diego, and forget about the prestige and glamor that may lie beyond the exits to Wilshire Boulevard, Mulholland Drive, and Sunset Boulevard: this freeway, which can grow to be as wide as 12 lanes in some sections, is among the most dangerous in America to traverse.

Hard pressed to catch a flight out of LAX, there’s never any assurance that an accident won’t occur only a mile or so ahead, or that construction workers won’t close a lane or two, and what should take less than an hour ends up taking a lot longer as every car comes to a stop, reluctantly joining everyone else in an impromptu stationary parking lot!

I had the opportunity to travel to Toronto, Canada, this week to attend the Fall Canadian Tandem User Group (CTUG) event, a view of the vendor area featured in the picture above. And with an 8:00 a.m. flight out of LAX, I was apprehensive about the pre-dawn dash down I405. There have been days driving out of the pits and onto a race track that proved far less stressful, but armed with a cup of strong coffee I ventured out all the same.

I have a good friend who takes this route on a daily basis, and I often see him back in Simi Valley in the early afternoon, strong espresso in hand, still shaking and with a dazed expression on his face, as if his presence, unscathed, at the local Starbucks was as much of a surprise to him as to any of his friends! The distractions along I405 can overwhelm you and, as is so often the case, can become disastrous!

However, it turned out to be manageable – there were the usual police dramas underway in the emergency lanes, as cars had been pulled over and everyone slowed to take a look, and bridge overpasses were being worked on, with the usual lane closures creating additional choke points, but I made it to the airport safely and was soon standing in line with everyone else.

It was while passing through the security checkpoints that I noticed just how many passengers were removing their watches. I’m not sure whether the passenger X-ray equipment now has less tolerant settings, or whether today’s watches have simply become much larger than I recall seeing in previous trips down the security lines.

In the past, I used to check out the watches of those seated near me and look at the high-end models to see if they were real or just cheap knock-offs.

Good watches, of course, depend upon complex mechanical movements, and their seconds hands sweep around the watch face in a continuous, fluid motion. Less expensive watches employ a simple microprocessor “chip” that cannot match this movement and instead generates a series of jerky motions, as the seconds hand steps from one second to the next.

So whenever I see a Breitling, or a Rolex, or even a Patek Philippe exhibiting such a movement, I begin to wonder about the heritage of the watch being worn. It is in the engineering that the differences originate, and no matter the craftsmanship that goes into producing the less expensive movements, the manufacturers can never completely eliminate the visible jerkiness of the hand moving in increments, a second at a time.

My first thought when I looked at the various watches being worn was that a very similar comparison could be made between IT systems that have been engineered to run in real time and those whose origins are in batch. Not that batch systems can be viewed as jerky, but rather that irrespective of how much faster the chips become, and how much multitasking is pursued, they are still batch applications.

There are many of us who cringe as we recall some of the earlier labels associated with the NonStop server, not the least being On-Line Transaction Processing. This label was affixed at a time when the earliest online systems were appearing on mainframes, and it helped position the then-Tandem computer in a very specific marketplace.

As the market for NonStop grew and as database support was introduced, this label was pushed to one side – but the label remains as relevant today as it ever was. We live in an online world dominated by real-time access to information from around the globe. And the original architecture in support of NonStop is little changed from the time the very first Tandem computers were introduced.

I walked in a little late to the final Q & A session at CTUG as I had just finished up my own presentation on modernizing networks. I missed the question asked but Dick Bird was in the process of handing the microphone to Randy Meyer who heads product management for NonStop.

As best as I could tell, the question was in three parts, and spanned the full history of NonStop: what was the key product / service that most benefited NonStop; what could have been done better; and what were the challenges that lay ahead for NonStop?

These were all good questions, and perhaps ones that we have all been asking the management of NonStop for some time, one way or the other. Randy’s response to the first question, “The original architecture of NonStop, as it was expressed in the mid ‘70s,” turned out to be an excellent one. “Computer Science students will often discuss the architecture even today,” Randy later told the CTUG community.

How often we forget. NonStop is not something that can be added to a system, although the likes of IBM and Microsoft have tried and come close. Even Digital did a pretty good job of blurring the lines for a while, but the architecture of NonStop, which combines the sum of many fallible components in a way that produces a better-performing, more reliable platform, remains close to magic even today.

Fault tolerance, another label that’s not used as frequently these days, has to be engineered into the original architecture; it cannot be an afterthought or something layered on top of the Operating System (OS). Batch intervals can be reduced to where there are only absurdly small microseconds of gap, but truly on-line, and indeed real-time, systems have to be engineered that way from the start.
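The principle at the heart of that original architecture is often described as the process pair: a primary process checkpoints its state to a backup so that the backup can take over mid-stream when the primary fails. The following is a minimal sketch of that idea only, not the actual NonStop implementation; all the class and method names here are illustrative assumptions.

```python
# Illustrative sketch of the process-pair concept: a primary checkpoints its
# state to a backup after each unit of work, so the backup can resume from the
# last checkpoint if the primary fails. Names are hypothetical, not NonStop APIs.

class ProcessPair:
    def __init__(self):
        self.primary_state = {"count": 0}
        self.backup_state = {"count": 0}  # mirror of primary, kept via checkpoints
        self.primary_alive = True

    def do_work(self):
        if self.primary_alive:
            self.primary_state["count"] += 1
            # Checkpoint: replicate updated state to the backup before
            # acknowledging the work as done.
            self.backup_state = dict(self.primary_state)
        else:
            # Takeover: the backup carries on from the last checkpointed state,
            # so no completed work is lost.
            self.backup_state["count"] += 1

    def current_count(self):
        state = self.primary_state if self.primary_alive else self.backup_state
        return state["count"]

pair = ProcessPair()
pair.do_work()
pair.do_work()
pair.primary_alive = False  # simulate a processor or process failure
pair.do_work()              # backup takes over and continues
print(pair.current_count())
```

The point of the sketch is that the checkpointing is woven into every operation, not bolted on afterwards, which is precisely why fault tolerance of this kind cannot simply be layered on top of an OS.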

Time is really on the side of NonStop! Who could have imagined!

Just as the I405 can prove to be a distraction, with the wreckage from previous disasters clearly visible on the side of the freeway, there have been a lot of distractions for users of the NonStop server. Attending CTUG and simply hearing of the recent successes being enjoyed by the platform proved to be a welcome “refresher” on what the platform can provide.

“We are modernizing the Tandem world,” Randy quipped as he kicked off his session on the product roadmap. Java and .NET solutions can run today as easily on NonStop as on any other, less reliable, platform, and NonStop remains as relevant today as at any time in its history.

Probably more so with the arrival of the internet and the globally connected world we all work in today. Peel back the commodity, standards-based infrastructure and middleware, and what lies beneath reflects the architecture first embraced thirty-five years ago.
