Following the last posting, where I talked of how much I enjoyed riding my motorcycle along the Colorado Front Range in the cool of autumn, I found myself this week relegated to doing chores. One item I had to complete was the smog-testing of our faithful, old-world, heavy-weight, gas-guzzling Cadillac SUV. It passed without any issues, and when I then drove to the county offices to renew the tags I came across something completely different from my SUV: a brand new Tesla! The picture above is of the Cadillac SUV parked behind this rather photogenic battery-powered coupe!
For those not familiar with the Tesla, it is pretty much the polar opposite of the Cadillac – powered by batteries with no emissions, helped out by the folks at Lotus, and wrapped in a very attractive package. But as tempted as I was (after all, it was the season to look at what the auto industry was bringing to market and to think about change), I really have grown fond of the old SUV. It would take a lot of extra mileage from a new car to offset the high prices demanded by today’s efficient hybrids. Every time I turn the ignition key in my Cadillac it starts, and it has proven to be highly reliable. And its size is a serious consideration: I am still old-school and feel so much safer on long cross-country drives than I do in a small car.
That night, after a quiet dinner at the local eatery, I pulled the cork from a 2001 Penfolds Magill Estate Shiraz – an occasion I noted in a network update I posted to LinkedIn. If it ever came to a situation where I could only drink the wines of one producer, I would be hard pressed to think of anyone other than Penfolds. My first exposure to the Magill Estate came while I was visiting Sydney back in 1992, when I was a Program Manager with Tandem Computers in the Distributed Systems Management (DSM) group. I was managing the NonStop NET/MASTER program, and it necessitated frequent trips back to Sydney.
The history of NonStop NET/MASTER, I am the first to admit, proved to be anything but spectacular. When I joined the project I was coming off a background that had been exclusively IBM, and IBM plug-compatible (PCM), mainframes. I had been successful introducing a line of small plug-compatible mainframes for Nixdorf, having funded the porting of NET/MASTER from MVS to DOS/VS – a version that ran exclusively on the Nixdorf mainframe. In a single deal, Nixdorf sold twelve PCM mainframes to the Australian Federal Government. The deal clincher was that IBM mainframe operators could monitor and manage the Nixdorf PCM mainframes with the network management tools they were already familiar with, since the Nixdorf machines appeared as just another networked SNA device.
When I talked with the executives of the company that developed NET/MASTER, the one system in an IBM SNA network they admitted to struggling to manage was the Tandem. NET/MASTER was just a VTAM application, and it was proving to be a big hit with bankers and retailers – nearly every one of whom had a Tandem front-ending their ATM and POS networks. Wouldn’t it be advantageous to all if some NET/MASTER components could be run on the Tandem? And surely, it wouldn’t be any more difficult than the port to DOS/VS undertaken for Nixdorf?
The senior Tandem technical folks from Cupertino took a trip to Sydney in late ’88 to consider the opportunity – a number of large banking customers were working behind the scenes to make NET/MASTER on Tandem a reality. And as I recall, following many glasses of good Australian red wine, a decision was taken to deploy NET/MASTER on Tandem. But then it all fell apart – for one reason or another, what was considered at the time to be the world’s best network management implementation was not ported but re-implemented, and not as a network management aid but as a new system management subsystem!
It proved to be a classic case of overkill: the Tandem systems of the mid ’90s simply didn’t have the spare capacity that was routinely exploited on an IBM mainframe of the day, and the solution struggled to perform. The NET/MASTER solution proved to be less than stellar by any account – but the problem it addressed remained. In a heterogeneous environment where Tandem systems were part of the solution, there were demands not only for the Tandem to be well managed, but for it to be easily integrated into the enterprise management tool of choice. Visibility of the systems, the networks, and the applications being run, from a central “flight deck,” became a necessity.
But circumstances did change, and the explosion in server populations changed the nature of the problem for good. As these servers were racked higher and higher, and spread deeper and deeper, the human factor took over: simple errors from missed messages often brought down critical applications. And as the pressure increased to protect private information, many of these outages made it onto the front pages of major newspapers and into the evening news broadcasts of the major television networks.
The data center had to be automated, and the servers charged with the oversight of the data center quickly became the most important servers of all. Across many installations, the trusted IBM mainframe became the central management hub, filtering every event message and routing it to the appropriate application. Surely, just as we had seen upon the arrival of ATM and POS networks, where 24 x 7 operations were mandated, the NonStop would become central to any data center manager’s plans!
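Purely as an illustration of that filtering-and-routing idea, here is a minimal Python sketch of a central hub that inspects every event message and dispatches it to a handler. The Event class, the severity levels, and the handler names are all hypothetical, invented for this example – they are not any actual NET/MASTER, mainframe, or NonStop interface.

```python
# A minimal sketch of the "central management hub" idea: every event message
# is filtered and routed to an appropriate handler. All names (Event, the
# severity levels, the handler functions) are hypothetical illustrations.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Event:
    source: str      # e.g. "ATM-NETWORK", "DISK-SUBSYSTEM"
    severity: str    # e.g. "INFO", "WARNING", "CRITICAL"
    text: str

def page_operator(event: Event) -> None:
    print(f"PAGING OPERATOR: {event.source}: {event.text}")

def log_only(event: Event) -> None:
    print(f"LOGGED: {event.source}: {event.text}")

# Routing table: which handler sees which class of message.
ROUTES: Dict[str, Callable[[Event], None]] = {
    "CRITICAL": page_operator,
    "WARNING": page_operator,
    "INFO": log_only,
}

def route(events: List[Event]) -> None:
    """Filter and dispatch every event so none is silently missed."""
    for event in events:
        handler = ROUTES.get(event.severity, log_only)
        handler(event)

if __name__ == "__main__":
    route([
        Event("ATM-NETWORK", "CRITICAL", "link to region 3 is down"),
        Event("DISK-SUBSYSTEM", "INFO", "nightly backup complete"),
    ])
```

The point of the sketch is simply that, once every message flows through one hub, a missed message becomes a routing-table problem rather than a human-attention problem.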
It never happened! And the innovation I thought it would trigger didn’t appear – not until now, that is. In the feature I wrote some time back, posted on Wednesday, February 18, 2009 as “Game changers!”, I reflected on how “NonStop servers can be deployed as critical components to monitor everything to do with a data center facility – whether a single building, or just part of the facility, or the entire global enterprise. Every major data center in the world, and the count of these facilities climbs quickly into the thousands and tens of thousands, could witness a NonStop server overseeing it all.”
A small start-up company, Modius, was porting their environment management application to NonStop. I added that “every time a customer took me on a visit of their ‘flight deck’ where the applications, systems, and networks were managed, I wondered why there hadn’t been an innovative engineer front-ending everything with a NonStop.” And then I concluded with the bold prediction: “NonStop is poised for a new era of cool-ness! … a NonStop in every data center!”
It hasn’t turned out to be quite this explosive, and Modius continues to look for the best channel to work with – the typical HVAC salesman isn’t quite up to the challenge of a NonStop, it would appear. In recognition of this, I have just read that Modius has recruited Ed Sterbenc, a former Tandem executive well known to many of us, particularly those with roots in the old ICON division of Tandem Computers, and this “addition” bodes well for Modius. But the basic premise remains – for key manageability solutions the NonStop server remains an ideal platform. And it is the coming changes in the data center that may just fuel the interest once again.
As we move to cloud computing, particularly within the enterprise as we deploy the clouds ourselves behind the firewall, the expectation is that the cloud will always be available. Transactions are routed around downed servers, servers experiencing peak loads have secondary transactions routed to less stressed servers, and the same application may run on different platform / OS combinations when its SLAs don’t mandate specific response times.
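To make that routing behaviour concrete, here is a minimal Python sketch: transactions avoid downed servers and flow to the least stressed healthy one. The Server class and its fields are hypothetical, chosen only to illustrate the idea, not to represent any particular cloud or NonStop product.

```python
# A minimal sketch of the routing behaviour described above: transactions
# avoid downed servers and prefer the least-loaded healthy one. The Server
# class and its fields are hypothetical, purely illustrative.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Server:
    name: str
    healthy: bool
    load: float  # 0.0 (idle) .. 1.0 (saturated)

def pick_server(servers: List[Server]) -> Optional[Server]:
    """Route around downed servers; send work to the least stressed one."""
    candidates = [s for s in servers if s.healthy]
    if not candidates:
        return None  # nothing available: the cloud is not "always on" after all
    return min(candidates, key=lambda s: s.load)

if __name__ == "__main__":
    pool = [
        Server("nonstop-1", healthy=True, load=0.35),
        Server("linux-42", healthy=False, load=0.10),  # down: skipped
        Server("linux-43", healthy=True, load=0.80),   # stressed: avoided
    ]
    target = pick_server(pool)
    print(f"transaction routed to {target.name if target else 'no server'}")
```

Of course, whatever is making these routing decisions has to be watching every server all the time – which is exactly the oversight role in question.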
Relying on something other than NonStop for this level of oversight may prove extremely short-sighted and become a risky proposition for many data center managers. For many of these managers, the NonStop has almost been forgotten, having run for many years without an outage of any kind. These same data center managers can be hard-pressed to even identify which of their servers is the NonStop – it’s in there somewhere, and no, I can’t recall what it looks like! The time may be right to re-evaluate the role of NonStop, and with so many of them deployed in critical situations it may surprise data center managers just how effective a “tool” these servers have become!
I didn’t buy a new motorcycle, and I didn’t buy a new car. And no, I didn’t open an exotic-car dealership, either – readers of the last blog may have been left speculating about the outcome. It’s going to take a lot for me to move away from my SUV – perhaps I will replace it with a more energy-efficient hybrid as my green side continues to develop. But the role the SUV fills, and the requirements I have for it, haven’t changed. I still need to navigate the winter roads of Colorado and make it through to sunny Southern California no matter what!
As I enjoyed my glass of Shiraz and thought back on earlier projects, I couldn’t help but think about the future of NonStop with respect to manageability, and the potential for NonStop to play an even bigger role. It still makes so much sense to me, even after all that was done with NonStop NET/MASTER. Isn’t it about time we revisited the very attributes that make NonStop so valuable to our business units and, as we contemplate moving to a different computing model, used it to keep everything operating reliably?
I am confident that some data center managers will recognize the value proposition, just as I am confident that solutions providers will renew their interest in NonStop. After all, the server has been in our data centers for decades in support of transactions – isn’t the stream of management events just another transaction stream? Perhaps we will see a NonStop in every data center after all!