Thursday, April 29, 2010

Adding tow hooks?

Last week I returned to A&A Corvette Performance to have prerequisite hardware added to the car - not to improve the safety of the Corvette, but to enable it to be pulled out of the dirt should I drive off-track. While some tracks provide lots of run-off area, and returning to the track is not all that hard, drop a wheel off the pavement at other circuits and you can very easily get caught in a gravel pit that’s impossible to drive out of. The installation process took much longer than I thought, but I kept reminding myself of the important role the “tow hooks” would play! The picture above shows the car with the rear tow hook, painted bright red of course, already installed, and waiting for access to a lift so the front tow hook could be installed.

This year I have benefited from the support of Pyalla Technologies as the company has offset some of the costs associated with driving a street car on a circuit. Perishables, such as filters, brake pads, and tires, as well as all the car’s fluids, require changing on a regular basis and it is imperative that the car is well looked after. Incidents do happen, of course, and readers who follow my weekends at the track on the web site “Buckle-Up” will be familiar with the number of times I have failed to keep the big Corvette headed where it’s supposed to. While not the serious sponsorship other cars attract, it’s all part of the marketing budget of Pyalla Technologies. The support is valuable all the same, as being able to install the tow hooks goes a long way to protect the Corvette’s body work from damage, should tow trucks arrive and begin throwing out lines.

The circuit that necessitates the installation of a tow hook is the famous Laguna Seca circuit down by Monterey, and perhaps the track I have most wanted to drive since taking up the sport. I not only follow motor car races but motorcycle races as well, and Laguna Seca hosts a round of the MotoGP championship – the Formula One of motorcycles. The gravel traps that surround the circuit, and that mandate the installation of tow hooks, have been created to lessen serious injuries to the motorcycle racers, who have found barrels and guardrails a lot less forgiving.

Just thinking about Laguna Seca takes me back to my days at Tandem Computers, Cupertino. While working in the NonStop Software group, Margo Holen was given tickets for a one day outing on the Laguna Seca circuit and she gave them to a manager who worked for her at the time, Jim Nolan, a fellow Australian; I was so jealous of Jim! In the late 1980s, a number of Tandem employees from the Australian offices made the move to Cupertino – Ray Whiteside, Steve Bailey, Jim Nolan, Aldo Adriaan, John Donelan and myself all worked out of Tandem offices along the Australian East Coast. Over the years we have traded emails fairly regularly. I never did hear back from Jim about the day’s outing, but I have to believe it was a blast! I don’t even know what car he drove!

Jim and I worked for a while in the Comms building, Building 200 as I recall. While I was toiling away on DSM, Jim had become an advocate for standards, and was one of the first to work on implementing the OSI stack on Tandem. I am not entirely sure, but I think Jim was also involved in some of the work associated with the implementation of CORBA. Jim has since moved on to other endeavors and, checking LinkedIn recently, I find he is now at Cisco, which comes as no surprise. Getting the opportunity to drive at Laguna Seca, however, will be how I best remember Jim.

It’s been hard to miss the press releases and news headlines about HP – it’s been one of the busiest periods that I can recall. With the purchase of 3Com completed, HP executives have wasted little time moving to the next big thing, and for many HP watchers, the proposed purchase of Palm was the highlight of the week. Earlier this week, however, HP unveiled upgrades to its high-end servers and talked of delivering “the industry’s first mission-critical converged infrastructure in one platform,” as BusinessWire reported the occasion. The electronic publication then went on to add how this was a significant announcement, “representing the first major architectural upgrade for Integrity Superdome in a decade.”

From my perspective, this announcement had been in the works for some time, as these were the first HP servers to utilize the Intel “Tukwila” 4-core chips. For some time, Intel has been telling us that Tukwila will be “the world's first 2-billion transistor microprocessor!” with Intel pointing out how Tukwila has been “designed to provide highly scalable and reliable performance for mission-critical enterprise server solutions.”

These Intel messages haven’t been lost on HP as, positioning this next generation of servers, HP has elected to trumpet loudly the value proposition of HP Integrity servers to enterprises running mission critical applications. As Reuters reported late on Monday, April 26th, 2010, “HP’s new Integrity servers are targeted at industries such as financial services, manufacturing, telecommunications, and healthcare for so-called ‘mission critical’ functions where system failure can cost millions of dollars”. The more I pored over the material the more I had to remind myself that I wasn’t reading about NonStop, as the language HP used to describe the new HP Integrity server offerings looked very similar.

There’s much that’s admirable about the new Superdome 2 – and in demonstrating the converged infrastructure, HP again points to another first when it says “businesses will be able to support a shared-service model with a common pool of adaptive, virtualized resources based on standards that can be optimized for all types of enterprise, cloud, and high-performance computing (HPC) applications.” Digging deeper, HP suggests that a common pool of adaptive … resources means businesses can “plug and play” and mix any combination of Linux, Unix, and Windows in the same bladed infrastructure. The “cloud in a box”, or “Megaplex”, as The Standish Group described it earlier this year, is rapidly becoming a reality.

Perhaps the language most familiar to the NonStop community that I came across was in reference to the crossbar fabric that’s now part of the Superdome 2. Implemented so that the Superdome 2 could scale up, with I/O, memory and processors each able to scale independently, the crossbar fabric also saw HP make the decision to beef up its redundancy and resiliency. NonStop users will recognize that this is exactly what ServerNet provides today. However, improving redundancy and reliability doesn’t create a fault tolerant system! (If you aren’t sure what constitutes a fault tolerant system then check out the discussion “What does fault tolerant mean to you?” underway in the LinkedIn group, Continuous Availability Forum.)

While the redundancy and resilience of the crossbar fabric will play a role in getting you healthy again should something happen, for the NonStop community it looks a lot like a brightly painted tow hook; something you never really want to have to use, but whose presence is welcomed when the time of need arises. The difference between redundancy and resilience and the fault tolerance NonStop provides is similar to comparing a tow hook to the electronic aids of a modern car. Stability control! Traction control! Anti-lock braking systems! All designed to intervene on your behalf and to keep you from ever leaving the track!
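To make the distinction concrete, here is a toy sketch (in Python, and emphatically not NonStop’s actual implementation; all names are illustrative) of the process-pair idea behind fault tolerance: the primary checkpoints its state to a backup after each unit of work, so when the primary fails mid-stream the backup takes over from the last checkpoint and the service simply continues. A merely redundant spare, by contrast, is the tow hook: it sits idle until someone arrives to hook it up.

```python
# Toy process-pair sketch: fault tolerance as continuous takeover,
# not as a spare waiting to be towed into service.
# All class and method names here are illustrative assumptions.

class ProcessPair:
    def __init__(self):
        self.primary_state = 0   # units of work completed by the primary
        self.backup_state = 0    # last state checkpointed to the backup
        self.primary_alive = True

    def do_work(self, units):
        for _ in range(units):
            if not self.primary_alive:
                # Takeover: the backup resumes from the last checkpoint,
                # invisibly to the client - no tow truck required.
                self.primary_state = self.backup_state
                self.primary_alive = True
            self.primary_state += 1
            self.backup_state = self.primary_state  # checkpoint to backup

    def fail_primary(self):
        self.primary_alive = False


pair = ProcessPair()
pair.do_work(3)       # three units complete, each checkpointed
pair.fail_primary()   # the primary dies mid-stream...
pair.do_work(2)       # ...and the backup carries on seamlessly
print(pair.primary_state)  # -> 5: no completed work was lost
```

The point of the sketch is only the shape of the behavior: the failure is absorbed inside `do_work` itself, rather than being handled after the fact by some external recovery step.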

So, where was NonStop in this week’s announcement and why weren’t there any references made to NonStop in the press release? The announcement was barely hours old before I began getting emails, many of which suggested that, with the absence of NonStop from the press release, it may further weaken the message of NonStop within HP. But in talking with those within HP familiar with the project and familiar with NonStop, the message I received was clear. NonStop will be supporting the new blades; the two-socket package housing two 4-core Intel Itanium 9300 series processors still requires considerable testing.

When it comes to converged infrastructure and to a shared infrastructure blade chassis, I was reminded by Martin Fink, HP’s Sr. Vice-President and General Manager, Business Critical Systems, of how “the latest Nonstop blades are part of the same overall BladeSystem infrastructure that we use for the entire HP server line, including the Integrity line. That was the point all along. The entire BCS portfolio is now bladed from the entry 2-socket Integrity blades to Superdome and all the way up to Nonstop.” Martin then addressed the subject of NonStop more directly, adding “the innovation focus, for NSK, has been bolting our SMP environment underneath the MPP NSK kernel so that we can better take advantage of multi-core performance.”

It’s probably extremely unfair to dismiss all the work that went into the new HP Integrity solutions as nothing more than adding a tow hook, just as it’s probably a little too complimentary to the HP Integrity NonStop to suggest that the NonStop servers come equipped with something akin to a modern car’s electronic control systems. It’s also unwise to decline hardware advances that add redundancy and improve resilience, as the new blade packages provide, so NonStop will be sporting tow hooks as well in the near future.

However, the images do hold true in one respect; while there are times at different tracks where I elect to cancel out the electronic intervention and “manage the car” myself, the Corvette will not let me switch off everything and the car will continue to work at keeping me on the road, despite my best efforts. And for that, I am thankful I have a lot more than just the new, bright red, tow hooks!

5 comments:

Moore said...

I had the same thoughts when I read about the new crossbar especially the processor-i/o disconnect.
But when I read up on hard partitioning, nPars, in the "Partitioning Continuum" which supports Virtualisation, I was taken back to the "not SMP" slide I use when introducing NS to the wider Integrity world.
nPars takes a system capable of being a big n-way and breaks it back down into a number of smaller, totally isolated n-ways, to reduce the fault zone. Then for a big application, you have to find a way to distribute it in a clustered manner across the partitions and provide it with a shared or replicated db. Familiar?

Robert said...

Actually there are some hidden switch settings that will turn off all the electronic nannies (except ABS, not sure that can ever be turned off without cutting a switch into some wires). But unless you are a real pro (and if we were, we wouldn't be driving our own cars), doesn't hurt to leave some of the nannies in place.

oltranscendence said...

Yes, NonStop was conspicuous by its absence in those announcements. It's good to see that there is some push (yet to be clearly articulated) to integrate the NonStop into the 'Converged Infrastructure', and once that happens, it could be a rallying point for the community!

Richard Buckle said...

Robert - yes, I rely on the "nannies", as you observe. I track my own car, after all; however, after the first session, I do turn off the nannies trying to keep me in a straight line - after all, I have gone through brake rotors when I left them on ...

And Moore, I din't read the same doco on crossbar - but yes it sounds very familiar. Lot's of folks misunderstand that beneath the MPP of NonSto's shared nothing is the clever (and new) capability of supporting small SMP clusters - a change that came from jumping on the Intel roadmap! Good call ...

Richard Buckle said...

And Moore? "NonSto's shared nothing is the clever (and new) capability of supporting small SMP clusters" shoudl have read NonStop's ... and perhaps my reference to small SMP clusters doesn't truly convey what I meant ...

I will leave it as "beneath the MPP of NonStop's shared nothing is the clever (and new) capability of supporting small SMP "packages" ..." to reflect the multi-core Intel Itanium packages housed in the blade socket(s) these days!