Wednesday, June 2, 2010
What’s in your garage?
In case this looks unimpressive, what he was developing, and had already taken to a prototype stage, proved to be pretty sophisticated. Kevin was working on a way to measure potential impurities that could find their way into the fuel onboard satellites, where, after reaching orbit, there's very little opportunity to perform any maintenance. By shining light through the propellant as it passes through his measuring device, and measuring disruptions in that light, microscopic particles can be detected. Kevin has developed a reputation within military circles with previous inventions, and this latest device looks likely to be snapped up as well.
Why are such space-age creations in the hands of “garage inventors” like Kevin? And why would government agencies within the United States expect that such inventions would come from suburban garages? Time and time again, when something innovative is called for, it's the cadre of small inventors who routinely sort through the technological possibilities, often far removed from mainstream consideration, and yet come up with affordable products. It's no surprise, then, that so much of the technology we have come to rely upon has had its origins in small garages along the Californian coastline.
It's hard for any of us to ignore the history of Apple, with the fabrication of their first PC in a Silicon Valley garage, just as it's hard for anyone to ignore perhaps the most famous garage of all, the one used by the founders of HP that is now listed on the national registry of historic places. Even though I knew of these very famous garages, it wasn't until I stepped into Kevin's garage and listened to his passionate description of the research he was doing that I really had a sense of how literal garage research and garage prototyping could be, and of how much could be engineered from basic items you could buy from your local store. It's hard to imagine that the device to be used to measure particles in satellite fuel started out with an outdoor low-wattage light bulb, a couple of discs made from aluminum foil, and some round mirrors from a beauty salon!
Recently I had an email exchange with Jimmy Treybig, founder of Tandem Computers back in the mid '70s. Those who have seen the recent comments posted to the discussion “Scale? Not a fishy subject ...” in the Real Time View group on LinkedIn, a complementary social channel to this blog, will have seen that I was revisiting the “Tandem Fundamentals”. The discussion started with the remark that “a few days ago, a comment posted elsewhere by Nigel Baker has had me thinking - scalability, the oft-forgotten attribute of NonStop.” After all, with the emphasis on availability, what about scalability? And, just as importantly, with all the discussions about virtualization and cloud computing, is scalability becoming even more important than availability? After thirty-five years, should we rethink the attributes that first surfaced when Tandem Computers was little more than sketches on beer coasters?
In the email exchange with Jimmy, he explained that “scalability is the same as on-line repair which was there in the beginning (with Tandem). If some part fails, you must be able to repair it on line and then the system must expand while it is running to reincorporate the failed part.” In other words, while addressing the ability to provide a truly fault tolerant computer, and where a failed part (including a complete processor) could be taken offline, worked upon, and returned to service without disrupting the application that was running, scalability played an integral part in ensuring Tandem was fault tolerant!
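Jimmy's point, that a running system must keep serving work while a failed part is repaired and then expand to reincorporate it, can be illustrated with a minimal sketch. This is purely hypothetical Python (the pool, names, and methods are my own invention, not NonStop internals), showing a pool of processors that keeps handling requests while one member is offline for repair and later rejoins without a restart:

```python
# Hypothetical sketch of online repair and reintegration; none of these
# names come from NonStop itself.

class ProcessorPool:
    def __init__(self, names):
        self.online = set(names)

    def handle(self, request):
        if not self.online:
            raise RuntimeError("total outage")
        # any online processor can serve the request; pick one deterministically
        return f"{sorted(self.online)[0]} handled {request}"

    def take_offline(self, name):
        self.online.discard(name)   # repair begins; service continues

    def reintegrate(self, name):
        self.online.add(name)       # the system expands while running


pool = ProcessorPool(["cpu0", "cpu1"])
pool.take_offline("cpu0")            # cpu0 fails and is pulled for repair
print(pool.handle("txn-1"))          # -> cpu1 handled txn-1
pool.reintegrate("cpu0")             # repaired part rejoins, no restart
print(pool.handle("txn-2"))          # -> cpu0 handled txn-2
```

The essential property is that both the failure and the reintegration happen while requests keep flowing, which is why scalability and fault tolerance are two sides of the same coin.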
In the article I posted on April 29th, 2010 “Adding tow hooks?” that covered the news release on HP’s mission-critical “converged infrastructure”, where HP had made the decision to beef-up the redundancy and resiliency of the cross-bar fabric, I suggested that “NonStop users will recognize that this is exactly what ServerNet provides today. However, improving redundancy and reliability doesn’t create a fault tolerant system!” In that post I went on to add that “the difference between redundancy and resilience, to the fault tolerance NonStop provides, is similar to comparing a tow hook to the electronic aids of a modern car.”
From my earliest times at Tandem Computers, I have known that at the core of the Tandem Fundamentals there had always been Fault Tolerance, Scalability, and Data Integrity. Applications developers were quick to exploit these capabilities despite the lack of tools and infrastructure. Dr Michael Rossbach told me of how he “was approached in 1978 by a friend … (as) at the time, there was a lack of skills about Tandem among software vendors – they were all looking for resources to be trained in Guardian / TAL; there was no Pathway at that time … so I started training in the early spring of 1979 and started my own business in July 1979!” Dr Rossbach wasn’t alone and over the next three decades, solutions leveraging the availability attributes of Tandem appeared from every part of the planet!
However, while very few within the industry question NonStop's availability properties, even as competitors continue to hype how they are bridging the gap between their server offerings and the HP NonStop server, is it also time to look more closely at what really separates the NonStop server from all other server offerings, particularly as it plays such a significant role in the support of today's mission-critical applications? Perhaps it was time to check in with Martin Fink, Senior VP and General Manager, HP Business Critical Systems, and get his take on the key attributes of NonStop today!
In his response, and somewhat of a surprise, Martin was quick to list scalability first, stating “there are two general types of scale: Scale-up and Scale-out. Nonstop excels at Scale-out. Why is that important? Because when customers (like banks) need to deploy tens of thousands of ATM machines, they need to know that ATM machine #1 and #50,000 will perform the same way and deliver the same customer experience. That’s what NonStop does. Extreme scale, with consistent performance across the scale spectrum. Nothing else can do it as well as NonStop.”
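One way to see why scale-out can deliver consistent performance is partitioning: spread the workload across nodes so that every ATM, whether #1 or #50,000, takes the same per-node service path. The sketch below is a simplistic illustration of that idea (simple modulo partitioning of my own choosing, not NonStop's actual design):

```python
# Hypothetical scale-out sketch: each ATM is mapped to exactly one node,
# so load per node stays even and the service path is identical for all.

def route(atm_id, num_nodes):
    return atm_id % num_nodes

nodes = 16
assignments = {}
for atm in range(1, 50001):
    assignments.setdefault(route(atm, nodes), []).append(atm)

# every node ends up serving exactly 50000 / 16 ATMs
sizes = {len(v) for v in assignments.values()}
print(sizes)  # -> {3125}
```

Adding nodes simply shrinks each partition, which is the “consistent performance across the scale spectrum” Martin describes, at least in this idealized form.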
Martin then added real-time performance as an attribute, pointing out to me that “when a cellular operator needs to decide in less than a second that a subscriber is authorized to make a call, NonStop delivers that. But, the real point is that NonStop does it in real-time when millions of subscribers are trying to connect calls all at the same time. I don’t know of anything else out there that can deliver that kind of real-time results on the scale of millions of transactions the way NonStop does.”
Having read the blog post already referenced here, Martin agreed with me, adding “as you point out in your article, there’s more to fault-tolerance than redundancy. While most systems out there (including Unix, Linux, Windows) operate under the concept of ‘Fail-Over’, Nonstop combines a shared-nothing hardware infrastructure with a software ‘Take-Over’. The Nonstop take-over system operates at the process level and is near instantaneous. The point here is that not only does NonStop deliver extreme resiliency, it does it in a transparent way, and with the simplest of configurations.”
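The distinction between “fail-over” and “take-over” is worth a sketch. With take-over, a backup process already holds checkpointed state from the primary, so it can continue near-instantaneously rather than restarting from scratch. This is a hypothetical toy model in Python (the class and method names are mine, not NonStop's process-pair API):

```python
# Hypothetical sketch of a process pair: the primary checkpoints state to
# its backup after each operation; on failure the backup takes over from
# the last checkpoint. Not NonStop internals, just the concept.

class ProcessPair:
    def __init__(self):
        self.primary_state = {"balance": 0}
        self.backup_state = {"balance": 0}   # checkpointed copy
        self.primary_alive = True

    def deposit(self, amount):
        if self.primary_alive:
            self.primary_state["balance"] += amount
            # checkpoint: replicate the state change to the backup
            self.backup_state = dict(self.primary_state)
        else:
            # take-over: backup continues from the last checkpoint
            self.backup_state["balance"] += amount

    def balance(self):
        state = self.primary_state if self.primary_alive else self.backup_state
        return state["balance"]

    def fail_primary(self):
        self.primary_alive = False   # backup takes over transparently


pair = ProcessPair()
pair.deposit(100)
pair.fail_primary()       # primary crashes mid-session
pair.deposit(50)          # backup resumes from checkpointed state
print(pair.balance())     # -> 150
```

A fail-over system, by contrast, would typically restart the application elsewhere and recover state from disk, which is why take-over at the process level is so much faster and more transparent.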
Finally, rounding out the list of key attributes, Martin didn’t miss the chance to talk about open standards, and finished with “that was the point of bringing NonStop to the blades world. NonStop now uses standard blades (the same ones used in the Integrity portfolio). Where others develop fault-tolerant systems thinking proprietary from the ground up, we think about standards from the outset and focus our innovation on things that really matter to customers. Things like NSK take-over, extreme scale-out, shared nothing, etc.”
As Kevin guided us around his garage laboratory, new projects were already starting. Kevin’s enthusiasm never missed a beat, and it was certainly contagious. In my exchange with Jimmy I asked him whether Tandem Computers had its origins in a garage as well. Unfortunately, when it came to a full-fledged computer system such as a Tandem, starting in a garage was not an option. As Jimmy explained, “a garage start-up was not possible (as it) took too much money ($3 million), and there was not a product that could generate revenue before the total was finished.” Tandem Computers gave us the Friday beer-bust, First Friday reviews, the TOPS club, but no, there wasn’t a garage.
And yet, I have to believe there were many people, like Kevin, every bit as enthusiastic about what they were building. That there is a readership today still interested in commenting on the attributes of NonStop and on the Tandem Fundamentals is testament to the material impact the technology continues to have on the way we support applications. Garage or not, the innovation that surfaced with Tandem, and that still intrigues so many of us in the industry, is as relevant today as it was thirty-five years ago!