After two-plus months on the road, meeting with many members of the NonStop community as we drove almost 6,000 miles in the company command center, it's a good feeling to once again be able to say we have a permanent address. At least, as permanent as anything can be these days, given the nature of the business we are in – content generation can be done almost anywhere, but even so, having a residence that could at best be considered temporary certainly came with many distractions. On the other hand, we did stop in at N2TUG as well as HPE Discover 2017, and we did get to meet with clients in southern California.
To directly answer questions I have received of late, our new residence is just a little north of the Niwot address where we used to reside. In fact, it is just to the east of Ft. Collins, home to a number of HPE executives and managers, and East Harmony Rd is now just two exits further up the highway. In many ways we feel as though we are closer to Cheyenne, Wyoming, than we are to Denver, Colorado. Having scaled down considerably, though, the issue of scale has come up a lot of late, and for the NonStop community, scalability remains a huge positive when it comes to why applications are still running on NonStop. Perhaps not so much scaling down, as in our case, but much more likely scaling up, naturally!
You can play scales on the piano all day and you can scale new heights as a mountain climber, but when it comes to computing, dimensions are involved: you can scale up and you can scale out. The shared-nothing architecture of NonStop is perhaps the best example of a computer architecture that supports almost unlimited scale-out, and with as much focus as there is today on cloud computing, it is this scale-out attribute that generates much of the discussion.
Expressed as simply as I can, you want to embrace cloud computing to have no practical limits on the resources available to you – isn't that what NonStop has been providing for decades? Scale up and scale out are familiar terms to NonStop users, and they remain very important attributes of NonStop systems. "When you think of scalability as an architect, you must consider both vertical and horizontal scalability," NonStop consultant Dean Malone wrote in a recent email exchange. "Vertical scalability is the ability to add more and more processors and storage to a given server, whereas horizontal scalability is the ability to add more servers to expand processing capacity."
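For anyone who likes to see the distinction spelled out, here is a minimal sketch in Python – purely illustrative, not NonStop code, and the route and process_partition names are my own assumptions – of how a shared-nothing, scale-out design spreads work across independent workers, while scale-up would instead mean making each individual worker bigger:

```python
# Minimal sketch of scale-out vs. scale-up using a shared-nothing worker pool.
# Illustrative only; not NonStop code or any vendor's API.
from multiprocessing import Pool

def route(txn, num_partitions):
    """Shared-nothing routing: each transaction belongs to exactly one partition."""
    account, _ = txn
    return hash(account) % num_partitions

def process_partition(txns):
    """Each worker processes only the transactions routed to its own partition."""
    return sum(amount for _, amount in txns)

if __name__ == "__main__":
    transactions = [("acct-%d" % i, i * 10) for i in range(1000)]

    # Scale out: raise num_workers and the work simply spreads across more
    # independent partitions -- loosely analogous to adding CPUs to a system.
    num_workers = 4
    partitions = [[] for _ in range(num_workers)]
    for txn in transactions:
        partitions[route(txn, num_workers)].append(txn)

    # Scale up, by contrast, would mean giving each individual worker more
    # cores and memory without changing num_workers at all.
    with Pool(processes=num_workers) as pool:
        totals = pool.map(process_partition, partitions)
    print("per-partition totals:", totals)
```

The design point to notice is that no partition shares state with any other, which is exactly why adding another worker (or another CPU) expands capacity without coordination overhead.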
During the stock market melt-down on Black Monday, October 19, 1987, when the largest one-day crash took place and the market lost more than 20% of its value, trades were flying thick and fast with volumes way outside anything previously forecast for the New York Stock Exchange. The Exchange was running on NonStop at the time, so NonStop engineers simply kept inserting more and more CPUs and the NonStop system just kept on running. Not a hitch as it scaled up to handle the huge volumes of the day. The Exchange only called a halt to trading when the continuous display replicating the ticker tape of old simply couldn't keep up with all the action. We often reference "elasticity of provisioning" when it comes to clouds, but in this case, the NonStop engineers inside the data center demonstrated such elasticity thirty years ago!
Almost unlimited scale-out capabilities? Anyone who has sat through a recent presentation by the OmniPayments team will have seen reference to an instance of OmniHub that spans 3,000 CPUs making up a single system and, to the best of my knowledge, no IBM mainframe of this size has ever been sold. For a brief period of time, however, IBM did talk about initiating thousands of Linux workloads on a single System z. I recall it was back when the z10 was first announced and the number of Linux workloads was 4,000+, but I am sure someone will give the exact number as it could have been much higher, like 40,000! The fact remains, however, that there is a difference between running real-world applications and providing simple demonstrations. As OmniPayments' CEO, Yash Kapadia, so often reminds me, should OmniHub ever be offline for any reason, the multimedia message services for one entire country would come to a grinding halt.
Stock exchanges and telco systems aside, the industry is about to head into essentially uncharted waters. The user experience (UX), or what is increasingly being called the customer experience (CX) and which, according to one source I checked, is the "product of an interaction between an organization and a customer over the duration of their relationship," is placing new demands on our systems. Call it the intersection of transaction processing with real-time analytics, and suddenly your requirements for both compute power and storage become significant unknowns. Just how do you scale, and is the cloud the only solution that comes to mind?
In his latest post to the Striim blog, The Rise of Real-Time Data: How Striim Helps You Prepare for Exponential Growth, Striim cofounder and CTO Steve Wilkes writes about these upcoming demands on our systems. "The predicted huge increase in data sources and data volumes will impact the way we need to think about data. The key takeaway is that, if we can't possibly store all the data being generated, 'the only logical conclusion is that it must be collected, processed and analyzed in-memory, in real-time, close to where the data is generated.'" For the current users of NonStop systems, there is no escaping the message here. To remain in business and be competitive, analytics will be playing an increasingly important role in how you support CX, so enterprises will need systems that can truly scale to meet these new demands for compute power and storage.
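As a back-of-the-envelope illustration of what "collected, processed and analyzed in-memory, in real-time" can mean in practice – my own sketch, not Striim's product or API, with the event fields and names assumed for the example – consider keeping only a rolling, windowed aggregate of a stream rather than landing every raw event in storage:

```python
# Minimal sketch of in-memory, real-time aggregation: retain rolling statistics
# per source instead of storing every raw event. Illustrative assumptions only.
from collections import defaultdict, deque
import random
import time

WINDOW_SECONDS = 5
windows = defaultdict(deque)  # source_id -> deque of (timestamp, value)

def ingest(source_id, value, now):
    """Add an event and evict anything older than the window; raw events are
    never written to disk, only the small in-memory window is retained."""
    window = windows[source_id]
    window.append((now, value))
    while window and window[0][0] < now - WINDOW_SECONDS:
        window.popleft()

def current_average(source_id):
    """Answer an analytics question from the in-memory window alone."""
    window = windows[source_id]
    return sum(v for _, v in window) / len(window) if window else 0.0

if __name__ == "__main__":
    for _ in range(1000):
        ingest("sensor-1", random.random() * 100, time.time())
    print("rolling average for sensor-1:", round(current_average("sensor-1"), 2))
```

The point of the sketch is simply that the memory footprint is bounded by the window, not by the total volume of data generated, which is what makes processing close to the source feasible.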
Scaling has also been on the mind of Paragon's CEO, Jim Perry. Testing may not always be a topic we associate with scale, but when it comes to our applications and their test automation, the additional work anticipated in support of CX will necessitate the development of even more applications, and those additional applications will need to be subject to automated testing. And much more, according to Perry: today it is all about continuous integration, continuous delivery and continuous testing so that your enterprise can deliver quality code faster. For Paragon, the challenge that comes with scaling has as much to do with people as it does with tools, as Paragon sees enterprises looking to scale the business in a manner that determines whether or not they stay in business.
Scalability has always been an important attribute of NonStop systems. From the earliest days of Tandem Computers it was all about availability, scalability and data integrity, and nothing really has changed. Perhaps today we fold data integrity into security, and that's OK, but when it comes to availability and scalability, the NonStop user is well served by the inherent architecture of NonStop, which allows almost unlimited growth. While it is true that no enterprise has come close to hitting the size limits of a NonStop system when clustered to the maximum, knowing that there are already users out there with 3,000 processors in their system has to be encouraging.
Scalability may indeed be the "special sauce" of NonStop today, even if it is overlooked at times, overshadowed as it is by the availability aspects of NonStop systems. However, every enterprise wants to grow its business in a manner that doesn't break the bank, and for NonStop users it has always been a simple process to add more CPUs – indeed, the architecture of NonStop assumes that the system has already been populated to the max with CPUs and simply checks to see whether they have arrived; cool! As for the scaling down of our home and office, well, we may be living smaller but we are still thinking big. And that is something that will not change any time soon!