Monday, January 23, 2017

Transformation – let’s look at how far NonStop has come!

A message about how necessity drives innovation isn’t anything new for the NonStop community and yet, it’s pleasing to hear all the same!

During a recent extended power outage I was left to figure out how best to entertain myself. Margo’s family commitments left me alone in the house and I was happily working away on storylines I had been considering for a while – some of which have now appeared in the latest issue of NonStop Insider as well as in posts made to LinkedIn – when unexpected darkness descended. When Margo and I built the house we had looked into integrating a sizeable UPS / Generator but nothing materialized. We do have a huge generator of course with our mobile company command center (our RV), but while we often recharge the batteries from the house, we hadn’t considered adding the necessary connections for the RV’s generator to power some of the house’s necessities.

It was a very cold day in Boulder, Colorado. Improvising out of necessity, I managed to create a little heat in the family room by igniting several of the cooktop range’s burners with matches. And then there were a number of candles I found that I quickly deployed. But as I settled into waiting for the power to be restored, there was more than ample time to hold a candle while mixing a martini and, as readers of this blog will recognize, the early evening martini in our household is not just a tradition but oftentimes a necessity following days when all that is on my computer is a blank screen.

It is the message of necessity that has been in my thoughts a lot of late. So many of our actions arise out of necessity, whether it’s a trip to the store for a forgotten item, mowing the lawn and cleaning the yard, or simply putting on an extra layer of clothing as we step outside our homes to brave the elements. And as I looked through the presentations I gave during the past twelve months, I was struck by how well the HPE NonStop team has progressed numerous initiatives that many view as necessary for NonStop to continue to have a presence in the data center. Mission Critical systems are an important part of the HPE product portfolio but have come under increased pressure to embrace a world in transition. Traditional IT is succumbing to cloud computing, both private and public, and to sustain a presence in this new world NonStop had to fundamentally change not just the hardware it depended upon but the complete, highly integrated software stack.

Attendees at the 2016 NonStop Boot Camp would have heard Randy Meyer, Vice President and General Manager, Mission Critical Systems at HPE, say of his group, “Our strategy is to build specialized solutions for target customer segments.” Today, the coverage of customer needs extends from support for mission critical applications to high performance computing to small business and branch operations. The Mission Critical Systems group needs to support extreme scale where customization is required, carrier grade computing where standardization is paramount, and hybrid, versatile computing, with security, for enterprises’ core applications.

NonStop systems have never been general purpose systems and so continued support for the very specialized roles NonStop plays is completely understandable. However, the target customer segments where the focus of NonStop has been directed want a lot more from NonStop as they embrace the key message coming from HPE – transformation to hybrid infrastructures. As Randy’s new boss heading the Data Center Infrastructure Group (DCIG), Alain Andreoli, was quoted as having said (and as published in promotional material for the recent HPE Discover event in London), “A Hybrid IT requires deep competence and fluency in all areas of the industry – servers, storage, networking, IoT, data center infrastructure, and everything else. So, DCIG represents a pooling of talent that’s as broad as it is specialized.” NonStop remains specialized, but this also means it needs to be competent in more than just being extremely highly available.

Whenever HPE creates a presentation on NonStop today, the slides highlight the key attributes of Continuous Availability, Massive Scalability and Business Continuity. These are further defined as automatic failover protection; an ability to scale to thousands of cores in a single system image that, in the real world, can support applications delivering 15,000+ tps; and then, when networked with other NonStop systems, an extension of traditional continuous availability across multiple sites via support for active / active, active / passive and now, “sizzling hot takeover!” While this is a continuation of the NonStop story, highly respected by those users who have come to appreciate what NonStop provides them, NonStop development is clearly fine-tuning a number of its products to better accommodate the transformation to the new, hybrid world of IT that users are demanding.

Perhaps no bigger change has taken place than what is occurring with NS SQL/MX. Take a look at any recent NonStop PowerPoint presentation and you will see multiple slides devoted to NS SQL/MX. Two items on the NS SQL/MX product roadmap stand out for anyone looking to NS SQL/MX to be a key part of the transformation to hybrid infrastructure: Database-as-a-Service (DBaaS), together with opening up access to NS SQL/MX for ANSI applications – in particular, those ANSI applications that today have been written to access databases such as Oracle.

No longer locking users into expensive proprietary solutions, the NonStop development group is opening the door to NS SQL/MX without massive amounts of application recoding. And when it comes to running NS SQL/MX from within a private cloud, this should benefit many more users who need not just access to SQL, but access to an SQL that is fault tolerant, supports a mixed workload without necessitating any downtime whatsoever (maintenance tasks included), and can be served up from within a cloud as a service. Was this done out of necessity? Was the goal of the NonStop development team to make sure any transformation to a hybrid infrastructure didn’t leave NS SQL/MX behind?
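To make the portability argument concrete, here is a minimal sketch of what “no massive recoding” looks like in practice: an application that restricts itself to ANSI SQL and standard parameter markers can be pointed at any compliant engine. The sketch below uses Python’s built-in sqlite3 purely as a stand-in engine for illustration – access to NS SQL/MX would typically go through its ODBC or JDBC drivers instead, and the table and column names here are my own invention.

```python
# Sketch: an application written against ANSI SQL only, so the same
# statements can target Oracle, NS SQL/MX, or any other compliant engine.
# sqlite3 serves purely as a stand-in database for illustration.
import sqlite3

ANSI_DDL = "CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance DECIMAL(12,2))"
ANSI_INSERT = "INSERT INTO accounts (id, balance) VALUES (?, ?)"
ANSI_QUERY = "SELECT id, balance FROM accounts WHERE balance > ? ORDER BY id"

def run_portable(conn):
    """Run the same ANSI statements against whatever engine 'conn' fronts."""
    cur = conn.cursor()
    cur.execute(ANSI_DDL)
    cur.executemany(ANSI_INSERT, [(1, 100.25), (2, 2500.10), (3, 75.50)])
    cur.execute(ANSI_QUERY, (99.99,))
    return cur.fetchall()

if __name__ == "__main__":
    rows = run_portable(sqlite3.connect(":memory:"))
    print(rows)  # [(1, 100.25), (2, 2500.1)]
```

The point is that nothing in the application above names a vendor: swap the connection object and the statements travel unchanged, which is exactly the kind of migration the roadmap items are meant to enable.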

Certainly these factors played a role, but there was one more driver motivating NonStop development – HPE’s own IT group had been looking for ways to save money by rationalizing the number of databases they had deployed (more than 25,000 databases across all of HPE), and NS SQL/MX became an option. However, their requirements for NS SQL/MX drove the need not only for ANSI compatibility but also for multi-tenancy. How important is multi-tenancy for HPE IT (support that will be reflected within NS SQL/MX for all NonStop customers)? It cuts down on hardware requirements even as it reduces license fees.

According to one source I referenced, TechTarget, “In cloud computing, the meaning of multi-tenancy architecture has broadened because of new service models that take advantage of virtualization and remote access. A software-as-a-service (SaaS) provider, for example, can run one instance of its application on one instance of a database and provide web access to multiple customers. In such a scenario, each tenant's data is isolated and remains invisible to other tenants.” When it comes to HPE IT, the practicality of NS SQL/MX multi-tenancy becomes obvious – with an implementation that makes NS SQL/MX available on a SaaS basis, where many HPE IT users will need “provisioning” to happen on demand, much of the complexity of doing so, in terms of operational intervention, is significantly reduced.
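As a rough illustration of the multi-tenancy pattern TechTarget describes – one database instance, many tenants, each tenant’s rows invisible to the others – here is a minimal sketch. Again sqlite3 stands in for the shared engine, and the tenant and table names are hypothetical; in NS SQL/MX the isolation would be enforced by the database and its DBaaS layer rather than by application code.

```python
# Sketch of multi-tenancy: one database instance serving several tenants,
# with every query scoped by a tenant key so each tenant's data stays
# isolated and invisible to the others.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (tenant TEXT NOT NULL, order_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [("acme", 1, 19.99), ("acme", 2, 5.00), ("globex", 1, 42.00)],
)

def tenant_view(conn, tenant):
    """All access goes through a tenant-scoped query; no tenant can see another's rows."""
    cur = conn.execute(
        "SELECT order_id, amount FROM orders WHERE tenant = ?", (tenant,)
    )
    return cur.fetchall()

print(tenant_view(conn, "acme"))    # [(1, 19.99), (2, 5.0)]
print(tenant_view(conn, "globex"))  # [(1, 42.0)]
```

One shared instance, many isolated views of the data – which is why multi-tenancy cuts hardware and license costs: the alternative is a separate database deployment per tenant.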

Where HPE IT is headed is beyond the traditional converged infrastructure, which for the most part addresses a convergence at the hardware level, and onto hyperconverged computing. This is being propelled by HPE IT standardizing on x86 servers to populate their data centers, and with commodity hardware in a hyperconverged environment the hardware takes a back seat to the software. And here’s the real story – the necessity, if you like, for HPE IT to consider NonStop – according to an explanation on the Hyperconverged.org web site, “The software layer is built with the understanding that hardware can — and ultimately will — fail. The software-based architecture is designed to anticipate and handle any hardware failure that takes place.”
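The quoted premise – software designed to anticipate and handle hardware failure – can be sketched in a few lines. This is not HPE’s implementation, just an illustration of the principle: a request is retried across replica nodes so that the loss of any one node is survivable, and only the loss of every node is fatal.

```python
# Sketch of the hyperconverged premise: the software layer assumes any
# node can fail and routes around it rather than treating failure as fatal.
class NodeDown(Exception):
    """Raised when a node's hardware has failed."""

def call_with_failover(nodes, request):
    """Try each replica in turn; a dead node is skipped, not fatal."""
    last_err = None
    for node in nodes:
        try:
            return node(request)
        except NodeDown as err:
            last_err = err  # anticipate the failure and move on
    raise RuntimeError("all replicas down") from last_err

def dead_node(_request):
    raise NodeDown("hardware failure")

def healthy_node(request):
    return f"handled:{request}"

print(call_with_failover([dead_node, healthy_node], "txn42"))  # handled:txn42
```

NonStop, of course, goes much further than this retry loop – process pairs, takeover and data integrity guarantees – but the underlying assumption is the same one the hyperconverged world has now adopted.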

It is this bidirectional, indeed almost symbiotic, necessity that has arisen between the NonStop development team and HPE IT that is propelling NonStop into a more mainstream role for those users looking to embrace not just hybrids but converged and hyperconverged architectures. Fault tolerance was always a given when NonStop ran on proprietary hardware, but now NonStop supports fault tolerance in the world of clouds, where the necessity to understand “that hardware can — and ultimately will — fail” is well recognized. That NonStop development has also invested in NS SQL/MX to make it more attractive as SaaS is highly commendable and worthy of all our attention. And when it comes to NonStop systems as they are being delivered today, that’s something to which competitors simply cannot hold a candle!

2 comments:

Keith Dick said...

A lot of words there, but I come away without much understanding.

The reservation I've always had about making NonStop utilize virtual machines of the VMware style is that an inherent property of such virtual machines is that the software running in the virtual machine has no idea of its physical environment, and, in fact, a given virtual machine can migrate from one box to another transparently to the software running in the VM.

In such an environment, how can the NonStop software running in a virtual machine ensure that two virtual NonStop CPUs are not running in the same physical box? Is the fault tolerance no longer to be the responsibility of the NonStop software, but to be transferred to whatever software is running and managing the virtual machines? If so, there seems not to be any reason for the NonStop system to continue to exist. Any software that you run in such virtual machines would be fault tolerant, whether it be Windows, Linux, SQL Server, or Oracle.

On the other hand, if the software running and managing the virtual machines is not responsible for the fault tolerance, how can the NonStop software running in the mushy virtual environment guarantee that the virtual CPUs are kept in distinct physical boxes so that a failure of one of the boxes does not bring down more than one of the NonStop CPUs? I have never seen that mentioned, even in passing.

This seems like such a fundamental question that it must have occurred to at least most people who think about the concept of mixing NonStop with virtual machines, but I have never seen anything that even acknowledges that the issue exists, let alone describes, even in general terms, how the issue is addressed.

Richard Buckle said...

They are good questions, Keith, and ones I too have asked - it would seem that NonStop development is working on some guidelines and a reference architecture to ensure the NonStop you configure is NonStop, and that for me suggests that not all that VMs offer / provide is going to be exploited ... I hope someone from HPE development is reading this. Would be good to know ...

Cheers,
Richard