Sunday, August 20, 2017

Celebrating a full ten years of posts …

Back on August 20, 2007, coming off my time as ITUG Chairman, I had no idea that I would still be writing posts for this blog. But here it is, ten years later, and the fundamental message of NonStop has changed very little …

I have to admit that this post follows quickly on the heels of another, published just a matter of days ago. But tradition remains an important factor; I have written a post to this NonStop community blog, Real Time View, every August 20 for ten full years. Well, almost every August 20; checking the archives, there were a few posts that missed the exact date. For me, however, precision is important. It's rather binary when you think about it: it either happened or it didn't. I was reminded of this during a conference call with a well-known industry analyst organization when the discussion turned to 24 x 7 and the way it was treated; all systems today run 24 x 7, don't they?

Right now it feels as though I have been working 24 x 7, as I have finally returned to my office after almost ten weeks on the road. After living and working from our company command center through thick and thin, as we experienced outages of unimaginable proportions, it is good to be sitting behind a desk in a room with a sense of permanency. This office isn't going anywhere and neither are my desk and my computer. To suggest that I have been working around the clock, 24 x 7, just to complete this post may hold some truth, given that this topic of 24 x 7 has been on my mind for a long time, but in reality there has been a lot of down time. The office may be a permanent feature of the home, but I will still come and go as business needs dictate, and that will rarely ever be 24 x 7.

The almost blasé dismissal of the value proposition of 24 x 7 was a wake-up moment for me. An epiphany of sorts, actually! Have we arrived at a point in time where business continuity isn't all that important anymore? Across the NonStop community, the value that comes with being able to run our mission critical applications continuously, 24 x 7, ensuring our customers have access to the applications they need any time they want, is sacrosanct. After all, with everything we know today about NonStop, would it still interest us if it didn't run 24 x 7? In my last post, New home, new office; time to take stock … I extolled the virtues of scalability, and as much as I like the value scalability brings to any discussion of a system's value proposition, it also aids availability for most stakeholders in the NonStop community: you do not need to stop your operation to add capacity!

In being blasé about 24 x 7 this week, the analyst with whom I was working did think about the value proposition 24 x 7 provides, but then added that, truthfully, it is not feted as strongly as it used to be; just as with real time, the actual time implications of 24 x 7 have become elastic. A few minutes here, perhaps an hour once in a while, surely doesn't matter in the grand scheme of things. But no down time means just that – no down time. Likewise, continuously available sounds a lot like permanent availability, and readers of this blog will have seen me make reference to permanent availability many times in the past. Marketing types don't like the term, as it leaves no wiggle room – and seriously, how can you be sure nothing will fail? However, that's the whole point: NonStop users keep purchasing NonStop systems because they provide permanency, whereby their clients will always be able to interact with them.
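To put numbers on that wiggle room, here is a minimal sketch – illustrative Python, nothing more – of just how much annual down time each of the popular "nines" of availability actually concedes:

```python
# Purely illustrative: the down time each availability level concedes
# over a year. Even "five nines" is not "no down time."
MINUTES_PER_YEAR = 365.25 * 24 * 60

for availability, label in [(0.99,    "two nines"),
                            (0.999,   "three nines"),
                            (0.9999,  "four nines"),
                            (0.99999, "five nines")]:
    downtime = MINUTES_PER_YEAR * (1 - availability)
    print(f"{label:>11}: {downtime:8.1f} minutes of down time per year")
```

Even at five nines that is more than five minutes a year – which is precisely the wiggle room that "no down time" refuses to grant.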

Like my office and the sense of permanency it affords me, NonStop systems can be relied upon to be there even if my own access to the applications is casual at best. What I am pushing back on these days are terms like "near real time" and "highly available is good enough!" The origins of real time were in missile detection and interception, and there could be nothing vague about that. If you didn't respond in time and neutralize the threat, you simply ran out of time. There is nothing real time about pressing the launch button too late to protect the intended target! Again, this is all well known to the NonStop community and yet, if industry analysts are becoming blasé about this key attribute of NonStop systems and the solutions designed to run on NonStop, then others will follow – and that is most definitely the beginning of a slippery slope, one leading to enterprises no longer associating any value whatsoever with continuous availability.

A couple of years back, IDC did a pretty good job of defining different availability levels, with availability level four (AL4) representing the very pinnacle of availability. The only vendors to have systems awarded AL4 were HPE with NonStop and IBM with special Parallel Sysplex configurations of the mainframe. The inclusion of IBM really was for appearances' sake – IDC just didn't want a level of availability that only one vendor had achieved. On the other hand, building a Parallel Sysplex configuration that supports AL4 for a reasonable cost has proved elusive for almost every mainframe installation. Adding more components – more moving parts, if you like – to an architecture that was never intended to support operations 24 x 7 is hazardous at best and more often proves foolish at worst. If AL4 could only be awarded to vendors whose systems provided the level of continuous availability IDC was describing out of the box, then NonStop would be the only participant.

There is a lot more to this story, mind you. Transformation to hybrid IT is all the rage right now, with HPE among the biggest supporters of hybrids – whether hybrids of servers and private clouds or even hybrids of clouds from different suppliers. However, the word hybrid is pretty scary when you get right down to it, and there is no doubt that vendors like HPE recognize the need to shine the spotlight on simplifying the transformation. Stated as straightforwardly as I can: hybrid IT is at best really hard to do and at worst a path to disaster. For most enterprises the expectation is that vendors will be able to take what they have today, combine it with products they would like to have in the future and, after a fashion, mold it all to where it appears as just a single system, albeit a cluster of sorts. Won't happen – if the Parallel Sysplex aspect of the mainframe has taught us anything at all, it is that adding availability to a system not inherently architected to run 24 x 7 is as much about smoke and mirrors as it is about fulfilling the wishes of individual system programmers who want an opportunity to give it a try.

Hybrid IT is going to amplify the importance of availability. What it will not do is substitute for 24 x 7 – that is, it will not give data center managers a way to opt out of providing services on a 24 x 7 basis. "We are working with new technology here, mind you, and it is a little flaky, so yes, expect outages," will be the likely response from a data center manager. "We have negotiated with our cloud vendor for backup with redundant compute and storage capacity, but we are still working out the bugs and vendors aren't willing to sign SLAs that demand zero downtime. Can't be done!" If you read blog posts, as I do, and work with other writers and analysts from across the industry, you will be familiar with how they almost always dumb down the issue of availability, and it really all comes back to this: no, they can't tell you how 24 x 7 – permanent availability, if you like – could be done. The upshot is that they are all discounting the value proposition of 24 x 7, informing all they come in contact with that, as with real time, there is considerable wiggle room, so don't be too hard on yourselves if you fail from time to time.

As I look back on ten years of blogging and on all that has been addressed in these posts, in a way it seems so strange that I have to write a post about availability and how no other vendor can provide 24 x 7 support for any of their systems. As surprising as this may seem, after these ten years we as a community really haven't done a very good job of extolling the virtues of this really important contribution to the value proposition of a solution. And yet, I am one of those who simply do not like to interact with an application – or, more likely these days, an app – and find that it is not available for some reason. Hopefully I will still be posting for another ten years, and I just have to ask: will I be writing yet another follow-on post to this one, and will we still find an industry as blasé about availability as some analysts are today?

Friday, August 18, 2017

New home, new office; time to take stock …


Revisiting the topic of scalability reminds us of just how flexible NonStop systems continue to be; with enterprises preparing for ever increasing data volumes, having an out-of-the-box system that readily scales will prove to be a big bonus ...


After two-plus months on the road, meeting with many members of the NonStop community as we drove almost 6,000 miles in the company command center, it's a good feeling to once again be able to say you have a permanent address. At least, as permanent as anything can be these days, given the nature of the business we are in – content generation can be done almost anywhere, but even so, having a residence that could at best be considered temporary certainly came with many distractions. On the other hand, we did stop in at N2TUG as well as 2017 HPE Discover, and we did get to meet with clients in southern California.

To directly answer questions I have received of late, our new residence is just a little north of the Niwot address where we used to reside. In fact it is just to the east of Ft Collins – home to a number of executives and managers of HPE – and East Harmony Rd is now just two exits further up the highway. In many ways we feel as though we are closer to Cheyenne, Wyoming, than we are to Denver, Colorado. Having scaled down considerably, the issue of scale has come up a lot of late, and for the NonStop community, scalability remains a huge positive when it comes to why applications are still running on NonStop. Perhaps not so much scaling down, as in our case, but much more likely scaling up, naturally!

You can play scales on the piano all day and you can scale new heights as a mountain climber, but when it comes to computing, dimensions are involved: you can scale up and you can scale out. The shared-nothing architecture of NonStop is perhaps the best example of a computer architecture that supports almost unlimited scale-out properties, and with as much focus as there is today on cloud computing, it is this scale-out attribute that generates much of the discussion.

Expressed as simply as I can: you want to embrace cloud computing to have no practical limits on the resources available to you – isn't that what NonStop has been providing for decades? Scale up and scale out are familiar terms for NonStop users, and they remain very important attributes of NonStop systems. "When you think of scalability as an architect, you must consider both vertical and horizontal scalability," NonStop consultant Dean Malone wrote in a recent email exchange. "Vertical scalability is the ability to add more and more processors and storage to a given server whereas horizontal scalability is the ability to add more servers to expand processing capacity."
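For readers who prefer to see the distinction spelled out, here is a minimal sketch – hypothetical Python, with made-up names and numbers – contrasting the two dimensions Dean describes:

```python
# Hypothetical sketch only: contrasting vertical and horizontal
# scalability. Class and field names are invented for illustration.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Server:
    cpus: int = 4                     # capacity of one box

@dataclass
class Cluster:
    servers: List[Server] = field(default_factory=lambda: [Server()])

    def scale_up(self, extra_cpus: int) -> None:
        """Vertical: add processors (and storage) to a given server."""
        self.servers[0].cpus += extra_cpus

    def scale_out(self, extra_servers: int) -> None:
        """Horizontal: add more servers to expand processing capacity."""
        self.servers.extend(Server() for _ in range(extra_servers))

    @property
    def total_cpus(self) -> int:
        return sum(s.cpus for s in self.servers)

cluster = Cluster()
cluster.scale_up(4)        # one bigger box: 8 CPUs
cluster.scale_out(3)       # three more boxes: 8 + 3 x 4 = 20 CPUs in total
print(cluster.total_cpus)  # -> 20
```

It is the second operation – growing by whole units rather than by one ever-bigger box – that the shared-nothing architecture of NonStop makes look routine.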

During the stock market melt-down on Black Monday, October 19, 1987 – the largest one-day crash ever, with the market losing more than 20% of its value in a single day – trades were flying thick and fast, with volumes way outside anything previously forecast for the New York Stock Exchange. The Exchange was running on NonStop at the time, so NonStop engineers simply kept inserting more and more CPUs and the NonStop system just kept on running. Not a hitch as it scaled up to handle the huge volumes of the day. The Exchange only called a halt to trading when the continuous display replicating the ticker tape of old simply couldn't keep up with all the action. We often reference "elasticity of provisioning" when it comes to clouds, but in this case the NonStop engineers inside the data center demonstrated such elasticity thirty years ago!

Almost unlimited scale-out capabilities? Anyone who has sat through a presentation by the OmniPayments team of late will have seen referenced an instance of OmniHub that spans 3,000 CPUs making up a single system – and to the best of my knowledge, no IBM mainframe of this size has ever been sold. For a brief period of time, however, IBM did talk about initiating thousands of Linux workloads on a single System z. I recall it was back when the z10 was first announced and the number of Linux workloads was 4,000+, but I am sure someone will give the exact number, as it could have been much higher – like 40,000! The fact remains, however, that there is a difference between running real-world applications and providing simple demonstrations. As OmniPayments' CEO, Yash Kapadia, so often reminds me, should OmniHub ever go offline for any reason, the multimedia message services for one entire country would come to a grinding halt.

Stock exchanges and telco systems aside, the industry is about to head into essentially uncharted waters – the user experience (UX), or what is increasingly being called the customer experience (CX) and which, according to one source I checked, is the "product of an interaction between an organization and a customer over the duration of their relationship," is placing new demands on our systems. Call it the intersection of transaction processing with real-time analytics, and suddenly your requirements for both compute power and storage become significant unknowns. Just how do you scale, and is the cloud the only solution that comes to mind?

In his latest post to the Striim blog, The Rise of Real-Time Data: How Striim Helps You Prepare for Exponential Growth, Striim cofounder and CTO Steve Wilkes writes about these upcoming demands on our systems. "The predicted huge increase in data sources and data volumes will impact the way we need to think about data. The key takeaway is that, if we can't possibly store all the data being generated, 'the only logical conclusion is that it must be collected, processed and analyzed in-memory, in real-time, close to where the data is generated.'" For the current users of NonStop systems there is no escaping the message here. To remain in business and be competitive, analytics will play an increasingly important role in how you support CX, so enterprises will need systems that can truly scale to meet these new demands for compute power and storage.
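As a concrete illustration of processing in-memory rather than storing first, consider this minimal sketch – plain Python, not Striim's API, with a made-up event feed – of a rolling analysis computed as events arrive:

```python
# Illustrative only: analyze events in memory as they arrive, inside a
# sliding window, without ever landing the raw stream on disk.
from collections import deque
from typing import Deque, Tuple

WINDOW_SECONDS = 60.0
window: Deque[Tuple[float, float]] = deque()   # (timestamp, value) pairs

def ingest(timestamp: float, value: float) -> float:
    """Add one event and return the rolling average over the window."""
    window.append((timestamp, value))
    while window and window[0][0] < timestamp - WINDOW_SECONDS:
        window.popleft()                       # expire old events
    return sum(v for _, v in window) / len(window)

# A hypothetical feed of (seconds, reading) events:
for t, v in [(0, 10.0), (20, 14.0), (45, 12.0), (70, 20.0)]:
    print(f"t={t:>3}s rolling average = {ingest(float(t), v):.2f}")
```

Nothing here is stored beyond the window itself – the analysis happens as the data flows past, which is the essence of the point Wilkes is making.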

Scaling has also been on the mind of Paragon's CEO, Jim Perry. Testing may not always be a topic we associate with scale, but the anticipated additional work required in support of CX will necessitate the development of even more applications, and these additional applications need to be subject to automated testing. And much more, according to Perry: today it is all about continuous integration, continuous delivery and continuous testing in order for your enterprise to deliver quality code faster. For Paragon, the challenge that comes with scaling has as much to do with the people as with the tools, as Paragon sees the enterprise looking to scale the business in a manner that determines whether or not it stays in business.

Scalability has always been an important attribute of NonStop systems. From the earliest days of Tandem Computers it was all about availability, scalability and data integrity, and nothing really has changed. Perhaps today we associate data integrity with security, and that's OK, but as for availability and scalability, the NonStop user is perhaps best served by the inherent architecture of NonStop that allows almost unlimited scalability. While it is true that no enterprise has come close to hitting the size limits of a NonStop system when clustered to the maximum, knowing that there are already users out there with 3,000 processors in their system has to be encouraging.

Scalability may indeed be the "special sauce" of NonStop today, even if it is overlooked at times, overshadowed by the availability aspects of NonStop systems. However, every enterprise wants to grow its business in a manner that simply doesn't break the bank, and for NonStop users it has always been a simple process to add more CPUs – indeed, the architecture of NonStop assumes that the system has been populated to the max with CPUs and keeps checking to see if they have arrived; cool! As for the scaling down of our home and office, well, we may be living smaller but we are still thinking big. And that is something that will not change any time soon!

Tuesday, August 8, 2017

Our colorful lives!

Being on the road as much as we have been of late has given us time to talk to members of the NonStop community, and it's been a very interesting, indeed colorful, ride so far …

There is nothing like seeing the first signs of home appearing on the horizon. With all that has transpired these past couple of months, crossing the state line and seeing the tourist-promotion billboard declaring entry into Colorful Colorado was welcome, indeed. I am going to leave it to postings on other sites to cover all that has transpired while on the road these past two months, but suffice to say that it was most interesting, with challenges coming at us every day. Sounds an awful lot like a typical day inside any enterprise data center: no matter the plans, there is always something unexpected taking place.

It has been on these travels that I have come to truly appreciate the need to have multiples of just about everything. Suddenly, primary and secondary are no longer abstract terms, as even the best-prepared plans go right out the window when unexpected situations develop. For me, the big item I have been covering is the real-world experience of those enterprises embracing hybrid IT; in particular, those enterprises that include NonStop in their plans.

So, let me cut to the chase – I have yet to come across any enterprise that has gone down this path, or to talk to data center managers tackling the finer details of having mixes of traditional and cloud environments support anything apart from pilots and prototypes. So, has something gone awfully wrong, and are the reports coming from industry analysts overstating reality?

To be fair, I wasn't expecting to come across a sizable number of NonStop users kicking off such projects, and yet I have become somewhat curious about what is happening out there in today's data centers where NonStop has an established presence. Members of the NonStop community that I have met through the years have proved to be colorful individuals, never slow to form opinions, and yet it would seem many of them are watching their peers within the community running applications on NonStop to see who blinks first. Or not at all!

The promise of NonStop participating in hybrid IT, on the other hand, has huge potential upside – not just for those enterprises already convinced of the merits of NonStop but for the industry as a whole. Who wouldn't want better availability, scalability and, yes, security? My point here is that all too often the messages we develop in support of a product or service have expiration dates, but when it comes to the key attributes of NonStop there are no expiration dates. Have you ever seen a use-by date on a NonStop system?

It isn't just the colorful characters we come across within the NonStop community that help set directions but the general mood of the community as a whole. It's as if collectively we have all been holding our breath, waiting for a breakout season for NonStop to happen soon. Don't get me wrong – you don't have to be colorful to make a contribution. What I am really trying to convey is that after all these years there are still many individuals who champion NonStop on every occasion.

For them there is absolutely no reason on earth why any mission critical application shouldn't turn to NonStop as the go-to, first-choice system for deployment. However, what is taking place is a leadership transformation across the vendor community, as many of the baby boomers begin to ease up and take a back seat to new leaders. Whether CEOs or Managing Directors, have you noticed just how many new folks have come to lead companies we know so well over the last year or so?

If industry and financial analysts can be accused of overstating reality, and the progress being made toward embracing hybrid IT by major enterprises is more snail-like than rabbit-like, should we start to think that perhaps we have, yet again, come up with a fantastic technology and a terrific answer for cost containment at a time when the problem isn't all that bad? Are we at a moment in time when it all sounds good but no, it's not for us just yet?

In other words, has the "Gucci smart brigade Data Center" not struck as loud a chord with data center managers as the "Bonds t-shirt Data Center"? (For non-Australians, you can substitute any local brand; of course, Fruit of the Loom easily applies!) Point is, does a glamour solution hold up when all we need is some pretty basic bread-and-butter solutions that work better? Wouldn't it be nice if we could easily connect our silo-ed applications in a way that gave us a hub-and-spoke model where the hub never failed?

Oftentimes I fret over just how caught up we can become in what's new rather than what works; with exceptions, rather than with what is universal. NonStop at its heart is fundamentally anything but exceptional when it comes to what NonStop runs on – it was always conceived as a solution to a problem of quality and the unexpected. Back in the 1970s no hardware vendor would consider talking about infallibility; rather, they talked up the RAS support they had, the wonderful service staff they had on hand and even the backup sites you could turn to if your data center went south!

And NonStop took all this into consideration and continued to work, 24 x 7, no matter what fell apart. In other words, it was designed to keep working when all else failed. And there is a reason why NonStop talked up take-over versus fail-over: NonStop wasn't going to wait for a disaster but rather stepped in to head off disaster once it detected normalcy was being compromised.
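For anyone new to that distinction, here is a minimal, purely illustrative sketch – hypothetical Python, in no way the actual NonStop process-pair implementation – of a backup that takes over on a missed heartbeat rather than waiting for a formal failure report:

```python
# Hypothetical sketch only -- not the real NonStop implementation.
# A backup process receives checkpoints from the primary and steps in
# the moment normalcy is compromised (a missed heartbeat), rather than
# waiting for an explicit failure event.
import time

class Primary:
    def __init__(self) -> None:
        self.state = {"last_txn": 0}

    def process(self, txn_id: int) -> dict:
        self.state["last_txn"] = txn_id
        return dict(self.state)            # checkpoint sent to the backup

class Backup:
    def __init__(self) -> None:
        self.checkpoint: dict = {}
        self.last_heartbeat = time.monotonic()

    def receive_checkpoint(self, state: dict) -> None:
        self.checkpoint = state
        self.last_heartbeat = time.monotonic()

    def should_take_over(self, timeout: float = 0.5) -> bool:
        # Take over on silence -- don't wait for a crash report.
        return time.monotonic() - self.last_heartbeat > timeout

primary, backup = Primary(), Backup()
for txn in range(1, 4):
    backup.receive_checkpoint(primary.process(txn))

time.sleep(0.6)                            # the primary goes quiet ...
if backup.should_take_over():
    print("Backup takes over from txn", backup.checkpoint["last_txn"])
```

The detail that matters is in should_take_over: the decision is driven by the absence of normalcy, not by the arrival of a failure notification.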

There has been a lot of news of late about the success, or otherwise, of cloud vendors. IBM is talking up its revenues from cloud computing offerings even as Microsoft has detailed how, with Azure, it is now making more money from clouds than Amazon – even as industry and financial analysts endorse Amazon as the leader. It's hard not to apply the label "colorful" to someone like Amazon's CEO, Jeff Bezos, who for a brief couple of hours just a few days ago had more money than Microsoft's Bill Gates, but for the enterprise, all the color drains from their visage once we get past thoughts of clouds for anything other than select IaaS opportunities.

Perhaps the biggest potential game changer is the availability of Azure for the enterprise to power its own on-premise cloud. This is where I see IBM having difficulty competing over the long haul, as enterprises equate cloud with x86 and there really isn't any enthusiasm in support of Power. Given this, and that Azure on-premise will support Hyper-V, Azure is looking a little more Gucci than it is Bonds.

According to what Microsoft is telling enterprises today, "The Hyper-V server role in Windows Server lets you create a virtualized server computing environment where you can create and manage virtual machines. You can run multiple operating systems on one physical computer and isolate the operating systems from each other. With this technology, you can improve the efficiency of your computing resources and free up your hardware resources." Key here, for the NonStop community, is running multiple operating systems on one physical computer and isolating (the impact, good or bad, of) the operating systems from each other. With this, the prospect of firing up a virtualized NonStop platform becomes rather obvious – and it can be given serious consideration whether it is Azure on-premise or Azure driving a cloud somewhere.

Here is where it is all headed: those colorful individuals championing NonStop within their enterprise are going to arrive at the data center one day to find their beloved hardware out on the loading dock. In its place, aisle after aisle of servers racked to the ceiling, with nothing left to do other than fire up virtualized NonStop as just another Azure-supported operating system. The difference being that, unlike its peers running alongside NonStop, it can keep running the mission critical applications the enterprise depends upon when they all falter for one reason or another – possibly just an automatic upgrade pushed to them by none other than Microsoft. Yes, it will be NonStop that makes Microsoft look good running some very important applications.

Calamities are always present and disaster is just a few keystrokes away. In our world of highly networked systems, it's good to know we will be able to turn to one operating system that knows when to take over and doesn't just wait for failure. It's very much unique to NonStop, and it's hard not to be impressed with this model, developed at a time when all hardware was flaky and unreliable. Kind of sounds very much like today, doesn't it? And here's the good news: virtualized NonStop on Azure and Hyper-V is on the roadmap and likely being tested in the labs of NonStop development even as I type these last few words. Colorful? Well, I can't think of anything more colorful than seeing a lot of red faces among enterprise executives when they see what today's NonStop can achieve supporting their business.

Friday, July 28, 2017

Moving forward - transformation and virtualization make testing of the business logic even more critical

When we think of virtualization and the coming of clouds, and as we consider all that may be involved in transforming to these hybrid combinations incorporating the traditional with the very new, how often does the testing of our applications come to mind?

There have been times these past few days when events have reminded me of practices and disciplines that dominated our discussions in former times. I had the misfortune of breaking things and working with insurance companies, and I was left without access to more modern methods of communication, to the point where I was asked if I could possibly find a fax machine so I could receive a fax.

It was in the early 1980s when the vendor who employed me back in Sydney, Australia, installed a fax machine in my office and I no longer had to take the long walk over to the telex machine, where I would then spend hours preparing a paper tape for transmission back to my head office in Richmond, Virginia. In many ways it was a sad occasion, as I had really mastered the telex machine, and yet it was progress, given how easy it became to transmit not only the written word but pictures, charts and graphs as well!

Fast forward to today and the power of the mobile phone is undeniable. We can communicate with anyone we want to, at any time, about anything at all. In a couple of recent conversations the talk has turned to whether the mobile phone is about to fade from the scene, to be replaced by even more spectacular technology, and whether or not we are entering, essentially, an era of magic. How else can you explain away the knowledge so many businesses have about everything we do? And yet, even with the most advanced forms of communication, there will still be a need for apps to support inquiries as well as the many different models used for purchases and other financial transactions.

Point is – we still write code, and as much as AI continues to advance, there remains a need for humans to stay very much involved in stringing together the logic that drives decisions for success. When we talk about clouds we talk about the elasticity of provisioning that addresses both the needs we have for data storage and for business logic. But here's the rub – we are working diligently to be able to store vast amounts of data even as we continue to write logic practically unchanged from how we did it in the past – albeit a lot more quickly, of course.

Let me take you back to an earlier time – decades ago. In fact, many decades ago, to when we first started coding the machines that marked the beginning of our adventure with computers. I was recruited by IBM on the campus of Sydney University at a time when I was becoming very bored with academic life. At the time I wasn't really aware of the implications of my decision to participate in a series of tests the University sponsored, but it was only a matter of months before I found myself on another campus; this time, it was the operations center for a steelworks in Wollongong, Australia.

The year I was recruited was the southern hemisphere's summer of 1969 and my first day on the job was in 1970, so effectively I have been looking at code for almost five decades. And the fundamentals haven't changed, just the timeframes. Ambitions? Well, my first job was to develop applications in support of a new steelworks that was being built, but along the way I was tinkering with the operating system as, for a period of time, the IBM mainframes the steelworks purchased didn't have enough main memory to run any IBM operating system, so we pretty much came up with our own – just a couple of lines of IBM 360 assembler code together with a bunch of macros.

Timeframes? Well, this is where the dramatic changes can be seen, perhaps more so than when it comes to chip power and Moore's Law. I was writing just one application a year – perhaps a little bit more. I grabbed a coding pad and wrote assembler instructions for the logic I was pulling together to solve a business problem. Pages and pages of assembler code were then submitted to the data entry folks, who oftentimes took a week or more before they returned to me the coding pages along with a box of punched cards. I kept running these decks through the assembler until I got a clean assembly, at which time I took the object deck and began to test.

As a matter of practice, we always left an addressable piece of storage (of about 100 to 250 bytes) that, if my logic went awry, I could branch to, throw in a couple of correcting statements, and return to the mainline code. Ouch – yes, almost every production application was supported by a series of supplementary corrective cards that steered the logic back to where it needed to be without having to reassemble the whole application or, worse, send the coding pages back to the data entry team.

Testing? For my applications, which supported what we called the "online application," I would often resort to booking solo time on the mainframe and dialing in "single cycle" so I could manually step through each instruction and watch the results via the console display lights that changed with the execution of each instruction. Productivity? Wow – I could debug my programs more quickly than others working with me who preferred to go home at the end of the day. The company had enough programmers to complete the implementation of the new application for the steelworks about to be commissioned, so it seemed reasonable to function this way. Looking back at what we did all those years ago, I am not surprised that applications often stopped but rather that any of them ran successfully at all!

Now let me fast forward to the practices of today – attempting to develop and test applications, and then ensure they are maintained, the same way as we did all those decades ago is not only impossible but runs contrary to the always-on, always-connected 24 x 7 world we live in, as we remain tethered to our mobile devices plugging away at the latest app. Languages and development frameworks have changed. We don't simply write code; we pull code from multiple sources and practically assemble a program that in turn is just a part of an application designed to address a specific business need.

Providing defect-free applications at a fair cost – particularly when these applications have to accommodate today's multi-vendor and hybrid environments even as they have to be aware of the many regulatory and compliance mandates for each industry – needs something a whole lot more sophisticated than simple access to a system that can be set to single cycle! I was reminded of this only a few days ago when I had a conversation with the folks at Paragon Application Systems. These are the folks who have developed the premier testing solution for the payments industry.

"It's all about continuous integration, continuous delivery and yes, continuous testing," I was told by Paragon CEO, Jim Perry. Integration, delivery and testing form a never-ending cycle, for the life of the program and application, performed in a seamless manner whereby the state of the program or application is always current and correct. "The growth of our global economy has created payment systems that have grown too intricate and change too quickly for any organization to risk deployments without frequent, comprehensive regression testing. No company can hire enough people to manually perform the testing necessary in the time available within a release cycle. Automation of the software build and delivery cycle, as well as test execution and verification, is required."

Manually perform testing? Grown too intricate? For the NonStop community there have always been concerns about the business logic bringing a NonStop system to a halt. And for good reason! Fault tolerant systems have been designed to keep processing even when facing single points of failure, but oftentimes poorly implemented and tested business logic can get in the way! Unfortunately, it's about to get a whole lot worse, as testing not only has to ensure the application is defect-free but also that the underlying platform, now being virtualized, is configured in a way that lets NonStop applications continue being NonStop.
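To make the idea of continuous, automated regression testing concrete, here is a minimal sketch – plain Python with a made-up authorize_payment function; this is not Paragon's product – of the kind of suite a build pipeline would run automatically on every commit:

```python
# Illustrative only: a tiny regression suite of the sort a CI pipeline
# runs on every build. authorize_payment is a hypothetical function
# standing in for real payments business logic.
import unittest

def authorize_payment(amount_cents: int, balance_cents: int) -> bool:
    """Hypothetical business logic under test."""
    return 0 < amount_cents <= balance_cents

class PaymentRegressionSuite(unittest.TestCase):
    def test_sufficient_balance_is_approved(self):
        self.assertTrue(authorize_payment(5_000, 10_000))

    def test_insufficient_balance_is_declined(self):
        self.assertFalse(authorize_payment(10_000, 5_000))

    def test_zero_amount_is_declined(self):
        self.assertFalse(authorize_payment(0, 10_000))

if __name__ == "__main__":
    # In a CI pipeline this runs unattended, gating every delivery.
    unittest.main()
```

Multiply that trivial suite by the thousands of cases a real payments switch demands, run on every change, and Perry's point about the impossibility of doing it manually makes itself.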

We have virtualized networks and we have virtualized end points, and this has helped considerably with automating our test processes, but now the platform itself is being virtualized and this is a whole new ball game for many enterprise IT shops. It makes the need to have something like Paragon on hand even more important – we have stopped manually checking anything these days, so we cannot start now. In the coming months, as we continue to look at the transformation to hybrid IT, to virtualization and to software-defined everything, I am planning on devoting more column inches to testing, as all too soon our inability to thoroughly test what we are turning on in production could bring many a data center crashing down.

If you haven't yet looked at Paragon, then you may want to visit the web site and download a couple of papers. I have to believe that for those of you in the NonStop community who are only vaguely familiar with how testing has changed, particularly when it comes to testing payments solutions, it may very well be an opportunity to rethink just how comfortable we are with the processes we have in place today. And to wonder, too, how anything worked at all back in the days when it was all performed manually!

Tuesday, July 18, 2017

When things go horribly wrong …

How a few cents' worth of wire lying unnoticed on the road can cripple a vehicle as large as an RV; we continue to value availability and it's time to double down on the benefits of NonStop!

The most essential attribute of NonStop today is its fault tolerance. Availability is as highly valued as it has always been and yet there are many parties advocating that it really isn't an issue any longer. Push apps and data into the cloud – public or private, it matters little at this point – and the infrastructure on offer from cloud providers ensures your apps, and indeed your data, are protected and available 24 x 7. But is this really the situation, and should CIOs contemplating a future for their IT centered on cloud computing be oblivious to the many ways apps and data can be taken offline?

Unintended consequences! We read a lot about such outcomes these days, and it is a further reflection of just how complex our interdependencies have become. Push a button over here and suddenly, way over there, something just stops working. They weren't even on the same network – or were they? Throw malware onto a Windows server looking after building infrastructure and suddenly the data on a mainframe is compromised – who knew that they shared a common LAN? Ouch – but it happened, as we all know oh so well.

For the past two months, Margo and I have been fulltime RVers. That is, we are without a permanent address and have been living out of our company command center. We have driven to numerous events, all of which have been covered in previous posts to this blog. Our travels have continued, and this past week we headed down to Southern California to meet with a client, a trip that took us through Las Vegas. In the heat of summer in the deserts of Nevada we hit temps exceeding 110F. Overnighting at our regular RV site, we found a collection of fluids pooling underneath the RV and sheer panic set in. After all, this is our home; what had happened?

It turned out that we had unknowingly run over wire mesh that was completely invisible to the naked eye. Those strands of very thin wire managed to wrap themselves around the drive shaft of the RV, where they became an efficient "weed whacker" – you know, those appliances we often see being used to trim hedges and lawn borders. In a matter of seconds our own drive shaft powered those thin wires into shredding multiple hydraulic lines and air hoses – who could have imagined such innocent strands of wire could be so disruptive, or that they could completely cripple a 15-plus-ton coach in a matter of seconds. Yes, unintended consequences are everywhere and, for the most part, lie outside any of our plans and procedures, where detection of the event comes too late.

It is exactly the same with all platforms and infrastructure, on-premise or in the cloud, or even hybrid combinations of both! If you don't design for failure – even the most far-fetched – then you are destined for failure. It is as simple as that. In my time at Tandem Computers we often referred to an incident that led to Tandem systems always being side-vented and never top-vented. The story goes that, at an early demo of a NonStop system, coffee was accidentally spilt on top of the machine, effectively stopping the NonStop. I am not sure of the authenticity of this event – I would welcome anyone's input as to the truth behind it – but it does illustrate the value of experience. Few designers would have anticipated coffee being spilt on a system the very day it was being demoed, but for Tandem engineers it led to changes that exist to this day.

Experience has led to more observations, which in turn have generated more actions, and this is all part of the heritage of NonStop – and, in many respects, part of the reason why there aren't any competitors to NonStop today. You simply cannot imagine all of the unintended consequences and then document them in their entirety within the space of a two-page business plan. But design for them you must, and as I look at how the platforms and infrastructure being hawked by vendors selling cloud computing today depend solely on the value proposition that comes with redundancy (which is all they ever point to), my head hits the table along with a not-too-subtle sigh of disbelief. Redundancy plays a part, of course, but only one part, in negating potential outages; availability needs so much more. And at what cost?

The whole argument for cloud computing today revolves around greatly reduced IT costs – there is an elasticity of provisioning unlike anything we have experienced before but, more importantly, given the virtualization happening behind the scenes, we can run many more clients on a cloud than was ever conceived as possible back when service bureaus and time-sharing options were being promoted to CIOs as the answer to keeping costs under control. With the greatly reduced costs came the equally important consideration of greatly reduced staff. And this is where the issue of unintended consequences really shows its face. Experience? Observations? Even plans and procedures? Who will take responsibility for ensuring the resultant implementations are fully prepared to accommodate elements that fail?

There is a very good reason why pilots run through checklists prior to takeoff, landings, changes of altitude, etc. Any time an action is to be taken, there are procedures that must be followed. When I turn on the ignition of the RV, a checklist appears on the digital display for the same reason pilots have checklists – too many bad things can happen if you miss something, and I have managed to inflict considerable damage on our RV through the years when I forgot to follow all the items on the checklist. And there are best practices in place today at every data center that have been developed over time, based yet again on experience – so as we talk about availability on the way to clouds, who is preparing the next generation of checklists?

It is pleasing to me to see the effort that OmniPayments is putting into providing cloud computing based on NonStop. For the moment it is solely providing payments solutions to select financial institutions, but even now the number of clients opting to run OmniPayments on a SaaS basis, rather than investing in platforms and infrastructure themselves, sends a very powerful message to the community. Don't discount the value of NonStop as it has been demonstrated through the ages – get to virtualized NonStop (vNS) as quickly as you can and go champion within your enterprise that yes, you now have the best possible solution, one that can survive even the strangest of unintended consequences. It's just what NonStop was designed to do and it keeps on doing it.

You run on NonStop X, so you will run on vNS. There is much that can go wrong with traditional physical systems, just as there is much that can go wrong with clouds. Simply going for more clouds and leaving it to redundant banks of servers isn't the safety net any enterprise should rely upon, so take it to the next level. Let all you know hear how NonStop is taking its most prized attribute, availability, high and wide into the clouds! After all, these clouds are every bit as vulnerable to failure as any primitive hardware built in the past, and NonStop knows failures when it encounters them – and just doesn't stop!

Sunday, July 9, 2017

Growth is not optional; it is a must!

NonStop keeps on going no matter what system failures may arise – but is this enough? What follows here is purely speculative on my part but is worth raising with the NonStop community. And yes, any and every comment is more than welcome …

Travelling around Colorado these past few weeks, it's clear just how much growth has occurred. Lakes and reservoirs are full to overflowing – more than one state park we have visited had pathways closed due to local flooding – grasslands are standing tall, and trees and bushes are a brilliant green everywhere you turn. Spring rains have continued into the summer, with afternoons subject to intense thunderstorms most days. In years past, storms would form at this time of year but the rain rarely reached the ground; this year there have been more late afternoon storms than I can remember.

Living in a motor coach makes us a little susceptible to inclement weather, but so far we haven't suffered anything more than a fright from an unexpected thunderclap. The rainfall that continues well into summer is something we are pleased to see, of course, and the growth these rains have helped produce has turned the Colorado front ranges greener than I have seen them in a very long time. It may all prove problematic later in summer if it dries out, as we have seen more than our fair share of wildfires by summer's end, but until then this extended period of growth does a lot of good for the state. Any reader who has seen the photos posted to my Facebook and Twitter accounts will already know what I am talking about but, just as a reminder, I have included one of the photos above.

For the past week I have been working with vendors on the next issue of NonStop Insider, which should appear later this week. What has really struck me is the number of references to growth. Where will it come from? Does the business value proposition of NonStop remain as strong as it once was, or will NonStop struggle to sustain double-digit growth year over year? The theme of this issue of NonStop Insider is transformation – you will see numerous references to transformation in the articles that were submitted – but does transformation lead to more sales? It's questions like these that have come up more than just a couple of times this week, and they made me rethink some of the answers I had previously provided to my clients when asked this question.

The business value proposition is as real today as it ever has been – it's all about availability, after all. Out of the box, supported by middleware and utilities that are all part of an integrated stack, from the metal to the user interface! From the perspective of any user developing an application, there is always concern about what will happen if something breaks, and knowing that your application will continue to function even as all around it may fail is not something that can be lightly discounted. It's a very valuable attribute with an almost "holy grail" quality about it – just talk to those now building their very first application and watch their reaction when you say you work with a platform that survives failure and just keeps on running. Like the famous Energizer Bunny!

However, for most of us, we have heard all this before. We know the value of NonStop, but it's seen as a strange development environment with legacy tools and some very strange ways of doing things – what's this about checkpointing? What's this about redundant storage? Isn't it all very expensive, and don't you have processors that simply don't do anything until they are needed? Recently, I have heard just about everything being addressed except the most important aspect of all – out of the box, it just works! No, you don't write NonStop programs; you simply let NonStop run the programs you write. You have a rich choice of languages and development environments – NonStop supports it all, but with the addition of fault tolerance. It not only just works, it keeps on working. The Energizer Bunny will eventually stop – its battery will run down. It may last a lot longer than other batteries but, as a power source, it will eventually fail. Not so with NonStop!

So, yes, we have the susceptibility to failure covered. But growth? To paraphrase the Apollo space mission, for NonStop growth is not optional. In some respects we have to be very thankful that HPE has given NonStop every chance to build a larger population of users. There has never been serious consideration of discontinuing the NonStop program, despite what rumors you may have heard – there are just too many blue-chip customers for HPE to turn them out onto the streets. As witnessed last year at HPE Discover, from the CEO on down there is a strong appreciation of the value proposition NonStop brings for even the most fastidious of users. However, today's HPE looks nothing like the company that existed just a few short years ago. Now HPE is looking to all of its products to start producing the type of growth any new company demands.

But here's the rub: there is opportunity for growth with NonStop, for sure, but not likely in its present form. Surprised? Well, you shouldn't be. It's been coming for a very long time – NonStop is going to wash over every product, and every HPE system will contain some elements of NonStop as HPE looks to differentiate itself based on availability. A stretch? Dreaming? Perhaps this is taking it a little too far – but then again, is it? Imagine for a moment that any distribution of software HPE builds has a little of NonStop flowing through it, and applications running on HPE as a result just keep on running; would that in itself be the source of future growth for NonStop?

Stepping back for a moment, you will find nothing of this in any NonStop roadmap presentation. For now, the NonStop development team has so much on its plate and, as fast as it is moving, there is still so much more to do. However, with the judicious placement of a couple of knowledgeable developers within other projects, this could all change in a heartbeat. Yes, NonStop still contains a certain amount of special sauce, but it is NonStop's special sauce and it is NonStop development that has the recipe. Let a couple of chefs loose in other kitchens and stand back – NonStop is no longer just a product but a philosophy, and that's not diluting the business value proposition; on the contrary, it certainly would create growth.

You just have to look at NonStop in an entirely different light. It's not best practices, although best practices have always been a factor in having NonStop applications be as available as they are. Furthermore, it's not rocket science, as much as there are those who think you need a team of specialists to keep NonStop running non-stop. This fear of a graying population of lab-coat-wearing engineers is just way overblown. Our graying population is retiring but, guess what, there is a developing talent pool of much younger folks whom I am not prepared to discount or suggest won't cut it!

Earlier I used the phrase "NonStop is going to wash over every product" and it wasn't by accident, as this phrase too came up in discussions this week. Think of the incoming tide pushing further up the beach and spilling onto rock formations until the tide eventually covers everything. This is exactly one vision I have of NonStop and, while I may be the only one predicting such a possibility, HPE has everything to gain in letting the NonStop tide roll in – indeed, let's go one big step further. Let's make NonStop open source! Let's integrate NonStop with OpenStack. Let's shake it all up – and let's just see who follows NonStop. I know that this is highly problematic as well, but why not?

Enterprises will still want a managed software distribution, as they continue to abhor the current model of fixes and patches arriving by the hour. Stability and predictability – a new release every summer is something they can handle, but not every hour. So, NonStop becomes a special distribution of OpenStack built to meet these requirements of enterprise IT execs. Think SUSE, Red Hat, even Debian – supported distributions are important and have found markets. Put this down as another potential benefit that NonStop brings to the party – availability, scalability and, yes, predictability!

In today's transforming world of IT there is no such thing as staying within the lines and keeping inside the box. It's cliché but it's also very true – to succeed, think differently. Much of what I have written above will probably not come to pass, as it's a stretch to ever think HPE would make NonStop open source, but in order to grow and become the best software platform on the planet, HPE has to think about doing the unexpected! The dramatic! And I think it can do just that, and it may be coming very soon. Move over, Energizer Bunny; not only will NonStop keep on going, it will do so long after your bunny's battery has died!

Friday, June 30, 2017

Shipping containers are being repurposed; could we see something similar happening with NonStop?


Being seated at a bar made up of repurposed shipping containers prompted me to think of NonStop. Just as shipping containers can be repurposed, with the arrival of Virtualized NonStop could we see NonStop being repurposed as well?


It was way, way back in 2009 when I wrote a post to this blog site about how HPE was packaging up shipping containers loaded with blades and peripherals and marketing them as a specialty POD – actually, a "Performance-Optimized Datacenter"; to read more about this, check out the post of July 2, 2009, Common standards, uncommon advantages! I have no real idea of how successful this marketing campaign proved to be; I only heard that the program was discontinued sometime last year.

And the reason? HPE was moving beyond blades; today it is all about hyperconverged infrastructure, hybrid IT and, yes, Synergy. Perhaps I was too quick to dismiss Synergy as just software, as the more I have dug into HPE Synergy, the more I have noticed that it includes a whole new approach to hardware packaging, to where blades have become legacy technology. Ouch!

I was reminded of the POD this week as I was writing an article to be published shortly on Banking Tech, a publication and web site I support these days. The article focuses on shipping containers and, in particular, the considerable repurposing of shipping containers that is going on right now. Here in Boulder, inside the local mall, there is a bar composed solely of two 40' shipping containers lying side by side, slightly offset, out of which a bar operates serving a variety of local craft beers. There's another one, too, set up a little earlier in Estes Park and, by all accounts, they are both proving to be very popular.

However, setting up a bar in a container is probably not the best example of repurposing shipping containers. Want to develop a mini hydro-power station in a box? Well, the city of Melbourne has done just that with a shipping container. Want to build a swimming pool that you can take with you when you move? That too is being done with shipping containers – the 20' and 40' sizes proving ideal for the purpose.

Need to erect a shopping mall quickly following a natural disaster? Well, check out what Christchurch did with shipping containers and the unique shopping experience created in the heart of the city following a massive earthquake. After five and a half years it may be winding down, but it certainly provided a unique shopping experience. Want low-cost housing in the east of London? Well, Container City I and II may do the trick. Shelter for our homeless veterans? Yes, it's being done already in Orange County, California – new housing made from shipping containers.

Point is, the standards that were embraced in the 1960s across the transportation industry not only proved to be disruptive but spawned a number of unrelated industries not even imagined at the time. Unfortunately, standards too are subject to becoming legacy, and of late it seems to be happening at an accelerated pace. The big deal here is that the demise of the IT expert or specialist is happening within enterprises across all industries, leaving decision making in the hands of new-age managers prone to jumping aboard the next technology wave, totally driven by what they just saw in an airline magazine.

There was a time, not too long ago, when we called this McDonalds Architecture. You know, the arches that symbolize McDonalds – take another look; did you know there is a bell curve immediately followed by another bell curve, symbolizing how quickly architects oftentimes jump from one product or technology curve to the next? And then, of course, the next one, ad nauseam!

With the amount of marketing weight HPE is throwing behind simplifying the transformation to hybrid IT, we are already beginning to see vendors looking at their options, with some of them viewing the process of transformation as an open invitation to move in different directions. Consider, for example, the homogeneity of the simplification efforts of HPE – for enterprises, the investment in Synergy isn't going to be minor. It's a big undertaking. However, how many enterprises out there will be all-HPE? So yes, NonStop, with its history of being a player among many within the data center, has thrived when there has been considerable heterogeneity present.

Could the roll-out of something like Synergy benefit from NonStop for simple connectivity to the rest of the enterprise’s IT infrastructure? Once you start thinking along these lines it’s hard to see how this could be simplified but surely, there are standards? Ethernet comes to mind and with Virtualized NonStop (VNS), there is RoCE – RDMA over Converged Ethernet. The Synergy solution does not support RoCE-enabled mezzanine cards today, but I can imagine Synergy has a robust roadmap that may include RoCE support in the future.
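For anyone curious whether a given Linux host even reports the RDMA-capable devices that VNS leans on, the rdma-core tooling offers a quick probe. The sketch below is purely my own illustration – it is not taken from any NonStop or Synergy documentation, and finding a device says nothing about whether RoCE is actually configured end to end:

    # Illustrative only: ask a Linux host which RDMA devices it reports,
    # using the standard rdma-core utility ibv_devices. Whether any device
    # listed is RoCE-enabled still depends on NIC and switch configuration.
    import shutil
    import subprocess

    def list_rdma_devices() -> str:
        """Return the output of ibv_devices, or a note if the tooling is absent."""
        if shutil.which("ibv_devices") is None:
            return "rdma-core utilities not installed; no RDMA stack to query"
        result = subprocess.run(["ibv_devices"], capture_output=True, text=True)
        return result.stdout or "no RDMA devices reported"

    if __name__ == "__main__":
        print(list_rdma_devices())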

And yet, Synergy with its frame that will fit into any regular, industry-standard 19” rack, “is designed to accept multiple generations of compute, storage, fabric and management modules. It is built on industry standards so it fits easily into existing and new data center environments while preserving the ability to leverage existing storage and connectivity resources.” There it is again, standards. But what standards? Well, it is not about blades anymore, that’s for sure, as today blades have become legacy and are a part of what we are now referring to as traditional computing.

“The direct-connect midplane delivers 16.128 Tbps of bandwidth and is future-proofed with a photonic-ready design that could exceed bandwidth requirements for the next decade.” OK, so photonic-ready is picking up on some of the deliverables from The Machine project, from what I could see on the exhibition floor at HPE Discover. But midplanes are scary things – they rarely involve standards, as each vendor tailors them to its own needs – will a Dell or Lenovo 2- or 4-socket “board” plug into the Synergy midplane and work? I haven’t seen anything to suggest that likelihood.

So it’s not blades per se but rather modules, or resources – the items you need today to better support “a broad range of operational models such as virtualization, hybrid cloud and DevOps.” If you do want to know more about HPE Synergy then check out the easy-to-follow write-up that describes Five Steps to building a composable infrastructure with HPE Synergy.

As I look at the containers being repurposed, the various transformations that result are kind of mind-boggling. Who knew? However, there is genuine excitement surrounding the repurposing of NonStop – taking it to the virtual world. Again, who knew? I am often asked about the marketing effort HPE is putting behind NonStop, particularly now that the new NonStop X systems are shipping. Well, no surprises here – there will be very little marketing effort. But rather than trying to siphon off marketing dollars for NonStop X, the NonStop team is moving NonStop closer to where the marketing dollars lie.

That’s right, VNS isn’t just a cool piece of technology but rather a really big jump out of traditional systems to where it can finally gain some of the bright spotlight being directed at all things related to simplifying hybrid IT. VNS is aimed at today’s NonStop users as well as at building a new NonStop user base. And yes, looking at it as I tend to do of late, it represents a very effective (and quite legitimate) repurposing of NonStop!


There is nothing on the VNS roadmap suggesting anything is under way to bring VNS to Synergy. For the moment, the priorities of NonStop development lie elsewhere and to some extent I covered that in my previous post, Right time, right place – NonStop lights up the horizon! However, as noted earlier, there are always unintended consequences from actions taken by a major vendor such as HPE, even as there are unrelated “industries” likely to appear. For the most part, I am expecting them to surface among the service providers looking to better integrate Synergy with whatever else may be in the data center – and where VNS may be of assistance.

Another source may indeed be solutions vendors well versed in NonStop who see opportunities to better differentiate their solutions by capitalizing on both Synergy and VNS. But even with Synergy not making an appearance on any NonStop roadmaps, VNS or otherwise, I cannot imagine that their paths won’t cross at some point. Perhaps it will come down to whatever happens within HPE’s own data center, where NonStop has a presence already.

In many respects the age-old truism – the good thing about standards is that there will always be plenty of them – quickly comes to mind. Shipping containers started out as 20’ containers, only to have 40’ containers added to the mix. Now they come in almost any size, with US containers following a different standard to Asia-Pacific and Europe. We now see containers that are 8’ 6” high even as we see others that are 53’ long.


Synergy may be unique to HPE even as NonStop provides unique capabilities but ultimately, for it all to work in HPE’s favor, Synergy may need VNS as much as VNS might benefit from Synergy. And not solely for the marketing dollars (although they will help)! Redundancy is highlighted in the Synergy documentation but, as we all know, today’s enterprises need more than just redundant elements. In wrapping up this post, it is worth noting (as was covered in that previous post to this blog) that anything that is manufactured will break and it is NonStop’s ability to recover from breakages that will ensure its ongoing viability within the data center and yes, for the next decade, too!

Tuesday, June 20, 2017

Right time, right place – NonStop lights up the horizon!

Time to reflect on NonStop, particularly as we hear more about where it is heading, should be encouraging for all in the NonStop community – vendors included!

It’s been a while since I caught something at the right time and in this case, the right time just happened as I picked up my iPhone and snapped a picture of a sunset. Margo and I have now spent well over three weeks in our company command center – our temporary office and home – and we have been on the move for a lot of that time. From that initial drive down to Dallas for N2TUG and then across to Las Vegas for HPE Discover, it’s all been covered in recent posts to various blogs. But as the picture above depicts so well, sometimes you just manage to capture something at the right time from a place that proves to be just right!

There is an upside to long drives across America’s interstates. You get plenty of time to just sit back and think! To reflect! And the many customer events of late have been all about HPE with, for the most part, a lot of NonStop thrown in for good measure. For the NonStop community, excitement levels are once again on the uptick. As is simple curiosity! By this I mean, the sheer volume of new products and features coming from HPE NonStop development is almost unprecedented. To some, the changes are coming more quickly than they can comprehend, while for others it is as if there is a whole new OS and integrated stack appearing on the horizon. For them, there is some puzzlement too as to whether it’s NonStop or something else entirely.

For the record, I believe that it is indeed NonStop and that it represents a major shift of focus. As IT looks to simplify all that is part of IT, at a time when major vendors are looking to do more of the heavy-lifting when putting functioning systems together (think converged and hyperconverged systems), NonStop already has passed this particular test and has done so for decades. Much to the surprise of my blogging friends at HPE Discover, when NonStop is delivered to a customer site, it’s ready to go. No assembly required! The emphasis that the NonStop team has placed on making sure the pieces all work together and that when it comes to software, there’s an integrated stack from the metal to the application, remains a novelty for these bloggers. And yet, that has become the goal for just about everyone else supplying systems and platforms to customers these days.

As the latest issue of NonStop Insider went to press, so to speak, I was struck by how many of the articles looked at what was said at these recent customer events. The opinions expressed in these articles varied as widely as the locations, of course. Not everyone who submitted articles made it to every event but even as the opinions varied, the sentiment was oftentimes similar. In going from NonStop X to Virtualized NonStop, HPE is making a big bet that it will gain new opportunities even as it retains its core constituency. But here’s the big question being asked by everyone I approached – will the work being done to keep NonStop atop the technology wave take it away from its core fault tolerance capabilities? In other words, if NonStop strays too far from the metal and into the world of software-only, will this lessen the level of availability we are so used to from NonStop?

Reflecting on sunsets is easy to do. Depending on where you are at the time, sunsets can be majestic in the way light reflects off the environment, be it mountains or seas. I have often witnessed such powerful displays of colors that are almost magical in the way they appear and then, almost immediately, are gone. Majestic! Magical! However you treat them, they give you pause to consider matters that otherwise escape us in the hustle and bustle of life, and for anyone with an interest in IT, perhaps we should spend more time reflecting. Few architectures have sustained markets for as many decades as NonStop and yet, perhaps the sun is truly setting on the traditional way in which we view NonStop. No longer just a system made up of chassis, boards and peripherals but rather, an approach to solving business problems in a way that assures its users of no downtime.

When you do reflect on how we got here, the solutions available for NonStop systems (all right, Tandem Computers, if you are going back to the beginning) played an important role. After all, the success of NonStop is intimately tied to the applications that became available on NonStop. It may not be unreasonable, therefore, to think that wherever HPE takes NonStop and whatever its capabilities turn out to be when it comes to availability and fault tolerance, if solutions vendors believe that NonStop gives them an edge in this respect – their solutions are more available than alternate offerings – then this is really all that matters. We can argue over how these solutions vendors came to the conclusion that they did, but if their solutions survive failures with no down time, isn’t it OK to call the underlying NonStop just as fault tolerant as the original?

With this in mind I turned to my client, OmniPayments, LLC, whose solution continues to focus on NonStop. Even as modern configurations include Linux and Atalla processors right alongside NonStop, OmniPayments markets its solutions as being fault tolerant, running 24 x 7, forever. It’s hard to ignore OmniPayments’ CEO, Yash Kapadia, when he points to lines in his PowerPoint presentation that state, “A single system supports 10,000 transactions per second (TPS); 99.99999% reliable, immense linear scalability, bulletproof data integrity!” Yes, seven 9s – and Yash is adamant that this is being achieved today by his customers. So yes, reflecting on what a solutions vendor provides on NonStop today raises the question – will such solutions be making the same claims in the future as they accommodate the new NonStop coming to us from HPE NonStop development?
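To put seven 9s in perspective – and this little calculation is mine, not anything from Yash’s deck – here is a minimal sketch converting an availability percentage into a yearly downtime budget:

    # My own back-of-the-envelope arithmetic: what an availability
    # percentage allows in downtime over one year.
    SECONDS_PER_YEAR = 365.25 * 24 * 3600  # about 31.56 million seconds

    def downtime_per_year(availability_pct: float) -> float:
        """Seconds of downtime per year permitted at a given availability %."""
        return SECONDS_PER_YEAR * (1 - availability_pct / 100)

    for label, pct in [("three 9s", 99.9), ("five 9s", 99.999), ("seven 9s", 99.99999)]:
        print(f"{label} ({pct}%): {downtime_per_year(pct):,.2f} seconds/year")

Seven 9s works out to roughly 3.2 seconds of downtime a year – a far cry from the “a few minutes here, perhaps an hour once in a while” that the analyst was so blasé about.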



"OmniPayments has been out front testing with the latest offerings coming from HPE NonStop development. There have been many and it is clear to OmniPayments that the NonStop team is in transition as it throws more support behind virtualization and clouds. With OmniPayments keeping costs low, these efforts are indeed welcomed as it helps maintain a very competitive posture for OmniPayments," came the response from OmniPayments’ VP Business Development, Craig Lawrance. Like many vendors I talked to, OmniPayments clearly understands that there is a new mantra echoing across NonStop development, “yes, we can!” And it’s evident that where NonStop development is taking NonStop is into unchartered waters – a software only solution that yes, is the best software platform on the planet. It’s all about running mission critical workloads and according to NonStop development Director, Andy Bergholz, "Our goal is to make HPE Virtualized NonStop hardware agnostic, with the exception of utilizing Intel x86 microprocessors and RoCE enabled NICs.”

I caught up with Andy at HPE Discover and while the extent of our conversation was just the exchange of a few ideas, I have been reflecting on his assertion ever since. Valuable insight coming out of a customer event where it proved relatively easy to catch up with HPE senior management; HPE Discover has always produced surprises! Returning to my exchange with Craig Lawrance following HPE Discover, I asked him how all the changes coming out of NonStop development were impacting OmniPayments. Were there any reservations about pushing ahead with NonStop X, and then with Virtualized NonStop?

"For anyone in the NonStop community that has attended recent customer events it would be hard to miss the increased presence of OmniPayments at such outings. Given an opportunity to talk about our move to NonStop X, our support of OmniPayments as SaaS in public and private cloud configurations based on NonStop X, the demonstrated performance improvements audiences world-wide have witnessed, and being among the first to bring real-world testing to Virtualized NonStop, all this collectively puts OmniPayments well ahead of the curve when it comes to the payments marketplace."


Most important of all, I heard no comments whatsoever from OmniPayments management that NonStop development was in any way diluting the message of NonStop! In other words, future OmniPayments products were going to be every bit as available on new systems and platforms, no matter what form they take, as they believed HPE is fully committed to ensuring NonStop development provides a platform capable of maintaining the levels of availability OmniPayments demands, whether on physical systems or virtual and whether on-premise or in the cloud. We may continue to debate the merits of NonStop and the path it is taking but if those making a living from selling solutions differentiated by their implementation on NonStop continue to see NonStop fulfilling their mission, then I think this debate will quieten down rather quickly.

OmniPayments is not alone in this regard – many other vendors I talked to remain committed to NonStop, no matter what shape it happens to take in the future. If all that NonStop becomes, in a hybrid cloud environment, is hidden behind a check-box item asking whether the solution needs to run 24 x 7, then I’m quite OK with that.
In my previous post to this blog I noted that whatever runs on NonStop is just another workload and with products like SimpliVity and Synergy – not forgetting HPE’s latest move, Project NewStack – could it mean that NonStop can participate without any further work required of NonStop?!

Imagine that – NonStop already has the jump start on practically every other traditional enterprise workload – mission critical requirements can be easily and readily addressed right out of the gate. This is bound to light up the sky for many who really need such capabilities. And somewhere in an office in Palo Alto I sense just the tiniest of sparkles in the corner of Andy Bergholz’s eye – it isn’t a significant concern for NonStop development, it just works!

Tuesday, June 13, 2017

I feel it coming in the air tonight …

Time spent at user events these past two weeks – small and large – has made me more appreciative of just how well NonStop embraces HPE’s vision for IT

Living life out of an RV, on a somewhat permanent basis, where there’s nothing to fall back on other than a hotel room, is something each and every one of us should experience at least once in a lifetime. It’s been a couple of weeks since we turned our backs on the only home Margo and I have ever bought together, and resorting to a gypsy lifestyle wasn’t something we ever thought was in the plans. And yet, even as it hasn’t gone entirely to plan, it’s given us a whole lot of time to discuss recent events.

By recent events I mean the time we have spent with the NonStop community. In just two weeks we have covered 3,000-plus miles and visited with NonStop folks in Dallas for N2TUG as well as the much bigger event staged by HPE, the annual Americas instance of 2017 HPE Discover. But what have we learnt and, just as importantly, what have we observed? I title this blog post with a line from that very famous song (of the same name) sung by Phil Collins – and anyone who can remember either the Budweiser commercials introducing Monday Night Football or scenes from Miami Vice will appreciate the meaning. The lines from this song were almost always associated with the unexpected. An upset (in a game), a showdown (in a movie) but always, without putting too fine a point on it, change!

It’s hard for me to say it any other way; change is coming and yes, I feel it in the air tonight! For me, a premonition about what is yet to pass? Too dramatic, perhaps! Even an overstatement? And yet, has anyone in the NonStop community seen NonStop moving and evolving as rapidly as it has these past three years? Some elements in the community would dearly like to see NonStop development slow down, giving NonStop users time to take a good long look at where the NonStop product roadmaps suggest it is headed, but for me it’s more a case of “bring it on!” It’s long overdue for NonStop to once again prove itself to be a disruptive technology.

The two weeks I have spent on the road behind the wheel of a rather large RV, towing a trailer with a car on top, have taught me a lot about thinking ahead. Rolling down America’s highways with 60 feet of motorized rig doesn’t give you as many options as, say, a sports car of less than 20 feet. And no, it neither brakes nor turns like the sports car, and yet the RV has height. And with height comes visibility as to what is coming – and with visibility comes opportunity to take the initiative. So it is with NonStop and recent user events: we do have visibility into what is coming and with visibility we have not only opportunity but time.

So what did I take with me following these recent events? Looking at what transpired at N2TUG, the image I took with me was that when you promote a good agenda and pick a good venue, you can certainly pull a crowd. N2TUG demonstrated the upside of doing both. Furthermore, it attracted the attention of HPE NonStop head of product management, Karen Copeland, who made the trip to Dallas to spend time with NonStop customers and to provide an updated presentation on the product roadmap for NonStop. It was really good to see Karen taking the time to do this as there is no substitute for face time with the NonStop community.

Then again, HPE Discover was something entirely different and perhaps, for the NonStop community, not quite as interesting. However, for me it proved to be even more important than local customer events, for a couple of reasons. And I had the good fortune to go behind the curtains for a photo op with Exec VP Antonio Neri. Yes, I felt it in the air that night!

HPE’s stated strategy is to simplify hybrid IT, power the intelligent edge and provide the services to make it all happen. But for the NonStop community there is a lot more going on here than is represented by the messages that accompany this strategy. For starters, NonStop has always been a hybrid system, and converged – even hyperconverged – infrastructure is nothing new to NonStop.

Can any NonStop user ever recall a NonStop solution arriving at a loading dock in pieces, leaving it up to the enterprise to figure out how to put all the pieces together, and then firing up the operating system and associated middleware only to find there was no integration across any of the resulting stack …

I have always been impressed by how much upfront work is undertaken by HPE and the NonStop team to ensure what arrives at our loading dock can be quickly transported into the data center and made operational immediately. A crucial part of the NonStop and fault tolerance message should always be how simple it is to deploy, and that aspect of NonStop is so often overlooked. Is it important?

Well, talk to the teams building other products in HPE’s product portfolio and they respond with hybrid IT, converged and hyperconverged infrastructure as if these were brand new insights into how systems should be delivered.

It is good to have goals that include words like simplify and yes, software-defined (everything), even as it is good to develop a strategy around openness, commoditization and standards. The only issue I have is that, when pursuing such goals, you need to take a look at yourself: you may already have achieved all you are setting out to do without being aware of your previous accomplishments.

There is so much that NonStop can teach the rest of HPE but having written this, I am very much aware that among the ranks of the senior executives there is more knowledge about NonStop than many may assume – recent personnel promotions and shifts have seen former leaders of both the business and the products land in the groups putting together the pieces in support of these goals.

Leaving this year’s HPE Discover event, here is my epiphany. Having accomplished all that it has to date in terms of simplification and integration, NonStop is now that much better placed to contribute to the transformation of IT that HPE so aggressively pursues (and promotes). In other words, all the rhetoric suggesting NonStop is the best software platform on the planet better positions NonStop to be independent not only of the hardware but of the platform. What runs on NonStop is just another workload, and everything presented in support of products like SimpliVity and Synergy – not forgetting HPE’s latest move, Project NewStack – means that NonStop can participate without any further work required of NonStop!

Imagine that – NonStop already has the jump on practically every other workload – mission critical requirements can be easily and readily addressed right out of the gate. You want simple? Well, there you have it. Of course, it will not be that easy when it comes to educating HPE marketing and slipping this little gem into the team meetings of the strategy group, but here’s hoping simple blog posts like this one can turn the conversation around for more favorable consideration of NonStop. I see that it has already begun as at HPE Discover, listening to other independent bloggers, there wasn’t quite the distance between us that there was only a few years ago.

It all does come back to building a compelling message for our business leaders. There is no question that for NonStop to lift its game in public, HPE needs to be “articulating a compelling value proposition and strategic vision to improve outcomes to business executives,” as one NonStop vendor CEO wrote in an email to me. What this highlights is that HPE just has to do a better job of telling the story of NonStop to these business executives and it shouldn’t be that hard – yes, NonStop is already ahead of the game when it comes to the goals and strategies HPE has set for itself.

This shouldn’t be kept a secret and, most of all, those business executives deploying NonStop today should have the bright spotlights turned on them and their accomplishments brought to the attention of the IT industry at large. This is something that does need to change at HPE! After all, these NonStop users are the ones turning simple rhetoric into powerful and compelling examples of the value proposition that comes with deploying NonStop. Today! Can’t you just feel it coming in the air tonight?


Saturday, June 3, 2017

It’s time for Vegas!

A post from the road – the road, that is, to Las Vegas! HPE Discover 2017 has rolled around once again and for me it’s a mixed bag of anticipation together with whatever might happen that is unexpected …

The week began with us pulling out of the street where we had lived for nearly two decades to pursue the life of technology gypsies! Homeless; no fixed abode, with just a PO Box to keep us connected with the rest of the world. And no longer in possession of keys to the front door of the house we had made our home and where over the years we had entertained many of you.

Of course this isn’t a permanent change for Margo and me, but life on the road has always been something we have enjoyed. The timing, of course, couldn’t have been any better as we had committed to attending the HPE NonStop customer event in Dallas – N2TUG. Followed, just a week later, by arguably the more important HPE big tent marketing event, HPE Discover 2017!

If you look under Labels, to the right of these posts, you will see that I have been posting on HPE Discover since 2011 and, before that, HPTF since 2008 (even as Margo and I attended the last ITUG Summit hosted by the then HP Technology Forum in 2007). Before that? More than a decade spent attending the ITUG Summits held prior to HPE becoming involved. While the focus on NonStop seems to have lessened with the passage of time, it’s still important to hear the message of Mission Critical Systems from HPE executives in an unfiltered manner.

I am not expecting a big turnout from the NonStop community. With budgets always tight I anticipate most NonStop customers will be saving their pennies to ensure that they can make it to the NonStop Technical Boot Camp, which in many ways has been capturing the spirit of ITUG Summits past. On the other hand, I will not be surprised when I run into a familiar face in Las Vegas and I am expecting that there will be more than just a few NonStop customers making “the trip to the strip!”

Perhaps more surprising for Margo and me is that it’s June, we are heading to Las Vegas and it is the first time we are doing so in 2017. In the past, by the time HPE Discover came around, we would have already been to Las Vegas on three or four occasions. But not this time – put it down to the building of our new home. Readers will know that this year we have stayed very close to Boulder and not made it to the NonStop Partner Symposium in Palo Alto or to the pan-European eBITUG event in London, even though we had both firmly inked in on our calendars. When it comes to a project like building a new home there is always a healthy mix of anticipation along with responding to the unexpected.

The unexpected does happen and plans do change, and it has left me wondering what unexpected developments may overtake HPE and perhaps even NonStop. Are we sure we truly understand all that is happening in the world of NonStop? While many of us were aware of the deep port of NonStop to the Intel x86 architecture, fewer of us were aware of the project to support virtual machines. And what of NonStop running on virtual machines atop commercial, off-the-shelf (COTS) hardware? There has been much these past couple of years that can only be described as unexpected.

When it comes to experiencing the unexpected at big tent events, like HPE Discover, it always starts at the top. The tone will be set from the moment HPE CEO, Meg Whitman, steps out from behind the curtain for her opening remarks at the first general session. In the past, these have drawn large crowds with many standing outside the auditorium watching Whitman on big screens liberally scattered around the exhibition hall. This year, it will be interesting to observe just how big a crowd is drawn to the occasion as the show will be all about the products, together with some services, that make up the product portfolio of just the HPE Enterprise Group (EG).

Long gone are the folks interested in PCs and printers. Those interested in software and services will likely be absent from the show as well. No, this is like ITUG projected onto a much larger stage. There will be demonstrations of working elements of what was unveiled a couple of years back – The Machine. But much of the drama surrounding this enormous undertaking by HPE has lessened as plans now call for elements of The Machine to be released gradually as part of traditional computing offerings. As for what will take center stage this year, I expect whatever is presented by Whitman will continue to center on openness and standards, on projects like OpenStack, and on the tools and services in demand by those moving to hybrid IT.

Cloud computing will be the subject of much of what HPE presents, I expect. As will converged and hyperconverged infrastructure. Synergy, and the efforts to move to software-defined everything, will also be highlighted but for the NonStop community, when it comes down to what business problems are being addressed, even with the attention these high profile themes will be given, it is still a bit of a yawn. According to presentations given by the NonStop team of late, including at this week’s N2TUG customer event, isn’t today’s NonStop system already a good example of hyperconverged infrastructure? Aren’t the latest NonStop X systems already being used by one solutions vendor as the technology powering their cloud offering?

Since the very first NonStop rolled off the production line and shipped directly to a customer, the hardware and all the required software have been not only packaged together but installed and tested. NonStop represents the industry’s best example of an integrated stack, from the metal to the user APIs. Talk about converged – it has already been done for NonStop customers, and has been for decades. Hyperconverged? Well of course, with the arrival of CLIMs running on Linux boxes interconnected via InfiniBand fabrics – there is a lot to be said for NonStop being the only example shipping today that supports true hyperconverged infrastructure.

What am I anticipating hearing and seeing more of, and what unexpected developments could prove exciting? For some time now there have been slides in the NonStop product roadmap presentations that address topics such as IoT and Blockchain. I will be anticipating hearing more about NonStop and IoT as I have come to appreciate that IoT will gradually evolve into a tiered implementation, where important events will absolutely have to make it into the data center (whatever that ends up looking like). But this really is more of the same – placing a NonStop system in the transaction path to ensure mission critical data makes it to where it’s needed; something like the sketch below. While I am not expecting to see NonStop present in any IoT demo situation, I sure would like to be surprised to see NonStop doing exactly this at some point.
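To make the tiered idea concrete, here is a toy sketch – entirely my own illustration, with hypothetical names and thresholds – of an edge tier dropping routine telemetry while forwarding the events that absolutely must reach the mission-critical core:

    # A toy sketch (my illustration only) of a tiered IoT flow: an edge
    # tier filters the event stream and forwards only the events that
    # absolutely must reach the mission-critical core.
    from dataclasses import dataclass

    @dataclass
    class Event:
        source: str
        severity: int  # 0 = routine telemetry ... 9 = mission critical
        payload: str

    CRITICAL_THRESHOLD = 7  # assumed cut-off; a real deployment would tune this

    def edge_filter(stream, forward_to_core):
        """Drop routine telemetry at the edge; forward critical events inward."""
        for event in stream:
            if event.severity >= CRITICAL_THRESHOLD:
                forward_to_core(event)  # e.g. a transaction into the data center

    # Example: only the tamper alert makes it past the edge tier.
    events = [Event("atm-042", 2, "heartbeat"), Event("atm-042", 9, "tamper alert")]
    edge_filter(events, lambda e: print(f"to core: {e.source}: {e.payload}"))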

On the other hand, when it comes to something as intriguing as Blockchain and the support of new-age applications demanding a distributed ledger, as we read about in almost every paper we pick up of late, there is movement on this front with regard to NonStop. At N2TUG this week we were tipped off to expect to see NonStop participating in a demo of exactly this – Blockchain being supported by NonStop. Could this prove to be a tipping point for Virtualized NonStop? Could this be the one key development that truly favors support by NonStop?

Arguably, it’s a stretch to make the comparison but when ATMs first arrived and the need was to ensure they operated around the clock, the appearance of the earliest NonStop systems proved to be a godsend for a number of software houses looking to support these new devices. As a result, an industry was created in which NonStop played a premier role. With so many concerns about security and indeed accessibility associated with Blockchain, could a similar situation arise for NonStop? In time, could NonStop prove to be the go-to system for support of Blockchain by financial institutions everywhere?

I have purposefully asked a lot of questions in this post and for good reason. Each time I head to HPE Discover I have lots of questions. Even as I am anticipating much and yes, looking for the unexpected to happen, I am reminded that in times past, dating back to the earliest ITUG Summits, vendors have held back a couple of critical developments just so they could showcase them at their big-tent marketing events. Every attendee expects a little drama. Apple was the master of the “one more thing …” approach and while I am not anticipating anything close to a major Apple reveal, I am not ruling out that there will indeed be something special announced.

And with that, I return to the road as Las Vegas is still a couple of days’ drive away and with the weather looking like it will continue to be a problem, I have even more on my mind. Should you make it to HPE Discover and should you be interested in hearing more about NonStop then yes, you will likely find something that surprises you. But even with that prediction, the NonStop community has a lot to be thankful for as who would have imagined (just a few years ago) how rapidly NonStop would evolve to where today it’s a platform, a system and yes, it’s software. Onward then to HPE Discover, 2017!