Monday, May 21, 2018

It’s a journey for NonStop across an ever-changing landscape!

A flight to Germany continues to be a rare occurrence for me and yet, GTUG held promise of more good news on NonStop so I simply had to go! 

It might appear that road trips have been a consistent theme of mine over the past couple of weeks. True, the topic of road trips is always close to my heart as I like getting out onto the highway to simply drive towards the horizon. And yet, it is also highly relevant in today’s technology environment – as much as I enjoy a changing landscape and a changing vista, every now and then the destination is important. Even if the neon sign flashing far out in the distance might only be that of a motel offering overnight accommodation, there is always the distinct possibility that something far more meaningful is being communicated.

Margo and I have just gotten back from a brief trip to Leipzig, Germany. Yes, as much as we bemoan air travel we simply had no other option than to head to the airport a couple of weekends ago and traverse a considerable part of the planet in order to make the GTUG-supported European NonStop HotSpot 2018/IT-Symposium 2018 - Conference & Exhibition! Big name but fortunately, same agenda as in past years, although this year we did enjoy the company of the VP & GM of Mission Critical Systems, Randy Meyer. And yes, a note of thanks to another big name, NTI, for making this trip happen for Margo and me!

It isn’t every day that I can throw into a current post references to former HPE CTO, Martin Fink, and it’s been a long time between drinks hearing references to Martin coming from senior HPE managers. But just this past week, on more than one occasion Randy did reference Martin and that was pleasing to hear. One of those references really resonated with me and I thought its inclusion was both timely and a reminder that some really smart minds are still focused on NonStop. “Listening is tactical; observing is strategic,” said Randy referring to an earlier comment made by Martin. I have heard Martin say this before and it’s still as relevant today as it was a decade or so ago.

One of the motivations for me to embark on road trips is to observe. I am fascinated by what is transpiring across the transportation industry – yes, we know the economy is picking up when all the big-rigs on the interstate look new and yes, temporary tags on passenger cars are more plentiful than previously observed. Walk into any Starbucks bordering the interstate and see how long it is before they serve you coffee as they hustle to accommodate the drive-through lanes snaking back out through the parking lot. Randy and Martin are right in saying observation is strategic.

When you hear members of the NonStop community refer to themselves as being customer-driven and that they are always listening to the business needs of their customers, don’t be surprised to see them changing direction often. It’s an old saying that being close to the customer means being close to closure! Seriously, listening to the immediate needs of someone sitting across the table from you more often than not triggers a kneejerk reaction that we later confess is inconsistent with what the industry may be telling us. Or our own people!

So, what is it that HPE has been observing of late and what is happening that is relevant for the NonStop community? In a nutshell, it’s hard to miss the signs of hesitancy on the part of entrenched NonStop users even as the companies those NonStop users work for look to tackle the twin issues of hybrids and clouds. “We are transforming to address a changing landscape,” said Randy. “The world is hybrid and when it comes to NonStop, we don’t want to be constrained by the tools (at hand) or the environment (that arises),” was the argument coming from the boss of all Mission Critical Systems.

“How does NonStop fit into this transformation?” asked Randy of the audience. Bringing it back to what is happening within his own organization, Randy then answered his own question with his team having observed that “evolving Mission Critical Systems to be cloud-ready and where NonStop is consumed as a service” has become the priority for the NonStop development team. That’s right, companies’ moves to cloud computing led to virtualized NonStop even as hybrid IT has generated an opportunity to provide “new ways to consume NonStop!”

Observing the ever-changing landscape, as you typically do when you drive any distance inside America, lets you develop a deeper appreciation for what’s about to happen in the not too distant future. You certainly know that autonomous big-rigs make sense even as you realize that with everything communicating (in an always connected world), why on earth haven’t we seen even more elements in the supply chain being automated? As part of the visit to Leipzig for GTUG many of us elected to visit the Porsche factory to watch Porsche cars and SUVs being built and it was hard to miss just how much automation exists on the factory floor. And yet, there were still a thousand or more folks engaged in putting things together.

“When everything is connected,” said Randy, we will get to a point where we get “crazy new insights into how things work!” In fact, we will find ourselves seeking answers “about things when before we didn’t know we would even be asking such questions!” When the edge meets the cloud and everything that is connected begins communicating we will be hard pressed to miss changes in behavior unlike at any time in our history. My own favorite possible new customer recognition option – our gait! Yes, simply the way we walk up to any device will be all it takes for that device to know who we are and what we are looking for.

We will still be able to buy NonStop systems the way we always have bought them – dropped off at a dock all wrapped up nicely with a bow. Fortunately for many, these new converged systems will become one of many options available to the NonStop community. More popular, perhaps, will be the option to deploy virtualized NonStop in-store / on-premise on hardware already owned, be that a server farm or a private cloud. In the coming months we all may want to pay more attention to work being done for NonStop that leverages projects like Ansible – Ansible playbooks that will be able to interact with NonStop delivered as a container that is subsequently pulled apart by Ansible and deployed atop a virtual machine – cool! And then there is the option to simply run NonStop as a Service from out of a public cloud (although, initially it might be more like from providers like Rackspace than Amazon) and that would be even cooler for many NonStop users!
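For anyone wondering what that kind of automation might look like in practice, here is a minimal sketch, in Python, of driving an Ansible playbook from a script. It is purely illustrative: the playbook and inventory names are placeholders of my own, and HPE has not published Ansible playbooks for virtualized NonStop, so treat the idea of a vNS deployment playbook as an assumption on my part rather than a supported procedure.

    # A minimal sketch of kicking off an Ansible playbook from Python.
    # The playbook and inventory names are hypothetical placeholders; nothing
    # here represents an HPE-supported procedure for virtualized NonStop.
    import subprocess
    import sys

    def run_playbook(playbook: str, inventory: str) -> int:
        """Invoke the standard ansible-playbook CLI and return its exit code."""
        result = subprocess.run(
            ["ansible-playbook", "-i", inventory, playbook],
            check=False,  # inspect the return code ourselves rather than raise
        )
        return result.returncode

    if __name__ == "__main__":
        # Hypothetical playbook that would unpack a vNS container image and
        # hand the pieces to a hypervisor to be deployed as virtual machines.
        sys.exit(run_playbook("deploy_vnonstop.yml", "lab_inventory.ini"))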

When you take a good long hard look at the changing landscape for NonStop you’d be excited by the possibilities. Furthermore, if the journey that NonStop has been on for the past four decades is anything to go by, then perhaps Randy should have added, “And you haven’t seen anything yet!” I guess the biggest takeaway from GTUG may in fact prove to be that the journey is a pathway to even greater innovation and while we applaud the deep port to the Intel x86 architecture, should we rule out any other architectures being supported? I don’t think so … and you know, it is as if NonStop has now been freed from design and implementation constraints in ways we never thought possible just a decade ago.

What this journey of NonStop reminds many of us of are the words to a popular song, and in more ways than one:

But he was back in business when they set him free again
The road goes on forever and the party never ends

And I suspect few members of the NonStop community would disagree with the idea that yes, NonStop is back in business and yes, as Randy reminded us many times this past week in Leipzig, NonStop has been set free! Yes, observation is strategic and yes, here’s to hoping that the road goes on forever.

Thursday, May 10, 2018

Road Trip – old or new it doesn’t matter; it’s the journey!


Whenever the opportunity for a road trip presents itself – count us in! But with next month’s N2TUG event I will be retracing steps I first took back in 1976 that led to my eventual move to the U.S.


Road trip! Road trip! Did someone say road trip? I have just wrapped up a post to our social blog about road trips, including ones prompted on short notice in small planes and yes, that counts. But more importantly for this audience, our next road trip will take us to Dallas for the N2TUG NonStop user event. There have not been too many user events that we have missed, but there is always something very special about Texan hospitality. And it’s no secret among the NonStop community that for this event, there will be even more that will be special – the founder of Tandem Computers himself, Jimmy T!

It’s a little-known fact that my first ever residency in the US was in Dallas. I had been living in Edmonton, Alberta, having emigrated there from London, UK – no, a London winter wasn’t cold enough so I emigrated to the frozen north of Canada. As a database specialist no less and yes, I am talking about the mid-1970s, when DBMS products like Cullinane’s IDMS, Cincom’s TOTAL and Insyte Datacom’s Datacom/DB ruled the roost. It was a time when IBMers everywhere had grown tired of ISAM and had taken to the more fundamental BDAM to essentially build their own DBMS product from scratch. A whole year of programming? No worries!

I am also talking about the mid-1970s when, arriving in Edmonton, I bought my first new BMW – back then it was the mighty six-cylinder 530i, complete with a stick shift. Mind you, this was a C$12,000 car when everyone else was buying Trans-Ams and Mustang IIs for about $3,000, so little has changed of late. What was new is that shortly after taking delivery of the Bimmer, I drove from Edmonton down to Dallas and then on to San Diego, Los Angeles and San Francisco before running the entire west coast to Seattle and then crossing back over the Canadian Rockies to Calgary and Edmonton.

I am not sure exactly of the mileage, but I accomplished this drive in just three weeks, taking in, naturally enough, the F1 event at Long Beach won by Mario Andretti. Little did I know where this all would lead, but suffice to say a presentation I gave in Dallas to the Datacom User Group during that road trip led to a job offer in Dallas that I accepted immediately! Let me think about it – the freezing temps in Edmonton versus the perpetual sunshine in Dallas. OK, I have already given you a clue as yes, not only did I take the job but it was also a ticket back to Sydney where I would become a Datacom partner company’s new Managing Director.

I am not sure either how I had become a fan of Southfork Ranch and JR and the rest of the cast of the TV show, Dallas, but that’s another story for another time.

In the years that have passed since that first sojourn in the U.S. I have taken many road trips and have seen more of America than many of my American friends and colleagues. For the past decade many of these trips, if not all of them, were influenced one way or the other by a RUG event. Regional gatherings of the NonStop community have always been special times and I have attended gatherings in India, Helsinki, Toronto, Sydney – you name the city and there is a pretty good chance that I have shown up at some point. Of course, spending two years as the ITUG Chairman certainly helped in expanding that list of cities.

Today, NonStop has come so far in a relatively short period of time. When you look back at the work the NonStop development team has done to embrace industry standards and open systems by porting to the Intel Xeon x86 architecture and then providing the option to run virtualized, they have given us a lot of opportunity to grow the number of enterprises running NonStop. I am not saying this without giving even more thought to where NonStop will likely appear next as it’s still very much a solution-driven business these days, but I do have a sense that with the newly trimmed-down HPE, there is more opportunity to be heard above the noise than there was even a year ago!

When it comes to any journey we plan on taking, particularly when it involves a road trip as is often the case for me of late, reminiscing about previous trips has always been important. My wife and fellow cofounder of Pyalla Technologies and I recall what hotels we liked and perhaps more importantly, what sessions caught our attention. Who can forget when support for x86 was announced, or the comment made in passing by one of the speakers that NonStop could run virtual? And what about the blockchain port? Of course, reminiscing on the past always brings us both back to the grand days of ITUG Summits and, as much as we miss them, they were for another time; we certainly see the passion for NonStop in the faces of everyone who turns up for a RUG event.

We often discuss the journey that NonStop is on and about the need to take baby steps on occasion but here is the really big news. NonStop continues to provide an unmatched architecture that is affordable for any enterprise wanting to have the best uptime signature for its mission critical applications. And along with the affordable uptime properties, NonStop also provides scale-out capabilities out-of-the-box that NonStop’s competitors can only propose using PowerPoint slides. Not cool – and no, not without a lot of extra costs. Perhaps, most important of all? NonStop has found its home within the Mission Critical Systems organization whose leader, Randy Meyer, has very strong ties to NonStop and that is a circumstance we don’t take lightly.

This journey by NonStop has seen it reinvent itself numerous times over more than four decades since it took those first couple of baby steps. And now, in a matter of a couple of weeks, we will be in Dallas spending time with the man who first saw the value in building a truly fault tolerant system. History is still a very important part of this journey that every member of the NonStop community is part of and if the number of times old photos are posted to the Facebook group, Tandem Computers, is anything to go by, it’s a reminder of just how passionate folks continue to be about all things Tandem and NonStop.

The N2TUG team is among the most passionate of NonStop community folks and there is no doubt whatsoever that a Texas-size greeting will be given to everyone who shows up when N2TUG presents ‘Protecting with Blockchain’ on Thursday, June 7, 2018. For those who may be curious, your Pyalla Technologies team will once again be hitting the road, in yet another BMW, but this time, a hybrid. More importantly, should you be curious about how to register for N2TUG, or if you simply need more information, then send an email to bhonaker@xid.com with
Subject:

                Registration for N2TUG 2018 “Protecting with Blockchain”

Or just click here ...

Bill reminded us all just this week of how the N2TUG team is hoping “to see you in Plano for our 2018 N2TUG event; as Jimmy always says, it’s excitin’.” And yes, added Bill in a promotional email to us all, “Jimmy will be our final presenter. He will stay for the evening cocktail party where we can all swap stories and reminisce about the past or make plans for the future.” Count us in, Bill, and we look forward to seeing many more of you this year at N2TUG!

Monday, April 30, 2018

Time to take a trip to Europe; NonStop RUGs kick off in earnest ...


Many of us will be heading off to Europe very soon as the RUG season kicks it up a gear or two with HPE execs on hand to fill us in on all that’s happening. But what we all want to hear is how virtualized NonStop will shake up the market!



I have always enjoyed the times when I had to make the trip across the Atlantic to Europe. I don’t know what it is exactly, but when circumstances present themselves where I need to show up for a meeting or event somewhere in the “old country” my heart skips a beat and I immediately begin counting down the days to departure. Before coming to America to join Tandem Computers, living in Australia meant trips to Europe had to be carefully weighed against the time away from regular activities as well as costs, and more often than not it also meant looking at combining a number of activities so as to maximize the time out of the country. 

With hindsight however, the travel I did in the early part of my career was truly educational. It was a time when so much change was happening in the industry that you just had to go out there and participate – imagine my surprise when, on one occasion, a stretch limo pulled up to take me to an event and the only other passenger in the limo was Gene Amdahl! I am not sure what we discussed but it was lively and no, I didn’t remind him of the time only a few years previous when, sitting in the front row of a conference hall, I fell asleep during his keynote. So soundly, in fact, that I began to snore and had to be nudged awake!

In the late 1970s through to the early 1980s there was one major conference I always managed to include as part of my overseas travels and that was the National Computer Conference (NCC) held in the US; I was fortunate enough to be an attendee at the NCC in Houston and Anaheim. This was at a time when IBM dominated the industry and all the discussions centered on the flourishing Plug Compatible Mainframe (PCM) business – yes, we had standards and they were all anchored in IBM products. On the one hand, it made life easy for IBM’s competitors but on the other hand, it was as if industry technology stalled for a brief period of time. However, it was during this same period that I first encountered the world of Unix and it was in Europe that I ran into Unix evangelists almost everywhere I turned.

For all the years I confined myself to air travel, jetting from one venue to another (and racking up the miles in the days before frequent flyer programs), the education I received highlighted how trends could be quickly identified and then validated simply by walking the exhibition halls of a major event. Of course, the biggest event on the computer calendar back then wasn’t the NCC but the original Hannover Fair (before it split and CeBIT arose); if you liked toys and playing around in the mud, this was definitely the must-attend event as alongside computers there were elevator shafts and full size locomotives! It played such a prominent part in the industrialized world that many of the participants had committed themselves so strongly that they constructed their own pavilions out of bricks and mortar!

As I look back at those times, long before the arrival of the Internet, such hands-on exposure to everything new in IT had no substitute and it was the only way to keep abreast of the changes continually at work reshaping IT. And yes, change was happening with such regularity that you just had to show up the following year and the year after that and then, of course, the next year. Today, however, IT has become so fine-grained that we really have lost touch with the breadth of our industry and as much as I can google almost anything I want to know, it’s not the same as walking up to something completely foreign, touching it and wondering, what the heck does it do! To think, I walked into the pavilion of Nixdorf Computers in the early 1980s to see Nixdorf’s first ATM complete with heated surfaces for a better customer experience in temperatures below -40C! And yes, destined for the Australian marketplace!

On the other hand, as I look forward to what is about to take place in just a few weeks’ time, the scale may not be what it used to be and the scope may be way smaller, but those planning on attending either or both of BITUG and GTUG will be experiencing much the same level of anticipation as I had all those years ago. There may not be trucks, tractors and railway locomotives or anything else like that but to the keen observer stopping by the exhibitor desks, it will be clear that NonStop today is more than just a bunch of boxes with an OS; NonStop can be deployed any which way you would like and the NonStop vendor community is already catering to any configuration your enterprise requires. Most important of all – and visible at the RUG events already held this year – NonStop has truly become a line of software products with the option of having HPE throw in some hardware!

Too cavalier, perhaps? I don’t think so; there’s no value in going to RUG events today with preconceived ideas as to what NonStop is or how NonStop should be delivered. News flash – the days of simply mulling over an upgrade to the next NonStop model are over. Every enterprise with mission-critical applications needs to think very carefully about the optimum way to run these applications on NonStop and the focus is going to return to the human component. Does your company have access to the skilled personnel it needs to run NonStop in this new world of hybrid IT where clouds are well and truly entrenched in the mindset of CIOs? If clouds aren’t on your radar and your enterprise is holding steady on a tried and true course, then yes, take the next upgrade of NonStop from HPE. However, if you are seeing clouds being discussed and you want to ensure NonStop contributes to your business, go to these upcoming RUG events and interact with HPE and the rest of the NonStop community before making any firm decisions one way or the other.

With as much talk as there is about robotics and AI these days and with as much attention being given to analytics and deep learning, perhaps the more interesting question is how exposed might you become if you picked the wrong approach to running NonStop? What I am getting a sense of is that for the majority of NonStop users, the NonStop X family of systems is the way to go and that feeding the NonStop i family of systems needs to be curtailed. I have only anecdotal support for now, but I have to believe the supply chain behind Itanium has to be thinning out, not to mention how much energy is being expended on stockpiling ServerNet switches. Not much, by my count, even as I am sure some NonStop i systems traded for new NonStop X systems are being cannibalized in support of some markets. Again, purely my own observation at this point in time!

However, as 2018 unfolds take a good look around at what your peers are doing. NonStop X systems for production; vNonStop on converged virtualized NonStop systems (NS 2) for development and test and yes, NonStop SQL/MX the engine behind DBaaS running from within a cloud and fully virtualized (supporting multitenancy). Not just one way to run NonStop but exploiting the flexibility (and indeed innovation) that today’s NonStop represents. Again, when was having choice ever a bad thing? And when has mixing it up been viewed negatively when the operational bottom line has so much potential upside – think of shared DR sites that are cloud based?

My love of travel hasn’t subsided over the years but it has taken on a different look – as the NonStop community is aware these days, if I cannot drive then it takes an awful lot of motivation to get me back on a plane. In a modern twist to that famous marketing message, “don’t leave home without it,” today I actually take home with me! On the other hand, the program content and the personalities that are present at major RUG events such as we see taking place shortly in Europe are enough to get me back on a plane and I am sure I am going to hear a lot that is new even as I know I am going to be caught out with something unexpected being announced or demonstrated.

But no matter what takes place the one thing I can absolutely guarantee is that with all that is taking place today with NonStop, I will not be falling asleep during any of the keynote presentations nor will my snoring be heard above the commentary of HPE participants.  Safe travels – and I look forward to seeing as many of you as I can in that fine German city, Leipzig! 

Tuesday, April 17, 2018

Let’s talk some more about hybrid IT


During a recent trip to Silicon Valley there was barely a meeting where the topic of hybrid IT didn’t come up and yet, almost every time, what was being described as hybrid IT was different … so, what’s going on?
                               
Many a Sunday afternoon of my youth I spent trackside north of the Sydney suburb of Hornsby, watching the steam trains pull up a grade as they headed south into the metropolitan area. These tracks had little to do with race cars, mind you, but there was a thrill in watching the mighty locally designed and manufactured 38 Class locomotive as well as the even more famous (among Australians at least) Garratt locomotive, with its distinctive configuration where a separate water tank was located ahead of the boilers and engines in a detached carriage. Back in those heady days I actually took a trip from Sydney’s Central Railway Station to Newcastle on the famous Newcastle Flyer where somewhere the other side of Wyong, through a downhill cutting, we hit 70 mph! Wow … and yes, we were “flying!”

The age of diesel trains arrived soon after and all the fun went out of watching trains. The 44 Class introduced in the mid-1950s was a diesel electric locomotive and as such represented my introduction to hybrid vehicle power. Absent the belching smoke and the whistles and screeching metal, these locomotives were relatively boring, but nonetheless, they introduced significant change into the transportation industry as now very long distances could be covered without the need for water tanks to be erected at regular intervals along the track. Yes, the world of hybrids revolutionized transportation, finding its way not only into trains but also into supersized dump trucks operating in some of the world’s biggest open-cut mines as well as into ships of all sizes, including many of the cruise ships where we vacation.

With as much talk as there is today about hybrid vehicles where nearly every auto manufacturer has at least one hybrid in their portfolio of vehicles, it isn’t at all surprising to find the word has crept into IT. I can’t recall exactly when I first heard of hybrids and hybrid IT but it has to be more than a decade ago, and maybe longer. However we got here, every IT vendor is talking up their hybrid IT solutions as if nothing else mattered and maybe it doesn’t any more. In one way I thought that this would be a possible outcome for IT as businesses gradually realized that their business wasn’t IT but rather banking, retail, telecommunications, transportation, etc.

We no longer see operating theaters and lines of doctors resident in businesses just as we don’t see large architectural offices. There may be the occasional consulting doctor or architect to help a business determine future product directions but more often than not, we bring in outside consultants in these fields as needed. So why staff up as big an IT organization as we have had in the past if we look to better integrate IT into every function within the business? Perhaps hybrid IT is ushering us into a new era of computing where IT is nothing more than the data equivalent of the HVAC!

But what is hybrid IT and how did we get here? In my post of March 28, 2018, How did that happen – I have hybrid IT? I referenced a quote by HPE blogger, Gary Thome. This came after HPE commissioned Forrester Consulting to conduct a survey to evaluate the current state of Hybrid IT. “While Hybrid IT is the new normal,” Thome noted, “two-thirds of businesses ended up with a hybrid estate by accident – not design.” Ouch; kind of reminds you of how suddenly we all woke up one day to find everyone had a smartphone that needed to access the data center. Or worse; suddenly every desktop and laptop owned by an employee needed to access servers throughout the enterprise.

As early as 2012, however, it was Gartner who began coverage of hybrid IT where reference was also made to cloud computing and the connection between the two really energized the marketplace. According to Gartner, “Hybrid IT is transforming IT architectures and the role of IT itself … Hybrid IT is the result of combining internal and external services, usually from a combination of internal and public clouds, in support of a business outcome.” So rather than simply falling into hybrid IT by accident, the real story here is that it is very much associated with clouds both public and increasingly, private. "Many organizations have now passed the definitional stage of cloud computing and are testing cloud architectures inside and outside the enterprise and over time, the cloud will simply become one of the ways that we 'do' computing, and workloads will move around in hybrid internal/external IT environments," said Chris Howard, managing vice president at Gartner.

"Hybrid IT is the new IT and it is here to stay. While the cloud market matures, IT organizations must adopt a hybrid IT strategy that not only builds internal clouds to house critical IT services and compete with public cloud service providers (CSPs), but also utilizes the external cloud to house noncritical IT services and data, augment internal capacity, and increase IT agility," said Mr. Howard. "Hybrid IT creates symmetry between internal and external IT services that will force an IT and business paradigm shift for years to come." You have to really love the enthusiasm of Gartner in this instance as they suggest yet another paradigm shift – but really? While volumes have been written about hybrid IT the only paradigm shift I am seeing is that there isn’t a single, uniting, unambiguous definition of hybrid IT and that it is very much something that is in the eye of the CIO. If he says he has hybrid IT then who is to argue with him?

In 2016 it was left to IBM to begin the process of making sure any paradigm shifts under way didn’t exclude participation by traditional IT vendors. “The simplest definition of hybrid cloud is that it is a combination of private and public cloud, but that is far from the entire story. To really grasp it, think about the challenges that businesses encounter every day.” Fair enough – clearly there is more to this story after all. “Hybrid is not even just about cloud. It’s also about traditional IT and legacy applications. It’s essential to ensure that legacy applications, which usually deal with core business functions, are able to exchange information with cloud applications and databases,” said IBM. “Hybrid cloud is not just about IT. It is also about business. It’s the logical ecosystem that can support business innovation and expansion in a fast and agile way and — at the same time — preserve the stability and efficiency of core business functions.”

Alternatively, from cloud service providers (CSPs) like AWS, a counter argument soon arises and in 2018 AWS explained how, “At the most fundamental level, hybrid computing can be viewed as having data that resides both on-premises and in the cloud. This is often done to economically store large data sets, utilize new cloud-native databases, move data closer to customers, or to create a backup and archive solution with cost-effective high availability. In all cases, AWS offers a range of storage and database services that can work together with your on-premises applications to store data reliably and securely. The most robust form of hybrid architecture involves integrating application deployment and management across on-premises and cloud environments. AWS and VMware have developed a deep, unique relationship to enable VMware-based workloads to be run on the AWS Cloud.”

It is hard to argue with AWS as they are the biggest player on the cloud stage even as it is hard to argue with the massive ecosystem that has surfaced of late in support of AWS. As AWS notes on its web site, “All AWS services are driven by robust APIs that allow for a wide variety of monitoring and management tools to integrate easily with your AWS Cloud resources. Common tools from vendors such as Microsoft, VMware, BMC Software, Okta, RightScale, Eucalyptus, CA, Xceedium, Symantec, Racemi, and Dell already support AWS, and that’s just naming a few.” Clearly, there is no textbook case of hybrid IT just as there are no classic use case scenarios. For IT, there are volumes upon volumes written about hybrid IT and pulling just one of them and checking a single chapter may prove interesting but shouldn’t be considered in absolute terms.
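To put the simplest form of that hybrid pattern into more concrete terms – on-premises data with a copy kept in the cloud – here is a minimal sketch in Python using the boto3 library. The bucket and file names are placeholders of my own; consider it an illustration of the pattern AWS describes rather than any kind of reference design.

    # A minimal sketch of the simplest hybrid pattern AWS describes: data that
    # lives on-premises with a backup copy kept in cloud storage. The bucket
    # and file names are hypothetical placeholders, not a reference design.
    import boto3

    def backup_to_cloud(local_path: str, bucket: str, key: str) -> None:
        """Copy an on-premises file into S3 as part of a hybrid backup/archive."""
        s3 = boto3.client("s3")  # credentials are picked up from the environment
        s3.upload_file(local_path, bucket, key)

    if __name__ == "__main__":
        backup_to_cloud("/data/exports/transactions.csv",
                        "example-hybrid-archive",
                        "backups/transactions.csv")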

What we can all agree upon is that at a minimum, hybrid IT includes at least two different compute models and that they support data and business logic spread across them both with optimization undertaken to better exploit each of the compute models. Hence, given this understanding, even the earliest NonStop systems where Linux was present – think CLIMs – can rightfully be described as hybrid systems but not as a template for hybrid compute per se. What we can also agree upon is that somewhere in the picture is a cloud – a virtualized environment that is software-defined supporting an elasticity of provisioning that traditional compute models didn’t accommodate. And contrary to what IBM may propose, running a virtualized compute model inside a mainframe really does stretch all sense of credibility even as it looks like a cloud on IBM’s slideware!

For the NonStop community the introduction of virtualized NonStop represents a giant step towards greater participation of virtualized NonStop workloads within hybrid IT. Accommodating a heterogeneous mix of NonStop systems and Linux server farms is a step in the right direction but until we have NonStop living in the true world of software-defined everything, we have a ways to go before we can wave the banner of NonStop in hybrid IT. Heterogeneous support is a great place to start but it is just the beginning and this is where HPE has done us all a huge favor by stepping up and supporting the virtualization of NonStop! In so doing, we won’t be seeing NonStop relegated to museums any time soon. As for the famous Newcastle Flyer and that locomotive, 3801, according to Transport Heritage NSW, "It is currently at Chullora having work done on it - i.e. its boiler is being fitted." It is hoped that it will return to life by the end of the year - most likely November - and take its place in the New South Wales Rail Museum at Thirlmere, an outlying southern suburb of Sydney!

Sunday, April 8, 2018

What’s inside counts and NonStop inside is counting more …


As NonStop readies itself for deployment in clouds and where NonStop workloads will become just another virtualized workload running in the cloud, the opportunities for NonStop deployments will grow considerably …

Breaking news! Every time I pick up my smartphone these days it’s as if there is something happening in the world that I need to be notified about right now. In a sense, it’s my own virtual equivalent to “stop the presses” even as I know that most times, its relevance to anything I am doing at that moment is questionable. Not to put too fine a point on it, I really don’t need to know anything more about the Kardashians or the rallies and strikes happening in some far away land or even who is heading to the NHL playoffs this year even as I know that my local team here in Colorado may have moved up from last place this time last year to being a contender for Lord Stanley’s Cup. 

What does interest me, on the other hand, is the discussions and correspondence on the future of HPE and the likelihood of further success of NonStop. My interest in these topics has in part been driven by the work Margo and I have been putting into the digital publication, NonStop Insider. With eighteen months of issues now behind us, a routine has been established, with the support from the NonStop vendor community being outstanding. If you have as yet not subscribed to this publication then you should visit the subscription page:
http://www.nonstopinsider.com/subscribe/

This interest obviously has been fueled by the need to be current on topics of interest to the NonStop community, but just as importantly, it’s proved to be a fun thing to be doing because, as is often stated by proponents of Blockchain, I may not trust anyone but I will trust everyone. Said another way and more relevant to what is covered in NonStop Insider – anyone can hold onto an opinion but everyone can share common goals. Just talking with as many folks as we can within the NonStop community (and yes, that includes HPE NonStop product managers and developers) gives us a certain degree of confidence that we are staying on point. And the messages have been steadily increasing for the past month. So, what is the topic of the day and is it shared by everyone?

We have come a long way since news first broke that NonStop had been ported to the Intel x86 architecture. As a server, the NonStop X family represents a bold take on a growing requirement for NonStop – embrace industry standards and open protocols. Provide choice and provide better return on investment (ROI) even as the entry price into NonStop is greatly reduced. The move from Itanium to x86, as complex as any chip swap previously undertaken by the NonStop development team, was a repetition of previous ports and considered by many as somewhat routine. However, taking NonStop out of the hardware it has depended upon for forty plus years, cutting all direct ties to the metal and placing it into something as foreign as commodity bits and pieces buffered by an intermediary control program isn’t far removed from what we read of today concerning heart transplants.

When it comes to explaining what was done in order to come up with virtualized NonStop (vNS), making sure nothing was compromised in the process seems as complex as cutting away all the connections to the old fabric and then ensuring the reconnection to the new fabric (this time, not metal but rather a control program) works flawlessly. Of course, there is only so far this image can be taken – as embarrassing as it may turn out to be, a few missteps in the technology world will never have quite the same ramifications as mistakes in the world of medicine. And yet, it’s hard not to see the similarities and share the same concerns as we would if we were watching a real heart transplant.

Similarly, it is hard to argue against the concerns some CIOs have today should the proposal be made to swap out NonStop from the heart of their ATM networks and to reinsert it into untried metalwork with an intermediary control program, or hypervisor, as it is more commonly called. Reduced to a simple risk–reward equation, why would anyone want to go down that path? What could possibly drive such consideration and motivate such an investment? Surely, when it comes to NonStop, the status quo prevails where simple upgrades from traditional system models to newer traditional models eliminate most of the risk.

There is a reason why heart transplants are performed today. The very existence of the recipient is at stake and all other options have been exhausted. Again, this image can be taken only so far as well when it comes to NonStop. Mission-critical applications running on NonStop may continue to run standalone, as modern-era transaction processing PABX-like systems, but the resource that they are accumulating – data – is a much treasured resource that enterprises everywhere want to access. And it is the growth in data that is fueling the consideration of building private clouds capable of storing and providing access to enormous amounts of data. Business logic and data are finding their way onto servers making up a private cloud and these private clouds are fully virtualized. For NonStop to play as important a role within the enterprise as it always has, it too needs to be able to run as a guest workload inside a virtualized cloud.

When it comes to the future of HPE and the likelihood of further success for NonStop, it’s not surprising to read how HPE’s vision embraces the simplification of the process of transforming to hybrid IT. Expressed as simply as it can be done, hybrid IT is a mix of both traditional systems and clouds. While it could be argued that NonStop could remain a key player in the traditional systems side of the ledger, that isn’t where the growth will come from – clouds are on the rise, traditional systems are barely holding their own. It was inevitable that NonStop would make the leap to x86 and then make a further leap into virtualization.

Nowhere is this more clearly stated than in the March – April, 2018, issue of The Connection. In the article by Prashanth Kamath U, Senior Product Manager, NonStop, “HPE NonStop takes New Strides in its Journey through the Cloud,” he says, “We do realize and appreciate that ‘cloud’ has a much wider scope. It has different technologies, business models, deployment models and several other dimensions. We are carefully looking at these various plays in the cloud world and charting out the future course for HPE NonStop to enter your datacenters as the preferred deployment platform.” More importantly and very much to the point, Prashanth then writes, “We are overwhelmed by the customer interest for deploying their Mission-Critical workloads on the NonStop platform in their private clouds.”

Headlines may come and go. Interest in a topic among journalists is always fleeting – of interest until the next news break occurs. But where HPE is taking NonStop is neither fleeting nor inconsequential. It’s groundbreaking for the NonStop community as it aligns NonStop with the major vision of HPE – transforming to hybrid IT. Even if transforming demanded transplanting, which moving NonStop dependencies away from the metal and onto a hypervisor clearly represents, the risks endured by HPE were well worth taking. Assuming that the customer interest is as high as Prashanth states, and I have no reason to doubt that customer interest is high, then the risks of joining the transformation to hybrid IT with NonStop are certainly worth serious consideration.

It is still early times for NonStop X and it’s still even earlier times for vNS. Choice is obviously important and there will be strong cases made to stay with traditional systems. However, if clouds have begun being deployed within your enterprise’s data center, isn’t it a good time to give due consideration to vNS? And what’s inside the cloud really does count – there is no lessening of the need to support Mission-Critical applications with NonStop! To date, all the bets taken by HPE have panned out as expected – shouldn’t our expectations be for a NonStop that conforms to the visions and strategy of our enterprise? I sense we are going to be catching glimpses of more news about NonStop customers pursuing the transformation as after all, it only requires a little heart to achieve and we all know that “having a heart” has been central to everything the NonStop community has ever done!

Wednesday, March 28, 2018

How did that happen – I have hybrid IT?

Hybrid IT is the new normal, according to one HPE blogger; for the NonStop community it may only be a matter of time before the plans of both NonStop and hybrid IT intersect …


Let’s face it. There are many disruptions that can interrupt our routines. Not that long ago I was a regular at the local fitness center but now, having moved, I haven’t found a new place to go exercise. I have always loved to walk and there will be some of you from my time at Tandem Computers who may recall that most mornings I could be seen with fellow walkers striding the sidewalks of Cupertino. Even when I lived in Simi Valley I would devise a routine that had me circling the shopping center where my Starbucks was located so that I could eke out a 20 minute walk. I don’t know what it is but I am horrible at improvising and unless I have a set time and place to do something, I find ways to duck whatever task or activity I had set out to do.

On the other hand, we are only just coming out of winter here, along the Colorado Front Range, and so to some extent, I can blame the weather. More to the point, however, it has become hard to ignore that I have put on a little weight over the winter and almost by accident, I can no longer wear the same clothes I wore last spring. Perhaps accident isn’t the best choice of words as nothing happened overnight and yet, here I am, wondering how I can return to my regular exercise program. Working all day at a keyboard exacts a price but I hadn’t thought that would include a few extra pounds!

Talking of accidents leads me to remark somewhat candidly about a tweet I happened to come across by chance. Gary Mitchell, a fellow HPE independent blogger whom I happened to catch up with late last year while at HPE Discover, Madrid, tweeted “A new report from @Forrester finds 2/3 of businesses end up w/a hybrid estate by accident – not design.” The link takes you to the HPE newsroom and to the article, Survey Says: Composability and Continuous Delivery Accelerate Business Initiatives and Deliver a Competitive Advantage. “We were curious to better understand how IT is dealing with this transition, so we commissioned Forrester Consulting to conduct a survey to evaluate the current state of Hybrid IT,” said HPE blogger, Gary Thome.

“While Hybrid IT is the new normal,” Thome noted, “two-thirds of businesses ended up with a hybrid estate by accident – not design.” Now, there are all sorts of ways to arrive at accidental IT these days. Sorting out IT following a major merger (or breakup), changing the IT leadership and with it, changing the applications and, perhaps more commonplace these days, migrating to open platforms. Whatever the cause, IT suddenly wakes up to find it has inherited a completely foreign mish-mash of technologies with the directive to get it all working. And now! On the other hand, clouds and embracing clouds shouldn’t be among the accidents IT inherits unless, unhappily, IT is deliberately kept in the dark. And there are plenty of instances where change was not only an accident but a deliberate sidestepping of normal IT oversight.

I am often caught up in discussions about what the data center of the future will look like if, at all, it still exists within Global 1000 companies. A very strong case can be made that when your business isn’t centered on IT either as a manufacturer or a vendor, then perhaps you shouldn’t even have an on-premise data center. Today, there are many enterprises rethinking their strategies in this regard and I know of one very large retailer, for instance, that is electing to move its IT support for payments to a cloud offering.

And this just makes sense – back in the 1970s, when I was working for a Caterpillar Tractor distributor, that enterprise first consolidated all of its IT within a new organization and then sold it to a New York consulting company where it became a major service bureau. But accidental hybrid IT? It does sound a bit far-fetched even as it might have us looking to the horizon for the imminent arrival of storm clouds!

Time sharing; service bureaus; clouds – they have a familiar ring to them in one sense. It is ultimately just another perspective of relinquishing control, and yes, funding, of the data center to someone else. I can recall all the passion and enthusiasm for embracing best-of-breed solutions freeing enterprises from having to place major bets with a select few vendors.

But it “grow’d like Topsy” and even with the best defined infrastructure in place, an awful lot of silo-ed solutions emerged that even today require additional infrastructure to ensure viable participation with the rest of IT. When I look at what is being addressed by hybrid IT, it is starting to look a lot like the big data centers of the past as increasingly, enterprises argue strongly that the applications deemed mission critical really should run on-premise, in a private cloud!

I recently wrote to my clients about an in-depth article published late last year in Virtualization and Cloud Review, The Composable Datacenter of the Future, where writer James Green starts out on familiar ground:

“IT Infrastructure is always evolving. The next big change will be back toward an old, familiar model, closer to the days of the mainframe than the era of silo-ed servers that has dominated for decades.”

However, possibly of more interest to NonStop vendors is a subsequent observation Green makes:

“What Does It Mean for Infrastructure To Be "Composable"? Software that enables the abstraction of infrastructure resources transformed the modern datacenter—most notably, the abstraction of physical machines into virtual machines (VMs) via a hypervisor. Spurred on by the benefits many organizations realized by abstracting the machine construct, some datacenter thinkers have been working toward building a ‘hypervisor for the datacenter.’”

Accidents? Familiar models? And now, a hypervisor for the datacenter? I am not so sure that what is being referenced here can be considered haphazard or even random as there is an underlying plan working itself out among enterprises. For those enterprises that deem oversight of the business logic and data as being crucial to the ongoing success and indeed expansion of their business, it is important to stay on top of the technology and the best way to do that continues to be the big boxes in the air-conditioned room. Or vault, as it is increasingly becoming given the unease among enterprise executives about security.

There is a lot being written by HPE about composability even as there is a lot of head scratching among the NonStop user community as to what it all means. There is a very solid line connecting composable to virtual machines and that’s one place to start. But for now, you will not read anything from HPE suggesting that future virtualized NonStop workloads will be able to run on any hypervisor for the datacenter supported by infrastructure that is composable and yet, it wouldn’t be an accident if such a situation develops. It is common knowledge that the push to support NonStop on VMs came from the NonStop telco customers and there is every indication that shortly, HPE will take the wraps off a telco up and running with virtualized NonStop so perhaps it’s all about to happen for NonStop.

That is, NonStop being software-defined as HPE would like all of its core products to become – yes, think first up how advantageous that would be in development shops and where QA, testing and PoCs are the order of the day. Think too of the multi-tenancy on offer for NonStop SQL/MX and how this plays into a software-defined world. It’s no accident that HPE has let its own IT group drive requirements for NonStop and it’s no accident either that NonStop was never a consideration for the spin-merge of non-core software products.

Fault tolerance, as it is embodied in NonStop, has enormous value add in the world of hybrid IT and it would be foolish for any enterprise to discount the benefits that come with levels of availability only NonStop can deliver. After all, we may be in hybrid IT by accident but there really isn’t a positive spin that you can put on an outage of any kind these days – accidental or not!

I know that there is work ahead for me to get back into shape after a prolonged absence from our local fitness center. At least, I have a good excuse and can quickly address the situation. Enterprises everywhere know that they are entering the world of hybrid IT and that software-defined everything will rule the day. While I cannot get HPE or the NonStop team to make any statements about future product directions I have to believe, based on recent conversations and email exchanges with those close to HPE, that NonStop isn’t oblivious of HPE’s strategy and is paying a lot of attention to possible future synergies between NonStop and hybrid IT.

Clouds are now all around us and whether you see them as the harbinger of stormy days ahead or not, recent NonStop developments suggest that there are plans afoot for NonStop systems to better leverage cloud computing. From where I sit there is nothing accidental whatsoever with the journey NonStop is on and as someone with longevity in IT, I have to say, I can’t wait to welcome NonStop to hybrid IT!

Sunday, March 18, 2018

Yes, I read it in a blog!

With HPE CEO Antonio Neri posting and tweeting – HPE strategy is becoming clearer and it is all good news for the NonStop community!

Most of my background in IT is well known to the NonStop community but this isn’t the time to dig into the archives and reflect on past accomplishments. However, and just an observation in passing, for someone who was often told that he would find it hard to stick to doing just one thing well, maybe there is a story here after all. The NonStop community is witnessing change, but change a little different from what the rest of IT is observing. Yes, we are embracing hybrid IT in one form or another and yes, we are coming to terms with virtualization, provisioning and software-defined-everything. And yet, for the NonStop community, change is happening in unforeseen ways as it wrestles with where NonStop is headed amid concerns about “what is NonStop!”

Expressed differently, are we happy to be given more options as to how we deploy NonStop or are we unhappy about NonStop apparently responding to questions we never asked? My background in IT has exposed me to many turns of the technology wheel and from where I sit it’s as if we have circled back to topics I thought we had addressed long ago. Decades ago, what fueled the centralized versus distributed argument was the uptake of online systems and the transaction volumes they generated, but today, it’s not so much about transactions per se as it is about the data. If we continue with the systems and deployment topologies we have relied upon for two plus decades then we will be inundated with so much data that we won’t be able to handle it.

While a lot of attention is being given to the edge, when it comes to NonStop any consideration of running NonStop at the edge is probably an exercise in futility. At least for the next decade! For starters, there is greater potential for high performance computing running analytics and AI to make it out to the edge than for NonStop, but that isn’t to say some solutions provider won’t build a product that exploits NonStop out on the edge at some point. On the other hand and a little more realistically, with discussions about hybrid IT and converged / hyper-converged becoming more pronounced, NonStop has a very realistic shot at providing value in the data center. Centralized may not be the same as it was in former times, with multiple data center sites all seamlessly integrated constituting an enterprise’s modern data center, but the expectations will remain – business critical logic and data need to be accessible 24 x 7 x forever.

But what is the strategy of HPE? What are HPE’s plans for NonStop and do they dovetail nicely with HPE’s overall strategy? Before delving into HPE’s strategy, it is good to start with how new HPE CEO, Antonio Neri, views the world and what HPE considers is the next big thing, the next big challenge we all face, and where HPE sees itself playing an important role. In the March issue of NonStop Insider, in a new column we headlined HPE: News from the very top … reference was made to a post by Neri, The Next Big Thing is … The Intelligent Edge, published in the LinkedIn blog, Pulse:

We live in a hybrid world – the IT estate has to span from the edge to the core to the cloud because the power is in the data and you have to connect and secure all of it.
The challenge in this future is that the volume of data being created will soon outstrip our ability to capture, analyze and act on it. This intelligent environment will require a platform that is open, simple, automated and secure. This is exactly why we’re advancing our vision of Memory-Driven Computing.

What can you expect to see next? Remember the first time you heard about the possibility of an autonomous vehicle? Or how revolutionary AI and blockchain could be? Now think about what could happen when IT leaders apply real-time intelligence to retailing, sports arenas, medical facilities, college campuses, transportation systems and smart cities. No doubt, we’ll achieve higher productivity, mesmerizing customer experiences and change the way we live. In every industry, every city, everywhere.

There will be an intelligent environment. AI and yes, Blockchain will be important and yes, HPE will apply real-time intelligence to retailing, sports arenas, medical facilities, college campuses, transportation systems and smart cities. Memory-Driven Computing? Yes, we are seeing it being introduced into the marketplace already as the Superdome Flex – it’s definitely a memory-driven computer and it is definitely foreshadowing the potential that is an extension of the first prototypes coming from The Machine. But wait, there’s more – Memory-Driven Computing is at the very heart of the converged and hyper-converged models HPE foresees the market requiring to process the workloads that will arise from all the data being ingested.

Beyond simply viewing the world this way, in a subsequent post to the HPE web site, Neri gave more information about HPE's strategy and the markets HPE will focus on as it looks to meet the needs of that intelligent environment. A recent tweet by Neri takes us to a blog post of his,
Market Share and the Tale of Two Strategies, in which he fills in some of the blanks that have existed within HPE's strategy as articulated in the past:

Within the compute market there are a number of different segments and they are not all created equally. We think about them as volume markets and value-and-growth segments. At HPE, we are aggressively segmenting the market to ensure we are making the right investments in the value-and-growth areas where we can bring differentiated value like hyper-converged, high-performance compute and Synergy, while also profitably supporting our customers in more commoditized parts of volume segments like rack and tower. But, as I have said many times before, we are not going to chase market share just for share’s sake. We are going to drive profitable share.

HPE announced last fall that we will no longer pursue the low (or zero) margin, custom-built, commodity server business with a very narrow set of Tier 1 service providers – Apple, Amazon Web Services, Facebook, Google Cloud, Microsoft Azure in the U.S. and TenCent, Alibaba and Baidu in China. While this segment of the market is large and growing, there is simply no profit to be had by HPE or our competitors because there is very little value we can bring. We still do substantial business with these companies, but we are focused on delivering solutions that provide differentiated value for them like our value-and-growth compute, storage and networking offerings.


This is the strongest language to date concerning what HPE is not going to do. There should be no ambiguity here when it comes to understanding HPE's strategy – if it is a high value, high growth opportunity then HPE will address it. Otherwise, look to other vendors who may be interested only in market share – a game HPE no longer wants to play, and for good reason. It is in this high value, high growth market where change will be needed, as it includes the Global 1000 enterprises HPE is already working with as they seek to "apply real-time intelligence to retailing, sports arenas, medical facilities, college campuses, transportation systems and smart cities." And I got the scoop on all of this from reading a couple of blog posts – did you read them, too?

I have covered some of this material in a couple of my latest private emails to my clients, but let me spell out what this all means for NonStop as best I can. There it is for all of the NonStop community to see: hyper-converged, high-performance compute and Synergy. The investment HPE has made in virtualized NonStop means that, wherever it provides high value and high growth, virtualized NonStop has the potential to become just another, albeit very important, virtualized workload in a hyper-converged system. They are all x86 and they support converged Ethernet, so the basics are all in place. In talks I have had with various engineers on the NonStop team, it appears that it is only a matter of time before enterprises ask to run their mission critical, transaction processing, virtualized NonStop workloads on the latest hyper-converged systems.

As the new CEO of HPE, Neri covers a lot of ground in these two recent posts and, while specific systems and platforms aren't named (apart from Synergy), the door is left wide open for NonStop. Nothing I have read from HPE this year leads me to believe that anything included as part of HPE's strategy precludes the participation of NonStop. Ultimately, it comes back to the imagination of the enterprises involved and the creativity of the solutions vendors working with them, but it is tantalizingly close to think that the most important of all mission critical applications, no matter where they run – on scale-up or scale-out systems – can opt to turn to NonStop and, in so doing, bring to HPE the commensurate amounts of high value and high growth it is so focused on providing. There will always be a need for the levels of availability NonStop alone can deliver – so make your plans early and, by all means, get to L-Series as fast as you can. There are opportunities all around us and we simply cannot afford to let them pass us by; it was Neri who said it best: "we are not going to chase market share just for share's sake. We are going to drive profitable (market) share." And what better way to succeed than with NonStop!

Sunday, March 11, 2018

Our strategy for new applications is to build and deploy off-premise!

It is becoming all too clear. For the NonStop user the world is changing rapidly as we move to hybrid IT and look to a virtualized NonStop. Where will we find the help we will more than likely need?


Only this week I was visiting our new off-premise storage facility where we are now storing the vehicles we can no longer accommodate on-premise, at our home: the company command center – yes, our faithful RV that many of you have seen at recent RUG events – our motorcycle, our trailer and even our fifteen-year-old Corvette "track toy!" It is not a cavernous storage facility by any means but it does come heated and with a number of different power outlets, including one 30 amp circuit, which should be enough to keep the command center's refrigerator working and the drinks chilled. Now, all we need is a couple of chairs and we have a new weekend retreat! Well, perhaps not …

Fortunately it is all rather simple and we don't need to have someone present looking after the facility. There is a management company lurking somewhere in the background that watches over the site, but as for having someone on hand to consult about what more we could push off-premise – well, that too is fortunately another instance of "perhaps not." In today's world, where we live, work and relax, there are so many instances where you do need to reach out for a helping hand – it's as if we are being viewed as totally helpless. And yet, if something needs work done, I still call for assistance, whether it's a leaking faucet, a misbehaving HVAC or, yes, tires needing replacement on the car. As the television commercial promises, "we have people!"

I came across a link this week that took me to the site of DXC.technology and to a Q&A about IT and Business Strategy. DXC was created last year through the merger of CSC and the Enterprise Services business of Hewlett Packard Enterprise. You may have seen this link to DXC.technology too, as it was being promoted on LinkedIn. "Moving applications to the cloud requires a sound strategy that takes into account technology issues, business needs and embedded IP. Assess your organization's readiness level and ability to take on this important work …" The topic of cloud always grabs my attention and, when it comes to IT and to the data center, there is nothing more topical than cloud computing for the off-premise offloading of both processes and data.

The link on the DXC.technology page took me to a self-assessment page where the first question stopped me from going any further: "How much has cloud affected your IT and business strategies?" For the NonStop community this is currently the most hotly debated topic, leading us all to ask, "How much has cloud affected my NonStop deployment strategy?" And by this I mean, with options to run on x86 servers either from HPE (as NonStop X) or from third parties (as virtualized NonStop – vNS), NonStop can now run essentially on-premise as a private cloud or off-premise, co-located on a public cloud.

When asked the question above, "How much has cloud affected your IT and business strategies?" you could say you were pursuing a position of leadership by "using cloud to create new digital business opportunities to better serve customers and to improve employee efficiency and effectiveness." Or, you could back away from this just a little and say that your "apps strategy is to build all applications on cloud (such as Cloud native, PaaS, SaaS)." Perhaps it was more a case of you "provide cloud services, and you have made cloud a key element for many of our applications and solution architectures." To be honest, for many members of the NonStop user community I think the correct answer may be, "We don't have a cloud strategy. IT doesn't support cloud. Use of cloud is up to the requestor."

It is this last observation that truly raises the all-important question – do we need help? Or, put another way, do "we have people?" I may not need a facility manager to help with my off-premise storing of vehicles, but when it comes to IT and NonStop, from where I sit there is a very strong argument to be made that, sure as heck, we do! Language aside, and irrespective of the resources available from HPE, any discussion of NonStop, clouds, and IT and business strategy leads me to believe that we will be seeing the birth of a rejuvenated managed services ecosystem in the very near future.

And with this still fresh in my mind, I went to TCM’s Daniel Craig for his impression on the options the HPE NonStop development team is providing today. “TCM is very keen to explore the space that is being created by HPE's NonStop development: vNS, Hybrid, DBaaS etc. We have been working towards a NonStop-as-a-Service solution for some time now and we are confident that very soon we will have the right blend of technology and willing users to make this a reality.” When questioned further about vNS, Daniel said that, “Commercially we've held many discussions, but more from hypothetical than practical viewpoint. It seems a lot of Customers, especially those not already setup for private-cloud operations, are holding off to see what HPE is going to do in this space. Certainly the L-Series release with support of vNS with VMware has TCM's attention as we suspect that this may be a more readily deployable solution for most NonStop users.”

TCM is a provider of managed services and was the first to enter this market targeting the NonStop community. Today TCM is the largest such managed services provider in the EU, having pioneered NonStop service solutions for enterprises where mission critical systems and the solutions they support are of paramount importance. It is only natural, then, that TCM is anticipating even more business headed its way, as TCM definitely can say, "we have people!" – and the HPE NonStop team continues to advise users that they had better be well-educated in all things virtual before they begin pursuing a vNS deployment in their data center.

However, the cloud continues to loom large for even the most ardent supporter of NonStop. Whether you follow the path established by HPE's own IT department, with NonStop at the head of its operations serving up NonStop SQL to all and sundry on a DBaaS basis, or simply look to go off-premise to a site running a combination of x86 servers with VMware as the hypervisor, the challenge remains. Where is the expertise coming from, and how do you imagine recruiting staff knowledgeable in both virtualization and NonStop? TCM will be the first to acknowledge that this will be a challenge for most enterprises, but it is also quick to highlight that this is precisely the area where TCM is investing today!

“Let’s just look at what we have done in Scandinavia and at the investment we have already made,” said Daniel. “TCM has just taken over from DXC in providing services to their former Scandinavian clients. It has been a lot of work but the TCM team, DXC's staff and the Customers have all been extremely professional, making it all the easier to pull off without a hitch. Great collaboration and teamwork all round!” Furthermore, and with this in mind, Daniel wrote of how, “It has also been a good opportunity for us to try out something new from a services perspective. For the last few years we have been concentrating on our cloud-based services via our Centre of Excellence. Indeed, this was how we first began providing service into DXC (CSC at the time). It is now all working out incredibly well and harkens back to TCM's willingness to adapt, try new things and remain flexible - basically whatever best suits the NonStop user. Yes, we certainly do have people!”  

If your strategy has become to build and deploy new applications off-premise and in the cloud, it really is only a case of rethinking your NonStop solution as just another virtualized workload. Running on the L-Series operating system, it can run anywhere there are x86 servers, an Ethernet deployment capable of supporting RoCE, and a hypervisor – KVM and now, VMware. I fully anticipate NonStop users will first deploy on-premise and, with the help of managed service providers like TCM, learn what it takes to successfully deploy NonStop with the same fault tolerance as it supports today running on NonStop X, and then look at the business benefits of moving a lot, or perhaps just a little, to an off-premise public cloud. And if this is indeed your goal, then perhaps it is time to have the people, and the way forward may very well lead you to folks like TCM. You need people? Sure do! Then we can help you – "Just give us a call," said TCM's Daniel.
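To make the "just another virtualized workload" idea a little more concrete, here is a minimal sketch of what standing up a single guest on a KVM host can look like using the libvirt Python bindings. Everything specific in it – the guest name, image path, bridge name and resource sizes – is a hypothetical placeholder of my own; an actual vNS deployment is performed with HPE's own tooling and guest images, and a VMware shop would reach for vSphere's equivalents instead.

# Minimal sketch: define and start one hypothetical KVM guest via libvirt.
# All names, paths and sizes below are illustrative assumptions only.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>vns-cpu-0</name>                          <!-- hypothetical guest name -->
  <memory unit='GiB'>64</memory>                  <!-- illustrative sizing -->
  <vcpu>8</vcpu>
  <os><type arch='x86_64'>hvm</type></os>         <!-- plain x86, as noted above -->
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/vns-cpu-0.qcow2'/>  <!-- placeholder image -->
      <target dev='vda' bus='virtio'/>
    </disk>
    <!-- The fabric would sit on RoCE-capable Ethernet; a simple bridge is shown
         here purely as a stand-in. -->
    <interface type='bridge'>
      <source bridge='br-fabric'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
"""

def define_and_start() -> None:
    """Register the guest with the local KVM hypervisor and boot it."""
    conn = libvirt.open('qemu:///system')   # connect to the local libvirt daemon
    try:
        dom = conn.defineXML(DOMAIN_XML)    # make the domain definition persistent
        dom.create()                        # power the guest on
        print(f"guest '{dom.name()}' running: {bool(dom.isActive())}")
    finally:
        conn.close()

if __name__ == '__main__':
    define_and_start()

The point is not the particular calls – open, defineXML, create – but that once NonStop is packaged as a virtual machine image, the mechanics of deploying it look no different from those of any other workload on the same x86 estate.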

Saturday, February 24, 2018

New leadership; more wins and yes, all’s well for NonStop!

HPE produces financial results that encourage the financial community; other communities take note – when it comes to mission critical servers, NonStop continues to excel!


I know there are still a number of months that need to pass by before we all start making our plans to attend HPE Discover 2018. It will be held, yet again, in Las Vegas and will run from June 19 to June 20. According to the advance promotion of the event, it will showcase just how best to, “Simplify Hybrid IT, innovate at the Intelligent Edge and bring it all together with HPE Pointnext services. Explore new, powerful methods of handling your data. Learn how to take advantage of the next big idea, seize the next new business opportunity and leapfrog the competition.”

For the past couple of years I have been a guest of HPE, representing Mission Critical Systems and NonStop Systems, as an independent blogger and part of the group that HPE calls the “influencers” – and yes, a huge thank you to HPE for including me in the program! HPE has to be congratulated for reaching out to folks like me and including someone from NonStop as part of the twenty-plus such influencers they bring together for these HPE “big-tent” occasions. Collectively, we represent every facet of HPE within IT and as part of the program HPE ensures all influencers are treated to separate “coffee talks” by HPE’s senior management. Of course, there is always the photo opportunity and at last year’s HPE Discover in Las Vegas, the guest HPE executive the HPE team brought along just happened to be then newly-named HPE President, Antonio Neri.
   
In case you haven't been following the news: HPE released its financial results for the first quarter of its financial year and, under the leadership of Antonio Neri, the company surprised the market. This was followed almost immediately by an uptick in the stock price of HPE, further improving its market capitalization. After all the spin-mergers, movement of staff and restructuring, and with a technology leader once again in charge of HPE, these results suggest that perhaps the stars are beginning to align for HPE.

Combine these results for Q1 with the recent news of the big win for
HPE's supercomputer with US agencies and the launch of a COTS server into space, and HPE looks to be reasserting itself as an industry leader. And by this I mean that the technology originating at HPE has always been superb, but its penetration of the marketplace hasn't always done justice to the work being done by its development teams. No, it hasn't been smooth sailing for HPE over the past two years but now, with just one quarter completed, it's beginning to look a whole lot better.

So, what were the figures and why should the NonStop community take notice? HPE never showcases financial results for individual product lines but the good news here is that when it comes to Servers and Networking, both did a whole lot better this quarter. However, before getting into the meat of what HPE conveyed to the investor community on their
Q1 2018 Results - Earnings Call, it is important to remind ourselves that NonStop is on a journey.

One that will take it down two roads – a continuation of NonStop as usual, with system upgrades plotted on a product roadmap that shows no indication of ending any time soon, and the beginning of a virtualized NonStop (vNS) that in time will be an option within the hyper-converged roadmap. As just another virtual workload, solutions developed for NonStop and supported by the L-Series operating system can opt to go down either path, and this represents a whole raft of new opportunities for NonStop, not the least of which is the ability to address SMBs with solutions running on managed and public clouds.

The first item that caught my attention on the earnings call was the top line figure. As Tim Stonesifer, HPE EVP and CFO, noted on the call, "Total revenue for the quarter was $7.7 billion, up 11% year-over-year and 9% in constant currency." A little later he then said, "By region, HPE's performance in the Americas continue to improve, growing 3% with strength in core compute and campus switching combined with a recovery in the organic storage results. Revenue in Europe was even stronger, up 11% in constant currency, driven by an acceleration in core compute and storage with double-digit growth in Germany and Scandinavia. Asia-Pacific grew almost 20% in constant currency, delivering solid core server revenue with double-digit growth in Japan, China and Australia."

I have highlighted the references to both core compute and core server as this is truly the center of HPE's product portfolio; everything else is more or less a satellite product circling this core. There are customers of Aruba networking products just as there are customers of 3PAR storage where HPE has no core compute presence, but in general, core compute continues to be the engine driving HPE forward, and part of core compute is the core server business that includes NonStop. And for the NonStop community, any time there are "new logos" added to the NonStop user base it is cause for celebration. Last year we heard that there were more than a dozen such new logos and, looking ahead to what we can expect to see this year, I am hopeful that this number climbs to twenty-plus new logos.

Returning to the earnings call, Antonio Neri talked more about core compute, telling the audience of financial analysts how, “In compute, revenue grew 11% year-over-year, driven by growth in core ISS, High Performance Computer, Hyper Converge and Synergy.” Furthermore, “In datacenter networking, revenue grew 27% year-over-year, driven by strong execution within our install base. Our differentiator in Hybrid IT is our software defined services led approach, which helps customers navigate through the transformation challenges.”

At some point, with the work that has been done to support vNS, I see no reason why virtualized NonStop workloads will not be supported by Synergy. I don't want to get too far ahead of myself and I have been cautioned about appearing too optimistic, but the truth is that with Synergy there are tremendous scale-up opportunities, whereas NonStop, as we know so well, is the master of the scale-out architecture. But from where I sit, any enterprise that deploys large-scale Synergy chassis / servers will have room enough to spare for virtualized NonStop workloads to leverage the many physical servers in the Synergy chassis.

And my point here is that, from just a single pane of glass, managing operations may be a whole lot simpler with everything out there running on racks of servers as this or that workload! Of course, why wouldn't you prefer to provision a vNS environment in support of your most important applications – those mission-critical, real-time, transaction processing solutions!

Perhaps most insightful of all in the earnings call were the comments made by Antonio Neri when at one point, he took time to explain to the financial analysts how, “As you know, we have defined our strategy, built our portfolio based on the market disruptions we’re all experiencing, driven by the digital transformation and the resulting explosion of applications and data. At HP, we help our customers extract critical insights from their data to accelerate business outcomes. We enable our customers to harvest, store and analyze the critical data that improves customer experiences, drive new business models and increases employee productivity.”

When it comes to helping customers extract critical insights: in my conversations with Ric Lewis, HPE
Senior VP / GM, Software Defined & Cloud Group, during his coffee talk at HPE Discover Madrid, he reiterated that when it comes to data analytics, HPE would be turning to partners for support. I find this response extremely encouraging, as the more HPE drives its core compute / core server programs, the better positioned it will be to channel its energy into partnerships for other components rather than trying to go it alone and play catch-up. That isn't to say that at some point HPE will not make an investment or acquisition in this space, but for the time being, I see this as a further example that times are a-changing within HPE – and for the positive!

Perhaps the most important aspect of the information coming from HPE, as it wraps up its first quarter under new leadership, is that HPE does have a vision and is executing to a strategy. HPE executives have said it many times and I think the message is now being understood by financial analysts and the IT industry – empower the edge and IoT, and simplify the transformation to hybrid IT. It may be a stretch but, for those who have been around NonStop for many years, there is an appreciation that NonStop can definitely play a significant role within hybrid IT and, given the right situation, even at the edge.

After all, it was Gartner that suggested the edge would eat the cloud – a reference to the eventual demise of the data center and yet another stretch, but worth considering all the same. Yes, in time a big edge will be as big a target for NonStop as the cloud but, at the very least, we have options in NonStop and vNS, and the NonStop community relishes being given a choice!

HPE Discover 2018, Las Vegas, is now four months away and I am thinking about what we can expect to see from HPE. I have already made it known to those sorting out the program that I would like to see more references to how HPE compute is being used in the real world, going so far as to suggest setting up an aisle in the “transformation zone” showcasing real customers with real applications running on select products from the HPE product portfolio – including NonStop.

It has been good to see NonStop at the center of Blockchain / DLT demonstrations – excelling at what it does best: transactions, leveraging NonStop SQL – and as pleased as I was to see its presence, I want to see even more! So, who knows – maybe we will see a greater presence of the "applied HPE" alongside the "pure (technology-centric) HPE" we see today.

By June, we should have seen the results for Q2 2018 and, like many of those I have talked to over the past couple of days, I am hopeful that the momentum continues for HPE. Just as importantly, I am hopeful that the momentum continues for NonStop. I am often asked whether acceptance of NonStop has turned a corner and whether the plans many current NonStop users have for migrating to other systems have lessened of late; while I don't have access to real numbers, anecdotally I sense we have turned that corner with NonStop.

But if you really want to see the full picture then make sure you plan on attending HPE Discover 2018 and perhaps we will both get the opportunity to hear the full story for ourselves! Until then, let’s just all sit back and appreciate just how well HPE has performed out of the gate under the leadership of its new CEO, Antonio Neri!