Thursday, September 21, 2017

Look for the signs – NonStop community has much to watch!

Visions, missions, tactics, strategies – how does it all come together for NonStop and just how many paths is NonStop going down that will prove advantageous for the NonStop community?

Reading one of my favorite car magazines (and the source of many anecdotes for posts and articles already published), Road and Track, I came across a feature story on the Nürburgring – a magical place for all car enthusiasts and where Margo and I had the good fortune of laying down laps during a track-only day (no buses or motorcycles, just Porsches, Ferraris, Lotuses and Nissan GT-Rs).

Our time on the ‘Ring was back in late 2010 and you can read more about it in the post Respect the ‘Ring! of September 23, 2010, to our social blog Buckle-Up-Travel. The Road and Track article, Master’s Program, informs its readers of how, “At most tracks, it’s not uncommon for a first-rate professional racing driver or even a talented club ‘shoe to be up to speed after as few as a dozen laps.” Then adding, “Not at the ‘Ring. You won’t come close.” More revealing yet, “Like any track, there are references that a driver can use to let them know exactly where they are, where the car should be positioned, and what the driver should be doing. (The) difference here is, there are easily 10 times as many of those reference points spread out over 14-plus miles.”

The Corkscrew at Laguna Seca, for instance (depicted above and with Margo behind the wheel), is always exciting as it has few signs forewarning you about what comes next. All of which is to say that signage and reference points are important on race circuits and I am sure there are other activities where much the same can be said. Here in Colorado we understand and appreciate how downhill racers know every inch of their run down a snow-covered mountain. When it comes to computers, sometimes it’s more difficult to spot signs and catch important references. It also requires a completely different set of skills, most of which take a very long time to develop.

For starters, what does strategic really mean and who really does set strategy? Where do we turn for confirmation that something is relevant, important, and even strategic? Is it our favorite vendor or our primary vendor? Is it an industry analyst or simply a journalist we know well? It turns out to be a combination of all of the above that somehow we need to weight and prioritize based on our experience with all parties. Not all information is the same nor does it necessarily reflect the real story – you just have to check the tangible reference points that are all around us.

For the NonStop community it is proving to be a very difficult time. The initial euphoria when news first broke that NonStop wasn’t being spun out to Micro Focus as part of the offloading of non-core software assets gave the community a real boost. NonStop isn’t non-core after all. Furthermore, it had escaped being relegated to a couple of part-timers determined to keep the torch burning brightly, albeit with somewhat limited funds, as had happened to OpenVMS. No, NonStop was proving to be special and something HPE valued more highly than other assets it had accumulated over time. However, being core or, at least, not part of the non-core software portfolio isn’t translating into being strategic and that is where the concerns lie – if it is core but non-strategic, what does that mean?

Again, it’s time to check our reference points as we look for familiar signs that will help guide us with the decisions we have to routinely make. Do we continue to invest in NonStop systems? Do we bring additional applications to our NonStop platform? Do we invest in middleware and infrastructure products that will help lift our productivity? Will we even spend money sending staff to events and summits held in support of NonStop? Just as importantly – whither Mission Critical?

The answers today are not that clear and with the passage of time following the completion of the various spin-merges and offloads, the silence coming from HPE is, as they say, deafening. Could it be that it is a simple case that HPE doesn’t know? Could it be that HPE has as yet not focused its own resources on looking at where NonStop could take them? Might it even be a case that NonStop has been forgotten? We often talk about “best-kept secrets” but when it comes to NonStop it isn’t productive to think this way, as it’s a bit of a cop-out reflecting the lack of any real marketing effort being expended by HPE.

At this point I am observing a paradox. NonStop isn’t strategic for HPE – just look at the HPE vision and you will see references to simplifying hybrid IT, empowering the edge / IoT and then offering services in support of both of these pursuits. As HPE adds additional products and indeed vendors to better support this vision and, essentially, map it into a workable strategy, there’s no specific reference to NonStop. Quite the contrary, look at Project New Stack, unifying management across on-premises systems and private, hybrid and public clouds, as well as SimpliVity and Synergy, and there are no references to NonStop whatsoever.

Indeed, it would take considerable new development to bring NonStop into play and have it participate in any fashion – and yet, look at the strategy behind these products and you could see real advantages of having NonStop participate. No, NonStop isn’t strategic and isn’t making a contribution to HPE’s strategy at this point. And yet, it is still holding down a place in the HPE slideware pulled out on every occasion – so what gives?

There is no shortage of evangelists supporting NonStop nor is there any shortage of references highlighting exceptional use-case scenarios. We often look to the finance and telco industries for references to NonStop but in reality, NonStop clings to handholds in almost every vertical from manufacturing, distribution and entertainment to, yes, even healthcare. With all the security issues we read about almost daily and the outages of critical infrastructure components we hear about almost as often, you would think a lot more attention was being paid to reliability but it just isn’t the case. The assumption is that all systems, no matter who the vendor is, provide 24 x 7 capabilities and that any outage is just an unfortunate incident most likely attributable to human error. A situation that will always be present as long as there is human engagement involved! So why even consider a NonStop system and all the added expense that would involve?

However, evangelism alone isn’t going to cut it – we need a lot more and we need it from all stakeholders. The simple truth is that NonStop will do very well without being strategic. When you consider other manufacturers you will see there are numerous categories for their products – strategic, tactical, utilitarian, general purpose and yes, even halo! Point is, champing at the bit to push NonStop to the top of the list of strategic products isn’t really going to get us anywhere – what is more important is returning to the topic of HPE having a strategy for NonStop. And this is where the signage is better illuminated.

Being strategic and having a strategy are very different pursuits and are oftentimes unrelated. We expect HPE to be able to articulate a strategy for the company as a whole – we all want to be better informed about where it is headed. But when it comes to individual products in the portfolio, strategies will vary by marketplace, region and even personalities. Just take a look at the reference points surrounding NonStop today and you will get a good sense of not only where HPE sits with NonStop but where you are relative to a strategy for NonStop, best described as a holding pattern.

There are two very distinct and yes, quite separate NonStop paths today and they contain numerous pitfalls for users and vendors alike. What may work when following one path will not necessarily work when following the other. HPE will continue to make NonStop systems – its strategy for NonStop systems is to continue to follow the Intel roadmap that will give us the latest x86 chips as Intel regularly ups the performance. The key piece of the strategy here is that the marketplace for these NonStop systems will be existing users with, at best, only a marginal increase in the NonStop user population.

In developing virtualized NonStop (vNS), however, HPE is taking NonStop down a completely untested path and here’s where the problems arise – we have no reference points. We are approaching blind corners and cresting hills with no visible signage as to where the road is headed. Having said that, there isn’t a firm strategy for vNS as yet and for the NonStop community it appears that the strategy for vNS remains a work in progress. And yet, vNS is anything but in a holding pattern – it may turn out to be the only form of NonStop that becomes strategic but for that to happen, vNS may not remain solely resident within HPE.

When looking at these paths down which NonStop is headed, there may not be ten times the reference points we need to check but there is still much to observe. Hopefully, with the NonStop Technical Boot Camp fast approaching, we may learn more even as these reference points consolidate and we come to see just a simple sign. For now, what matters most isn’t whether NonStop is strategic for HPE but rather, does HPE have a strategy for NonStop and will that strategy embrace the NonStop we know today even as it lifts the veil on the NonStop we all see coming. NonStop systems will likely be with us for a couple more years but as for vNS, well-executed and well-supported by HPE, its influence may be felt for decades to come!

Saturday, September 9, 2017

HPE – styling itself to better suit NonStop users

There has been a lot written about HPE of late following the completion of the spin-merges. For the NonStop community, the distractions of the past are now behind HPE and this is good news!

It was back in the mid-1970s when I established my first company, Uralla Holdings Pty Limited. Uralla (and yes, another Australian Aboriginal name) sat atop a rather complex setup that included such companies as Advanced Technology Computing as well as a family trust. At the time, it made a lot of sense and to paraphrase the Eagles’ Joe Walsh, yes, “I had accountants (and I paid) for it all!” While not set up to minimize taxes it did help, even if in the long run, when all was said and done, I paid a lot of taxes indirectly through leases and purchases as well as the income I shared with others; strange as it may sound, in Australia this was common practice among almost all IT professionals.

When I elected to provide analysis and to write articles and blog posts, creating a company seemed a natural thing for me to do and as a result, Pyalla Technologies was born. It was only recently that I caught up with how similar Uralla and Pyalla sound, so I am now left to wonder what happened in my youth to so influence me. Whatever it was, the formation of this LLC gave me a vehicle that allowed me to establish business relationships with my clients. Without exception, I am always asked about my company structure and given the nature of my business model, my clients have all come to appreciate this style of operation and the reduced workload this places on them and their staff.

Today, there are many options open to companies and we are hearing a lot more about just how they go about their business. When it comes to HPE, for instance, how many of us had even heard of the term, spin-merge, before it was included in the announcements by HPE that it was divesting itself of its services and software business?

In doing so, and adding weight to what exactly a spin-merge implied, HPE retained a majority, albeit very slim, interest in the partner it worked with as part of the spin-merge. Furthermore, it unlocked considerable value for then-HPE shareholders even as it lessened its own need to fund business entities where margins were always under pressure. Smart? The move certainly encouraged the financial community to stay invested in HPE but among technologists, spin-merges of services and software seemed a little odd given how many industry analysts predicted that future growth for technology companies was going to be fueled by precisely these types of businesses.

However, for customers, HPE’s stated objective of offloading non-core software together with underperforming services while focusing on infrastructure and platforms gave them a strong sense of wellbeing as they became better informed about the strategy and vision of the new HPE. Simply being customer-driven (focused on the needs of its enterprise customers) looked good in slideware but when it came to specifics, proved to be too broad a vision; being customer-driven, as we all now know, is all well and good if you can articulate where you are headed.

Otherwise, no matter how big a vendor you are, you can quickly flounder and be tugged in every direction, influenced solely by the customer you last visited. Technology customers absolutely want to know where their vendor-of-choice is headed and just how effective it will be in translating strategy and vision into real products targeting their needs!

Until we heard references being made to spin-merges, it wasn’t a common term. Spin-offs into new companies that are in turn made up of two or more parties that merge to create a new company place huge demands on the participating companies to be fully in sync with the mission outlined for the new company. With HPE it would seem that they indeed got the ingredients right as I haven’t read one negative response.

For a while there, Dell was suggesting HPE was headed in the wrong direction but soon after, Dell offloaded its own software business mostly, it turns out, to trim the servicing of the debt it took on to go private and then to buy EMC. And the debt the new Dell EMC is carrying is quite a sizable millstone around its neck despite whatever positive news emanates from the company – who plans to grow a company by first taking on billions of dollars in debt?

No, the path HPE has successfully negotiated has a lot more upside than many pundits first predicted. Even the recent slide in the stock price was little more than a market adjustment recognizing the additional value HPE shareholders would realize – as I recently advised my clients, simply the market catching up with the changing world of HPE as the spin-merge announcements became reality.

As one publication described it, “this plunge wasn't due to some unfavorable business development. Rather, it was driven by the completion of a planned spinoff and merger of the enterprise computing specialist's non-core software assets with U.K.-based Micro Focus International.” The result was that, “HPE received a $2.5 billion cash payment, and its investors received 0.13732611 American depositary shares (ADSs) of Micro Focus for every HPE common share held as of its record date on Aug. 21, 2017. Based on Micro Focus' closing price at $29.50 per share yesterday that equates almost exactly to the per-share ‘decline’ we saw in HPE stock today. In short, the transaction is effectively a zero-sum game for HPE investors, who now hold roughly 222 million Micro Focus ADSs representing 50.1% of its fully diluted ordinary shares.”
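The arithmetic in that quote is easy to verify with a quick back-of-envelope sketch. The ADS ratio, closing price and total ADS count come from the quote itself; the implied HPE share count is my own inference, not a figure from the article:

```python
# Figures as quoted: Micro Focus ADSs distributed per HPE common share,
# and the Micro Focus closing price on the day of the "plunge."
ads_per_hpe_share = 0.13732611
micro_focus_price = 29.50  # USD

# Value of the Micro Focus stake delivered per HPE share -- this should
# roughly match the one-day "decline" in HPE's stock price.
implied_decline = ads_per_hpe_share * micro_focus_price
print(f"Implied per-share value: ${implied_decline:.2f}")  # ~ $4.05

# Roughly 222 million ADSs were distributed in total, which implies
# about this many HPE shares outstanding at the record date (~1.6 billion).
hpe_shares = 222e6 / ads_per_hpe_share
print(f"Implied HPE shares outstanding: {hpe_shares / 1e9:.2f} billion")
```

Which is the point the publication was making: the share-price drop was simply the value handed to shareholders in Micro Focus stock, not a loss.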

In this case, and particularly for the NonStop community, HPE’s execution proved beneficial to all and gives HPE a better opportunity not only to fund its strategy and vision but to reshape itself, product-wise, to better suit the needs of its customers. With the removal of the distractions that otherwise occupied the attention of many HPE executives, they can now focus on middleware and platforms and in particular, the transformation to hybrid IT and the empowering of the edge.

As Whitman said, in her briefing to analysts following the publishing of the Q3 results as well as the completion of the spin merge with Micro Focus, “With that transaction now behind us, we have the right strategy and the right portfolio to succeed in today’s environment. Our strategy is clear, to make hybrid IT simple, to power the intelligent edge and to provide the services to make it all happen. It is based on what customers are asking for today and where we see the market moving.”

Yes, NonStop is part of that right portfolio (of products) to succeed in today’s environment and this is something the NonStop community shouldn’t lose sight of – whether a user or a vendor, NonStop isn’t “non-core” but rather, another right product for HPE! Of course, we would all like to see a higher profile for NonStop develop as well as to see it occupy more of center stage at major HPE events. However, this may indeed be developing. As much as I champion the attributes of availability and scalability at every opportunity, perhaps the real differentiator for NonStop, particularly as NonStop develops momentum atop virtual machines, is NonStop SQL/MX (NS SQL).

With many of the unique features once only a part of Neoview together with a more complete “Oracle compatibility” option, HPE’s continued investment in NS SQL isn’t happening by accident. HPE has a need for NS SQL internally and it sees NS SQL support of DBaaS as something customers will benefit from as they move to hybrid IT. Furthermore, the work being done in support of blockchain – the port of R3 Corda to NonStop – is layering the distributed immutable ledger right on top of NS SQL. And for all the right reasons – combine NonStop and NS SQL with blockchain technology and you have a very serious solution that absolutely better suits today’s customers’ needs.

HPE is reshaping itself from a business perspective and on a personnel basis as well. There will be a lot of organizational changes taking place before we head into the start of HPE’s next financial year, some of which have already leaked to the press involving the sales organization. HPE is also reshaping itself from a product portfolio basis and the good news here is that NonStop made the cut – not just NonStop but critical middleware including NS SQL, TMF as well as TS/MP (formerly, Pathway). What’s not to like about all of this?

From my perspective, very little. It will always come back to how well HPE can execute but if the style of the company now being projected continues to shape it to better suit the needs of its customers, the NonStop community should be the last to complain. Look around – when it comes to the right strategy and the right portfolio, isn’t it a big change from the past to know that it includes NonStop? HPE’s spin-merge may have the heads of other technology firms spinning but for the NonStop community what it really represents is the creation of a strategy and vision focused solely on where HPE needs to go and with that, as a community, aren’t we the happier to read of this?

Thursday, August 31, 2017

Changing world of communications …

Our desks may be cluttered with phones, laptops, tablets and PCs and there is still a need for cables but we have come a long way with comms and for the NonStop community, we have so much choice when it comes to media focused on NonStop!

Where do we go for news today and how do we keep in touch with colleagues and business partners worldwide? Once, these two matters were very separate topics – we had phones, email, faxes and newspapers.  When the fax machine arrived near my office at Nixdorf Computers, back in the early 1980s, I could see my skills as a telex machine operator go out the window (and after all the time I put into becoming a skilled operator) but on the other hand, I started faxing everyone I knew! When it came time to consider working for Tandem Computers I heard about its Tandem-based email system and I was hooked. So much so that on my first day in the Tandem offices at North Sydney, I accidentally sent an email to everyone in the company only to recoil in shock as it contained some sensitive material. So I cancelled the email …

And that was when I first came to know Ron LaPedis and I must admit, it wasn’t a pleasant conversation. Overloading the Tandem and essentially bringing the Tandem email system to its knees wasn’t the best way to introduce oneself to the company. However, we have come such a long way since then. There is Skype, Hangouts, WebEx, GoToMeeting and a bunch of other options, including private deployments of the above, to choose from when we wish to give a webinar. It’s almost as if we have replaced a series of simple communications with complexity on another level, so much so that when I am called upon to participate in anything online, the ten minutes prior to the call sees me testing and preparing!

For nearly all my business life at Tandem Computers, I was involved one way or another with Systems Network Architecture (SNA). For nearly three decades this had been the premier communications solution for enterprises globally and even as IBM owned the architecture and dictated the protocols and supporting services, you couldn’t argue against SNA’s popularity. Furthermore, while it had been the presence of email that proved to be the enticement that sealed the deal when it came to joining Tandem Computers, there was also the industry recognition that, apart from IBM itself, Tandem had the best SNA product offerings of any vendor. Cool! However, it was only a matter of a few years later that I had to oversee the reduction of investment in SNA offerings on Tandem as the company shifted R&D to the emerging TCP/IP, and this led to numerous traumatic encounters within the Cupertino campus.

However, it has been the industry’s acceptance of TCP/IP to underpin all of the communication we depend upon today that has led to the explosion of material we can now access. Talking with my good friends at IR about the work they have done to fine-tune Prognosis to better monitor Voice over IP (VoIP) and the IP telephone networks that are blossoming within enterprises worldwide is but one example of what comes with rationalizing networks. It was the fodder for many articles published in the 1980s, when predictions about the integration of voice and data were all the rage, but now, some three decades later, it’s almost a given that communications will be carried over IP whether data, voice, video or simply text messages.

When I turn to my iPad, among the very first things I do is check on the news published by The Sydney Morning Herald even though it has been a very long time since I last resided in that beautiful city. No matter where you live or where your interests lie, you can always check the news as publications worldwide have gone digital and can be universally accessed from any smartphone or tablet.
Of late I have written about anniversaries as at this time of the year they just seem to happen. My last post to this blog was about wrapping up ten years of posts to this social media communications channel. A little earlier I had started my ninth year of posting to the Pyalla social blog that of late is attracting an even wider audience as some of the topics covered in the posts touch on current business issues. Without the ease with which you can create and publish content, I doubt I would have stayed the course and I would perhaps have had few options open to me when launching Pyalla Technologies.

I still recall the reception I received at one of the last European ITUG Summits held in Brighton, alongside the English Channel, where I gave a keynote presentation only to have colleagues come up to me with counts of how many times I referenced blogs! Ten years ago there were few blogs being published with none focused on NonStop. How the world has changed and there should be no excuse for any member of the NonStop community not to be well-informed about everything that is taking place in the world of NonStop as many such blogs exist today!

Getting balanced information is always an issue – is the source of the story legitimate? Was the information cross-checked? Fortunately, the NonStop community has such a vocal core of NonStop adherents that it is very rare indeed for anything misleading or simply dead-wrong to make it into the public domain. It is this active peer review process that ensures what we read is both accurate and useful. Even opinion columns, of which this blog is typical, attract attention very quickly when the storyline strays a little too far off topic or descends into murky waters with little understanding of the subject matter. So too is the case today with the digital publication, NonStop Insider.

Hard to believe, but with the upcoming issue the first full year of publication will be complete with no signs that there will be any letup in the topics being covered. With NonStop Insider Year 1, Issue #12, about to be wrapped up, sights are already set on subject matter for inclusion in the first issue of Year 2. Much of the credit has to go to those NonStop community members who have worked with the editorial team on submissions. While almost everything makes it into an issue, a careful eye is kept on the directions some contributors take – after all, the overriding theme for NonStop Insider is NonStop – and when it comes to communicating anything at all on NonStop, there is a very clear bias favoring positive storylines.

Credit too must be given to the management team at TCM Solutions. They were present from day one when the decision was made to produce NonStop Insider and without their logistical support together with technical knowhow, NonStop Insider would still be an idea sketched on a whiteboard. At every step of the way they have been supporting this digital publication and many of their ideas are now captured in recent issues – and there are more good ideas about to be implemented. As it turns out, working with TCM has proved to be a whole lot of fun as well, so that too helps with putting in the effort to ensure a quality publication reaches the NonStop community. The idea for “NonStop CEOs have their say” was first articulated by TCM and we are pleased with how well this has been received among CEOs of NonStop vendors.

The world of business communications has changed considerably through the decades – who remembers waiting anxiously each month for their copy of Datamation to arrive in their mail slot (and yes, do we even have mail slots anymore)? To say we have gone digital may not tell the whole story. When it comes to providing “useful and important information” we have come so far when we look back at how little information was available to us just a couple of decades ago. For a highly specialized product like NonStop to have attracted the variety of publications that it has of late – who remembers wishing to read anything at all about NonStop in the “papers”? – is remarkable but pleasing all the same. And if as yet you haven’t taken a look at NonStop Insider or become a subscriber, then it is not too late to check it out for yourself at:

Sunday, August 20, 2017

Celebrating a full ten years of posts …

Back on August 20, 2007, coming off my time as ITUG Chairman, I had no idea that I would be still writing posts for this blog. But here it is, ten years later, and the fundamental message of NonStop has changed very little …

I have to admit that this post follows quickly on the heels of another post just published, a matter of a few days ago. But tradition remains an important factor; I have written a post to this NonStop community blog, Real Time View, every August 20 for ten full years. Well, almost every August 20 – checking the archives, there were a few posts that missed the exact date. For me however, precision is important. It’s rather binary when you think about it. It either happened or it didn’t. I was reminded about this during a conference call with a well-known industry analysis organization when the discussion turned to 24 x 7 and the way it was treated; all systems today run 24 x 7, don’t they?

Right now it feels as though I have been working 24 x 7 as I finally return to my office after almost ten weeks on the road. Living and working from our company command center through thick and thin, as we experienced outages of unimaginable proportions, it is good to be sitting behind a desk in a room where there is a sense of permanency. This office isn’t going anywhere nor is my desk and my computer. However, to suggest to you that I have been working around the clock, 24 x 7, just to complete this post, may hold some truth given this topic of 24 x 7 has been on my mind for a long time, but in truth, there has been a lot of down time. The office may be a permanent feature of the home but I will still come and go as business needs dictate and that will rarely ever be 24 x 7.

The almost blasé dismissal of the value proposition of 24 x 7 was a wake-up moment for me. An epiphany of sorts, actually! Have we arrived at a point in time where business continuity isn’t all that important anymore? Across the NonStop community, the attention we give to the value that comes with being able to run our mission critical applications continuously, 24 x 7, ensuring our customers can reach the applications they need any time they want, is sacrosanct. After all, with all that we know today about NonStop, would it still interest us if it didn’t run 24 x 7? In my last post, New home, new office; time to take stock … I extol the virtues of scalability and as much as I like the value scalability brings to any discussion of a system’s value proposition, for most stakeholders in the NonStop community it also plays a role in aiding availability, as you do not need to stop your operation to add capacity!

In being blasé about 24 x 7 this week, the analyst with whom I was working did think about the value proposition 24 x 7 provides, but then added that, truthfully, it is not feted as strongly as it used to be; even now, just as with real time, the actual time implications for both 24 x 7 and real time are treated as similar. A few minutes here, perhaps an hour once in a while, surely doesn’t matter in the grand scheme of things. No down time means just that – no down time. Likewise, continuously available sounds a lot like permanent availability and readers of posts to this blog will have come to see that I have made reference to permanent availability many times in the past. Marketing types don’t like this term as it has a sense of no wiggle-room about it and seriously, how can you be sure nothing will fail? However, that’s the whole point – NonStop users keep purchasing NonStop systems because they provide permanency to where their clients will always be able to interact with them.
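To put “a few minutes here, perhaps an hour once in a while” in perspective, it helps to run the numbers on what each availability level actually permits per year. This is a simple sketch of the standard “nines” arithmetic – nothing here is specific to NonStop or to any one analyst’s definitions:

```python
# Downtime budget per year implied by each availability level ("nines").
# A few minutes here and an hour there quickly blows through all but the
# most forgiving of these budgets.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960 minutes

levels = [(0.99, "two nines"),
          (0.999, "three nines"),
          (0.9999, "four nines"),
          (0.99999, "five nines")]

for availability, label in levels:
    downtime_minutes = MINUTES_PER_YEAR * (1 - availability)
    print(f"{label}: {downtime_minutes:,.1f} minutes of downtime per year")
```

At five nines the entire annual budget is on the order of five minutes; a single casual hour-long outage consumes more than a decade’s worth of it. That is the gap between “highly available is good enough” and no down time meaning just that.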

Like my office and the sense of permanency it affords me, NonStop systems can be relied upon to be there even if my own access to the applications is casual at best. However, what I am pushing back on these days is terms like “near real time,” as well as “highly available is good enough!” The origins of real time were in missile detection and interception and there could be nothing vague about that. If you didn’t respond in time and neutralize the threat, you simply ran out of time. Nothing real time about pressing the launch button too late to protect a determined target! Again, this is all well-known to the NonStop community and yet, if industry analysts are becoming blasé about this key attribute of NonStop systems and the solutions that have been designed to run on NonStop, then others will follow and it is most definitely the beginning of a slippery slope, leading to enterprises no longer associating any value whatsoever with continuous availability.

A couple of years back, IDC did a pretty good job defining different availability levels, with availability level four (AL4) representing the very pinnacle of availability. The only vendors to have systems awarded AL4 were HPE with NonStop and IBM with special Parallel Sysplex configurations of the mainframe. The inclusion of IBM really was for appearances’ sake – IDC just didn’t want a level of availability that only one vendor had achieved. On the other hand, simply building a Parallel Sysplex configuration for a reasonable cost that supported AL4 has proved elusive for almost every mainframe installation. Adding more components – more moving parts, if you like – to an architecture that was never intended to support operations 24 x 7 is hazardous at best and more often proves foolish at worst. If AL4 could only be awarded to vendors with systems that provided, out of the box, the level of continuous availability IDC was describing, then NonStop would be the only participant.

There is a lot more to this story, mind you. Transformation to hybrid IT is all the rage right now, with HPE among the biggest supporters of hybrids – whether hybrids of servers and private clouds or even hybrids of clouds from different suppliers. However, the word hybrid is pretty scary when you get right down to it and there is no doubt that vendors like HPE recognize the need to shine the spotlight on simplifying the transformation. Stated as straightforwardly as I can – hybrid IT is at best really hard to do and at worst, a path to disaster. For most enterprises the expectation is that vendors will be able to take what they have today, combine it with the products they would like to have in the future and, after a fashion, mold them to where they appear as a single system, albeit a cluster of sorts. Won’t happen – if the Parallel Sysplex aspect of the mainframe has taught us anything at all, adding availability to a system inherently not architected to run 24 x 7 is as much about smoke and mirrors as it is about fulfilling the wishes of individual system programmers who want an opportunity to give it a try.

Hybrid IT is going to amplify the importance of availability. What it will not do is substitute for 24 x 7 – that is, give data center managers a way to opt out of providing services on a 24 x 7 basis. “We are working with new technology here, mind you, and it is a little flakey, so yes, expect outages,” will be the likely response from a data center manager. “We have negotiated with our cloud vendor for backup with redundant compute and storage capacity, but we are still working out the bugs and vendors aren’t willing to sign SLAs that demand zero downtime. Can’t be done!” If you read blog posts, as I do, and work with other writers and analysts from across the industry, you will be familiar with how they almost always dumb down the issue of availability, and it really all comes back to this: no, they can’t tell you how 24 x 7 – permanent availability, if you like – could be done. The upshot is that they are all discounting the value proposition of 24 x 7, informing all they come in contact with that, as with real time, there is considerable wiggle room, so don’t be too hard on yourselves if you fail from time to time.

As I look back on ten years of blogging and on all that has been addressed in these posts, in a way it seems strange that I have to write a post about availability and how no other vendor can provide 24 x 7 support for any of their systems. As surprising as it may seem, after these ten years we as a community really haven’t done a very good job of extolling the virtues of this really important contribution to the value proposition of a solution. And yet, I am one of those who simply does not like to interact with an application – or, more likely these days, an app – and find that it is not available for some reason. Hopefully I will still be posting for another ten years and I just have to ask – will I be writing yet another follow-on post to this one, and will we still find an industry as blasé about availability as some analysts are today?

Friday, August 18, 2017

New home, new office; time to take stock …

Revisiting the topic of scalability reminds us about just how flexible NonStop systems continue to be and with enterprises preparing for ever increasing data volumes, having an out of the box system that readily scales will prove to be a big bonus ...

After two plus months on the road, meeting with many members of the NonStop community as we drove almost 6,000 miles in the company command center, it’s a good feeling to once again be able to say you have a permanent address. At least, as permanent as anything can be these days, given the nature of the business we are in – content generation can be done almost anywhere, but even so, having a residence that could best be considered temporary certainly came with many distractions. On the other hand, we did stop in at N2TUG as well as 2017 HPE Discover and we did get to meet with clients in southern California.

To directly answer questions I have received of late, our new residence is just a little north of the Niwot address where we used to reside. In fact it is just to the east of Ft Collins, home to a number of executives and managers of HPE, and East Harmony Rd is now just two exits further up the highway. In many ways we feel as though we are closer to Cheyenne, Wyoming, than we are to Denver, Colorado. However, having scaled down considerably ourselves, the issue of scale has come up a lot of late and for the NonStop community, scalability remains a huge positive when it comes to why applications are still running on NonStop. Perhaps not so much scaling down, as in our case, but much more likely scaling up, naturally!

You can play scales on the piano all day and you can scale new heights as a mountain climber, but when it comes to computing, dimensions are involved: you can scale up and you can scale out. The shared-nothing architecture of NonStop is perhaps the best example of a computer architecture that supports almost unlimited scale-out properties and, with as much focus as there is today on cloud computing, it is this scale-out attribute that generates much of the discussion.

Expressed as simply as I can, you want to embrace cloud computing to have no practical limits on the resources available to you – isn’t that what NonStop has been providing for decades? Scale up and scale out are familiar terms for NonStop users but they remain very important attributes of NonStop systems. “When you think of scalability as an architect, you must consider both vertical and horizontal scalability,” NonStop consultant Dean Malone wrote in a recent email exchange. “Vertical scalability is the ability to add more and more processors and storage to a given server whereas horizontal scalability is the ability to add more servers to expand processing capacity.”
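Malone's distinction can be sketched in a few lines of Python (names hypothetical, threads standing in for the independent processors of a shared-nothing system): vertical scaling makes one worker bigger, while horizontal scaling adds workers.

```python
from concurrent.futures import ThreadPoolExecutor

def handle_transaction(txn_id: int) -> int:
    # Stand-in for real business logic.
    return txn_id * 2

def scale_out(num_workers: int, transactions) -> list:
    # Horizontal scaling: add more workers, each sharing nothing
    # with its peers; capacity grows with the worker count.
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        return list(pool.map(handle_transaction, transactions))

# Growing capacity means raising num_workers, not swapping in a bigger box.
results = scale_out(4, range(1000))
```

In a NonStop system the "workers" are whole CPUs added to a running system, but the shape of the change is the same – capacity grows by adding peers.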

During the stock market melt-down on Black Monday, October 19, 1987 – the largest one-day crash, with the market losing more than 20% of its value in a single day – trades were flying thick and fast, with volumes way outside anything previously forecast for the New York Stock Exchange. The Exchange was running on NonStop at the time, so NonStop engineers simply kept inserting more and more CPUs and the NonStop system just kept on running. Not a hitch as it scaled up to handle the huge volumes of the day. The Exchange only called a halt to trading when the continuous display replicating the ticker tape of old simply couldn’t keep up with all the action. We often reference “elasticity of provisioning” when it comes to clouds but in this case, the NonStop engineers inside the data center demonstrated such elasticity more than thirty years ago!

Almost unlimited scale-out capabilities? Anyone who has sat through a recent presentation by the OmniPayments team has seen referenced an instance of OmniHub spanning 3,000 CPUs that make up a single system and, to the best of my knowledge, no IBM mainframe of this size has ever been sold. For a brief period of time, however, IBM did talk about initiating thousands of Linux workloads on a single System z. I recall it was back when the z10 was first announced and the number of Linux workloads was 4,000+, but I am sure someone will give the exact number as it could have been much higher, like 40,000! The fact remains, however, that there is a difference between running real-world applications and providing simple demonstrations. As OmniPayments’ CEO, Yash Kapadia, so often reminds me, should OmniHub ever be offline for any reason, the multimedia message services for one entire country would come to a grinding halt.

Stock exchanges and telco systems aside, the industry is about to head into essentially uncharted waters – the user experience (UX), or what is increasingly being called the customer experience (CX) which, according to one source I checked, is the “product of an interaction between an organization and a customer over the duration of their relationship,” is placing new demands on our systems. Call it the intersection of transaction processing with real-time analytics, and suddenly your requirements for both compute power and storage become significant unknowns. Just how do you scale, and is the cloud the only solution that comes to mind?

In his latest post to the Striim blog, The Rise of Real-Time Data: How Striim Helps You Prepare for Exponential Growth, Striim Cofounder and CTO, Steve Wilkes, writes about these upcoming demands on our systems. “The predicted huge increase in data sources and data volumes will impact the way we need to think about data. The key takeaway is that, if we can’t possibly store all the data being generated, ‘the only logical conclusion is that it must be collected, processed and analyzed in-memory, in real-time, close to where the data is generated.’” For the current users of NonStop systems there is no escaping the message here. To remain in business and be competitive, analytics will be playing an increasingly important role in how you support CX, so enterprises will need systems that can truly scale to meet these new demands for compute power and storage.
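Wilkes' point – process in memory, close to where data is generated, rather than store everything – can be illustrated with a toy sketch (entirely hypothetical names, not Striim's API): only a bounded window of recent events is kept, yet an up-to-date analytic is always available.

```python
from collections import deque

class StreamingAverage:
    """Keeps only the last N events in memory, never the full stream."""
    def __init__(self, window_size: int):
        self.window = deque(maxlen=window_size)

    def observe(self, value: float) -> float:
        # Old events fall off the window automatically; nothing is
        # written to storage -- in-memory, real-time analysis.
        self.window.append(value)
        return sum(self.window) / len(self.window)

monitor = StreamingAverage(window_size=3)
for amount in (10.0, 20.0, 30.0, 40.0):
    rolling = monitor.observe(amount)
# rolling reflects only the last three events: (20 + 30 + 40) / 3 = 30.0
```

The same shape – bounded state, continuous results – is what lets analysis keep pace with data that arrives too fast to land on disk first.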

Scaling has also been on the mind of Paragon’s CEO, Jim Perry. Testing may not be a topic we always associate with scale, but when it comes to our applications and their automation, the additional work anticipated in support of CX will necessitate the development of even more applications, and these additional applications need to be subject to automated testing. And much more, according to Perry: today it is all about continuous integration, continuous delivery and continuous testing in order for your enterprise to deliver quality code faster. For Paragon, the challenge that comes with scaling has as much to do with people as with tools, as Paragon sees the enterprise looking to scale the business in a manner that determines whether or not it stays in business.

Scalability has always been an important attribute of NonStop systems. From the earliest days of Tandem Computers it was all about availability, scalability and data integrity and nothing really has changed. Perhaps we associate data integrity with security and that’s OK, but as for availability and scalability, perhaps the NonStop user is being best served because of the inherent architecture of NonStop that allows almost unlimited scalability. While it is true that no enterprise has come close to hitting the size limits of a NonStop system when clustered to the maximum, knowing that already there are users out there with 3,000 processors in their system has to be encouraging.

Scalability may indeed be the “special sauce” of NonStop today, even if at times it is overlooked, overshadowed by the availability aspects of NonStop systems. However, every enterprise wants to grow its business in a manner that simply doesn’t break the bank and for NonStop users, it has always been a simple process to add more CPUs – indeed, the architecture of NonStop assumes that the system has been populated to the max with CPUs and simply checks to see whether they have arrived; cool! As for the scaling down of our home and office, well, we may be living smaller but we are still thinking big. And that is something that will not change any time soon!

Tuesday, August 8, 2017

Our colorful lives!

Being on the road as much as we have of late has given us time to talk to members of the NonStop community and it’s been a very interesting, indeed colorful, ride so far …

There is nothing like seeing the first signs of home appearing on the horizon. With all that has transpired these past couple of months, crossing the state line and seeing the tourist-promotion billboard declaring entry into Colorful Colorado was welcomed, indeed. I am going to leave it to postings to other sites to cover all that has transpired while on the road these past two months but suffice to say that it was most interesting, with challenges coming at us every day. Sounds an awful lot like a typical day inside any enterprise data center as no matter the plans, there is always something unexpected taking place.

It has been on these travels that I have come to truly appreciate the need to have multiples of just about everything. Suddenly primary and secondary are no longer abstract terms as even the best prepared plans go right out the window when unexpected situations develop. For me the big item I have been covering is the real world experiences of those enterprises embracing hybrid IT; in particular, those enterprises that include NonStop in their plans.

So, let me cut to the chase – I have yet to come across any enterprise that has gone down this path, or talked to data center managers tackling the finer details of having mixes of traditional and cloud environments supporting anything apart from pilots and prototypes. So, has something gone awfully wrong, and are the reports coming from industry analysts overstating reality?

To be fair, I wasn’t expecting to come across a sizable number of NonStop users kicking off such projects and yet, I have become somewhat curious about what is happening out there in today’s data centers where NonStop has an established presence. Members of the NonStop community that I have met through the years have proved to be colorful individuals, not slow to form opinions and yet, it would seem many of them are looking at their peers within the community running applications on NonStop to see who blinks first. Or not at all!

The promise of NonStop participating in hybrid IT on the other hand has huge potential upside; not just for those enterprises already convinced of the merits of NonStop but to the industry as a whole. Who wouldn’t want better availability, scalability and yes, security? My point here is that all too often the messages we develop in support of a product or service have expiration dates but when it comes to the key attributes of NonStop there are no expiration dates. Have you ever seen a use-by date on a NonStop system?

It isn’t just the colorful characters we come across within the NonStop community that help set directions but the general mood of the community as a whole. It’s as if collectively we have all been holding our breath, waiting for a breakout season for NonStop to happen soon. Don’t get me wrong. You don’t have to be colorful to make a contribution. What I am really trying to convey is that after all these years there are still many individuals who champion NonStop on every occasion.

For them there is absolutely no reason on earth why any mission critical application shouldn’t turn to NonStop as the go-to, first-choice system for deployment. However, what is taking place is a leadership transformation across the vendor community as many of the baby boomers begin to ease up and take a back seat to new leaders. Whether CEOs or Managing Directors, have you noticed just how many new folks have come to help companies we know so well over the last year or so?

If industry and financial analysts can be accused of overstating reality, and the progress being made by major enterprises toward embracing hybrid IT is more snail-like than rabbit-like, should we start to think that perhaps we have, yet again, come up with a fantastic technology and a terrific answer for cost containment at a time when the problem isn’t all that bad? Are we at a moment in time when it all sounds good but no, it’s not for us just yet?

In other words, has the “Gucci smart brigade Data Center” not struck as loud a chord with data center managers as the “Bonds t-shirt Data Center?” (For non-Australians, you can substitute any local brand; Fruit of the Loom easily applies!) Point is, does a glamour solution hold up when all we need is for some pretty basic bread-and-butter solutions to work better? Wouldn’t it be nice if we could easily connect our silo-ed applications in a way that gave us a hub-and-spoke model where the hub never failed?

Oftentimes I fret over just how caught up we can become in what’s new rather than what works; with exceptions, rather than what is universal. NonStop at its heart is anything but exceptional when it comes to what it runs on – it was always conceived as a solution to a problem of quality and the unexpected. Back in the 1970s no hardware vendor would consider talking about their infallibility but rather talked up the RAS support they had, the wonderful service staff they had on hand and even the backup sites you could turn to if your data center went south!

And NonStop took all this into consideration and continued to work, 24 x 7, no matter what fell apart. In other words, it was designed to keep working when all else failed. And there is a reason why NonStop talked up take-over versus fail-over. NonStop wasn’t going to wait for a disaster but rather, stepped in to head off disaster once it detected normalcy was being compromised.

There has been a lot of news of late about the success or otherwise of cloud vendors. IBM is talking up its revenues from cloud computing offerings even as Microsoft has detailed how with Azure it is now making more money from clouds than Amazon, even as industry and financial analysts endorse Amazon as the leader.  It's hard not to apply the label “colorful” to someone like Amazon’s CEO, Jeff Bezos, who for a brief couple of hours had more money than Microsoft’s Bill Gates just a few days ago, but for the enterprise, all the color drains from their visage once we get past thoughts of clouds for anything other than select IaaS opportunities.

Perhaps the biggest potential game changer is the availability of Azure for the enterprise to power its own on-premise cloud. This is where I see IBM having difficulty competing over the long haul, as enterprises equate cloud with x86 and there really isn’t any enthusiasm in support of Power. Given this, and that Azure will support Hyper-V, that is a little more Gucci than it is Bonds.

According to what Microsoft is telling enterprises today, “The Hyper-V server role in Windows Server lets you create a virtualized server computing environment where you can create and manage virtual machines. You can run multiple operating systems on one physical computer and isolate the operating systems from each other. With this technology, you can improve the efficiency of your computing resources and free up your hardware resources.” Key here, for the NonStop community, is running multiple operating systems on one physical computer and isolating (the impact, good or bad) the operating systems from each other. With this, the prospect of firing up a virtualized NonStop platform becomes rather obvious – and can be given serious consideration whether it is Azure on-premise or Azure driving a cloud somewhere.

Here is where it is all headed. Those colorful individuals championing NonStop within their enterprise are going to arrive at the data center one day to find their beloved hardware is out on the loading dock. In its place, aisle after aisle of servers racked to the ceiling with nothing left to do other than to fire up virtualized NonStop as just another Azure-supported operating system; the difference being that, unlike its peers running alongside of NonStop, it can keep running the mission critical applications the enterprise depends upon when they all falter for one reason or another, possibly just an automatic upgrade pushed to them by none other than Microsoft. Yes, it will be NonStop that will make Microsoft look good running some very important applications.

Calamities are always present and disaster is just a few keystrokes away. In our world of highly networked systems, it’s good to know we will be able to turn to one operating system that knows when to take over and not just wait for failure. It’s very much unique to NonStop and it’s hard not to be impressed with this model developed at a time when all hardware was flaky and unreliable. Kind of sounds very much like today, doesn’t it? And here’s the good news, virtualized NonStop on Azure and Hyper V is on the roadmap and likely being tested in the labs of NonStop development even as I type these last few words. Colorful? Well, I can’t think of anything more colorful than seeing a lot of red faces among enterprise executives when they see what today’s NonStop can achieve supporting their business. 

Friday, July 28, 2017

Moving forward - transformation and virtualization makes testing of the business logic even more critical

When we think of virtualization and the coming of clouds and as we consider all that may be involved in transforming to these hybrid combinations incorporating the traditional with the very new, how often does the testing of our applications come to mind?

There have been times these past few days when events have reminded me of practices and disciplines that dominated our discussions in former times. I had the misfortune of breaking things and working with insurance companies, and I was left without access to more modern methods of communications – to the point where I was asked if I could possibly find a fax machine so I could receive a fax.

It was in the early 1980s when the vendor who employed me back in Sydney, Australia, installed a fax machine in my office and I no longer had to take the long walk over to the telex machine, where I would then spend hours preparing a paper tape for transmission back to my head office in Richmond, Virginia. In many ways it was a sad occasion, as I had really mastered the telex machine, and yet it was progress, given how easy it became to transmit not only the written word, but pictures, charts and graphs as well!

Fast forward to today and the power of the mobile phone is undeniable. We can communicate with anyone we want to, at any time, about anything at all. In a couple of recent conversations the talk has led to consideration of whether the mobile phone was about to fade from the scene, to be replaced by even more spectacular technology and whether or not we were entering essentially an era of magic. How else can you explain away the knowledge so many businesses have about everything we do? And yet, even with the most advanced forms of communication there will still be a need for apps to support inquiries as well as many different models used for purchases and other financial transactions.

Point is – we still write code and as much as AI continues to advance there remains a need for humans to remain very much involved in stringing together the logic that drives decisions for success. When we talk about clouds we talk about the elasticity of provisioning that addresses both the needs we have for data storage and business logic. But here’s the rub – we are working diligently to be able to store vast amounts of data even as we continue to write logic practically unchanged from how we did it in the past albeit, a lot more quickly of course.

Let me take you to an earlier time, decades ago. In fact, many decades ago, to when we first started coding the computers that marked the beginning of our adventure with computers. I was recruited by IBM on the campus of Sydney University at a time when I was becoming very bored with academic life. At the time I wasn’t really aware of the implications of my decision to participate in a series of tests the University sponsored but it was only a matter of months before I found myself on another campus; this time, it was the operations center for a steelworks in Wollongong, Australia.

The year I was recruited was the southern hemisphere’s summer of 1969 and my first day on the job was in 1970, so effectively I have been looking at code for almost five decades. And the fundamentals haven’t changed, just the timeframes. Ambitions? Well, my first job was to develop applications in support of a new steelworks that was being built but along the way, I was tinkering with the operating system, as for a period of time the IBM mainframes the steelworks purchased didn’t have enough main memory to run any IBM operating system, so we pretty much came up with our own – just a couple of lines of IBM 360 assembler code together with a bunch of macros.

Timeframes? Well this is where the dramatic changes can be seen, perhaps more so than when it comes to chip power and Moore’s Law. I was writing just one application a year – perhaps a little bit more. I grabbed a coding pad, wrote assembler instructions for the logic I was pulling together to solve a business problem. Pages and pages of assembler code that was then submitted to the data entry folks who oftentimes took a week or more before they returned to me the coding pages along with a box of punched cards. I kept running these decks through the assembler until I got a clean assembly at which time I took the object deck and began to test.

As a matter of practice, we always left an addressable piece of storage (of about 100 to 250 bytes) that, if my logic went awry, I could branch to, throw in a couple of correcting statements, and return to the mainline code. Ouch – yes, almost every production application was supported by a series of supplementary corrective cards that steered the logic back to where it needed to be without having to reassemble the whole application or, worse, send the coding pages back to the data entry team.

Testing? For my applications, which supported what we called the “online application,” I would often resort to booking solo time on the mainframe and dialing in “single cycle” so I could manually step through each instruction and watch the results via the console display lights that changed with the execution of each instruction. Productivity? Wow – I could debug my programs more quickly than others working with me who preferred to go home at the end of the day. The company had enough programmers to complete the implementation of the new application for the steelworks about to be commissioned, so it seemed reasonable to function this way. Looking back at what we did all those years ago, I am not surprised that applications often stopped; rather, I am surprised that any of them ran successfully at all!

Now let me fast forward to the practices of today – attempting to develop and test applications, and then ensure they are maintained, the same way as we did all those decades ago is not only impossible but runs contrary to the always-on, always-connected 24 x 7 world we live in as we remain tethered to our mobile devices plugging away at the latest app. Languages and development frameworks have changed. We don’t simply write code; we pull code from multiple sources and practically assemble a program that in turn is just a part of an application designed to address a specific business need.

Providing defect-free applications at a fair cost, particularly when these applications have to accommodate today’s multi-vendor and hybrid environments even as they have to be aware of the many regulatory and compliance mandates for each industry, needs something a whole lot more sophisticated than simple access to a system that can be set to single cycle! I was reminded of this only a few days ago when I had a conversation with the folks at Paragon Application Systems. These are the folks who have developed the premier testing solution for the payments industry.

“It’s all about continuous integration, continuous delivery and yes, continuous testing,” I was told by Paragon CEO, Jim Perry. Integration, delivery and testing is a never ending cycle, for the life of the program and application, performed in a seamless manner whereby the state of the program or application is always current and correct. “The growth of our global economy has created payment systems that have grown too intricate and change too quickly for any organization to risk deployments without frequent, comprehensive regression testing. No company can hire enough people to manually perform the testing necessary in the time available within a release cycle. Automation of the software build and delivery cycle, as well as test execution and verification is required.”
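A much-simplified sketch of what automated regression testing looks like in practice (the authorization rule and names here are purely illustrative, not Paragon's product): the same suite re-runs on every build, so a change that breaks existing behavior fails fast rather than surfacing in production.

```python
import unittest

def authorize(amount: float, balance: float) -> bool:
    # Stand-in payment-authorization rule (illustrative only).
    return 0 < amount <= balance

class RegressionSuite(unittest.TestCase):
    # Re-run unchanged on every build: the "continuous testing" leg
    # of continuous integration and delivery.
    def test_sufficient_funds(self):
        self.assertTrue(authorize(50.0, 100.0))

    def test_insufficient_funds(self):
        self.assertFalse(authorize(150.0, 100.0))

    def test_rejects_zero_and_negative_amounts(self):
        self.assertFalse(authorize(0.0, 100.0))
        self.assertFalse(authorize(-5.0, 100.0))

if __name__ == "__main__":
    unittest.main(exit=False)
```

The build pipeline, not a person, runs the suite – which is Perry's point: at today's pace no organization can hire enough people to perform these checks by hand within a release cycle.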

Manually perform testing? Grown too intricate? For the NonStop community there have always been concerns about the business logic bringing a NonStop system to a halt. And for good reason! Fault tolerant systems have been designed to keep processing even when facing single points of failure, but oftentimes, poorly implemented and tested business logic can get in the way! Unfortunately it’s about to get a whole lot worse, as testing not only has to ensure the application is defect-free but also that the underlying platform, now being virtualized, is configured in a way that lets NonStop applications continue being NonStop.

We have virtualized networks and we have virtualized end points, and this has helped considerably with automating our test processes, but now the platform itself is being virtualized and this is a whole new ball game for many enterprise IT shops. And this makes the need to have something like Paragon on hand even more important – we stopped manually checking anything long ago, so we cannot start now. In the coming months, as we continue to look at the transformation to hybrid IT, to virtualization and to software-defined everything, I am planning on devoting more column inches to testing, as all too soon our inability to thoroughly test what we are turning on in production could bring many a data center crashing down.

If you haven’t yet looked at Paragon then you may want to visit the web site and download a couple of papers, as I have to believe, for those of you in the NonStop community who are only vaguely familiar with how testing has changed – particularly when it comes to testing for payments solutions – it may very well be an opportunity to rethink just how comfortable we are with the processes we have in place today. And to wonder, too, how anything worked at all back in the days when it was all performed manually!

Tuesday, July 18, 2017

When things go horribly wrong …

How a few cents of wire lying unnoticed on the road can cripple a vehicle as large as an RV; we continue to value availability and it’s time to double down on the benefits of NonStop!

The most essential attribute of NonStop today is its fault tolerance. Availability is as highly valued as it has always been and yet, there are many parties advocating that it really isn’t an issue any longer. Push apps and data into the cloud – public or private, it matters little at this point – and the infrastructure on offer from cloud providers ensures your apps and indeed your data are protected and available 24 x 7. But is this really the situation, and should CIOs contemplating a future for their IT centered on cloud computing be blind to the many ways apps and data can be taken offline?

Unintended consequences! We read a lot about such outcomes these days and it is a further reflection on just how complex our interdependencies have become. Push a button over here and suddenly way over there, something just stops working. They weren’t even on the same network, or were they? Throw malware onto a Windows server looking after building infrastructure and suddenly, the data on a mainframe is compromised – who knew that they shared a common LAN? Ouch – but it happened as we all know oh so well.

For the past two months, Margo and I have been fulltime RVers. That is, we are without a permanent address and have been living out of our company command center. We have driven to numerous events, all of which have been covered in previous posts to this blog. Our travels have continued and this past week we headed down to Southern California to meet with a client, and the trip took us through Las Vegas. In the heat of summer in the deserts of Nevada we hit temps exceeding 110F. Overnighting at our regular RV site, we found a collection of fluids pooling underneath the RV and sheer panic set in. After all, this is our home; what had happened?

It turned out that we had unknowingly run over wire mesh, strands so thin as to be invisible to the naked eye. But those strands of very thin wire managed to wrap themselves around the drive shaft of the RV, where they became an efficient “weed whacker” – you know, those appliances we often see being used to trim hedges and lawn borders. In a matter of seconds our own drive shaft powered these thin wires such that the result was multiple shredded hydraulic lines and air hoses – who could have imagined such innocent strands of wire could be so disruptive, or that they could completely cripple a 15-plus-ton coach in a matter of seconds? Yes, unintended consequences are everywhere and for the most part lie outside any of our plans and procedures, where detection of the event comes too late.

It is exactly the same with all platforms and infrastructure, on-premises or in the cloud, or even hybrid combinations of both! If you don’t design for failure – even the most far-fetched – then you are destined for failure. It is as simple as that. In my time at Tandem Computers we often referred to an incident that led to Tandem systems always being side-vented and never top-vented. The reason was that, at an early demo of a NonStop system, coffee was accidentally spilt on top of the machine, effectively stopping the NonStop. I am not sure of the authenticity of this story and would welcome anyone’s input as to the truth behind it, but it does illustrate the value of experience. No designer would have predicted that coffee might be spilt on a system the very day it was being demoed, but for Tandem engineers it led to changes that exist to this day.

Experience has led to more observations, which in turn have generated more actions, and this is all part of the heritage of NonStop and, in many respects, part of the reason why there aren’t any competitors to NonStop today. You simply cannot imagine all of the unintended consequences and then document them in their entirety within the space of a two-page business plan. But design for them you must, and as I look at how the platforms and infrastructure being hawked by vendors selling cloud computing today depend solely on the value proposition that comes with redundancy (which is all they ever point to), my head hits the table along with a not-too-subtle sigh of disbelief. Redundancy plays a part, of course, but it is just one part of negating potential outages; availability needs so much more. But at what cost?
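A quick back-of-the-envelope sketch makes the point. The numbers below are purely illustrative (not drawn from any vendor’s specifications): redundant servers look wonderful on paper if you assume failures are independent, but any correlated failure mode – a shared network, a common software fault, an operator slip – caps the whole result.

```python
def independent_availability(per_server: float, n: int) -> float:
    """Availability of n redundant servers, assuming failures are independent."""
    return 1.0 - (1.0 - per_server) ** n

def with_correlated_failures(per_server: float, n: int,
                             correlated_outage: float) -> float:
    """Same cluster once a shared failure mode is acknowledged:
    the correlated outage dominates, no matter how many servers."""
    return (1.0 - correlated_outage) * independent_availability(per_server, n)

# Three 99% servers: roughly 99.9999% available on paper ...
print(f"{independent_availability(0.99, 3):.6f}")
# ... but a shared dependency that is down just 0.1% of the time
# drags the whole cluster back to roughly 99.9%.
print(f"{with_correlated_failures(0.99, 3, 0.001):.6f}")
```

The arithmetic is elementary, which is rather the point: piling on redundant banks of servers attacks only the independent-failure term, while the unintended consequences described above live in the correlated one.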

The whole argument for cloud computing today revolves around greatly reduced IT costs. There is an elasticity of provisioning unlike anything we have experienced before but, more importantly, given the virtualization happening behind the scenes, we can run many more clients on a cloud than was ever conceived as possible back when service bureaus and time-sharing options were being promoted to CIOs as the answer to keeping costs under control. With the greatly reduced costs came the equally important consideration of greatly reduced staff. And this is where the issue of unintended consequences really shows its face. Experience? Observations? Even plans and procedures? Who will be taking responsibility for ensuring the resultant implementations are fully prepared to accommodate elements that fail?

There is a very good reason why pilots run through checklists prior to takeoff, landings, changes of altitude, etc. Any time an action is to be taken, there are procedures that must be followed. When I turn on the ignition of the RV, a checklist appears on the digital display, and for the same reason pilots have checklists – too many bad things can happen if you miss something, and I have managed to inflict considerable damage to our RV through the years when I forgot to follow all the items on the checklist. And there are best practices in place today at every data center that have been developed over time, based yet again on experience – so as we next talk about availability while heading to clouds, who is preparing the next generation of checklists?

It is pleasing to me to see the efforts that OmniPayments is putting into providing cloud computing based on NonStop. For the moment it is solely providing payments solutions to select financial institutions but even now, the number of clients opting to run OmniPayments as SaaS, rather than investing in platforms and infrastructure themselves, sends a very powerful message to the community. Don’t discount the value of NonStop as it has been demonstrated through the ages – get to virtualized NonStop (vNS) as quickly as you can and go champion within your enterprise that yes, you now have the best possible solution, one that can survive even the strangest of unintended consequences. It’s just what NonStop was designed to do and it keeps on doing it.

You run on NonStop X, so you will run on vNS. There is much that can go wrong with traditional physical systems just as there is much that can go wrong with clouds. Simply going for more clouds and leaving it to redundant banks of servers isn’t the safety net any enterprise should rely upon, so take it to the next level. Let everyone you know hear how NonStop is taking its most prized attribute, availability, high and wide into the clouds! After all, these clouds are every bit as vulnerable to failure as any hardware built in the past, and NonStop knows failures when it encounters them and just doesn’t stop!

Sunday, July 9, 2017

Growth is not optional; it is a must!

NonStop keeps on going no matter what system failures may arise – but is this enough? What follows here is purely speculative on my part but is worth raising with the NonStop community. And yes, any and every comment is more than welcome …

Travelling around Colorado these past few weeks, it’s so clear just how much growth has occurred. Lakes and reservoirs are full to overflowing – more than one state park we have visited had pathways closed due to local flooding – grasslands are standing tall, and trees and bushes are a brilliant green everywhere you turn. Spring rains have continued into the summer, with afternoons subject to intense thunderstorms most days. I can recall that in years past such storms would form at this time of the year but the rain rarely reached the ground; this year there have been more late-afternoon storms than I can remember.

Living in a motor coach makes us a little susceptible to inclement weather but so far, we haven’t suffered anything more than a fright from an unexpected thunderclap. The rainfall that continues well into summer is something we are pleased to see, of course, and the growth these rains have helped produce has turned the Colorado front ranges greener than I have seen in a very long time. It may all prove problematic later in summer if everything dries out, as we have seen more than our fair share of wildfires as summer ends, but until then this extended period of growth does a lot of good for the state. Any reader who follows my Facebook and Twitter accounts may have already seen what I am talking about but, just as a reminder, I have included one of the photos above.

For the past week I have been working with vendors on the next issue of NonStop Insider, which should appear later this week. What really has struck me is the number of references to growth. Where will it come from? Does the business value proposition of NonStop remain as strong as it once was, or will NonStop struggle to sustain double-digit growth year over year? The theme of this issue of NonStop Insider was transformation – you will see numerous references to transformation in the articles that were submitted – but does transformation lead to more sales? Questions like these have come up more than just a couple of times this week, and they made me rethink some of the answers I had previously given my clients when asked this question.

The business value proposition is as real today as it ever has been – it’s all about availability after all. Out-of-the-box, supported by middleware and utilities that are all part of an integrated stack, from the metal to the user interface! From the perspective of any user developing an application, there is always concern about what will happen if something breaks and knowing that your application will continue to function even as all around it may fail is not something that can be lightly discounted. It’s really a very valuable attribute with an almost “holy grail” consideration about it – just talk to those now building their very first application and watch their reaction when you say you work with a platform that survives failure and just keeps on running. Like the famous “Energizer Bunny!”

However, for most of us, we have heard all this before. We know the value of NonStop, but it’s a strange development environment with legacy tools and some very strange ways of doing things – what’s this about checkpointing? What’s this about redundant storage? Isn’t it all very expensive, and don’t you have processors that simply don’t do anything until they are needed? Recently, I have heard just about everything being addressed except for the most important aspect of all – out-of-the-box, it just works! No, you don’t write NonStop programs; you simply let NonStop run the programs you write. You have a rich choice of languages and development environments – NonStop supports it all, but with the addition of fault tolerance. It not only just works, it keeps on working. The Energizer Bunny will eventually stop – its battery will run down. It may last a lot longer than other batteries but, as a power source, it will eventually fail. Not so with NonStop!
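For readers new to the platform, that checkpointing question deserves a moment. What follows is a toy sketch of the idea – not the actual Guardian process-pair API, and all the names here are my own invention for illustration: a primary process ships its state to a backup before acknowledging each operation, so that when the primary fails, the backup answers in its place with no work lost.

```python
class ProcessPair:
    """Toy illustration of the process-pair concept: the NonStop stack
    performs this kind of checkpointing for you, beneath the application."""

    def __init__(self):
        self.primary_state = {"balance": 0}
        self.backup_state = {"balance": 0}   # mirror held by the backup CPU
        self.primary_alive = True

    def apply(self, amount: int) -> None:
        self.primary_state["balance"] += amount
        # Checkpoint: copy the new state to the backup before acknowledging.
        self.backup_state = dict(self.primary_state)

    def primary_fails(self) -> None:
        self.primary_alive = False

    def read_balance(self) -> int:
        # Takeover is invisible to the caller -- the hallmark of NonStop.
        state = self.primary_state if self.primary_alive else self.backup_state
        return state["balance"]

pair = ProcessPair()
pair.apply(100)
pair.apply(-30)
pair.primary_fails()
print(pair.read_balance())  # the backup answers: 70
```

The point of the sketch is the division of labor: on NonStop, the application developer writes `apply`-style business logic in an ordinary language, while the platform supplies the checkpointing and takeover machinery.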

So, yes, we have the susceptibility to failure covered. But growth? To paraphrase the Apollo space program, for NonStop growth is not optional. In some respects we have to be very thankful that HPE has given NonStop every chance to build a larger population of users. There has never been serious consideration of discontinuing the NonStop program, despite what rumors you may have heard – there are just too many blue-chip customers for HPE to turn them out onto the streets. As witnessed last year at HPE Discover, from the CEO on down there is a strong appreciation for the value proposition NonStop brings for even the most fastidious of users. However, today’s HPE looks nothing like the company that existed just a few short years ago. Now HPE is looking to all of its products to start producing the type of growth any new company demands.

But here’s the rub: there is opportunity for growth with NonStop, for sure, but not likely in its present form. Surprised? Well, you shouldn’t be. It’s been coming for a very long time – NonStop is going to wash over every product, and every HPE system will contain some elements of NonStop as HPE looks to differentiate itself based on availability. A stretch? Dreaming? Perhaps this is taking it a little too far – but then again, is it? Imagine for a moment that every distribution of software HPE builds has a little of NonStop flowing through it, and applications running on HPE as a result just keep on running; would that, of itself, be the source of future growth for NonStop?

Stepping back for a moment, you will find nothing of this in any NonStop roadmap presentation. For now, the NonStop development team has so much on its plate and, as fast as it is moving, there is still so much more to do. However, with the judicious placement of a couple of knowledgeable developers within other projects, this could all change in a heartbeat. Yes, NonStop still contains a certain amount of special sauce, but it is NonStop’s special sauce and it is NonStop development that has the recipe. Let a couple of chefs loose in other kitchens and stand back – NonStop is no longer just a product but a philosophy, and that’s not diluting the business value proposition; on the contrary, it certainly would create growth.

You just have to look at NonStop in an entirely different light. It’s not best practices, although best practices have always been a factor in having NonStop applications be as available as they are. Furthermore, it’s not rocket science, as much as there are those who think you need a team of specialists to keep NonStop running non-stop. The fear of a graying population of lab-coat-wearing engineers is just way overblown. Our graying population is retiring but, guess what, there is a developing talent pool of much younger folks that I am not prepared to discount or suggest won’t cut it!

Earlier I used the phrase “NonStop is going to wash over every product” and it wasn’t by accident, as this phrase too came up in discussions this week. Think of the incoming tide pushing further up the beach and spilling onto rock formations until the tide eventually covers everything. This is exactly one vision I have of NonStop and, while I may be the only one predicting such a possibility, HPE has everything to gain in letting the NonStop tide roll in – indeed, let’s go one big step further. Let’s make NonStop open source! Let’s integrate NonStop with OpenStack. Let’s shake it all up – and let’s just see who follows NonStop. I know that this is highly problematic as well, but why not?

Enterprises will still want a managed software distribution, as they continue to abhor the current model of fixes and patches arriving by the hour. Stability and predictability – a new release every summer is something they can handle, but not every hour. So, NonStop becomes a special distribution of OpenStack built to meet these requirements of enterprise IT execs. Think SUSE, Red Hat, even Debian – supported distributions are important and have found markets. Put this down as another potential benefit that NonStop brings to the party – availability, scalability and yes, predictability!

In today’s transforming world of IT, there is no such thing as staying within the lines and keeping inside the box. It’s cliché but it’s also very true – to succeed, think differently. While much of what I have written above will probably not come to pass – it’s a stretch to ever think HPE would make NonStop open source – in order to grow and become the best software platform on the planet, HPE has to think of doing the unexpected! The dramatic! And I think it can do just that, and it may be coming very soon. Move over, Energizer Bunny; not only will NonStop keep on going but it will do so long after your battery has died!

Friday, June 30, 2017

Shipping containers are being repurposed; could we see something similar happening with NonStop!

Seated at a bar made up of repurposed shipping containers prompted me to think of NonStop. Just as shipping containers can be repurposed, with the arrival of Virtualized NonStop can we see NonStop being repurposed as well?   

It was way, way back in 2009 when I wrote a post to this blog site about how HPE was packaging up shipping containers loaded with blades and peripherals and marketing them as a specialty POD – a “Performance-Optimized Datacenter” – and to read more about this, check out the post of July 2, 2009, Common standards, uncommon advantages! I have no real idea of how successful this marketing campaign proved to be; I heard only that the program was discontinued sometime last year.

And the reason? HPE was moving beyond blades; today it is all about hyperconverged infrastructure, hybrid IT and yes, Synergy. Perhaps I was too quick to dismiss Synergy as just software; as I dug into HPE Synergy, the first thing I noticed was that it includes a whole new approach to hardware packaging, one in which blades have become legacy technology. Ouch!

I was reminded of the POD this week as I was writing an article, to be published shortly in Banking Tech, a publication and web site I support these days. The article focuses on shipping containers and, in particular, the considerable re-purposing of shipping containers going on right now. Here in Boulder, inside the local mall, there is a bar made up of just two 40’ shipping containers lying side by side, slightly offset, out of which a variety of local craft beers is served. There’s another one, too, set up a little earlier in Estes Park and, by all accounts, they are both proving to be very popular.

However, setting up a bar in a container is probably not the best example of repurposing of shipping containers. Want to develop a mini hydro-power station in a box? Well, the city of Melbourne has done just that with a shipping container. Want to build a swimming pool that you can take with you when you move? Well, that too is being done with shipping containers – 20’ and 40’ – proving ideal for the purpose.

Need to erect a shopping mall quickly following a natural disaster? Well, check what Christchurch did with shipping containers and the unique shopping experience that they created in the heart of Christchurch following a massive earthquake. After five and a half years it may be winding down but it certainly provided a unique shopping experience. Want low cost housing in the east of London? Well, Container City I and II may be the trick. Shelter for our homeless veterans then yes, it’s being done already in Orange County, California – new housing made from shipping containers.

Point is, the standards that were embraced in the 1960s across the transportation industry not only proved to be disruptive but spawned a number of unrelated industries not even imagined at the time. Unfortunately, standards too are subject to becoming legacy, and of late it seems to be happening at an accelerated pace. The big deal here is that the demise of the IT expert or specialist is happening within enterprises across all industries, leaving decision making in the hands of new-age managers prone to jumping aboard the next technology wave, driven entirely by what they just saw in an airline magazine.

There was a time not too long ago when we called this McDonald’s Architecture. You know the arches that symbolize McDonald’s; take another look: a bell curve immediately followed by another bell curve, symbolizing how quickly architects oftentimes jump from one product or technology curve to the next. And then, of course, the next one, ad nauseam!

With the amount of marketing weight HPE is throwing behind simplifying the transformation to hybrid IT, we are already beginning to see vendors looking at their options, with some of them beginning to view the process of transformation as an open invitation to move in different directions. Consider, for example, the homogeneity of HPE’s simplification efforts – for enterprises, the investment in Synergy isn’t going to be minor. It’s a big undertaking. However, how many enterprises out there will be all-HPE? So yes, NonStop, with its history of being a player among many within the data center, has thrived when there has been considerable heterogeneity present.

Could the roll-out of something like Synergy benefit from NonStop for simple connectivity to the rest of the enterprise’s IT infrastructure? Unfortunately, once you start thinking along these lines it’s hard to see how this could be simplified – but surely, there are standards? Ethernet comes to mind and, with Virtualized NonStop (VNS), there is RoCE – RDMA over Converged Ethernet. The Synergy solution does not support RoCE-enabled mezzanine cards today, but I can imagine Synergy does have a robust roadmap that may include RoCE support in the future.

And yet Synergy, with its frame that will fit into any regular, industry-standard 19” rack, “is designed to accept multiple generations of compute, storage, fabric and management modules. It is built on industry standards so it fits easily into existing and new data center environments while preserving the ability to leverage existing storage and connectivity resources.” There it is again, standards. But what standards? Well, it is not about blades anymore, that’s for sure, as today blades have become legacy and are part of what we now refer to as traditional computing.

“The direct-connect midplane delivers 16.128 Tbps of bandwidth and is future-proofed with a photonic-ready design that could exceed bandwidth requirements for the next decade.” OK, so photonic-ready is picking up on some of the deliverables from The Machine project, from what I could see while on the exhibition floor at HPE Discover. But midplanes are scary things – and rarely provide standards, as each vendor tailors them to its own needs – will a Dell or Lenovo 2- or 4-socket “board” plug into the Synergy midplane and work? I haven’t seen anything to suggest that likelihood.

So it’s not blades per se but rather modules, or resources, the items you need today to better support “a broad range of operational models such as virtualization, hybrid cloud and DevOps.” If you do want to know more about HPE Synergy then check out the easy to follow write up that describes Five Steps to building a composable infrastructure with HPE Synergy.

As I look at the containers being repurposed, the various transformations that result are kind of mind-boggling. Who knew? However, there is genuine excitement surrounding the repurposing of NonStop – taking it to the virtual world. Again, who knew? I am often asked about the marketing effort HPE is putting behind NonStop, particularly now that the new NonStop X systems are shipping. Well, no surprises here – there will be very little marketing effort. Rather than trying to siphon off marketing dollars for NonStop X, the NonStop team is moving NonStop closer to where the marketing dollars lie.

That’s right, VNS isn’t just a cool piece of technology but rather a really big jump out of traditional systems to where it can finally gain some of the bright spotlight being directed at all things related to simplifying hybrid IT. VNS is aimed at both today’s NonStop users and at building a new NonStop user base. And yes, looking at it as I tend to do of late, it represents a very effective (and quite legitimate) repurposing of NonStop!

There is nothing on the VNS roadmap suggesting anything is under way to bring VNS to Synergy. For the moment, the priorities of the NonStop development team lie elsewhere, and to some extent I covered that in my previous post, Right time, right place – NonStop lights up the horizon! However, as noted earlier, there are always unintended consequences from actions taken by a major vendor such as HPE, even as there are unrelated “industries” likely to appear – for the most part, I expect them to surface among the service providers looking to better integrate Synergy with whatever else may be in the data center, and where VNS may be of assistance.

Another source may indeed be solutions vendors well versed in NonStop who see opportunities to better differentiate their solutions by capitalizing on both Synergy and VNS. But even with Synergy not making an appearance on any NonStop roadmaps, VNS or otherwise, I cannot imagine that at some point, their paths don’t cross. Perhaps it will come down to whatever happens within HPE’s own data center where NonStop has a presence already.

In many respects, the age-old truism that the good thing about standards is that there will always be plenty of them quickly comes to mind. Shipping containers started out at 20’ only to have 40’ containers added into the mix. Now they come in almost any size, with US containers following a different standard to those of Asia-Pacific and Europe. We now see containers that are 8’ 6” high even as we see others that are 53’ long.

Synergy may be unique to HPE even as NonStop provides unique capabilities but ultimately, for it all to work in HPE’s favor, Synergy may need VNS as much as VNS might benefit from Synergy. And not solely for the marketing dollars (although they will help)! Redundancy is highlighted in the Synergy documentation but, as we all know, today’s enterprises need more than just redundant elements. In wrapping up this post, it is worth noting (as was covered in that previous post to this blog) that anything that is manufactured will break, and it is NonStop’s ability to recover from breakages that will ensure its ongoing viability within the data center – and yes, for the next decade, too!