Sunday, October 15, 2017

NonStop future – not all what it seems!

A week spent interacting with folks far removed from NonStop and yet well-informed on all things IT; as our time wound down, there was little to argue about – NonStop continues to reward those who step back to look at the big picture!

If you have been following Margo and me on Facebook you will know that we have spent the past ten days on the road. This time, the plans we put in place earlier this year didn’t take into account the dates for NonStop regional user group events so no, our travels didn’t include participating in events back on the east coast. Instead, we were off to the southwest where our destination was Albuquerque, New Mexico. We had wanted to catch up with some ex-Tandem folks we knew as well as find time for talks with other professionals and with prospective new clients. Always a fun thing to do for Margo and me!

While I am focused very much on all things NonStop, from time to time it is good to get the perspective of others not quite as close to HPE and NonStop as we are. To this end, a couple of items that came up proved consistent no matter who we talked to – there is a very big disconnect between what the industry, the press and even the analysts describe and what is actually happening within enterprise data centers. Ignoring for the moment the small number of enterprises attracting headlines with their pursuits, for almost every other data center it remains “business as usual,” with nearly all of them continuing to upgrade their traditional systems even as they are being told about the financial benefits that come with deploying hybrid configurations that include clouds.

Apparently, the only way a cloud finds an audience within enterprise IT is when a solution under consideration comes with an option to be configured within a cloud, either as a private cloud on-premises or as a vendor’s managed cloud. One example becoming better known among the NonStop community is the work OmniPayments has done with its product offering, OmniCloudX, but apart from that ambitious program – and yes, the success it is already achieving – not much else involving NonStop is happening in the cloud. Does that mean NonStop is standing still? Quite the contrary: we continue to see NonStop rack up new wins for NonStop X systems, even as some of the biggest NonStop users on the planet are electing to bring in NonStop X systems to meet their future transaction processing needs.

I have always marveled at artists painting extremely large works of art – sculptors too, as they tackle a giant piece destined for public display. Even they have to step back at times, just to take a look at the progress they have made, to keep track of proportions and to ensure minor details haven’t been omitted. Perhaps the most famous statue here in the US is the Statue of Liberty, though for those who have made the journey to the Black Hills of South Dakota, Mt. Rushmore is equally impressive. However, stop and take a look at any major art project while the work is still under way and the early stages can prove very misleading.

Point is, there is always a need to take a step back from the task at hand to ensure it is coming together as conceived, and for many IT professionals this appears to have become a lost art – the repercussions are still being felt to this day. Just think security, for instance, and the doors we simply leave open. Don’t bother me, I know what I am doing and I will get to that option / issue / routine shortly … And when it comes to NonStop, we are truly blessed to have so many places to turn for help should we elect to take a couple of steps back and take in the vista, no matter its progress towards completion.

More recently, I have been marveling at just how many sources exist today that carry information about NonStop. Almost every vendor operating in the NonStop marketplace has a web site where you can find a blog that is current and informative. Sometimes even provocative, but all in a good cause – to get us thinking. Indeed, many of my own posts on vendor sites through the years are phrased to essentially “tweak your nose” as one commentator reminded my audience. Not to be taken literally, of course, but rather to get us all thinking about alternate perspectives on well-known issues. A reminder too, if you like, to take a step back from the products and features for a much wider view of the role NonStop plays in your enterprise.

In addition to the many blog sites there are numerous newsletters that vendors write, designed to throw the best light possible on their own activities, be that an upcoming event or the latest data sheet. And then there are the publications – traditional and, more recently, digital – that have developed followings of their own. There is no lack of places to turn to for current information on NonStop; the only remaining challenge is just how widely we distribute the many articles and commentaries about NonStop and how much energy we put into keeping our IT brethren as well informed as we are. NonStop isn’t all that it might seem – it is a lot more. And its purpose remains unchanged – to provide enterprises with the most available platform in the marketplace and the only truly out-of-the-box Availability Level 4 (AL4) option, according to IDC.

Of course, the publication dearest to our hearts is NonStop Insider – an ambitious project we started a year ago. We have just published the anniversary issue as we kick off year two, and what an issue it turned out to be, celebrating a couple of firsts. It proved to be our biggest issue to date, with some twenty contributions. It also included a first from Micro Focus (by our good friend Ron, who is part of Micro Focus) as well as a first from HPE NonStop (by another good friend, Frans). Their focus was on security and NS SQL/MX – so yes, check it out.

We also received and published words of encouragement. Frans opened his contribution, SQL/MX Update – Frans Jongma (HPE), with the very warm, “Let me start with congratulating NonStop Insider with their first anniversary. I think this format is great to provide information at a regular – i.e. monthly- basis.” And of course, our good friends at TCM opened their contribution, Happy Anniversary NSI and Roll on TBC ’17, noting how NonStop Insider, “had a clean and simple mission: to reach out to the NonStop community and provide a platform to share and learn about every facet of NonStop, from every corner of NonStop.”

TCM then added that, “Over the last year we’ve been delighted to see this mission realized. We’ve heard from the Vendors; companies just like TCM who provide much of NonStop’s supportive backbone including services, applications and tools. TCM has enjoyed both contributing and consuming NSI for the last year and wishes it well for the years to come. We are proud to have been a part of this magazine’s journey as it grows into a respected and valued member of the NonStop community.”

If as yet you haven’t checked out this bumper issue of NonStop Insider then check out Volume 2, Issue 1 at the NonStop Insider web site:
And make sure you read Margo’s editorial page before checking out each of the contributions. While on the web site, consider becoming a subscriber – we sure do appreciate all your support. And so the work on the next issue begins; you will see it published just as you head to this year’s NonStop Technical Boot Camp, being held for the first time just outside San Francisco.

Shortly, Margo and I will be back behind our desks in our new offices in Windsor, Colorado – just down the road from HPE’s major campus in Ft. Collins, a place we often drive past on our way to the mountains. HPE’s focus on NonStop continues to be very real and we see a very bright future for NonStop X systems as they continue to find their way into data centers worldwide. We also see a future for virtualized NonStop (vNS) as it develops traction with telcos worldwide. As for a bigger future for NonStop – X and vNS alike – well, we may have to step back a little to grasp the full scope of its potential, but it is there, already beginning to take shape.

As additional paint is added to the canvas I expect to see a much different form begin to appear, but that is a picture I will happily discuss when we next meet at TBC! In the meantime, don’t let up on communicating the value proposition of NonStop to your peers and don’t stop reading the really good supporting material now out on the web. NonStop may not be what it seems; quite the contrary, when you give it serious consideration, it is a whole lot more!

Thursday, October 5, 2017

It’s all happening and it’s happening, non-stop!

Choices need to be made and I have chosen NonStop. HPE now faces some pretty big decisions of its own, and it’s become clear that driving future NonStop sales will be done with partners!

There are times when many of us are left wondering about the choices we make. There are advertisements currently running where actors are left wondering about their own choices, and it led me to consider how we managed to arrive at this point in time – developing, testing, operating and, indeed, promoting all-things NonStop! Once again, I find myself scrolling through the photos on Facebook taken during the recent reunion of Tandem alumni; if you are interested in all things NonStop and have as yet not joined the Facebook group, Tandem Computers, perhaps you may want to consider joining. We are heading into another round of user events in the run-up to the big one, the NonStop Technical Boot Camp (TBC), and apart from everything that may be covered, it is THE gathering for all those IT folks who have made the choice to continue with NonStop.

Before we go too much further, I am one of those who have made the choice to support and, indeed, promote NonStop, non-stop! As we all know, HPE loves the technical side of IT, but when it comes to marketing, well, let’s just say anyone grading HPE’s performance over the past decade would have to note that there is “room for improvement!” Furthermore, it’s tough to get HPE to rely on selling the business on the merits of working with HPE. For as many years as I can recall, it’s always been about the cool technology. Cool city? I toured the cool city in Singapore and there was sure a lot of cool stuff being showcased but, again, no real tangible contribution highlighting how business can produce better outcomes as a result. No, it was all about just how technically innovative HPE continued to be, at the expense of all else.

For many years I was on the road almost constantly, meeting with NonStop customers and prospects and giving presentations on products as a product manager, even as I championed the cause of regional user groups in my capacity as ITUG Chairman. As the picture above clearly conveys, my enthusiasm for being on the road, standing in front of a screen and telling a story never waned through the years, and I have to thank my good friend Rick Ploen for sending me this particular picture. The memories certainly came flooding back as I recalled the occasion, which I think could have been at the offices of Insession Technologies somewhere in Europe. Yes, my hair was darker and there was more of it. And so it goes…

As bleak as I sometimes paint the picture of HPE lately, and despite the cautionary tales I tell my clients, there is still much happening that is positive and warrants further discussion. After all, HPE has embarked on perhaps the most ambitious course of action any major vendor has ever attempted – taking a $130+ billion enterprise down to just $20+ billion is not for the faint of heart – and yet, once you look past the shareholder value unlocked in the process, there is now a new, streamlined corporation with an increasingly flatter organization emerging; an organization totally focused on platforms and infrastructure, with the goal of simplifying the adoption of hybrid IT.

“The growth of cloud and industrialized services and the decline of traditional data center outsourcing indicate a massive orientation toward hybrid infrastructure services. As the market becomes dominated by the demand for agility and flexibility, this will drive the shift toward more industrialized, less-tailored options,” comes the word from the folks at Gartner. As reported in numerous publications, Gartner then adds, “However, the growth of hybrid infrastructure services will increase the complexity of selecting the right toolset to deliver end-to-end services in a multisourced environment, especially in a fast-changing digital business environment.” Should we be surprised? As IT professionals are we taken aback by the news that as we lean on hybrid IT to meet the demand for agility and flexibility, resultant systems become even more complex? Yes, it is all happening and the only real question – who is HPE going to turn to for assistance?

But fret not. Of late we are seeing the early signs of how the new HPE plans to go to market for almost all of its products – with partners. You want to know more about SimpliVity? Well, here’s a partner you can talk to. And the message HPE is taking to potential partners? “Join our Partner Ready Program and we’ll team with you to help expand your opportunities and increase the portfolio of solutions you can deliver.” HPE isn’t hiding anything from the marketplace. If you need access to skilled personnel for your upcoming project then you will need to engage a partner that’s local to you, a partner through whom HPE is prepared to funnel its technology in a manner the partner enhances, whether through the addition of other products and features or simply the services and know-how to make the technology being procured work a whole lot better. It’s all about partners.

If it wasn’t for a steady stream of Congrats on your work anniversary! messages filling my inbox, all originating out of LinkedIn, I wouldn’t have remembered that it has been some eight years since I first chose to tell stories, give presentations, write blog posts and build a business around all of these activities. But here we are, and the choices I made all those years ago do not look quite as foolish as they may at first have appeared. Recalling my first post of October 7, 2009, published so soon after launching Pyalla Technologies, Let's talk ... Back then, I reflected on my choice, noting that it was the beginning of a new phase, a change of seasons, of adjustments better aligned with what I really like to do. Now there’s every chance, of course, that remaining independent may change as well – privatizing individuals is just as likely as privatizing companies in this economy.

At that time I also remarked on how, for me, it was a new world and I was having a lot of fun – fortunately, creativity and enthusiasm are still in great demand and I just love to talk (Pyalla?) technologies! The world of NonStop just kept on evolving, providing a lot to talk about for eight years. Partnerships come in all shapes and forms, and for me and Pyalla Technologies it’s fortunate that HPE encourages the ecosystem of independent partner bloggers that now follows HPE. So not only is there opportunity to continue to work with NonStop outside of HPE, there are many choices we can make as to exactly how to make a contribution. What hasn’t changed in those eight years is the value NonStop brings to businesses everywhere – the most robust, reliable, available platform on the planet. Bar none!

Shortly we will be publishing the first issue of the second year of the digital publication, NonStop Insider. If as yet you haven’t had the chance to read an issue, it is easy to find. There was a lot of cooperation involved in getting this publication up and running, but it is just one more example we can point to when it comes to what’s happening with NonStop. NonStop Insider joins a number of other publications, both traditional and digital, and as such is just one more perspective on NonStop. As for today’s questioning of NonStop’s role in hybrid IT and in simplifying the transformation to a hybrid IT architecture – well, nothing could be further from the truth. Vendors and users alike are already testing the waters, with one vendor already up and running with NonStop X anchoring its clouds as it offers its solution on a SaaS basis.

It’s all happening and it’s all happening for NonStop and with little additional fanfare, the number of NonStop users is just starting to edge upwards ever so slowly. “New Logos” are being added and among the NonStop vendors partnering with HPE, there is a sense of even more interest in NonStop to come – we are only just scratching the surface of possibilities with virtualized NonStop (vNS) that will make even bigger demands of partners should it all pan out the way I suspect it will. As for the choices we can make, I have already made my choice and it is NonStop for the foreseeable future. So, get ready, it’s all happening and now that HPE is streamlining its business and operational models, it’s all beginning to happen, non-stop!   

Wednesday, September 27, 2017

Volunteering, networking and blogging – it all helps!

The opportunities for NonStop programmers may not be as bountiful as they once were but this doesn’t mean we need to let the knowledge we have about NonStop be overtaken by current events …

Many years ago we took our car to a track where we had to do double duty as volunteer corner flag workers. I hadn’t done it before, nor was I confident I would grab the right flag at the right time, but as was pointed out at the time, by volunteering we made track rental a lot less expensive and everyone I talked to was in favor of lower fees. After all, we still had to pay for gas, tires, brakes etc., so volunteering didn’t seem all that bad. And did I mention I was a corner worker while Margo was on track? There is no better close-up of a driver than from behind the corner barrier! Or anything more entertaining than waving flags as hard as I could whenever Margo was right on the bumper of a lesser-performing driver!

There are a lot of talented NonStop community members now on the sidelines, oftentimes far removed from the action. When you consider what has happened across the NonStop community over the past decade, it hasn’t been the best of times. Sure, NonStop systems are still playing a critical role within many enterprises, and there are more than enough financial institutions and telcos committed to NonStop to keep many NonStop programmers gainfully employed, but fully employed NonStop programmers are becoming scarcer by the year. Technology continues to evolve, capitalizing on changing customer expectations as well as faster and cheaper components. Listening to the Apple announcement of their new phones and watches, it was hard to ignore just how much computing power could be packaged within a watch – a dual-core chip, no less! In a wristwatch!

There continue to be numerous discussions over the strategic posture of NonStop – for a four-decade-old architecture that hasn’t been matched by any other, it’s a constant reminder of just how impressive the original architects were in getting the basics of fault tolerance so right, in a system still relevant today. But this doesn’t change the prospects of NonStop programmers all that much, other than to keep their expectations high – surely, there will be a new NonStop user nearby where I can continue to contribute and help deploy new applications on NonStop. Unfortunately, taking a leaf from the cloud phenomenon, any demand being created isn’t quite what we expected, as increasingly it’s a world best described as looking for NonStop programmers “as-a-service!” We used to call them contractors and even consultants but, mostly, they are independent contributors focused on projects.

So, what of volunteering? In part the answer is visibility – NonStop programmers cannot simply post resumes or hire companies to look for opportunities. The internet and social media have changed the game considerably, but it is still mostly about the network, and it is where networks have weakened that I see so much talent left warming the benches. As the likelihood of Insession remaining independent faded and the prospect of ACI acquiring Insession strengthened, I volunteered for ITUG and, in due course, managed to get elected to the ITUG Board of Directors. But this only came about because of my network – in the late 1980s I brought about the formation of a local RUG in Sydney that became OzTUG.

In due time, Tony Bond became head of OzTUG, and it was through Tony that I made it onto the ITUG Board. Margo and I were only recently looking back at how this all came about, and it was really due to the support of some forward-thinking managers in the Sydney branch offices of Tandem Computers. However, here’s the bottom line. I had been with Tandem Computers for just on a year when, by volunteering to work on forming a user group, my visibility within the company – and indeed, as it turned out, with the local press – expanded considerably. As I left for fulltime employment with Tandem in Cupertino, the local Australian publisher of ComputerWorld approached me and asked me to write a feature story centered on my observations of life in Silicon Valley, and this I agreed to do. He already knew that I wasn’t short of stories, so perhaps this little nudge into the world of publishing was self-serving on his part, but the fallout from volunteering (I was an unpaid feature writer) was to expand my network even further.

Again: volunteer, and build the network. I am recounting these experiences to encourage those NonStop programmers who are wondering where their next gig will come from. Finding a fulltime assignment that is well-paid isn’t on the cards for most NonStop programmers I talk to. Seriously, it’s very slim pickings out there and the time off between productive days only seems to get longer. But we all have skills, and they are mostly centered on what it takes to support a fault tolerant solution and how best to leverage the NonStop integrated stack! And here’s the really good news. The lack of training of the next generation of NonStop programmers is going to see the needle swing back firmly towards us – we are more than likely going to see a pickup in demand for our skills, as junior, over there on his tablet, knows little about the real world of transaction processing!

There are a number of members of our community that you see commenting on posts to blogs, asking questions on social media groups – LinkedIn groups being one example – and yes, regularly contributing to their own blog. The one observation about opinions is that yes, we all have them. The same goes for assumptions as we all assume one thing or another even as we form and then communicate these opinions. We need to remember, however, when it comes to opinions, according to Australian writer, Peter Mulraney, “There’s nothing wrong with having an opinion, as long as you understand that it’s just how you see it.” On the other hand, it was Isaac Asimov who wrote of how, “Your assumptions are your windows on the world. Scrub them off every once in a while, or the light won't come in.”

Point is, our experiences with NonStop have placed us in a position where we can provide sound advice – most of the time – and as we continue to write, we develop a brand that in time becomes recognized. So yes, to all my good friends out there who email me about what they should consider doing next, I am suggesting just three things: volunteer, network and, yes, join the world of social media and find your voice. Blog! As I have already mentioned, the world of NonStop is about to come to you.

Think for just one moment of what may transpire should virtualized NonStop (vNS) really take off. Consider too that the community of vNS users will most likely be entirely new and that they will be running it on VMs within server farms and clouds – do you think for one moment HPE has the skillsets in place to cover such a possibility? When it comes to global coverage, HPE, or the channel partners selling virtualized NonStop systems, are going to have to tap the knowledge that already exists, and they won’t know about you unless you get ahead of the curve and build your brand!

As I look back at how I came to be doing what I am doing today I have just a couple of parties to thank – the managing editor at ComputerWorld (Alan Power along with Graeme Philipson); Tandem Computers, Sydney, manager Steve Bailey; OzTUG and later, ITUG Chairman, Tony Bond; GoldenGate Software executives Tim Rathbun and Sami Akbay; and finally, my wife Margo who today continues to be my primary editor. As you can see, it wasn’t a large group of folks and when you spread their input across the three decades involved, it didn’t require very much from them either. We all need mentors and for me, the mentoring I received came from this group.

In closing, we are coming up to this year’s NonStop Technical Boot Camp. It is still very heavily skewed towards vendors and HPE but then again, that isn’t necessarily a bad thing if it is our objective to stay close to NonStop. Volunteering, networking and indeed blogging all benefit from opportunity so even as you may be considering just how to get off that bench and let the broader NonStop community benefit from your expertise, start thinking about where best to showcase your skills.

Boot Camp may be a tad too expensive for many but, at a minimum, there are always the RUGs, and after all, if it wasn’t for the time I put in as a volunteer in support of OzTUG, I wouldn’t be doing what I do today. It simply wouldn’t have happened! And should you happen to see me at one event or another, and this is a path down which you would like to go, then by all means stop me and I will be only too happy to chat about all of this and more. Waving flags or not, these are intriguing times, so don’t let the opportunities that I see coming to fruition pass you by!

Thursday, September 21, 2017

Look for the signs – NonStop community has much to watch!

Visions, missions, tactics, strategies – how does it all come together for NonStop and just how many paths is NonStop going down that will prove advantageous for the NonStop community?

Reading one of my favorite car magazines (and the source of many anecdotes for posts and articles already published), Road and Track, I came across a feature story on the Nürburgring – a magical place for all car enthusiasts and where Margo and I had the good fortune of laying down laps during a track-only day (no busses or motorcycles, just Porsches, Ferraris, Lotuses and Nissan GT-Rs).

Our time on the ‘Ring was back in late 2010, and you can read more about it in the post Respect the ‘Ring! of September 23, 2010, on our social blog, Buckle-Up-Travel. The Road and Track article, Master’s Program, informs its readers of how, “At most tracks, it’s not uncommon for a first-rate professional racing driver or even a talented club ‘shoe to be up to speed after as few as a dozen laps.” Then adding, “Not at the ‘Ring. You won’t come close.” More revealing yet, “Like any track, there are references that a driver can use to let them know exactly where they are, where the car should be positioned, and what the driver should be doing. (The) difference here is, there are easily 10 times as many of those reference points spread out over 14-plus miles.”

The Corkscrew at Laguna Seca, for instance (depicted above, with Margo behind the wheel), is always exciting as it has few signs forewarning you of what comes next. All of which is to say that signage and reference points are important on race circuits, and I am sure there are other activities where much the same can be said. Here in Colorado we understand and appreciate how downhill racers know every inch of their run down a snow-covered mountain. When it comes to computers, it is sometimes more difficult to spot the signs and catch the important references. It also requires a completely different set of skills, most of which take a very long time to develop.

For starters, what does strategic really mean and who really does set strategy? Where do we turn to for confirmation that something is relevant, important, and even strategic? Is it our favorite vendor or our primary vendor? Is it an industry analyst or simply a journalist we know well? Turns out that it is a combination of all of the above that somehow we need to weight and prioritize based on our experience with all parties. Not all information is the same nor does it necessarily reflect the real story – you just have to check the tangible reference points that are all around us.

For the NonStop community it is proving to be a very difficult time. The news, when it first broke, that NonStop wasn’t being spun out to Micro Focus as part of the offloading of non-core software assets gave the NonStop community a real boost – NonStop isn’t non-core after all. Furthermore, it had escaped being relegated to a couple of part-timers determined to keep the torch burning brightly, albeit with somewhat limited funds, as had happened to OpenVMS. No, NonStop was proving to be special, something HPE valued more highly than other assets it had accumulated over time. However, being core – or at least, not part of the non-core software portfolio – isn’t translating into strategic, and that is where the concerns lie: if it is core but non-strategic, what does that mean?

Again, it’s time to check our reference points as we look for familiar signs that will help guide us with the decisions we have to routinely make. Do we continue to invest in NonStop systems? Do we bring additional applications to our NonStop platform? Do we invest in middleware and infrastructure products that will help lift our productivity? Will we even spend money sending staff to events and summits held in support of NonStop? Just as importantly - whereto Mission Critical?

The answers today are not that clear and with the passage of time following the completion of the various spin-merges and offloads, the silence coming from HPE is, as they say, deafening. Could it be that it is a simple case that HPE doesn’t know? Could it be that HPE has as yet not focused its own resources on looking at where NonStop could take them? Might it even be a case that NonStop has been forgotten? We often talk about “best-kept secrets” but when it comes to NonStop it isn’t productive to think this way, as it’s a bit of a cop-out reflecting the lack of any real marketing effort being expended by HPE.

At this point I am observing a paradox. NonStop isn’t strategic for HPE – just look at the HPE vision and you will see references to simplifying hybrid IT, empowering the edge / IoT and then offering services in support of both of these pursuits. As HPE adds additional products and, indeed, vendors to better support this vision and, essentially, map it into a workable strategy, there’s no specific reference to NonStop. Quite the contrary: look at project New Stack, unifying management across on-premises systems and clouds – private, hybrid and public – as well as SimpliVity and Synergy, and there are no references to NonStop whatsoever.

Indeed, it would take considerable new development to bring NonStop into play and have it participate in any fashion – and yet, look at the strategy behind these products and you can see real advantages to having NonStop participate. No, NonStop isn’t strategic and isn’t making a contribution to HPE’s strategy at this point. And yet, it still holds down a place in the HPE slideware pulled out on every occasion – so what gives?

There is no shortage of evangelists supporting NonStop, nor is there any shortage of references highlighting exceptional use-case scenarios. We often look to the finance and telco industries to find references to NonStop but, in reality, NonStop clings to handholds in almost every vertical, from manufacturing, distribution and entertainment to, yes, even healthcare. With all the security issues we read about almost daily and the outages of critical infrastructure components we hear about almost as often, you would think a lot more attention was being paid to reliability, but it just isn’t the case. The assumption is that all systems, no matter who the vendor, provide 24 x 7 capabilities and that any outage is just an unfortunate incident, most likely attributable to human error – a situation that will always be present as long as there is human engagement involved! So why even consider a NonStop system and all the added expense that would involve?

However, evangelism alone isn’t going to cut it – we need a lot more and we need it from all stakeholders. The simple truth is that NonStop will do very well without being strategic. When you consider other manufacturers you will see there are numerous categories for their products – strategic, tactical, utilitarian, general purpose and yes, even halo! Point is, champing at the bit to push NonStop to the top of the list of strategic products isn’t really going to get us anywhere – what is more important is returning to the topic of HPE having a strategy for NonStop. And this is where the signage is better illuminated.

Being strategic and having a strategy are very different pursuits and are oftentimes unrelated. We expect HPE to be able to articulate a strategy for the company as a whole – we all want to be better informed about where it is headed. But when it comes to individual products in the portfolio, strategies will vary by marketplace, region and even personalities. Just take a look at the reference points surrounding NonStop today and you will get a good sense of not only where HPE sits with NonStop but where you are relative to a strategy for NonStop, best described as a holding pattern.

There are two very distinct and yes, quite separate NonStop paths today and they contain numerous pitfalls for users and vendors alike. What may work when following one path will not necessarily work when following the other. HPE will continue to make NonStop systems – its strategy for NonStop systems is to continue to follow the Intel roadmap that will give us the latest x86 chips as Intel regularly ups the performance. The key piece of the strategy here is that the marketplace for these NonStop systems will be existing users, with only a marginal increase in the NonStop user population at best.

In developing virtualized NonStop (vNS), however, HPE is taking NonStop down a completely untested path and here’s where the problems arise – we have no reference points. We are approaching blind corners and cresting hills with no visible signage as to where the road is headed. Having said that, there isn’t a firm strategy for vNS as yet and for the NonStop community it appears that the strategy for vNS remains a work in progress. And yet, vNS is anything but in a holding pattern – it may turn out to be the only outcome for NonStop that becomes strategic but for that to happen, vNS may not be solely resident within HPE.

When looking at these paths down which NonStop is headed, there may not be all the reference points we would like to check but there is still much to observe. Hopefully with the NonStop Technical Boot Camp fast approaching, we may learn more even as these reference points consolidate and we come to see just a simple sign. For now, what matters most isn’t whether NonStop is strategic for HPE but rather, does HPE have a strategy for NonStop and will that strategy embrace the NonStop we know today even as it lifts the veil on the NonStop we all see coming? NonStop systems will likely be with us for a couple more years but as for vNS, well-executed and well-supported by HPE, its influence may be felt for decades to come!

Saturday, September 9, 2017

HPE – styling itself to better suit NonStop users

There has been a lot written about HPE of late following the completion of the spin-mergers. The distractions of the past are now behind HPE and this is good news for the NonStop community!

It was back in the mid-1970s when I established my first company, Uralla Holdings Pty Limited. Uralla (and yes, another Australian Aboriginal name) sat atop a rather complex setup that included such companies as Advanced Technology Computing as well as a family trust. At the time, it made a lot of sense and to paraphrase the Eagles’ Joe Walsh yes, “I had accountants (and I paid) for it all!” While not set up to minimize taxes it did help, even if in the long run, when all was said and done, I paid a lot of taxes indirectly through leases and purchases as well as the income I shared with others; strange as it may sound, in Australia this was common practice among almost all IT professionals.

When I elected to provide analysis and to write articles and blog posts, creating a company seemed a natural thing for me to do and as a result, Pyalla Technologies was born. It was only recently that I realized how similar Uralla and Pyalla sound, so I am now left to wonder what happened in my youth to so influence me. Whatever it was, the formation of this LLC gave me a vehicle that allowed me to establish business relationships with my clients. Without exception, I am always asked about my company structure and given the nature of my business model, my clients have all come to appreciate this style of operation and the reduced workload it places on them and their staff.

Today, there are many options open to companies and we are hearing a lot more about just how they go about their business. When it comes to HPE, for instance, how many of us had even heard of the term, spin-merge, before it was included in the announcements by HPE that it was divesting itself of its services and software business?

In doing so, and adding weight to what exactly a spin-merge implied, HPE retained a majority, albeit very slim, interest in the partner it worked with as part of the spin-merge. Furthermore, it unlocked considerable value to then-HPE shareholders even as it lessened its own need to fund business entities where margins were always under pressure. Smart? Among the financial community it certainly encouraged them to stay invested in HPE but among technologists, spin-merges of services and software seemed a little odd given how many industry analysts predicted that future growth for technology companies was going to be fueled by precisely these types of businesses.

However, for the customer, HPE’s stated objective of offloading non-core software together with underperforming services while focusing on infrastructure and platforms gave them a strong sense of wellbeing as they became better informed about the strategy and vision of the new HPE. Simply being customer-driven (focused on the needs of its enterprise customers) looked good in slideware but when it came to specifics, proved to be too broad a vision; customer-driven, as we all now know, is all well and good if you can articulate where you are headed.

Otherwise, no matter how big a vendor you are, you can quickly flounder and be tugged in every direction, influenced solely by the customer you last visited. Technology customers absolutely want to know where their vendor-of-choice is headed and just how effective it will be in translating strategy and vision into real products targeting their needs!

Until we heard references being made to spin-mergers, it wasn’t a common term. Spin-offs into new companies that are in turn made up of two or more parties that merge to create a new company place huge demands on the participating companies to be fully in sync with the mission outlined for the new company. With HPE it would seem that they indeed got the ingredients right as I haven’t read one negative response.

For a while there, Dell was suggesting HPE was headed in the wrong direction but soon after, Dell offloaded its own software business mostly, it turns out, to trim the servicing of the debt it took on to go private and then to buy EMC. And the debt the new Dell EMC is carrying is quite a sizable millstone around its neck despite what positive news emanates from the company – who plans to grow a company by first taking on billions of dollars in debt?

No, the path HPE has successfully negotiated has a lot more upside than what many pundits first predicted. Even the recent slide in the stock price was purely an adjustment recognizing the additional value HPE shareholders would realize. As I recently advised my clients, the drop in the price of HPE shares was little more than a market adjustment to the changing world of HPE as the spin merge announcements become reality.

As one publication described it, “this plunge wasn't due to some unfavorable business development. Rather, it was driven by the completion of a planned spinoff and merger of the enterprise computing specialist's non-core software assets with U.K.-based Micro Focus International.” The result of which was that, “HPE received a $2.5 billion cash payment, and its investors received 0.13732611 American depositary shares (ADSs) of Micro Focus for every HPE common share held as of its record date on Aug. 21, 2017. Based on Micro Focus' closing price at $29.50 per share yesterday that equates almost exactly to the per-share ‘decline’ we saw in HPE stock today. In short, the transaction is effectively a zero-sum game for HPE investors, who now hold roughly 222 million Micro Focus ADSs representing 50.1% of its fully diluted ordinary shares.”
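The arithmetic behind that zero-sum claim is easy to verify; a quick sketch using only the figures quoted above (the ADS ratio and the $29.50 Micro Focus close):

```python
# Value of Micro Focus ADSs distributed per HPE common share,
# using the figures from the quoted article.
ads_per_hpe_share = 0.13732611   # Micro Focus ADSs received per HPE share
micro_focus_close = 29.50        # Micro Focus closing price, USD

per_share_value = ads_per_hpe_share * micro_focus_close
print(f"Per HPE share: ${per_share_value:.2f}")  # → Per HPE share: $4.05
```

Which is, almost to the cent, the “decline” observed in HPE’s stock price that day – the distribution simply moved value from one ticker to another.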

In this case and particularly for the NonStop community, HPE’s execution proved to be beneficial to all and gives HPE a better opportunity not only to fund its strategy and vision but to reshape its product line to better suit the needs of its customers. With the removal of the distractions that otherwise occupied the attention of many HPE executives, they can now focus on middleware and platforms and in particular, the transformation to hybrid IT and the empowering of the edge.

As Whitman said, in her briefing to analysts following the publishing of the Q3 results as well as the completion of the spin merge with Micro Focus, “With that transaction now behind us, we have the right strategy and the right portfolio to succeed in today’s environment. Our strategy is clear, to make hybrid IT simple, to power the intelligent edge and to provide the services to make it all happen. It is based on what customers are asking for today and where we see the market moving.”

Yes, NonStop is part of that right portfolio (of products) to succeed in today’s environment and this is something the NonStop community shouldn’t lose sight of – whether a user or a vendor, NonStop isn’t “non-core” but rather, another right product for HPE! Of course, we would all like to see a higher profile for NonStop develop as well as to see it occupy more of center stage at major HPE events. However, this may indeed be developing. As much as I champion the attributes of availability and scalability at every opportunity, perhaps the real differentiator for NonStop, particularly as NonStop develops momentum atop virtual machines, is NonStop SQL/MX (NS SQL).

With many of the unique features once only a part of Neoview together with a more complete “Oracle compatibility” option, HPE’s continued investment in NS SQL isn’t happening by accident. HPE has a need for NS SQL internally and it sees NS SQL support of DBaaS as something customers will benefit from as they move to hybrid IT. Furthermore, the work being done in support of blockchain – the port of R3 Corda to NonStop – is layering the distributed immutable ledger right on top of NS SQL. And for all the right reasons – combine NonStop and NS SQL with blockchain technology and you will have a very serious solution that absolutely better suits today’s customers’ needs.

HPE is reshaping itself from a business perspective and on a personnel basis as well. There will be a lot of organizational changes taking place before we head into the start of HPE’s next financial year, some of which have already leaked to the press involving the sales organization. HPE is also reshaping its product portfolio and the good news here is that NonStop made the cut – not just NonStop but critical middleware including NS SQL, TMF as well as TS/MP (formerly, Pathway). What’s not to like about all of this?

From my perspective, very little. It will always come back to how well HPE can execute but if the style of the company now being projected continues to shape it to better suit the needs of its customers, the NonStop community should be the last to complain. Look around: when it comes to the right strategy and the right portfolio, isn’t it a big change from the past to know that it includes NonStop? HPE’s spin merge may have the heads of other technology firms spinning but for the NonStop community what it really represents is the creation of a strategy and vision focused solely on where HPE needs to go and with that, as a community, aren’t we the happier to read of this?

Thursday, August 31, 2017

Changing world of communications …

Our desks may be cluttered with phones, laptops, tablets and PCs and there is still a need for cables but we have come a long way with comms and for the NonStop community, we have so much choice when it comes to media focused on NonStop!

Where do we go for news today and how do we keep in touch with colleagues and business partners worldwide? Once, these two matters were very separate topics – we had phones, email, faxes and newspapers.  When the fax machine arrived near my office at Nixdorf Computers, back in the early 1980s, I could see my skills as a telex machine operator go out the window (and after all the time I put into becoming a skilled operator) but on the other hand, I started faxing everyone I knew! When it came time to consider working for Tandem Computers I heard about its Tandem-based email system and I was hooked. So much so that on my first day in the Tandem offices at North Sydney, I accidentally sent an email to everyone in the company only to recoil in shock as it contained some sensitive material. So I cancelled the email …

And that was when I first came to know Ron LaPedis and I must admit, it wasn’t a pleasant conversation. Overloading the Tandem and essentially bringing the Tandem email system to its knees wasn’t the best way to introduce oneself to the company. However, we have come such a long way since then. There is Skype, Hangouts, WebEx, GoToMeeting and a bunch of other options, including private deployments of the above, to choose from when we wish to give a webinar. It’s almost as if we have replaced a series of simple communications with complexity on another level, so much so that when I am called upon to participate in anything online, the ten minutes prior see me testing and preparing for the call!

Working for nearly all of my business life at Tandem Computers, I was involved one way or another with Systems Network Architecture (SNA). For nearly three decades this had been the premier communications solution for enterprises globally and even as IBM owned the architecture and dictated the protocols and supporting services, you couldn’t argue against SNA’s popularity. Furthermore, while it had been the presence of email that proved to be the enticement that sealed the deal when it came to joining Tandem Computers, there was also the industry recognition that, apart from IBM itself, Tandem had the best SNA product offerings of any vendor. Cool! However it was only a matter of a few years later that I had to oversee the reduction of investment in SNA offerings on Tandem as the company shifted R&D to TCP/IP following its appearance, and this led to numerous traumatic encounters within the Cupertino campus.

However it has been the industry’s acceptance of TCP/IP to underpin all of the communication we depend upon today that has led to the explosion of material we can access. The work my good friends at IR have done to fine-tune Prognosis to better monitor Voice over IP (VoIP) and the IP telephone networks that are blossoming within enterprises worldwide is but one example of what comes with rationalizing networks. It was the fodder for many articles published in the 1980s when predictions about the integration of voice and data were all the rage but now, some three decades later, it’s almost a given that communications will be carried over IP whether data, voice, video or simply text messages.

When I turn to my iPad among the very first things I do is check on the news published by The Sydney Morning Herald even though it has been a very long time since I last resided in that beautiful city. No matter where you live or where your interests lie, you can always check the news as publications worldwide have gone digital and can be universally accessed from any smartphone or tablet.

Of late I have written about anniversaries as at this time of the year they just seem to happen. My last post to this blog was about wrapping up ten years of posts to this social media communications channel. A little earlier I had started my ninth year of posting to the Pyalla social blog which of late is attracting an even wider audience as some of the topics covered touch on current business issues. Without the ease with which you can create and publish content, I doubt I would have stayed the course and perhaps would have had few options open to me when launching Pyalla Technologies.

I still recall the reception I received at one of the last European ITUG Summits held in Brighton, alongside the English Channel, where I gave a keynote presentation only to have colleagues come up to me with counts of how many times I referenced blogs! Ten years ago there were few blogs being published with none focused on NonStop. How the world has changed and there should be no excuse for any member of the NonStop community not to be well-informed about everything that is taking place in the world of NonStop as many such blogs exist today!

Getting balanced information is always an issue – is the source of the story legitimate? Was the information cross-checked? Fortunately, the NonStop community has such a vocal core of NonStop adherents that it is very rare indeed that anything misleading or simply dead-wrong makes it into the public domain. It is this active peer review process that ensures what we read is both accurate and useful. Even opinion columns, of which this blog is typical, attract attention very quickly when the storyline strays a little too far off topic or descends into murky waters with little understanding of the subject matter. So too is the case today with the digital publication, NonStop Insider.

Hard to believe, but with the upcoming issue the first full year of publication has been completed with no signs that there will be any letup in the topics being covered. With NonStop Insider Year 1, Issue #12, about to be wrapped up, sights are already set on subject matter for inclusion in the first issue of Year 2. Much of the credit has to go to those NonStop community members who have worked with the editorial team on submissions. While almost everything makes it into an issue, a careful eye is kept on the directions some contributors take – after all, the overriding theme for NonStop Insider is NonStop – and when it comes to communicating anything at all on NonStop, there is a very clear bias favoring positive storylines.

Credit too must be given to the management team at TCM Solutions. They were present day one when the decision was made to produce NonStop Insider and without their logistical support together with technical knowhow, NonStop Insider would still be an idea sketched on a whiteboard. At every step of the way they have been supporting this digital publication and many of their ideas are now captured in recent issues – and there are more good ideas about to be implemented. As it turns out, working with TCM has proved to be a whole lot of fun as well, so that too helps with putting in the effort to ensure a quality publication reaches the NonStop community. The idea for “NonStop CEOs have their say” was first articulated by TCM and we are pleased with how well this has been received among CEOs of NonStop vendors.

The world of business communications has changed considerably through the decades – who remembers waiting anxiously each month for their copy of Datamation to arrive in their mail slot (and yes, do we even have mail slots anymore)? To say we have gone digital may not tell the whole story. When it comes to providing “useful and important information” we have come so far as we look back at how little information was available to us just a couple of decades ago. For a highly specialized product like NonStop to have attracted the variety of publications that it has of late – who remembers wishing to read anything at all about NonStop in the “papers”? – is remarkable but pleasing all the same. And if as yet you haven’t taken a look at NonStop Insider or become a subscriber then it is not too late to check it out for yourself at:

Sunday, August 20, 2017

Celebrating a full ten years of posts …

Back on August 20, 2007, coming off my time as ITUG Chairman, I had no idea that I would be still writing posts for this blog. But here it is, ten years later, and the fundamental message of NonStop has changed very little …

I have to admit that this post is following quickly on the heels of another post just published, a matter of a few days ago. But tradition remains an important factor; I have written a post to this NonStop community blog, Real Time View, every August 20 for ten full years. Well, almost every August 20 – checking the archives, there were a few posts that missed the exact date. For me however, precision is important. It’s rather binary when you think about it. It either happened or it didn’t. I was reminded about this during a conference call with a well-known industry analysis organization when the discussion turned to 24 x 7 and the way it was treated; all systems today run 24 x 7, don’t they?

Right now it feels as though I have been working 24 x 7 as I finally return to my office after almost ten weeks on the road. Living and working from our company command center through thick and thin, as we experienced outages of unimaginable proportions, it is good to be sitting behind a desk in a room where there is a sense of permanency. This office isn’t going anywhere nor is my desk and my computer. However, to suggest to you that I have been working around the clock, 24 x 7, just to complete this post, may hold some truth given this topic of 24 x 7 has been on my mind for a long time, but in truth, there has been a lot of down time. The office may be a permanent feature of the home but I will still come and go as business needs dictate and that will rarely ever be 24 x 7.

The almost blasé dismissal of the value proposition of 24 x 7 was a wake-up moment for me. An epiphany of sorts, actually! Have we arrived at a point in time where business continuity isn’t all that important anymore? Across the NonStop community, the attention we give to the value that comes with being able to run our mission critical applications continuously, 24 x 7, ensuring our customers have access to the applications they need any time they want, is sacrosanct. After all, with all that we know today about NonStop would it still interest us if it didn’t run 24 x 7? In my last post, New home, new office; time to take stock … I extolled the virtues of scalability and as much as I really like the value scalability brings to any discussion of the value proposition of a system, for most stakeholders in the NonStop community it also plays a role in aiding availability, as you do not need to stop your operation to add capacity!

In being blasé about 24 x 7 this week, the analyst with whom I was working did think about the value proposition 24 x 7 provides, but then added that truthfully, it is not feted as strongly as it used to be, as even now, just as with real time, the actual time implications for both 24 x 7 and real time are treated loosely. A few minutes here, perhaps an hour once in a while, surely doesn’t matter in the grand scheme of things. No down time means just that – no down time. Likewise, continuously available sounds a lot like permanent availability and readers of posts to this blog will have come to see that I have made reference to permanent availability many times in the past. Marketing types don’t like this term as it has a sense of no wiggle-room about it and seriously, how can you be sure nothing will fail? However, that’s the whole point – NonStop users keep purchasing NonStop systems because they provide permanency, whereby their clients will always be able to interact with them.

Like my office and the sense of permanency it affords me, NonStop systems can be relied upon to be there even if my own access to the applications is casual at best. However, what I am pushing back on these days is terms like “near real time,” as well as “highly available is good enough!” The origins of real time were in missile detection and interception and there could be nothing vague about that. If you didn’t respond in time and neutralize the threat, you simply ran out of time. Nothing real time about pressing the launch button too late to protect an intended target! Again, this is all well-known to the NonStop community and yet, if industry analysts are becoming blasé about this key attribute of NonStop systems and the solutions that have been designed to run on NonStop, then others will follow and it is most definitely the beginning of a slippery slope, leading to enterprises no longer associating any value whatsoever with continuous availability.

A couple of years back, IDC did a pretty good job defining different availability levels with availability level four (AL4) representing the very pinnacle of availability. The only vendors to have systems awarded AL4 were HPE with NonStop and IBM with special Parallel Sysplex configurations of the mainframe. The inclusion of IBM really was for appearances’ sake – IDC just didn’t want a level of availability where only one vendor had achieved that goal. On the other hand, simply trying to build a Parallel Sysplex configuration for a reasonable cost that supported AL4 has proved elusive for almost every mainframe installation. Adding more components – more moving parts, if you like – to an architecture that was never intended to support operations 24 x 7 is hazardous at best, but more often proves foolish at worst. If AL4 could only be awarded to vendors with systems that provided, out of the box, the level of continuous availability IDC was describing, then NonStop would be the only participant.
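To put the pinnacle of availability in perspective, the gap between merely “highly available” and continuously available is easy to quantify. A quick sketch (the percentages are the industry’s usual “nines” shorthand, not IDC’s own AL definitions):

```python
# Annual downtime implied by each availability level ("nines").
MINUTES_PER_YEAR = 365.25 * 24 * 60

for availability in (0.999, 0.9999, 0.99999):
    downtime = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.3%} available -> ~{downtime:,.0f} minutes down per year")
```

Even at four nines – respectable for most platforms – you are conceding the best part of an hour every year; the difference between that and continuous availability is exactly what the blasé dismissal of 24 x 7 glosses over.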

There is a lot more to this story, mind you. Transformation to hybrid IT is all the rage right now with HPE among the biggest supporters of hybrids – whether hybrids of servers and private clouds or even hybrids of clouds from different suppliers. However, the word hybrid is pretty scary when you get right down to it and there is no doubt that vendors like HPE recognize the need for them to shine the spotlight on simplifying the transformation. Stated as straightforwardly as I can – hybrid IT is at best really hard to do and at worst, a path to disaster. For most enterprises the expectation is that vendors will be able to take what they have today, combine it with products they would like to have in the future and after a fashion, mold them to where they appear as just a single system, albeit a cluster of sorts. Won’t happen – if the Parallel Sysplex aspect of the mainframe has taught us anything at all, adding availability to a system inherently not architected to run 24 x 7 is as much about smoke and mirrors as it is about fulfilling the wishes of individual system programmers who want an opportunity to give it a try.

Hybrid IT is going to amplify the importance of availability. And what it will not do is be a substitute for 24 x 7 – by this I mean, give data center managers a way by which they can opt out of providing services on a 24 x 7 basis. “We are working with new technology here, mind you, and it is a little flakey so yes, expect outages,” will be the likely response from a data center manager. “We have negotiated with our cloud vendor for backup with redundant compute and storage capacity but we are still working out the bugs and vendors aren’t willing to sign SLAs that demand zero downtime. Can’t be done!” If you read blog posts, as I do, and work with other writers and analysts from across the industry, you will be familiar with how they almost always dumb down the issue of availability and it really all comes back to no, they can’t tell you how 24 x 7 – permanent availability, if you like – could be done. And the upshot is that they are all discounting the value proposition of 24 x 7, informing all they come in contact with that, like real time, there is considerable wiggle room so don’t be too hard on yourselves if you fail from time to time.

As I look back on ten years of blogging and on all that has been addressed in these posts, in a way it seems so strange that I have to write a post about availability and of how no other vendor can provide 24 x 7 support for any of their systems. As surprising as this may seem to many of us, after these ten years we as a community really haven’t done a very good job of extolling the virtues of this really important contribution to the value proposition of a solution. And yet, I am one of those who simply do not like to interact with an application or more likely these days, an app, and find that it is not available for some reason. Hopefully I will still be posting for another ten years and I just have to say it – will I be writing yet another follow-on post to this one and will we still find an industry as blasé about availability as some analysts are today? 

Friday, August 18, 2017

New home, new office; time to take stock …

Revisiting the topic of scalability reminds us about just how flexible NonStop systems continue to be and with enterprises preparing for ever increasing data volumes, having an out of the box system that readily scales will prove to be a big bonus ...

After two plus months on the road, meeting with many members of the NonStop community as we drove almost 6,000 miles in the company command center, it’s a good feeling to once again be able to say you have a permanent address. At least, as permanent as anything can be these days, given the nature of the business we are in – content generation can be done almost anywhere but even so, having a residence that could at best only be considered temporary certainly came with many distractions. On the other hand, we did stop in at N2TUG as well as 2017 HPE Discover and we did get to meet with clients in southern California.

To directly answer questions I have received of late our new residence is just a little north of the Niwot address where we used to reside. In fact it is just to the east of Ft Collins, home to a number of executives and managers of HPE, and East Harmony Rd is now just two exits further up the highway. In many ways we feel as though we are closer to Cheyenne, Wyoming, than we are to Denver, Colorado. However, having scaled down considerably, the issue of scale has come up a lot of late and for the NonStop community, scalability remains a huge positive when it comes to why applications are still running on NonStop. Perhaps not so much scaling down, as in our case, but much more likely about scaling up, naturally!

You can play scales on the piano all day and you can scale new heights as a mountain climber, but when it comes to computing, dimensions are involved; you can scale up and you can scale out. The shared-nothing architecture of NonStop is perhaps the best example of a computer architecture that supports almost unlimited scale-out properties and with as much focus as there is today on cloud computing, it is this scale-out attribute that generates much of the discussion.

Expressed as simply as I can, you want to embrace cloud computing to have no practical limits on the resources available to you – isn’t that what NonStop has been providing for decades? Scale up and scale out are familiar terms for NonStop users but they remain very important attributes of NonStop systems. “When you think of scalability as an architect, you must consider both vertical and horizontal scalability,” NonStop consultant Dean Malone wrote in a recent email exchange. “Vertical scalability is the ability to add more and more processors and storage to a given server whereas horizontal scalability is the ability to add more servers to expand processing capacity.”
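Dean’s distinction can be sketched in a few lines of Python (a toy model with invented numbers, not a NonStop benchmark; real-world scaling is rarely perfectly linear):

```python
def scale_up(tps_per_cpu: int, cpus: int) -> int:
    """Vertical scalability: add more processors to a single server."""
    return tps_per_cpu * cpus

def scale_out(tps_per_server: int, servers: int) -> int:
    """Horizontal scalability: add whole servers to the system.
    A shared-nothing architecture keeps this close to linear."""
    return tps_per_server * servers

server_tps = scale_up(100, 4)       # one 4-CPU server, hypothetical 100 tps/CPU
print(scale_out(server_tps, 4))     # capacity of a 4-server system
print(scale_out(server_tps, 8))     # double the servers, double the capacity
```

The point of the shared-nothing design is that the second function stays honest: with no shared memory or disk to contend for, adding servers (or CPUs to a NonStop system) adds capacity without stopping the application.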

During the stock market melt-down on Black Monday, October 19, 1987 – the largest one-day crash on record, with the market losing more than 20% of its value in a single session – trades were flying thick and fast, with volumes way outside anything previously forecast for the New York Stock Exchange. The Exchange was running on NonStop at the time, so NonStop engineers simply kept inserting more and more CPUs and the NonStop system just kept on running. Not a hitch as it scaled up to handle the huge volumes of the day. The Exchange only called a halt to trading when the continuous display replicating the ticker tape of old simply couldn’t keep up with all the action. We often reference “elasticity of provisioning” when it comes to clouds but in this case, the NonStop engineers inside the data center demonstrated such elasticity more than thirty years ago!

Almost unlimited scale-out capabilities? Anyone who has sat through the presentation by the OmniPayments team of late has seen referenced an instance of OmniHub that spans 3,000 CPUs making up a single system and, to the best of my knowledge, no IBM mainframe of this size has ever been sold. For a brief period of time, however, IBM did talk about initiating thousands of Linux workloads on a single System z. I recall it was back when the z10 was first announced and the number of Linux workloads was 4,000+, but I am sure someone will give the exact number as it could have been much higher, like 40,000! The fact remains, however, that there is a difference between running real-world applications and providing simple demonstrations. As OmniPayments’ CEO, Yash Kapadia, so often reminds me, should OmniHub ever be offline for any reason, the multimedia message services for one entire country would come to a grinding halt.

Stock exchanges and telco systems aside, the industry is about to head into essentially uncharted waters – the user experience (UX), or what is increasingly being called the customer experience (CX) and which, according to one source I checked, is the “product of an interaction between an organization and a customer over the duration of their relationship,” is placing new demands on our systems. Call it the intersection of transaction processing with real-time analytics, and suddenly your requirements for both compute power and storage become significant unknowns. Just how do you scale, and is the cloud the only solution that comes to mind?

In his latest post to the Striim blog, The Rise of Real-Time Data: How Striim Helps You Prepare for Exponential Growth, Striim Cofounder and CTO, Steve Wilkes, writes about these upcoming demands on our systems. “The predicted huge increase in data sources and data volumes will impact the way we need to think about data. The key takeaway is that, if we can’t possibly store all the data being generated, ‘the only logical conclusion is that it must be collected, processed and analyzed in-memory, in real-time, close to where the data is generated.’” For the current users of NonStop systems there is no escaping the message here. To remain in business and be competitive, analytics will be playing an increasingly important role in how you support CX, so enterprises will need systems that can truly scale to meet these new demands for compute power and storage.
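Wilkes’ point – that data which cannot all be stored must be analyzed in memory, close to where it is generated – can be sketched in a few lines. This is a minimal illustration of the idea, not Striim’s API; the event shape and window size are assumptions made for the example:

```python
# Sketch of bounded, in-memory, real-time analysis: keep only a sliding
# window of recent readings and compute over it as events arrive, rather
# than storing every event for later batch processing.

from collections import deque


class SlidingWindowAverage:
    """Hold only the last `window_secs` of readings in memory."""
    def __init__(self, window_secs=60.0):
        self.window_secs = window_secs
        self.events = deque()  # (timestamp, value) pairs, oldest first

    def add(self, value, now):
        self.events.append((now, value))
        # Evict anything older than the window: storage stays bounded
        # no matter how fast events arrive.
        while self.events and now - self.events[0][0] > self.window_secs:
            self.events.popleft()

    def average(self):
        if not self.events:
            return None
        return sum(v for _, v in self.events) / len(self.events)


w = SlidingWindowAverage(window_secs=60.0)
for t, v in [(0, 10), (30, 20), (90, 30)]:  # the t=0 reading ages out at t=90
    w.add(v, now=t)
print(w.average())  # only the t=30 and t=90 readings remain -> 25.0
```

Real streaming platforms add distribution, fault tolerance and far richer operators, but the underlying trade is the one Wilkes describes: bounded memory and immediate answers in exchange for not retaining the raw firehose.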

Scaling has also been on the mind of Paragon’s CEO, Jim Perry. Testing may not be a topic we always associate with scale, but when it comes to the testing of our applications and its automation, the anticipated additional work required in support of CX will necessitate the development of even more applications, and these additional applications need to be subject to automated testing. And much more, according to Perry: today it is all about continuous integration, continuous delivery and continuous testing in order for your enterprise to deliver quality code faster. For Paragon, the challenge that comes with scaling has as much to do with the people as it has with the tools, as Paragon sees the enterprise looking to scale the business in a manner that determines whether or not it stays in business.

Scalability has always been an important attribute of NonStop systems. From the earliest days of Tandem Computers it was all about availability, scalability and data integrity, and nothing really has changed. Perhaps today we associate data integrity with security, and that’s OK, but as for availability and scalability, the NonStop user is perhaps best served by the inherent architecture of NonStop that allows almost unlimited scalability. While it is true that no enterprise has come close to hitting the size limits of a NonStop system when clustered to the maximum, knowing that there are already users out there with 3,000 processors in their system has to be encouraging.

Scalability may indeed be the “special sauce” of NonStop today, even if it is overlooked at times, overshadowed by the availability aspects of NonStop systems. However, every enterprise wants to grow its business in a manner that simply doesn’t break the bank and, for NonStop users, it has always been a simple process to add more CPUs – indeed, the architecture of NonStop assumes that the system has been populated to the max with CPUs and is checking to see if they have arrived; cool! As for the scaling down of our home and office, well, we may be living smaller but we are still thinking big. And that is something that will not change any time soon!

Tuesday, August 8, 2017

Our colorful lives!

Being on the road as much as we have of late has given us time to talk to members of the NonStop community and it’s been a very interesting, indeed colorful, ride so far …

There is nothing like seeing the first signs of home appearing on the horizon. With all that has transpired these past couple of months, crossing the state line and seeing the tourist-promotion billboard declaring entry into Colorful Colorado was welcome indeed. I am going to leave it to postings to other sites to cover all that has transpired while on the road these past two months, but suffice it to say that it was most interesting, with challenges coming at us every day. Sounds an awful lot like a typical day inside any enterprise data center: no matter the plans, there is always something unexpected taking place.

It has been on these travels that I have come to truly appreciate the need to have multiples of just about everything. Suddenly, primary and secondary are no longer abstract terms, as even the best-prepared plans go right out the window when unexpected situations develop. For me, the big item I have been covering is the real-world experience of those enterprises embracing hybrid IT; in particular, those enterprises that include NonStop in their plans.

So, let me cut to the chase – I have yet to come across any enterprise that has gone down this path, or to talk to data center managers tackling the finer details of having mixes of traditional and cloud environments supporting anything apart from pilots and prototypes. So, has something gone awfully wrong, and are the reports coming from industry analysts overstating reality?

To be fair, I wasn’t expecting to come across a sizable number of NonStop users kicking off such projects and yet, I have become somewhat curious about what is happening out there in today’s data centers where NonStop has an established presence. Members of the NonStop community that I have met through the years have proved to be colorful individuals, not slow to form opinions and yet, it would seem many of them are looking at their peers within the community running applications on NonStop to see who blinks first. Or not at all!

The promise of NonStop participating in hybrid IT, on the other hand, has huge potential upside; not just for those enterprises already convinced of the merits of NonStop but for the industry as a whole. Who wouldn’t want better availability, scalability and, yes, security? My point here is that all too often the messages we develop in support of a product or service have expiration dates, but when it comes to the key attributes of NonStop there are no expiration dates. Have you ever seen a use-by date on a NonStop system?

It isn’t just the colorful characters we come across within the NonStop community that help set directions but the general mood of the community as a whole. It’s as if collectively we have all been holding our breath, waiting for a breakout season for NonStop to happen soon. Don’t get me wrong: you don’t have to be colorful to make a contribution. What I am really trying to convey is that after all these years there are still many individuals who champion NonStop on every occasion.

For them there is absolutely no reason on earth why any mission-critical application shouldn’t turn to NonStop as the go-to, first-choice system for deployment. However, what is taking place is a leadership transformation across the vendor community, as many of the baby boomers begin to ease up and take a back seat to new leaders. Whether CEOs or Managing Directors, have you noted just how many new folks have come to help companies we know so well over the last year or so?

If industry and financial analysts can be accused of overstating reality, and the progress being made toward embracing hybrid IT by major enterprises is more snail-like than rabbit-like, should we start to think that perhaps we have, yet again, come up with a fantastic technology and a terrific answer when it comes to cost containment at a time when the problem isn’t all that bad? Are we at a moment in time when it all sounds good but no, it’s not for us just yet?

In other words, has the “Gucci smart brigade Data Center” not struck as loud a chord with data center managers as the “Bonds t-shirt Data Center?” (For non-Australians, you can substitute any local brand; Fruit of the Loom easily applies!) Point is, does a glamour solution hold up when all we need is for some pretty basic bread-and-butter solutions to work better? Wouldn’t it be nice if we could easily connect our silo-ed applications in a way that gave us a hub-and-spoke model where the hub never failed?

Oftentimes I fret over just how caught up we can become in what’s new rather than what works; with exceptions, rather than with what is universal. NonStop at its heart is fundamentally anything but exceptional when it comes to what NonStop runs on – it was always conceived as a solution to a problem of quality and the unexpected. Back in the 1970s no hardware vendor would consider talking about their infallibility, but rather talked up the RAS support they had, the wonderful service staff they had on hand and even the backup sites you could turn to if your data center went south!

And NonStop took all this into consideration and continued to work, 24 x 7, no matter what fell apart. In other words, it was designed to keep working when all else failed. And there is a reason why NonStop talked up take-over versus fail-over: NonStop wasn’t going to wait for a disaster but rather stepped in to head off disaster once it detected normalcy was being compromised.

There has been a lot of news of late about the success or otherwise of cloud vendors. IBM is talking up its revenues from cloud computing offerings even as Microsoft has detailed how, with Azure, it is now making more money from clouds than Amazon, even as industry and financial analysts endorse Amazon as the leader. It’s hard not to apply the label “colorful” to someone like Amazon’s CEO, Jeff Bezos, who for a brief couple of hours just a few days ago had more money than Microsoft’s Bill Gates, but for the enterprise, all the color drains from their visage once we get past thoughts of clouds for anything other than select IaaS opportunities.

Perhaps the biggest potential game changer is the availability of Azure for the enterprise to power its own on-premise cloud. This is where I see IBM having difficulty competing over the long haul, as enterprises equate cloud with x86 and there really isn’t any enthusiasm in support of Power. Given this, and given that Azure will support Hyper-V, it is a little more Gucci than it is Bonds.

According to what Microsoft is telling enterprises today, “The Hyper-V server role in Windows Server lets you create a virtualized server computing environment where you can create and manage virtual machines. You can run multiple operating systems on one physical computer and isolate the operating systems from each other. With this technology, you can improve the efficiency of your computing resources and free up your hardware resources.” Key here, for the NonStop community, is running multiple operating systems on one physical computer and isolating (the impact, good or bad) the operating systems from each other. With this, the prospect of firing up a virtualized NonStop platform becomes rather obvious – and can be given serious consideration whether it is Azure on-premise or Azure driving a cloud somewhere.

Here is where it is all headed. Those colorful individuals championing NonStop within their enterprise are going to arrive at the data center one day to find their beloved hardware out on the loading dock. In its place: aisle after aisle of servers racked to the ceiling, with nothing left to do other than fire up virtualized NonStop as just another Azure-supported operating system; the difference being that, unlike its peers running alongside NonStop, it can keep running the mission-critical applications the enterprise depends upon when they all falter for one reason or another, possibly just an automatic upgrade pushed to them by none other than Microsoft. Yes, it will be NonStop that makes Microsoft look good running some very important applications.

Calamities are always present and disaster is just a few keystrokes away. In our world of highly networked systems, it’s good to know we will be able to turn to one operating system that knows when to take over and not just wait for failure. It’s very much unique to NonStop, and it’s hard not to be impressed with this model, developed at a time when all hardware was flaky and unreliable. Kind of sounds very much like today, doesn’t it? And here’s the good news: virtualized NonStop on Azure and Hyper-V is on the roadmap and likely being tested in the labs of NonStop development even as I type these last few words. Colorful? Well, I can’t think of anything more colorful than seeing a lot of red faces among enterprise executives when they see what today’s NonStop can achieve supporting their business.

Friday, July 28, 2017

Moving forward - transformation and virtualization makes testing of the business logic even more critical

When we think of virtualization and the coming of clouds and as we consider all that may be involved in transforming to these hybrid combinations incorporating the traditional with the very new, how often does the testing of our applications come to mind?

There have been times these past few days when events have reminded me of practices and disciplines that dominated our discussions in former times. I had the misfortune of breaking things and working with insurance companies, and I was left without access to more modern methods of communications, to the point where I was asked if I could possibly find a fax machine so I could receive a fax.

It was in the early 1980s when the vendor who employed me back in Sydney, Australia, installed a fax machine in my office and I no longer had to take the long walk over to the telex machine, where I would then spend hours preparing a paper tape for transmission back to my head office in Richmond, Virginia. In many ways it was a sad occasion, as I had really mastered the telex machine, and yet it was progress, given how easy it became not only to transmit the written word, but pictures, charts and graphs as well!

Fast forward to today and the power of the mobile phone is undeniable. We can communicate with anyone we want to, at any time, about anything at all. In a couple of recent conversations the talk has led to consideration of whether the mobile phone was about to fade from the scene, to be replaced by even more spectacular technology and whether or not we were entering essentially an era of magic. How else can you explain away the knowledge so many businesses have about everything we do? And yet, even with the most advanced forms of communication there will still be a need for apps to support inquiries as well as many different models used for purchases and other financial transactions.

Point is – we still write code, and as much as AI continues to advance, there remains a need for humans to stay very much involved in stringing together the logic that drives decisions for success. When we talk about clouds we talk about the elasticity of provisioning that addresses both the needs we have for data storage and for business logic. But here’s the rub – we are working diligently to be able to store vast amounts of data even as we continue to write logic practically unchanged from how we did it in the past, albeit a lot more quickly, of course.

Let me take you to an earlier time, decades ago. In fact, many decades ago, to when we first started coding the machines that marked the beginning of our adventure with computers. I was recruited by IBM on the campus of Sydney University at a time when I was becoming very bored with academic life. At the time I wasn’t really aware of the implications of my decision to participate in a series of tests the University sponsored, but it was only a matter of months before I found myself on another campus; this time, it was the operations center for a steelworks in Wollongong, Australia.

The year I was recruited was the southern hemisphere’s summer of 1969 and my first day on the job was in 1970, so effectively I have been looking at code for almost five decades. And the fundamentals haven’t changed, just the timeframes. Ambitions? Well, my first job was to develop applications in support of a new steelworks that was being built, but along the way I was tinkering with the operating system, as for a period of time the IBM mainframes the steelworks purchased didn’t have enough main memory to run any IBM operating system, so we pretty much came up with our own – just a couple of lines of IBM 360 assembler code together with a bunch of macros.

Timeframes? Well, this is where the dramatic changes can be seen, perhaps more so than when it comes to chip power and Moore’s Law. I was writing just one application a year – perhaps a little bit more. I grabbed a coding pad and wrote assembler instructions for the logic I was pulling together to solve a business problem. Pages and pages of assembler code were then submitted to the data entry folks, who oftentimes took a week or more before they returned to me the coding pages along with a box of punched cards. I kept running these decks through the assembler until I got a clean assembly, at which time I took the object deck and began to test.

As a matter of practice, we always left an addressable piece of storage (of about 100 to 250 bytes) that, if my logic went awry, I could branch to, throw in a couple of correcting statements, and return to the mainline code. Ouch – yes, almost every production application was supported by a series of supplementary corrective cards that steered the logic back to where it needed to be without having to reassemble the whole application or, worse, send the coding pages back to the data entry team.

Testing? For my applications, which supported what we called the “online application,” I would often resort to booking solo time on the mainframe and dialing in “single cycle” so I could manually step through each instruction and watch the results via the console display lights that changed with the execution of each instruction. Productivity? Wow – I could debug my programs more quickly than others working with me who preferred to go home at the end of the day. The company had enough programmers to complete the implementation of the new application for the steelworks about to be commissioned, so it seemed reasonable to function this way. Looking back at what we did all those years ago, I am not surprised that applications often stopped but rather that any of them ran successfully at all!

Now let me fast forward to the practices of today – attempting to develop and test applications, and then ensure they are maintained, the same way as we did all those decades ago is not only impossible but runs contrary to the always-on, always-connected 24 x 7 world we live in as we remain tethered to our mobile devices, plugging away at the latest app. Languages and development frameworks have changed. We don’t simply write code; we pull code from multiple sources and practically assemble a program that in turn is just a part of an application designed to address a specific business need.

Providing defect-free applications at a fair cost, particularly when these applications have to accommodate today’s multi-vendor and hybrid environments even as they have to be aware of the many regulatory and compliance mandates for each industry, needs something a whole lot more sophisticated than simple access to a system that can be set to single cycle! I was reminded of this only a few days ago when I had a conversation with the folks at Paragon Application Systems. These are the folks who have developed the premier testing solution for the payments industry.

“It’s all about continuous integration, continuous delivery and yes, continuous testing,” I was told by Paragon CEO, Jim Perry. Integration, delivery and testing form a never-ending cycle, for the life of the program and application, performed in a seamless manner whereby the state of the program or application is always current and correct. “The growth of our global economy has created payment systems that have grown too intricate and change too quickly for any organization to risk deployments without frequent, comprehensive regression testing. No company can hire enough people to manually perform the testing necessary in the time available within a release cycle. Automation of the software build and delivery cycle, as well as test execution and verification, is required.”
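The kind of automated regression testing Perry describes can be sketched in miniature with Python’s standard unittest module. To be clear, the fee calculation and its rules below are invented for illustration; Paragon’s actual tooling is payments-specific and vastly richer than this, but the principle is the same – every build runs the suite, and any behavior change fails the pipeline:

```python
# Minimal sketch of automated regression testing: codified expectations
# that run on every build, with no manual checking involved.

import unittest
from decimal import Decimal


def transaction_fee(amount: Decimal) -> Decimal:
    """Hypothetical fee rule: 1.5% of the amount, with a 0.30 minimum."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    fee = (amount * Decimal("0.015")).quantize(Decimal("0.01"))
    return max(fee, Decimal("0.30"))


class TransactionFeeRegression(unittest.TestCase):
    # Each test pins down one piece of agreed behavior; a code change
    # that alters any of them is caught before it ever reaches production.
    def test_percentage_fee(self):
        self.assertEqual(transaction_fee(Decimal("100.00")), Decimal("1.50"))

    def test_minimum_fee(self):
        self.assertEqual(transaction_fee(Decimal("1.00")), Decimal("0.30"))

    def test_rejects_non_positive(self):
        with self.assertRaises(ValueError):
            transaction_fee(Decimal("0"))


# Run the suite programmatically, as a CI server would on every commit.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TransactionFeeRegression)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("regression suite passed:", result.wasSuccessful())
```

Scale that idea up to thousands of scenarios across simulated endpoints, networks and hosts and you have the shape of the continuous testing Perry is talking about – something no team could ever hope to perform by hand within a release cycle.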

Manually perform testing? Grown too intricate? For the NonStop community there have always been concerns about the business logic bringing a NonStop system to a halt. And for good reason! Fault-tolerant systems have been designed to keep processing even when facing single points of failure, but oftentimes poorly implemented and tested business logic can get in the way! Unfortunately, it’s about to get a whole lot worse, as testing not only has to ensure the application is defect-free but also that the underlying platform, now being virtualized, is configured in a way that lets NonStop applications continue being NonStop.

We have virtualized networks and we have virtualized endpoints, and this has helped considerably with automating our test processes, but now the platform itself is being virtualized and this is a whole new ball game for many enterprise IT shops. And this makes the need to have something like Paragon on hand even more important – we have stopped manually checking anything these days, so we cannot start now. In the coming months, as we continue to look at the transformation to hybrid IT, to virtualization and to software-defined everything, I am planning on devoting more column inches to testing, as all too soon our inability to thoroughly test what we are turning on in production could bring many a data center crashing down.

If you haven’t yet looked at Paragon, you may want to visit the web site and download a couple of papers, as I have to believe that for those of you in the NonStop community who are only vaguely familiar with how testing has changed, particularly when it comes to testing for payments solutions, it may very well be an opportunity to rethink just how comfortable we are with the processes we have in place today. And to wonder, too, how anything worked at all back in the days when it was all performed manually!