Thursday, August 31, 2017

Changing world of communications …

Our desks may be cluttered with phones, laptops, tablets and PCs, and there is still a need for cables, but we have come a long way with comms. For the NonStop community, there is now so much choice when it comes to media focused on NonStop!

Where do we go for news today and how do we keep in touch with colleagues and business partners worldwide? Once, these were very separate topics – we had phones, email, faxes and newspapers. When the fax machine arrived near my office at Nixdorf Computers, back in the early 1980s, I could see my skills as a telex machine operator go out the window (and after all the time I had put into becoming a skilled operator), but on the other hand, I started faxing everyone I knew! When it came time to consider working for Tandem Computers I heard about its Tandem-based email system and I was hooked. So much so that on my first day in the Tandem offices at North Sydney, I accidentally sent an email to everyone in the company, only to recoil in shock as it contained some sensitive material. So I cancelled the email …

And that was when I first came to know Ron LaPedis and I must admit, it wasn’t a pleasant conversation. Overloading the Tandem and essentially bringing the Tandem email system to its knees wasn’t the best way to introduce oneself to the company. However, we have come such a long way since then. There is Skype, Hangouts, WebEx, GoToMeeting and a bunch of other options, including private deployments of the above, to choose from when we wish to give a webinar. It’s almost as if we have replaced a series of simple communications with complexity on another level, so much so that whenever I am called upon to participate in anything online, I spend the ten minutes beforehand testing and preparing!

Working for nearly all of my business life at Tandem Computers, I was involved one way or another with Systems Network Architecture (SNA). For nearly three decades this had been the premier communications solution for enterprises globally, and even though IBM owned the architecture and dictated the protocols and supporting services, you couldn’t argue against SNA’s popularity. Furthermore, while it was the presence of email that sealed the deal when it came to joining Tandem Computers, the industry recognized that, apart from IBM itself, Tandem had the best SNA product offerings of any vendor. Cool! However, it was only a matter of a few years later that I had to oversee the reduction of investment in SNA offerings on Tandem as the company shifted R&D toward the emerging TCP/IP, and this led to numerous traumatic encounters within the Cupertino campus.

However, it has been the industry’s acceptance of TCP/IP to underpin all of the communication we depend upon that has led to the explosion of material we can access today. Talking with my good friends at IR about the work they have done to fine-tune Prognosis to better monitor Voice over IP (VoIP) and the IP telephone networks that are blossoming within enterprises worldwide is but one example of what comes with rationalizing networks. It was the fodder for many articles published in the 1980s, when predictions about the integration of voice and data were all the rage, but now, some three decades later, it’s almost a given that communications will be carried over IP whether data, voice, video or simply text messages.

When I turn to my iPad, among the very first things I do is check on the news published by The Sydney Morning Herald, even though it has been a very long time since I last resided in that beautiful city. No matter where you live or where your interests lie, you can always check the news, as publications worldwide have gone digital and can be universally accessed from any smartphone or tablet.
Of late I have written about anniversaries, as at this time of the year they just seem to happen. My last post to this blog was about wrapping up ten years of posts to this social media communications channel. A little earlier I had started my ninth year of posting to the Pyalla social blog, which of late is attracting an even wider audience as some of the topics covered in the posts touch on current business issues. Without the ease with which you can create and publish content, I doubt I would have stayed the course, and I would perhaps have had few options open to me when launching Pyalla Technologies.

I still recall the reception I received at one of the last European ITUG Summits held in Brighton, alongside the English Channel, where I gave a keynote presentation only to have colleagues come up to me with counts of how many times I referenced blogs! Ten years ago there were few blogs being published with none focused on NonStop. How the world has changed and there should be no excuse for any member of the NonStop community not to be well-informed about everything that is taking place in the world of NonStop as many such blogs exist today!

Getting balanced information is always an issue – is the source of the story legitimate? Was the information cross-checked? Fortunately, the NonStop community has such a vocal core of NonStop adherents that it is very rare indeed for anything misleading or simply dead-wrong to make it into the public domain. It is this active peer review process that ensures what we read is both accurate and useful. Even opinion columns, of which this blog is typical, attract attention very quickly when the storyline strays a little too far off topic or descends into murky waters with little understanding of the subject matter. So too is the case today with the digital publication, NonStop Insider.

Hard to believe, but with the upcoming issue the first full year of publication has been completed, with no signs that there will be any letup in the topics being covered. With NonStop Insider Year 1, Issue #12, about to be wrapped up, sights are already set on subject matter for inclusion in the first issue of Year 2. Much of the credit has to go to those NonStop community members who have worked with the editorial team on submissions. While almost everything makes it into an issue, a careful eye is kept on the directions some contributors take – after all, the overriding theme for NonStop Insider is NonStop – and when it comes to communicating anything at all on NonStop, there is a very clear bias favoring positive storylines.

Credit too must be given to the management team at TCM Solutions. They were present day one when the decision was made to produce NonStop Insider, and without their logistical support together with their technical knowhow, NonStop Insider would still be an idea sketched on a whiteboard. At every step of the way they have been supporting this digital publication, and many of their ideas are now captured in recent issues – and there are more good ideas about to be implemented. As it turns out, working with TCM has proved to be a whole lot of fun as well, and that too helps with putting in the effort to ensure a quality publication reaches the NonStop community. The idea for “NonStop CEOs have their say” was first articulated by TCM and we are pleased with how well this has been received among CEOs of NonStop vendors.

The world of business communications has changed considerably through the decades – who remembers waiting anxiously each month for their copy of Datamation to arrive in their mail slot (and yes, do we even have mail slots anymore)? To say we have gone digital may not tell the whole story. When it comes to providing “useful and important information” we have come so far, looking back at how little information was available to us just a couple of decades ago. For a highly specialized product like NonStop to have attracted the variety of publications that it has of late – who remembers wishing they could read anything at all about NonStop in the “papers”? – is remarkable but pleasing all the same. And if as yet you haven’t taken a look at NonStop Insider or become a subscriber, then it is not too late to check it out for yourself at: http://www.nonstopinsider.com/

Sunday, August 20, 2017

Celebrating a full ten years of posts …

Back on August 20, 2007, coming off my time as ITUG Chairman, I had no idea that I would be still writing posts for this blog. But here it is, ten years later, and the fundamental message of NonStop has changed very little …

I have to admit that this post follows quickly on the heels of another post published just a matter of a few days ago. But tradition remains an important factor; I have written a post to this NonStop community blog, Real Time View, every August 20 for ten full years. Well, almost every August 20 – checking the archives, there were a few posts that missed the exact date. For me, however, precision is important. It’s rather binary when you think about it: it either happened or it didn’t. I was reminded about this during a conference call with a well-known industry analysis organization when the discussion turned to 24 x 7 and the way it was treated; all systems today run 24 x 7, don’t they?

Right now it feels as though I have been working 24 x 7 as I finally return to my office after almost ten weeks on the road. After living and working from our company command center through thick and thin, as we experienced outages of unimaginable proportions, it is good to be sitting behind a desk in a room where there is a sense of permanency. This office isn’t going anywhere, nor is my desk or my computer. However, to suggest to you that I have been working around the clock, 24 x 7, just to complete this post may hold some truth, given this topic of 24 x 7 has been on my mind for a long time, but in truth, there has been a lot of down time. The office may be a permanent feature of the home, but I will still come and go as business needs dictate and that will rarely ever be 24 x 7.

The almost blasé dismissal of the value proposition of 24 x 7 was a wake-up moment for me. An epiphany of sorts, actually! Have we arrived at a point in time where business continuity isn’t all that important anymore? Across the NonStop community, the value that comes with being able to run our mission critical applications continuously, 24 x 7, ensuring our customers can reach the applications they need any time they want, is sacrosanct. After all, with all that we know today about NonStop, would it still interest us if it didn’t run 24 x 7? In my last post, New home, new office; time to take stock … I extolled the virtues of scalability, and as much as I like the value scalability brings to any discussion of a system’s value proposition, it also aids availability for most stakeholders in the NonStop community, as you do not need to stop your operation to add capacity!

In being blasé about 24 x 7 this week, the analyst with whom I was working did think about the value proposition 24 x 7 provides, but then added that, truthfully, it is not feted as strongly as it used to be: even now, just as with real time, the actual time implications of both 24 x 7 and real time are treated as similar. A few minutes here, perhaps an hour once in a while, surely doesn’t matter in the grand scheme of things. No down time means just that – no down time. Likewise, continuously available sounds a lot like permanent availability, and readers of posts to this blog will have come to see that I have made reference to permanent availability many times in the past. Marketing types don’t like this term, as it has a sense of no wiggle-room about it – and seriously, how can you be sure nothing will fail? However, that’s the whole point – NonStop users keep purchasing NonStop systems because they provide permanency: their clients will always be able to interact with them.
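To put some numbers behind that “few minutes here, perhaps an hour” shrug, it helps to work out how much downtime each level of availability actually permits over a year. The “nines” below are generic industry shorthand, not IDC’s availability-level definitions – just a quick back-of-the-envelope sketch:

```python
# Downtime budget implied by each common availability target.
# These "nines" are generic industry shorthand, not IDC's AL definitions.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960 minutes

targets = [("three nines", 0.999), ("four nines", 0.9999),
           ("five nines", 0.99999), ("six nines", 0.999999)]

for label, availability in targets:
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{label:11s}: ~{downtime_min:8.2f} minutes of downtime per year")
```

Even “five nines” still concedes a little over five minutes a year – which is exactly why “no down time means no down time” is a stronger claim than any percentage.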

Like my office and the sense of permanency it affords me, NonStop systems can be relied upon to be there, even if my own access to the applications is casual at best. However, what I am pushing back on these days are terms like “near real time” and “highly available is good enough!” The origins of real time were in missile detection and interception, and there could be nothing vague about that. If you didn’t respond in time and neutralize the threat, you simply ran out of time. Nothing real time about pressing the launch button too late to protect a designated target! Again, this is all well-known to the NonStop community and yet, if industry analysts are becoming blasé about this key attribute of NonStop systems and the solutions that have been designed to run on NonStop, then others will follow, and it is most definitely the beginning of a slippery slope leading to enterprises no longer associating any value whatsoever with continuous availability.

A couple of years back, IDC did a pretty good job defining different availability levels, with availability level four (AL4) representing the very pinnacle of availability. The only vendors to have systems awarded AL4 were HPE with NonStop and IBM with special Parallel Sysplex configurations of the mainframe. The inclusion of IBM really was for appearances’ sake – IDC just didn’t want a level of availability where only one vendor had achieved that goal. On the other hand, simply building a Parallel Sysplex configuration for a reasonable cost that supported AL4 has proved elusive for almost every mainframe installation. Adding more components – more moving parts, if you like – to an architecture that was never intended to support operations 24 x 7 is hazardous at best and more often proves foolish at worst. If AL4 could only be awarded to vendors with systems that provided the level of continuous availability IDC was describing out of the box, then NonStop would be the only participant.

There is a lot more to this story, mind you. Transformation to hybrid IT is all the rage right now, with HPE among the biggest supporters of hybrids – whether hybrids of servers and private clouds or even hybrids of clouds from different suppliers. However, the word hybrid is pretty scary when you get right down to it, and there is no doubt that vendors like HPE recognize the need to shine the spotlight on simplifying the transformation. Stated as straightforwardly as I can – hybrid IT is at best really hard to do and at worst, a path to disaster. For most enterprises the expectation is that vendors will be able to take what they have today, combine it with products they would like to have in the future and, after a fashion, mold them to where they appear as just a single system, albeit a cluster of sorts. Won’t happen – if the Parallel Sysplex aspect of the mainframe has taught us anything at all, it is that adding availability to a system inherently not architected to run 24 x 7 is as much about smoke and mirrors as it is about fulfilling the wishes of individual system programmers who want an opportunity to give it a try.

Hybrid IT is going to amplify the importance of availability. What it will not do is be a substitute for 24 x 7 – by this I mean, give data center managers a way to opt out of providing services on a 24 x 7 basis. “We are working with new technology here, mind you, and it is a little flakey, so yes, expect outages,” will be the likely response from a data center manager. “We have negotiated with our cloud vendor for backup with redundant compute and storage capacity, but we are still working out the bugs and vendors aren’t willing to sign SLAs that demand zero downtime. Can’t be done!” If you read blog posts, as I do, and work with other writers and analysts from across the industry, you will be familiar with how they almost always dumb down the issue of availability, and it really all comes back to this: no, they can’t tell you how 24 x 7 – permanent availability, if you like – could be done. The upshot is that they are all discounting the value proposition of 24 x 7, informing all they come in contact with that, as with real time, there is considerable wiggle room, so don’t be too hard on yourselves if you fail from time to time.

As I look back on ten years of blogging and on all that has been addressed in these posts, in a way it seems so strange that I have to write a post about availability and how no other vendor can provide 24 x 7 support for any of their systems. As surprising as this may seem to many of us, after these ten years we as a community really haven’t done a very good job of extolling the virtues of this really important contribution to the value proposition of a solution. And yet, I am one of those who simply do not like to interact with an application – or, more likely these days, an app – and find that it is not available for some reason. Hopefully I will still be posting for another ten years, and I just have to ask – will I be writing yet another follow-on post to this one, and will we still find an industry as blasé about availability as some analysts are today?

Friday, August 18, 2017

New home, new office; time to take stock …


Revisiting the topic of scalability reminds us of just how flexible NonStop systems continue to be, and with enterprises preparing for ever increasing data volumes, having an out-of-the-box system that readily scales will prove to be a big bonus ...


After two-plus months on the road, meeting with many members of the NonStop community as we drove almost 6,000 miles in the company command center, it’s a good feeling to once again be able to say you have a permanent address. At least, as permanent as anything can be these days, given the nature of the business we are in – content generation can be done almost anywhere, but even so, having a residence that could at best be considered temporary certainly came with many distractions. On the other hand, we did stop in at N2TUG as well as 2017 HPE Discover, and we did get to meet with clients in southern California.

To directly answer questions I have received of late, our new residence is just a little north of the Niwot address where we used to reside. In fact, it is just to the east of Ft. Collins, home to a number of executives and managers of HPE, and East Harmony Rd is now just two exits further up the highway. In many ways we feel as though we are closer to Cheyenne, Wyoming, than we are to Denver, Colorado. Having scaled down considerably, the issue of scale has come up a lot of late, and for the NonStop community, scalability remains a huge positive when it comes to why applications are still running on NonStop. Perhaps not so much scaling down, as in our case, but much more likely scaling up, naturally!

You can play scales on the piano all day and you can scale new heights as a mountain climber, but when it comes to computing, dimensions are involved: you can scale up and you can scale out. The shared-nothing architecture of NonStop is perhaps the best example of a computer architecture that supports almost unlimited scale-out properties, and with as much focus as there is today on cloud computing, it is this scale-out attribute that generates much of the discussion.

Expressed as simply as I can: you want to embrace cloud computing to have no practical limits on the resources available to you – isn’t that what NonStop has been providing for decades? Scale up and scale out are familiar terms for NonStop users, and they remain very important attributes of NonStop systems. “When you think of scalability as an architect, you must consider both vertical and horizontal scalability,” NonStop consultant Dean Malone wrote in a recent email exchange. “Vertical scalability is the ability to add more and more processors and storage to a given server whereas horizontal scalability is the ability to add more servers to expand processing capacity.”
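Dean’s horizontal dimension can be sketched in a few lines. What follows is a toy illustration only – hash-partitioning a workload across independent, shared-nothing “nodes” so that capacity grows simply by adding nodes; it bears no relation to how NonStop actually schedules work across CPUs, and the routing function is a deliberately simplistic stand-in:

```python
# Toy illustration of horizontal (scale-out) partitioning: each "node"
# owns a disjoint slice of the keys and shares nothing with its peers.

def route(key: str, node_count: int) -> int:
    """Pick the node responsible for a key (deterministic toy hash)."""
    return sum(key.encode()) % node_count

def distribute(keys, node_count):
    """Assign every key to exactly one node."""
    nodes = {n: [] for n in range(node_count)}
    for k in keys:
        nodes[route(k, node_count)].append(k)
    return nodes

keys = [f"txn-{i}" for i in range(1000)]
for node_count in (4, 8):  # "add more servers": the same work, spread thinner
    load = distribute(keys, node_count)
    busiest = max(len(v) for v in load.values())
    print(f"{node_count} nodes -> busiest node handles {busiest} keys")
```

The vertical dimension, by contrast, would mean making each node itself bigger – more processors and storage behind the same `route` destination – rather than increasing `node_count`.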

During the stock market melt-down on Black Monday, October 19, 1987 – the largest one-day crash, with the market losing more than 20% of its value in just that one day – trades were flying thick and fast, with volumes way outside anything previously forecast for the New York Stock Exchange. The Exchange was running on NonStop at the time, so NonStop engineers simply kept inserting more and more CPUs and the NonStop system just kept on running. Not a hitch as it scaled up to handle the huge volumes of the day. The Exchange only called a halt to trading when the continuous display replicating the ticker tape of old simply couldn’t keep up with all the action. We often reference “elasticity of provisioning” when it comes to clouds, but in this case, the NonStop engineers inside the data center demonstrated such elasticity more than thirty years ago!

Almost unlimited scale-out capabilities? Anyone who has sat through the presentation by the OmniPayments team of late has seen referenced an instance of OmniHub that spans 3,000 CPUs making up a single system, and to the best of my knowledge, no IBM mainframe of this size has ever been sold. For a brief period of time, however, IBM did talk about initiating thousands of Linux workloads on a single System z. I recall it was back when the z10 was first announced and the number of Linux workloads was 4,000+, but I am sure someone will give the exact number, as it could have been much higher, like 40,000! The fact remains, however, that there is a difference between running real world applications and providing simple demonstrations. As OmniPayments’ CEO, Yash Kapadia, so often reminds me, should OmniHub ever be offline for any reason, the multimedia message services for one entire country would come to a grinding halt.

Stock exchanges and telco systems aside, the industry is about to head into essentially uncharted waters – the user experience (UX), or what is increasingly being called the customer experience (CX) which, according to one source I checked, is the “product of an interaction between an organization and a customer over the duration of their relationship”, is placing new demands on our systems. Call it the intersection of transaction processing with real-time analytics, and suddenly your requirements for both compute power and storage become significant unknowns. Just how do you scale, and is the cloud the only solution that comes to mind?

In his latest post to the Striim blog, The Rise of Real-Time Data: How Striim Helps You Prepare for Exponential Growth, Striim cofounder and CTO Steve Wilkes writes about these upcoming demands on our systems. “The predicted huge increase in data sources and data volumes will impact the way we need to think about data. The key takeaway is that, if we can’t possibly store all the data being generated, ‘the only logical conclusion is that it must be collected, processed and analyzed in-memory, in real-time, close to where the data is generated.’” For the current users of NonStop systems there is no escaping the message here. To remain in business and be competitive, analytics will play an increasingly important role in how you support CX, so enterprises will need systems that can truly scale to meet these new demands for compute power and storage.
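The “process in-memory rather than store everything” idea can be sketched simply: keep only a bounded window of recent events and compute the aggregate on the fly, discarding raw readings as they age out. This is a minimal illustration of the pattern only, not Striim’s actual API:

```python
from collections import deque

class SlidingWindowAverage:
    """Rolling average over the last `size` events -- nothing older is kept."""
    def __init__(self, size: int):
        self.window = deque(maxlen=size)  # oldest events fall off automatically

    def add(self, value: float) -> float:
        """Ingest one event and return the up-to-date aggregate."""
        self.window.append(value)
        return sum(self.window) / len(self.window)

avg = SlidingWindowAverage(size=5)
for reading in [10, 12, 11, 50, 13, 12, 11]:
    current = avg.add(reading)
print(f"rolling average over the last 5 readings: {current}")  # -> 19.4
```

The point of the bounded `deque` is exactly Wilkes’ point: memory use stays constant no matter how many events stream through, because analysis happens as the data arrives rather than after it has all been stored.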

Scaling has also been on the mind of Paragon’s CEO, Jim Perry. Testing of our applications and its automation may not always be a topic we associate with scale, but the anticipated additional work required in support of CX will necessitate the development of even more applications, and these additional applications need to be subject to automated testing. And much more, according to Perry: today it is all about continuous integration, continuous delivery and continuous testing in order for your enterprise to deliver quality code faster. For Paragon, the challenge that comes with scaling has as much to do with the people as it has with the tools, as Paragon sees the enterprise looking to scale the business in a manner that determines whether or not it stays in business.

Scalability has always been an important attribute of NonStop systems. From the earliest days of Tandem Computers it was all about availability, scalability and data integrity, and nothing really has changed. Perhaps we now associate data integrity with security, and that’s OK, but as for availability and scalability, the NonStop user is perhaps best served by the inherent architecture of NonStop that allows almost unlimited scalability. While it is true that no enterprise has come close to hitting the size limits of a NonStop system when clustered to the maximum, knowing that there are already users out there with 3,000 processors in their system has to be encouraging.

Scalability may indeed be the “special sauce” that is NonStop today, even if it is overlooked at times, overshadowed by the availability aspects of NonStop systems. However, every enterprise wants to grow its business in a manner that simply doesn’t break the bank, and for NonStop users, it has always been a simple process to add more CPUs – indeed, the architecture of NonStop assumes the system has been populated to the max with CPUs and simply checks to see whether they have arrived; cool! As for the scaling down of our home and office, well, we may be living smaller but we are still thinking big. And that is something that will not change any time soon!

Tuesday, August 8, 2017

Our colorful lives!

Being on the road as much as we have of late has given us time to talk to members of the NonStop community and it’s been a very interesting, indeed colorful, ride so far …

There is nothing like seeing the first signs of home appearing on the horizon. With all that has transpired these past couple of months, crossing the state line and seeing the tourist-promotion billboard declaring entry into Colorful Colorado was welcomed, indeed. I am going to leave it to postings to other sites to cover all that has transpired while on the road these past two months but suffice to say that it was most interesting, with challenges coming at us every day. Sounds an awful lot like a typical day inside any enterprise data center as no matter the plans, there is always something unexpected taking place.

It has been on these travels that I have come to truly appreciate the need to have multiples of just about everything. Suddenly primary and secondary are no longer abstract terms, as even the best-prepared plans go right out the window when unexpected situations develop. For me the big item I have been covering is the real-world experiences of those enterprises embracing hybrid IT; in particular, those enterprises that include NonStop in their plans.

So, let me cut to the chase – I have yet to come across any enterprise that has gone down this path, or talked to data center managers tackling the finer details of having mixes of traditional and cloud environments supporting anything apart from pilots and prototypes. So, has something gone awfully wrong, and are the reports coming from industry analysts overstating reality?

To be fair, I wasn’t expecting to come across a sizable number of NonStop users kicking off such projects, and yet I have become somewhat curious about what is happening out there in today’s data centers where NonStop has an established presence. Members of the NonStop community that I have met through the years have proved to be colorful individuals, not slow to form opinions, and yet it would seem many of them are watching their peers within the community running applications on NonStop to see who blinks first. Or not at all!

The promise of NonStop participating in hybrid IT on the other hand has huge potential upside; not just for those enterprises already convinced of the merits of NonStop but to the industry as a whole. Who wouldn’t want better availability, scalability and yes, security? My point here is that all too often the messages we develop in support of a product or service have expiration dates but when it comes to the key attributes of NonStop there are no expiration dates. Have you ever seen a use-by date on a NonStop system?

It isn’t just the colorful characters we come across within the NonStop community that help set directions, but the general mood of the community as a whole. It’s as if collectively we have all been holding our breath, waiting for a breakout season for NonStop to happen soon. Don’t get me wrong – you don’t have to be colorful to make a contribution. What I am really trying to convey is that after all these years there are still many individuals who champion NonStop on every occasion.

For them there is absolutely no reason on earth why any mission critical application shouldn’t turn to NonStop as the go-to, first-choice system for deployment. However, what is taking place is a leadership transformation across the vendor community, as many of the baby boomers begin to ease up and take a back seat to new leaders. Whether CEOs or Managing Directors, have you noticed just how many new folks have come to lead companies we know so well over the last year or so?

If industry and financial analysts can be accused of overstating reality, and the progress being made toward embracing hybrid IT by major enterprises is more snail-like than rabbit-like, should we start to think that perhaps we have, yet again, come up with a fantastic technology and a terrific answer for cost containment at a time when the problem isn’t all that bad? Are we at a moment in time when it all sounds good, but no, it’s not for us just yet?

In other words, has the “Gucci smart brigade” data center not struck as loud a chord with data center managers as the “Bonds t-shirt” data center? (For non-Australians, you can substitute any local brand – Fruit of the Loom easily applies!) Point is, does a glamour solution hold up when all we need is some pretty bread-and-butter solutions that work better? Wouldn’t it be nice if we could easily connect our silo-ed applications in a way that gave us a hub and spoke model where the hub never failed?

Oftentimes I fret over just how caught up we can become in what’s new rather than what works; with the exceptions, rather than what is universal. NonStop at its heart is fundamentally anything but exceptional when it comes to what it runs on – it was always conceived as a solution to a problem of quality and the unexpected. Back in the 1970s, no hardware vendor would consider talking about their infallibility but rather talked up the RAS support they had, the wonderful service staff they had on hand, and even the backup sites you could turn to if your data center went south!

And NonStop took all this into consideration and continued to work, 24 x 7, no matter what fell apart. In other words, it was designed to keep working when all else failed. And there is a reason why NonStop talked up take-over versus fail-over. NonStop wasn’t going to wait for a disaster but rather, stepped in to head off disaster once it detected normalcy was being compromised.

There has been a lot of news of late about the success, or otherwise, of cloud vendors. IBM is talking up its revenues from cloud computing offerings, even as Microsoft has detailed how, with Azure, it is now making more money from clouds than Amazon – even as industry and financial analysts endorse Amazon as the leader. It’s hard not to apply the label “colorful” to someone like Amazon’s CEO, Jeff Bezos, who for a brief couple of hours just a few days ago had more money than Microsoft’s Bill Gates, but for the enterprise, all the color drains from their visage once we get past thoughts of clouds for anything other than select IaaS opportunities.

Perhaps the biggest potential game changer is the availability of Azure for the enterprise to power its own on-premise cloud. This is where I see IBM having difficulty competing over the long haul, as enterprises equate cloud with x86 and there really isn’t any enthusiasm in support of Power. Given this, and that Azure will support Hyper-V, on-premise Azure is a little more Gucci than it is Bonds.

According to what Microsoft is telling enterprises today, “The Hyper-V server role in Windows Server lets you create a virtualized server computing environment where you can create and manage virtual machines. You can run multiple operating systems on one physical computer and isolate the operating systems from each other. With this technology, you can improve the efficiency of your computing resources and free up your hardware resources.” Key here, for the NonStop community, is running multiple operating systems on one physical computer and isolating (the impact, good or bad) the operating systems from each other. With this, the prospect of firing up a virtualized NonStop platform becomes rather obvious – and can be given serious consideration whether it is Azure on-premise or Azure driving a cloud somewhere.

Here is where it is all headed. Those colorful individuals championing NonStop within their enterprise are going to arrive at the data center one day to find their beloved hardware is out on the loading dock. In its place, aisle after aisle of servers racked to the ceiling with nothing left to do other than to fire up virtualized NonStop as just another Azure-supported operating system; the difference being that, unlike its peers running alongside of NonStop, it can keep running the mission critical applications the enterprise depends upon when they all falter for one reason or another, possibly just an automatic upgrade pushed to them by none other than Microsoft. Yes, it will be NonStop that will make Microsoft look good running some very important applications.

Calamities are always present and disaster is just a few keystrokes away. In our world of highly networked systems, it’s good to know we will be able to turn to one operating system that knows when to take over and not just wait for failure. It’s very much unique to NonStop, and it’s hard not to be impressed with this model, developed at a time when all hardware was flaky and unreliable. Kind of sounds very much like today, doesn’t it? And here’s the good news: virtualized NonStop on Azure and Hyper-V is on the roadmap and likely being tested in the labs of NonStop development even as I type these last few words. Colorful? Well, I can’t think of anything more colorful than seeing a lot of red faces among enterprise executives when they see what today’s NonStop can achieve supporting their business.