
Glitches, the norm?

Still in Sydney – but the headlines of the past few weeks have been bothering me somewhat. Have you seen them all? It looks like computer glitches are hitting us hard! I have included a picture of Sydney in case you aren't sure what it looks like!

In my August 27th blog posting – Is 30 mins too long? – I remarked that “I have little patience for any retailer or financial institution that skimps on their infrastructure”. But now I am seeing whole sectors of the community being affected, and I have to start wondering – are we becoming desensitized to all of this?

What caught my attention were the headlines here last week – well, actually, a small article in one of the financial papers – “Glitch shuts out Westpac online customers”! It turned out that about 30 percent of the bank’s 400,000 internet banking customers could not access the (online banking) service. The paper I was reading went on to add that, according to Westpac, “it appears to be related to an internal systems error which we’re still trying to isolate”, and then added that the bank wasn’t sure whether this was related to a recent revamp of its website.

Now, in isolation, this would have just been something I read and had a brief chuckle about. But unfortunately, only moments earlier I had read on my BlackBerry about Barclays having a big problem in the UK that forced it to borrow 1.6 billion pounds. According to news@finextra.com, “Barclay’s blames technical glitch for 1.6 Bn Pound emergency loan”! The link between its electronic settlement system and the CREST settlement house broke down on Wednesday … for an hour!

Going back to my August 27th blog posting, you may recall that I mentioned, in passing, that Wells Fargo had suffered a serious outage on the West Coast that affected not only ATMs but a major portion of the branch banking business as well. I just went back and googled the Wells Fargo outage, and the first link I was directed to was something called SFGate.com, where the heading simply stated “Wells Fargo ATM, other glitches last longer than first reported”. The report also put the timing in perspective when it added “Wells’ computer glitch came at a poor time for nervous banking customers, considering the recent turmoil in the mortgage and stock markets.”

I began to look at this after I met with a former colleague of mine, Dieter Monch. Dieter was the Australian Managing Director of Nixdorf Computer when I worked for Nixdorf, back in the early ‘80s. Dieter is an investor in, and now manages, the company that sells red-light and speeding cameras around Australia. He recently attended a state government presentation that asked potential vendors to look into providing a camera network that wouldn’t fail – borrowing words from NASA, failure was not an option. Dieter simply, and I have to believe politely, asked – how much are you prepared to pay?

Now, I am not all that sympathetic to the loss of a speeding camera – or the revenue opportunity missed. I don’t think many of us are – nor do we look positively on this form of revenue generation. But looking at it from a different perspective – if these were cameras tracking vital security operations, and they went down at the very time a key illegal or terrorist activity was being executed – then I can see a time in the future when even these types of networks simply have to remain operational at all costs.

So, glitches and their implied outages, as well as the implications of lost revenue, are beginning to show up across all industries and markets. We are taking the issue pretty seriously, and we seem to understand the problem. But with the news coverage I have seen over the past couple of days, I am not sure how seriously we are taking the fall-out from today’s glitches. Surely, the loss of credibility in a marketplace of 400,000 customers, as was the case in Australia – or millions, I would have to believe, in the US – as well as the real cost in terms of interest on the short-term borrowing of 1.6 billion pounds, is pretty serious. Again, have we become desensitized to the issue of computer glitches? Has the term become an easy way out – a catch-all phrase to cover up any infrastructure stuff-up we may make?

Do we aggressively promote the value of applications and databases that survive single (and now, multiple) points of failure? Do we explain how all this works and the value we can provide? Or do we simply leave it to others – the comms guys, the web server guys – to explain why an element of the infrastructure failed?


Do we still believe that some subset of these applications is so fundamentally important to us that we view them as "mission critical applications", and are we prioritizing and routing these "mission critical transactions" to a platform that is orders of magnitude more reliable than the other servers we may have deployed?
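To make that idea a little more concrete, here is a minimal sketch, in Python and purely by way of illustration, of what classifying and routing transactions by criticality might look like. None of this is drawn from any real NonStop interface; the names (Criticality, Transaction, route) and the platform labels are all hypothetical assumptions of mine.

# Hypothetical sketch: route transactions by criticality so that
# "mission critical" work lands on the most fault-tolerant platform.
# All names and platform labels are illustrative, not a real API.
from dataclasses import dataclass
from enum import Enum

class Criticality(Enum):
    MISSION_CRITICAL = 1   # e.g. payments, settlement
    IMPORTANT = 2          # e.g. balance enquiries
    BEST_EFFORT = 3        # e.g. marketing content

@dataclass
class Transaction:
    txn_id: str
    criticality: Criticality

# Platforms ordered from most to least reliable (assumed labels only).
PLATFORMS = {
    Criticality.MISSION_CRITICAL: "nonstop-cluster",   # survives single/multiple failures
    Criticality.IMPORTANT: "ha-server-pair",           # conventional HA pair
    Criticality.BEST_EFFORT: "commodity-web-farm",     # standard web servers
}

def route(txn: Transaction) -> str:
    """Return the platform this transaction should be dispatched to."""
    return PLATFORMS[txn.criticality]

if __name__ == "__main__":
    payment = Transaction("T1001", Criticality.MISSION_CRITICAL)
    enquiry = Transaction("T1002", Criticality.BEST_EFFORT)
    print(route(payment))  # nonstop-cluster
    print(route(enquiry))  # commodity-web-farm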


While we, as users of NonStop, have come a long way in removing many sources of outages – how strong a voice do we have in other areas of infrastructure? And are we still strongly advocating NonStop in support of mission critical applications, or have we elected to just sit back and watch as less reliable platforms siphon off these transactions? In other words, have glitches become the norm, and have we reached a time where it’s OK to simply explain away a service interruption with the dreaded glitch?

Comments

Anonymous said…
Great post! Actually, I DO think the financial sector is starting to take it more seriously than ever, but I think it shows just how complex the problem space is.

It goes beyond hardening even the subsystems that you mentioned (processors, database, web/application servers, etc.). That's necessary but not sufficient (as RT Writer would say). The organization has to 'plan for failure' in establishing operational processes/procedures as well. That means taking the time to draft the procedures, but it also means doing 'just enough' testing of those to validate them.

Of course, the costs of that are hard to quantify. The value proposition of the enterprise class systems (NonStop one of the leading contenders there) is that at least you can remove the hardware/OS/database from your list of worries!
