
Glitches, the norm?

Still in Sydney – but the headlines of the past few weeks have been bothering me somewhat. Have you seen them all? It looks like computer glitches are hitting us hard! I have included a picture of Sydney in case you aren't sure what it looks like!

In my August 27th blog posting – Is 30 mins too long? – I remarked that “I have little patience for any retailer or financial institution that skimps on its infrastructure”. But now I am seeing whole sectors of the community being affected. I have to start wondering – are we becoming desensitized to all of this?

What caught my attention were the headlines here last week – well, actually, a small article in one of the financial papers – “Glitch shuts out Westpac online customers”! It turned out that about 30 percent of the bank’s 400,000 internet banking customers could not access the online banking service. The paper I was reading went on to add that, according to Westpac, “it appears to be related to an internal systems error which we’re still trying to isolate”, and then added that the bank wasn’t sure whether this was related to a recent revamp of its website.

Now, in isolation, this would have just been something I read and had a brief chuckle about. But unfortunately, only moments earlier I had read on my BlackBerry about Barclays having a big problem in the UK that forced it to borrow 1.6 billion pounds. According to news@finextra.com, “Barclays blames technical glitch for 1.6 Bn Pound emergency loan”! The link between its electronic settlement system and the CREST settlement house broke down on Wednesday … for an hour!

Going back to my August 27th blog posting, you may recall that I mentioned, in passing, that Wells Fargo had suffered a serious outage on the West Coast that affected not only ATMs but a major portion of the branch banking business as well. I just went back and googled the Wells Fargo outage, and the first link I was directed to was SFGate.com, where the heading simply stated “Wells Fargo ATM, other glitches last longer than first reported”. The report also put the timing in perspective when it added “Wells’ computer glitch came at a poor time for nervous banking customers, considering the recent turmoil in the mortgage and stock markets.”

I began to look at this after I met with a former colleague of mine, Dieter Monch. Dieter was the Australian Managing Director of Nixdorf Computer when I worked for Nixdorf back in the early ‘80s. Dieter is an investor in, and now manages, the company that sells red-light and speeding cameras around Australia. He recently attended a state government presentation that asked potential vendors to look into providing a camera network that wouldn’t fail – borrowing words from NASA, failure was not an option. Dieter simply, and I have to believe politely, asked – how much are you prepared to pay?

Now, I am not all that sympathetic to the loss of a speeding camera – or to the revenue opportunity missed. I don’t think many of us are – we don’t look positively on this form of revenue generation. But looking at it from a different perspective – if these cameras were tracking vital security operations and went down just as a key illegal or terrorist activity was being executed – then I can see a time in the future when even these types of networks simply have to remain operational at all costs.

So, glitches, their implied outages, and the implications of lost revenue are beginning to show up across all industries and markets. We are taking the issue pretty seriously, and we seem to understand the problem. But with the news coverage I have seen over the past couple of days, I am not sure how seriously we are taking the fall-out from today’s glitches. Surely the loss of credibility in a marketplace of 400,000 customers, as was the case in Australia – or millions, I would have to believe, in the US – as well as the real cost in interest on the short-term borrowing of 1.6 billion pounds, is pretty serious. Again, have we become desensitized to the issue of computer glitches? Has the term become an easy way out – a catch-all phrase to cover up any infrastructure stuff-up we may make?

Do we aggressively promote the value of applications and databases that survive single (and now, multiple) points of failure? Do we explain how all this works and the value we can provide? Or do we simply leave it to others – the comms guys? the web server guys? – to explain why an element of the infrastructure failed?
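To make that first question concrete, here is a minimal sketch of the kind of failover behavior I have in mind – the host names, the simulated outage, and the retry counts are all invented for illustration, not taken from any real deployment:

    import time

    # Hypothetical replica list - names are illustrative only
    REPLICAS = ["db-primary.example.com", "db-backup.example.com"]

    def execute(node, sql):
        # Stand-in for a real database driver call; here we simulate the
        # primary being down so the failover path gets exercised
        if node == "db-primary.example.com":
            raise ConnectionError(node + " unreachable")
        return "result from " + node

    def query_with_failover(sql, replicas=REPLICAS, retries_per_node=2):
        """Try each replica in turn; give up only when every node has failed."""
        last_error = None
        for node in replicas:
            for attempt in range(retries_per_node):
                try:
                    return execute(node, sql)
                except ConnectionError as err:
                    last_error = err
                    time.sleep(0.1 * (2 ** attempt))  # brief backoff before retrying
        # Every replica failed: the multi-point outage the platform must make rare
        raise RuntimeError("all replicas unavailable") from last_error

    print(query_with_failover("SELECT balance FROM accounts"))
    # -> "result from db-backup.example.com"

The point isn’t the code, of course – it’s that surviving a failure has to be designed in at every layer, not explained away after the fact.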


Do we still believe that some subset of these applications is so fundamentally important to us that we view them as "mission critical applications", and are we prioritizing and routing these "mission critical transactions" to a platform that is orders of magnitude more reliable than the other servers we may have deployed?
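In case it helps to picture what I mean by prioritizing and routing, here is a minimal sketch, assuming just two tiers of platform and a fixed list of transaction classes – both invented for this example:

    # Hypothetical tiers and transaction classes - invented for illustration
    MISSION_CRITICAL = {"payment", "funds_transfer", "settlement"}

    PLATFORMS = {
        "fault_tolerant": "nonstop-cluster.internal",  # the highly reliable tier
        "commodity": "web-farm.internal",              # everything else
    }

    def route(transaction_type):
        """Send mission-critical work to the fault-tolerant tier."""
        tier = "fault_tolerant" if transaction_type in MISSION_CRITICAL else "commodity"
        return PLATFORMS[tier]

    assert route("funds_transfer") == "nonstop-cluster.internal"
    assert route("balance_lookup") == "web-farm.internal"

A real router would classify by much richer criteria, but the principle – know which transactions cannot be allowed to fail, and steer them accordingly – is the whole argument.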


While we, as users of NonStop, have come a long way in removing many sources of outages – how strong a voice do we have in other areas of infrastructure? And are we still strongly advocating NonStop in support of mission critical applications, or have we elected to just sit back and watch as less reliable platforms siphon off these transactions? In other words, have glitches become the norm, and have we reached a time where it’s OK to simply explain away a service interruption to the dreaded glitch?

Comments

Anonymous said…
Great post! Actually, I DO think the financial sector is starting to take it more seriously than ever, but I think it shows just how complex the problem space is.

It goes beyond hardening even the subsystems that you mentioned (processors, database, web/application servers, etc.). That's necessary but not sufficient (as RT Writer would say). The organization has to 'plan for failure' in establishing operational processes/procedures as well. That means taking the time to draft the procedures, but it also means doing 'just enough' testing of those to validate them.

Of course, the costs of that are hard to quantify. The value proposition of the enterprise class systems (NonStop being one of the leading contenders there) is that at least you can remove the hardware/OS/database from your list of worries!
