Tuesday, February 19, 2008

Out of Africa

It just takes a long time to get to Johannesburg, South Africa. The route I took had me changing flights in Denver, and then again in Frankfurt. After 9 hours across the Atlantic, on top of the 2-plus hours getting into Denver, it's then just another 10 hours to Johannesburg. The photo here is of me in Frankfurt between flights - the ubiquitous Starbucks behind me - and a little worse for wear!

But I made it, and I am in my hotel near Johannesburg airport - and it's a case of "welcome to darkest Africa"! This is the 7th time I have made the trip here to participate in the annual SATUG event, and I always look forward to spending a few days with the community, as it is one of the better user-run events on the calendar!

This morning I am blogging directly from my BlackBerry - a first for me - and I am in total darkness, illuminated solely by the BlackBerry display I am holding. An hour or so ago the darkness enveloped me, as the power in this part of Johannesburg went out. It's the second time this district has been hit with a total power loss; two days earlier it had taken a five-hour hit. Talking with the hotel staff quickly brought me up to speed on the politics of power generation and distribution - politics being the key word here, apparently.

South Africa is growing fast, and you can see all the signs as you make your final approach into the airport. There's now almost no break in the suburban sprawl between Johannesburg, the commercial center, and Pretoria, the country's capital. And the infrastructure has not been able to keep up. For many years electricity was bountiful, but now they need a couple of new power stations and I am afraid they may have left it a bit late. There is considerable reliance on nuclear power in this country, and adding another generating plant will not come quickly. The story goes that additional power may come online in 2010, while others think perhaps not before 2012.

Infrastructure, and the development and roll-out of any infrastructure project, is a lengthy process. And I am reminded how often we in IT stretch our infrastructure! How often do we blow past the limits of processor capacity, of network bandwidth, or of database volume! The decisions we take in favor of a solution, in any one of these areas, tie us in and define the scope of applications we can support for many years. Infrastructure is not an old house we can easily jack up and transport to another location, leaving behind foundations we can reuse! And haven't our own experiences taught us there are times when a little planning in advance helps us keep the lights on?

Like power grids and old houses, infrastructure often shows its age - it creaks, it breaks, and is often only supported by a single developer. And I was reminded of this when I talked recently with our GoldenGate CEO, Ali Kutay. He had been driving back to the office and had decided to stop at a local Whole Earth store to get something to eat. As he entered the store, he came across a display that stopped him in his tracks. It was a big barrel with a sign encouraging shoppers to throw in their old phones, PDAs, whatever - for disposal. Here was technology that only a few years ago we had so anxiously sought, being relegated to the dumpster.

And Ali went on to point out how infrastructure changes - the arrival of faster and smarter networks - led to more powerful devices delivering so much more functionality, rendering these formerly highly desirable devices obsolete. The BlackBerry I am using right now, a recent purchase, has a camera, GPS, and a great navigational map capability - none of which I cared all that much about or considered crucial for my needs - until I had them! Yes, I am now adding my usage, with its bandwidth demands, back onto the infrastructure.

In a recent email exchange with NonStop product management, we talked about NS Blades and the future of blades. For sure, we will quickly see the arrival of support for multiple OSs in the same cabinet ecosystem. This has been talked about at user events for some time - often just referred to as "Martin's wish". "Hard" configurations, mapping each blade to a specific OS, will appear at first, but later, we agreed, there should be support for more dynamic, "soft" configuration options. Whether running NonStop, or Linux, or even a Windows configuration, users will be able to change how many blades in the chassis support a specific OS.

Cool - and incredibly powerful, particularly if an API is provided. But it's not all smooth sailing - what about the databases? And no, Mr. Oracle, RAC is not the answer, and universal deployment of a massive, cross-platform Oracle database may not appeal to everyone. Database software, a key infrastructure component these days, is not something that can be easily changed. But neither should it be used without an understanding of its costs - and the experience we have gained drives us to ensure the database selected aligns properly with the application characteristics. Mission-critical applications really benefit from NonStop and the NonStop SQL database - but not every transaction is mission critical.

From a future infrastructure perspective, what we will more likely see will be combinations of NonStop SQL, a lot of Oracle for sure, and perhaps a mix of SQL Server, Ingres, MySQL, etc. In other words, in an upcoming blade ecosystem concurrently running multiple OS instances, there will be a collection of un-integrated databases still very much tied to the OS.

This has been the subject of many of my questions lately within GoldenGate - with our collective knowledge, this has to be a big opportunity. We had worked with Sabre to put together a very robust database tiering solution with NonStop SQL at the heart and MySQL downstream - and now, with what we can see under development at HP, there could be real value in bringing this type of infrastructure support back "inside the box".

Surely, there will come a time when reliable database tiering will become "soft" as well. Stringing together these different databases and making sure they contain the same fresh data is something we want to happen as transparently as possible - after all, what use is having the option to bring up different OSs if the data they need is not available immediately?
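The tiering idea above can be sketched, very loosely, as a log-based replication loop: the system of record logs every committed change, and a downstream replica applies those changes in commit order. Everything here is an invented illustration - the sqlite3 stand-ins, the table names, and the `change_log` design are my own assumptions, not the Sabre solution or the GoldenGate product:

```python
import sqlite3

# Hypothetical sketch: an upstream "system of record" and a downstream
# replica kept fresh by shipping a change log. Both tiers are sqlite3
# in-memory databases standing in for, say, NonStop SQL and MySQL.
upstream = sqlite3.connect(":memory:")
downstream = sqlite3.connect(":memory:")

upstream.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
upstream.execute(
    "CREATE TABLE change_log ("
    "seq INTEGER PRIMARY KEY AUTOINCREMENT, op TEXT, id INTEGER, balance REAL)")
downstream.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")

def upstream_upsert(acct_id, balance):
    # Write to the system of record and record the change for replication.
    upstream.execute("INSERT OR REPLACE INTO accounts VALUES (?, ?)",
                     (acct_id, balance))
    upstream.execute(
        "INSERT INTO change_log (op, id, balance) VALUES ('upsert', ?, ?)",
        (acct_id, balance))
    upstream.commit()

def replicate(last_applied_seq):
    # Apply committed changes downstream, in commit order; return the
    # high-water mark so the next pass picks up where this one stopped.
    rows = upstream.execute(
        "SELECT seq, op, id, balance FROM change_log WHERE seq > ? ORDER BY seq",
        (last_applied_seq,)).fetchall()
    for seq, op, acct_id, balance in rows:
        if op == "upsert":
            downstream.execute("INSERT OR REPLACE INTO accounts VALUES (?, ?)",
                               (acct_id, balance))
        last_applied_seq = seq
    downstream.commit()
    return last_applied_seq

upstream_upsert(1, 100.0)
upstream_upsert(2, 250.0)
applied = replicate(0)
print(downstream.execute("SELECT id, balance FROM accounts ORDER BY id").fetchall())
# → [(1, 100.0), (2, 250.0)]
```

The interesting part is not the loop itself but the freshness question it raises: how often `replicate` runs is exactly the gap between the tiers, and "transparent" tiering means driving that gap toward zero without the application noticing.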

The airport here in Johannesburg is a total mess, with every piece torn up. Another war zone? Far worse - it's all part of the preparation for the upcoming soccer World Cup! But at least there's electricity at the airport and I was able to get coffee. Whether it's transportation, power utilities, or computer systems - highly critical infrastructure components are always difficult to change, even when planned.

As we head for a bladed world where virtualization will impact servers, networking, and data, it's not going to eliminate all of our infrastructure issues - just move the bar up a level, hitting us with harder issues higher up the technology stack. The flexibility to maximize our usage of processors, and to enjoy a much better alignment with the applications, will be extremely important. But dealing with different database infrastructures shows no signs of getting any easier as we continually seek the best cost/performance trade-offs.

Planning is always important, and the early stages of IT infrastructure upgrades and redeployments can look chaotic at times. But history teaches us that when we see the signs of decay, and as we hear the creaks - it may be too late!

Before we see our applications plunge us into darkness - shouldn't we have foreseen it all and capitalized on our own experiences sooner? Knowing full well that dramatic changes are on the way in terms of how our servers will be packaged, and appreciative of the value as well as the potential savings, shouldn't we be taking a much closer look at critical elements of our infrastructure? After all, darkest Africa should only be a metaphor, and not something we have to explain to our bosses!
