Saturday, May 31, 2008

Heading for the exit!

I have just returned to the West Coast after taking some down-time during my last trip to Europe. Before getting back into the swing of things, however, I spent last weekend at Willow Springs – a well-known Californian race track. The picture I have here is of me behind the wheel during the one session I raced solo. While, strictly speaking, this was part of a High Performance Driver Education (HPDE) program supported by the National Auto Sport Association (NASA), it wasn’t long before the pace being set by the students became pretty competitive.

A few months back a friend of mine came over for dinner and, as we talked, I discovered he was a real racer – competing in events around California. After getting out to the tracks and watching him race, I signed up for a couple of weekends. Today we buy cars that cannot be driven anywhere close to the limits of their design capabilities, and since there continue to be no legal restrictions on taking these cars onto the race track, I thought it could be fun. Can I still drive smoothly, and can I avoid embarrassment alongside much younger participants?

For readers not familiar with the format of these open track events, novices (such as me) are required to participate in classroom sessions before going trackside under the guidance of an instructor. Following each session there are mandatory “downloads” where our performance is reviewed with a little education thrown in as well. It was during one of these downloads that our class instructor walked us through one of the most important aspects of racing – negotiating the corners.

The process of cornering was broken into four components: braking, turn-in, apex, and exit. Of these, the most important is the exit – the car will always follow your eyes – so we had to make sure we went into the corner looking for the exit and then, once committed to the corner, be looking for the next corner! As we reviewed these components, I was struck by the similarities with what we do in IT.

Braking, and being prepared to change direction, is as much in the minds of CIOs as racers. As we survey the IT landscape, there come those times when we just have to hold off deploying new applications and consider whether the time is right to change direction. There may have been a recent merger or acquisition where the different IT operations cannot be easily integrated. Perhaps a key vendor has decided not to support the environment, or has come out with enhancements that you cannot support. There’s always something just around the corner that necessitates a pause, whether it’s car racing or IT.

Turn-in, and knowing just when to make the decision to change, separates good IT management from those with less experience; for many of us, determining when to change calls on skills accumulated over a lifetime in IT. You can turn in too early only to be forced to ease up and lose ground, or you can turn in too late only to scramble to keep everything together and risk catastrophe.

Apex, that mythical point lying somewhere in the corner that immediately tells you it’s OK to resume your attack on the next corner – that next milestone that will mandate another change. It was the great Formula One (F1) racing driver, Jackie Stewart, who on one episode of the BBC’s Top Gear redefined the meaning of the apex by instructing his student to “never press the gas pedal until you know you never have to take it off!” In other words, there comes a time when you reach a point where it’s just good to go – no more looking around and no more hesitations. And isn’t it the same in IT, as we reach a point where our testing is over, where running in parallel with the old is no longer highlighting anything significant, and where we can commit to the course we decided to pursue?

Exit, the most important element of all. Jackie Stewart, in reviewing the early driving efforts of his student, observed “maybe you’re trying to think too much, about how you’re doing, rather than what’s coming up! The exit is far more important than the entry of the corner with regards to smoothness.” Speaking as a complete novice racer, this was a hard lesson to learn – early laps of the circuit had me watching the track just in front of the car, and I was constantly making adjustments to the steering as well as coming off the gas pedal – I wasn’t comfortable and I certainly wasn’t smooth. Don’t we often do this as our projects enter their final stages? Don’t we become concerned that perhaps we aren’t quite ready – when we really are at the point where we can go “live”? Shouldn’t we now be looking “through the exit” with our eyes focused squarely on the next project?

The car I took to the race track was the C6 Corvette. As the weekend unfolded I really gained a new appreciation for the car – I had many friends, particularly those from Europe, who really didn’t think that a Corvette was a serious car for weekend track days. Sure, it was fast in a straight line – but would it go round corners? They may be powerful but they are just too heavy and difficult to steer and there are many more-nimble alternatives! The cynical ones even suggested that American-built cars lacked the all-round capabilities of their more capable brethren from Europe and Asia. Could you ever be smooth in something as big as a Corvette? Could you maintain the pace?

So it was very encouraging to hear our instructor take time out to talk about safety and the need to pay attention to our seats, our seatbelts, and what we need to do to protect ourselves. “Cars built today are just so hindered by the restrictions placed on them by the lawyers and it takes a lot of effort to get them ready for track days,” he said. He then added “unless, of course, you have come here with a BMW, a Porsche, or a Corvette, as these cars were developed for the race track before being refined for everyday road use.” Fancy that – and who knew! It turns out that, among the racers, Corvettes hold considerable prestige and are viewed as highly desirable track day cars.

I have written previously of how difficult it is to shake off a label no matter how undeserved. And I have also observed how difficult it is to maintain relevancy when your key attributes have become “legendary.” For many, the “Stingray” suffers much the same fate as Tandem – they have a wonderful heritage but perhaps they no longer provide the right balance between power and weight, between price and performance. As the original Stingray name faded, and the Corvette brand became recognized worldwide, so too has the Tandem name, retired now following the acquisitions, become a footnote in history, replaced by NonStop. But with the resurgence in popularity the Corvette brand has enjoyed, will we see something similar happen for NonStop? As the Corvette shed weight and improved its performance, are we likely to see NonStop shed costs as its performance is improved?

In the coming days, HP will be rolling out a new NonStop product line built around commodity blades. This will have enormous impact on many market segments where previously the thought of deploying NonStop was considered a backward step – something difficult to openly talk about with colleagues without appearing as though you were diverging from “mainstream” computing. Nothing could be further from the truth. Industry-standard chip sets, vendor commodity packaging, uniform operations and management tools, open languages, and standard utilities ensure NonStop remains mainstream.

Late last year I posted a blog that I titled “Grading HP?” (December 29, 2007) where I suggested HP sales and marketing could do a better job. But in going through some of the comments posted on the blog, I came across one comment that said “So, who has failed? Have we application programmers failed to extend our applications into ways that realize the dream of the Tandem architecture? Has cost and the (purported) disappearance of (good) Tandem application programmers crushed the dreams?”

And it really hit me – as poorly as we may think HP marketing has been performing – have we stopped to think about the legendary capabilities of NonStop and to take advantage of the solutions that are out there, and available? Have we stopped thinking about writing the next application on NonStop?

I have included a second picture here – this time, it is of the former Vice Chairman of ITUG and now Vice President of CONNECT, Margo Holen – turning laps on Willow Springs. At ease with the Corvette, she turned in a couple of impressive laps and during one session, blew past a number of her fellow track day students, shredding her previous lap times and enjoying herself immensely.

Surely, shouldn’t we be encouraging our colleagues to put down a few laps with the NonStop? Preconceptions really should be dealt with, and any old labels torn off, as we gain in confidence. You certainly won’t need to look over your shoulder for anything from Europe or Asia - and I have to believe HP will only be happier the more we shred the legacy labels! After all, hasn’t HP, in electing to deploy NonStop at the heart of its server consolidation program, laid down the “lap times” to beat, and shouldn’t we now have this as the goal within our IT organizations?

Friday, May 23, 2008

Roman Holiday!

I have just returned from a weekend in Rome, where I took in some of the sights. Apart from stepping inside every church I came across, I took a bus trip up into the hills to the town of Tivoli. It’s about 20 miles north-east of Rome and has enough elevation so that you can see Rome’s skyline even through the murkiness of a late afternoon.

The picture I have included here is of me alongside the fountains of Villa d’Este, a 16th century site well worth visiting. It wasn’t so much the residence that was the main attraction – although the story of Cardinal Ippolito d’Este is pretty interesting – but the gardens he created draping the cliffs behind the villa. With waterfalls, pools, hundreds of fountains, and even an entirely water-powered “Fountain of the Organ” playing tunes and producing trumpet blasts, it is the impact of this incredibly imaginative garden that is the main attraction!

The Cardinal was very ambitious and viewed himself as a potential future Pope, so the creation of this spectacle was to be a tangible projection of his influence and importance. The river Aniene wasn’t quite close enough to power all of the many garden water features so he simply demolished the town and re-routed the river, via a complex system of underground aqueducts and cisterns, and had it flow to his new villa.

Given that the Renaissance was well under way by this time, it was only appropriate that the architect he recruited, Pirro Ligorio, was keen to explore the limits of the technology of the day. Ligorio had been working nearby at the ruins of Hadrian’s Villa, a sprawling 120-acre compound built by the Roman Emperor Hadrian back in the second century, and had seen the role that baths, and the water that fed them, had played in the layout of that Villa. Hadrian had a number of pools, as well as an artificial grotto, that fed different bath complexes, and he really liked to use them as he entertained the elite of Rome. But after Hadrian’s death, the site ceased to be of any interest to the emperors that followed, and for more than a thousand years it lay in ruin, buried under a mountain of earth.

It had to have been quite a surprise for the Renaissance architects who unearthed it, and I have to believe that they were a bit overwhelmed as they began to comprehend its complexities. I certainly was, and I spent several hours walking through the ruins. Ligorio recovered from the surprise pretty quickly though, as he then helped himself to much of the remaining marble and statues and used them to great effect decorating the Villa of Cardinal Ippolito d’Este. The completed villa and gardens, with the most innovative use of water ever seen at that time, continue to impress visitors as much as they did nearly five hundred years ago.

While Ligorio was building the Villa d’Este between 1560 and 1572, back in Rome, Michelangelo was doing something similar around the same time. Between 1563 and 1566, in the last architectural undertaking of his life, Michelangelo was transforming the old baths of Diocletian into the “Santa Maria degli Angeli e dei Martiri” church. What had been one of the biggest public bath complexes built by the Romans back in the 4th century, finally fell into disuse two hundred years later when the invading Goths cut off the water supply.

Following a similar story line to that of Hadrian’s villa, for more than a thousand years it had lain in ruins before Michelangelo took on the project of turning it into a unique and highly imaginative church. Rather ordinary on the outside, when compared to other basilicas of the time, Michelangelo mastered some pretty complex architectural spaces and came up with a structure where you can still recognize some of the elements of the former Roman baths.

I had been hoping to visit Florence on this trip and had been disappointed to learn that the boat I had been cruising on didn’t have enough time to make it to the port of Livorno. The Renaissance has always held a fascination for me and I had been looking forward to it, but having the weekend in Rome more than made up for missing the visit to Florence. And seeing the creative way the Renaissance leaders leveraged what they unearthed from a thousand years before was something completely new to me.

Frans Johansson, in his book “The Medici Effect”, talks broadly about the impact the Medici family had on Florence and how they attracted a collection of artists, craftsmen, and philosophers from across Europe who proved to be the spark that ignited the Renaissance. The Medicis brought people together from different cultures, with different skill-sets, and created an environment where they could openly interact, anticipating that, as Johansson writes, “where ideas collide, innovation happens!” Johansson then goes on to add that “breakthrough insights occur at novel intersections (and) is an enduring principle of creativity.”

Ambitious, yet very rich, Cardinals, mixing with a new breed of architects recently exposed to ancient feats of engineering, led to some amazing creativity. And it is the imagination, and engaging the imagination as the Cardinals of the day had done, that really compelled me to revisit the topic of innovation. Whenever we get together, whether at user events or just in our break rooms, there is always “the potential for the spark of a new idea to ignite, as a new opportunity is recognized!” This is a quote from an earlier blog “Preventer of Information Services” that I posted last year, and I continue to see evidence of this regularly.

But today, it’s not just getting the culture and skill-set balance right – we also have to cross the generation divide as well. I was reminded of this in a recent email exchange with Sami Akbay, the VP of Marketing for GoldenGate, when he reminded me that “even though GoldenGate had its origins in NonStop, we deliberately went after a number of key folks from other vendors who brought with them special skills in other areas of data. We also bridged the generation gaps by recruiting a broad mix of age groups to the company." The result was the development of a raft of new products and features.

Innovation is important as it is the special ingredient within all companies that puts distance between those that can innovate and those that fail to recognize opportunities as they arise. The history of NonStop – from the appearance of the first Tandem Computer, to the introduction of ServerNet – has been liberally sprinkled with innovative technologies that contributed significantly to keeping the Tandem Computer ahead of other market entrants. Each innovation simply raised the bar even higher and saw all competing solutions eventually fall by the wayside.

The imminent arrival of the new bladed architecture solutions that further commoditize the hardware will be one more example of innovation in play. No longer being concerned about the basic hardware building blocks, and being able to focus once more on the processes and the data itself, will give rise to even more innovation, I believe. In the same exchange I had with Sami Akbay, he pointed out to me that “we have moved to where data itself is what's strategic! Access in real time to operational data allows companies to innovate in ways not thought of, or even considered possible, before."

There are a lot of papers being generated on innovation, and plenty of surveys being carried out to determine the characteristics of an innovative company – one recent white paper I came across suggested that if a company was deploying AJAX, using BI to boost productivity, and communicating with customers via wikis, blogs, and social networking, then this was clear evidence that it had become highly innovative. If only it were this easy to pinpoint and this easy to check!

I missed going to Florence, and seeing more examples of the impact of the Renaissance. But as I thought about it, I found that the contrast between Florence and the other cities I did visit – Monte Carlo and Rome – couldn’t be more representative of the alternative paths to innovation. And I had to ask myself – would I prefer to rely on the roll of the dice, as they do in Monte Carlo, and gamble on future innovation, or perhaps go the other route and hope and pray, as they do so frequently in Rome, and rely on faith to get me the results I need?

Innovation is so dependent on seeing opportunities and on engaging the imagination. There are changes coming from HP and I have to wonder, who will be the first to come up with something completely new? Will the new hardware generate a new spurt of creativity and will there be a technology renaissance? It’s early days, of course, but I have to believe that just as the architects of old were able to push the available technology beyond previous limits, we will see something similar occur. Who knows, maybe the next masterpiece is already being sketched on a table napkin …

Sunday, May 18, 2008

Virtualization? A Testing Time!

I have just departed Monte Carlo after a longer-than-planned stay. Yes, I managed to get done the things I had come to do, but as anyone who has been to Monte Carlo in May can attest, just getting around the city can be like walking through an ancient maze. In a little over a week’s time, there will be the running of the Monte Carlo Grand Prix, and already the stands are being erected, streets closed, and ten-foot-high fencing put in place blocking many of the footpaths. The picture I have included here is of me sitting trackside as the work went on around me.

The Monte Carlo Grand Prix is one of the few Formula One (F1) races on a pure street track where the cars race through the streets of Monte Carlo following a circuit laid out more than fifty years ago. There have been only a small number of changes – the chicane after the tunnel exit being one – and while it makes little sense these days to torture the cars the way this course does, it’s tradition!

And part of the tradition is the creation of a city within a city. From the dockside quays on up through to Casino Square, the area surrounding the streets making up the track undergoes a radical transformation as the “F1 circus” comes to town. At all other events this world is hidden from the spectators and other city residents, but in Monte Carlo, its citizens witness firsthand the hosting of a completely foreign world. And the incongruity of it all - with the newest of F1 cars racing alongside some of the Mediterranean’s oldest structures – continues to amaze me.

I came to Monte Carlo by sailing ship. I had the option to fly or to catch the train, but in the end elected to sail. I was fortunate enough to catch a commercial yacht on a “repositioning voyage” as it was being moved from the Caribbean to Istanbul for a season of sailing through the Turkish and Greek Islands. I had sailed on this vessel a few times and was pretty familiar with its operations, and one reason I liked it was that it maintained an open bridge where passengers were welcomed at all times of the day.

Being this familiar with the yacht, I was somewhat surprised to see a tug come to help us out of the port in Barcelona. The vessel’s bow-thruster, a separate propeller system that allows the yacht to move sideways, wasn’t being used. The yacht had been in dry dock immediately following the Atlantic crossing and, during the week it was there, dock engineers had overhauled the hydraulics used to drive the propeller. For all the testing they had done, one week later the whole system had failed. For the next eighteen months, the yacht was going to have to pay additional port fees for the use of a tug.

It wasn’t the only surprise in store for us – as we sailed through islands just south of Marseilles, the yacht began to gyrate severely between headings. One minute the vessel was heading out to sea while only minutes later it was headed back to land. The changes in direction were being executed more rapidly than I could recall having ever witnessed before. With the open bridge policy, I walked onto the bridge to watch the captain giving orders to the helmsman to swing from 20 degrees to port back to mid-ships, and then across to 20 degrees to starboard. For a vessel a little over 400 ft long, this was giving everyone a pretty interesting ride.

The captain explained to me that during the Atlantic crossing, he had noticed that one of the two propeller shafts wasn’t “working as hard” when close to maximum throttle settings, particularly when there was a heavy sea and the swells were sizeable. So, as he had time to spare and it was a rather sheltered sea lane, he was simulating the swells to gather data that could help him understand what was happening to the one propeller, the theory being that perhaps the folding mechanism was allowing the propeller to partially and temporarily fold, and so not work as well. After a few hours the yacht came back on course and proceeded as planned to Monte Carlo.

On the afternoon of our departure, with the tug alongside and the pilot onboard, the yacht had just backed out of the harbor at Monte Carlo – and yes, to those familiar with the harbor, maneuvering a 400 ft yacht within the inner harbor was a lot of fun – when all the yacht’s generators failed as one. Nothing! Not a light, not a refrigerator, not a toilet (marine toilets require power).

Test after test failed – and we had come to a halt only a few boat-lengths from the harbor entry. While the crew frantically tried to localize the problem, the authorities towed us a short distance from the entry, but wouldn’t allow a “disabled vessel” to move out of their control. Essentially, we remained under the supervision of the Monaco port authority. The problem? As all the generators failed as a unit, it had to be the fuel delivery system – and yes, they had just taken on a new fuel load in port. The fuel was the same octane and to the same specs, but as the old and new fuel worked their way into the fuel lines, it turned into a jellified mess and blocked the filters.

Coming as it did after the failure of the bow thruster and the unusual behavior of one of the propeller shafts, to some of us it was rather amusing. We would spend another night in Monte Carlo – onboard, unfortunately – but enjoying an outside BBQ with the skyline of Monte Carlo behind us wasn’t all that bad.

From my experience working with container ships, I know that you don’t mix new fuel with old – there are separate tanks for each, and a highly complex method of delivery to ensure the stability of the vessel is not put at risk. You don’t simply empty one tank and move on to the next, and I assumed the same would apply to the yacht I was now on. While they had tested the fuel on delivery, and had run additional tests of old versus new, they were quite unprepared for the fallout from mixing the two. They had not been able to simulate in their onboard lab what they encountered as they left the harbor.

While all of this was happening, I took a call from one of the sales folks at GoldenGate concerned over our ability to support an environment she wasn’t familiar with. The prospect was very anxious to make sure we had been certified in a cluster environment that turned out to be another virtualization deployment. The concern was whether we knew for sure we would operate with the host operating system and database management system. We knew that we supported this combination at other customer sites, but could we be absolutely sure we would not be impacted by the virtualization software?

Just as Monte Carlo was watching a virtual city unwrap on its streets, and just as the yacht’s captain was looking at the results of lab simulations, it really did occur to me that we are going to see a lot of changes coming with the increased popularity of virtualization software. And I don’t think NonStop users will be immune to this, or be left completely unaffected. To some extent, as we move to a bladed architecture where the number of processors will depend on the number of sockets on the blade and the number of cores in the Intel chip used, some form of virtualization will be encountered immediately – the blade will appear to NonStop as just a single CPU, or processor. The level of granularity for fail-over will be at the blade level.

But there’s much more than this coming, I have to believe. As blades are “pooled” in support of a hypervisor, there’s every potential that we will see NonStop hosted in a virtual machine – and how will we test that environment? How will we simulate fail-over in a completely virtual world? And will it be worth it? Will we want to see a return to simpler configurations just for reliability?
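To make the granularity point a little more concrete, here is a toy sketch – entirely my own invention, with hypothetical names, and in no way a description of how NonStop itself is engineered – in which each blade appears as a single logical processor, so losing a blade means everything it hosted has to migrate at once:

```python
# Toy model of blade-level fail-over: each blade is one logical CPU,
# and losing a blade means migrating all of its workloads together.
class Cluster:
    def __init__(self, blades):
        # Workloads per blade, e.g. {"blade-1": ["billing", "audit"], ...}
        self.blades = {name: list(work) for name, work in blades.items()}

    def fail(self, name):
        """Simulate losing a whole blade: each displaced workload moves
        to the least-loaded survivor. The unit of failure is the blade,
        not the individual core."""
        displaced = self.blades.pop(name)
        for work in displaced:
            target = min(self.blades, key=lambda b: len(self.blades[b]))
            self.blades[target].append(work)
        return displaced

cluster = Cluster({"blade-1": ["billing", "audit"],
                   "blade-2": ["reports"],
                   "blade-3": []})
moved = cluster.fail("blade-1")   # the whole blade's workload relocates
```

Crude as it is, the sketch shows why testing such configurations matters: recovery behavior depends on what else the surviving blades are already carrying.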

As all of us who have been around technology for any length of time know, we can never turn back the clock. We continue to adapt. I can see virtualization affecting us all and I can see NonStop being caught up in it – operating systems that cannot be hosted within a virtual machine will fall by the wayside and become no more than historical reference points.

But are we truly prepared to go to sea with these new configurations? Have we begun to think about the ramifications that will come from an additional software layer between NonStop and the hardware? I have been reminded, in a very practical way, that when we begin to mix the new with the old then we need to be prepared for just about anything!

After all the data had been collected on the faulty propeller, it turned out to have nothing to do with the mechanism supporting the folding propeller. The autopilot was using the “in port” software, calibrated to much finer tolerances, rather than the “at sea” software. With this software engaged at sea, the propeller was trying to adjust to every swell it encountered! And yes, the yacht is now under full power once again, the fuel delivery problem resolved, and we are all headed to Civitavecchia, the port of Rome.

Monday, May 12, 2008

The Clouds in Spain

I have just spent the weekend in Barcelona. It was cloudy and overcast. And it rained. Not exactly what I had been looking forward to, but it was a good opportunity to unwind, take some time off, and relax. As I had never before been to Barcelona, I was looking forward to the visit, and the picture I have included here is of me outside the Sagrada Familia church designed by Antonio Gaudi - symbolic of the “modernistic” architecture of Barcelona in the late 1800s. (Photo to follow ...)

Every time I visit a European city I first look at the architecture, and then at the cars. For me, Europe continues to be the heart of the auto industry, and dictates many of the styles that find their way into the cars we all end up driving in the US. The Formula One (F1) event had only just been run the weekend before so memories of the city, as shown on television, were still fresh in my mind. But it turns out this part of Spain is a motorcycle haven – one of the F1 TV commentators suggested “the fine for speeding in Barcelona was to watch the F1 event” – and I was disappointed not to see anything of interest this time!

Having some time to myself, I had the opportunity to catch up on reading material that I had to put to one side. Among the papers I had brought with me were email exchanges with Scott Healy, ITUG Chairman, and Nina Buik, Encompass President. As of the time of writing, we now have the new Connect user community and Scott is the Past President and Nina the new President – and both will be very active through the transition to the new organization, more of which will be unveiled, I’m sure, at the upcoming HP Technical Forum and Expo (HPTF&E) in Las Vegas. Scott and Nina had attended the HP Technology@Work taking place just a few weeks ago in Barcelona, and they reported on presentations made by key HP BCS executives.

The highlight was the presentation by Martin Fink, Senior VP of HP BCS, in which he talked about the move from Monolithic to Polymorphic computing. Scott emailed me and suggested that, after hearing Martin describe HP’s polymorphic approach to provisioning for IT, he felt this really did pave the way for new opportunities for NonStop and would be a good way to introduce NonStop to the rest of the HP user community. What Scott called the “Stop crowd!” Nina posted an update on the new HP community blog where she observed “simply put, lower cost of entry (Industry Standard Servers / Components) and higher pricing on software!”

I have been posting my own blog entries on the changes going on within the data center, and it looks to me like HP is about to initiate sweeping changes here, if they fully execute on all that’s being described at this high a level. In previous blog postings I have covered the topic of the “greening of the data center” and how the dual issues of power and cooling were driving many of us to rethink the traditional packaging of servers. I have also covered the issue of security and, as long as a data center remains a cost center, the issues of outsourcing and the added pressures this brings to data center managers. But what Martin talks about could change the whole way we plan future hardware purchases.

In a follow-up email exchange I had with HP executives following the event in Barcelona, one of them explained it to me in terms of something I could grasp pretty quickly – cars. “What if you went to the store and you purchased a generic vehicle, rather than a specific car. Every time you go out, your vehicle morphs to your need at the specific moment – a sports car to impress your girlfriend, or a pick-up to gather home improvement supplies. This is the power of polymorphism.”

For years I have been watching the pendulum swing back and forth – first it is centralization, then it’s distributed computing, then back to servers and server farms, and then on to grids combining any server out on the internet. Could we be approaching a time where the pendulum stops swinging completely? Could we be coming to a point where the servers in our data center just adapt and reconfigure according to the workload they encounter? Just because we have a new SAP application that requires a 2-socket computer with 48 GB of RAM – do we have to call up our supplier and order a new server? Or will the existing servers just dynamically reconfigure and provide sufficient resource for the duration of the SAP requirement?

As the availability of HP’s bladed architecture nears, one aspect of this transition to Intel-based “lower cost of entry (Industry Standard Servers / Components)”, as Nina mentioned, is that these blades will begin to support everything from Windows to NonStop. Already, the first blades capable of supporting NonStop are showing up in the data centers of early adopters, and they are beginning to educate us all on how inexpensive yet highly flexible the hardware is becoming. And for some time, I have been speculating on a future where it will be the arriving transaction mix that determines which operating system / middleware configuration will process the transaction. There will always be those transactions paramount to any business that absolutely have to be processed without any loss or corruption of the data, and the option to have a NonStop instance become available for this purpose is going to be a huge advantage for any business running the new HP bladed architecture.
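One way to picture that transaction-mix routing – and this is purely my own speculative sketch, not anything HP has described, with every name and flag invented for illustration – is a dispatcher that inspects each arriving transaction and hands the ones that cannot tolerate loss to a fault-tolerant instance, with everything else going to commodity instances:

```python
# Speculative sketch: route each transaction by how critical it is.
# The instance labels ("nonstop", "linux") and the "critical" flag
# are hypothetical, chosen only to illustrate the idea.
def route(transaction):
    """Critical transactions go to the fault-tolerant instance;
    the rest go wherever capacity is cheapest."""
    if transaction.get("critical"):
        return "nonstop"
    return "linux"

batch = [
    {"id": 1, "type": "payment", "critical": True},
    {"id": 2, "type": "report",  "critical": False},
]
assignments = {t["id"]: route(t) for t in batch}
```

The interesting questions all hide inside that one `if`: who decides what is critical, and at what cost per transaction?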

After explaining the car example to me, the HP executive went on to add “the idea is that you can disaggregate all computing resources (memory, CPU, I/O, networking, storage, etc.) and have them morph / adapt to whatever your workload needs are at a specific moment in time. So, rather than ‘early-bind’ your computer purchase decisions (I need a 2-socket computer with 48 GB of RAM to run my SAP application), all you do is load up a data center with discrete compute resources that you just throw workloads at. Those resources morph themselves into an aggregate that runs the workload.”
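For readers who like to see the analogy in code, here is a minimal sketch of the polymorphism the executive described – the caller asks for a capability, not a specific model, and the right concrete form is bound late. The class and function names here are my own invention, purely for illustration:

```python
class Vehicle:
    """Generic vehicle: callers ask for a capability, not a model."""
    def drive(self) -> str:
        raise NotImplementedError

class SportsCar(Vehicle):
    def drive(self) -> str:
        return "fast lap"

class PickupTruck(Vehicle):
    def drive(self) -> str:
        return "hauling supplies"

def vehicle_for(task: str) -> Vehicle:
    # The "morph": late-bind the concrete form to the moment's need,
    # just as disaggregated resources would late-bind to a workload.
    return SportsCar() if task == "impress" else PickupTruck()

for task in ("impress", "home improvement"):
    print(vehicle_for(task).drive())
```

The point of the analogy is the `vehicle_for` step: the purchase decision (which concrete class to build) is deferred until the workload arrives.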

And then it really hit me – will HP deliver the first “cloud” box? After years of talking about grid computing and, more recently, about cloud computing, will we see the option to have a cloud packaged in a single box? For nearly four decades I have been happily drawing clouds on configuration diagrams – whether on whiteboards, overhead foils, or in my latest PowerPoint presentation. The first use I made of these clouds was to simplify what was inside an IBM System Network Architecture (SNA) network – for my purposes, I never needed to include all of the modems, the SDLC lines, and so forth. As the industry turned to TCP/IP, IP clouds simply replaced the SNA clouds. Storage, too, came to be drawn as clouds until finally any location with servers (including complete data centers) was represented by a cloud.

When data centers first moved from mainframes to distributed computing and began to deploy minicomputers into regional offices, a similar minicomputer was shipped to each office. They were purchased as commodity hardware and we rolled them in as, and when, we could manage their deployment. Grid computing, where we assembled computing power – even supercomputing power – to tackle computationally-intensive operations, was most often built from multiples of the same computer, with networks of Windows or Linux servers the preferred choice. But with cloud computing, any server type and model can be included – whether Windows, Linux, Unix, or even NonStop. Giving the computers within the cloud the ability to adapt to the application mix, and to support whatever operating system best matches the requirements, simplifies the whole nature of clouds, and of cloud computing, enormously.

We aren’t simply talking about clouds these days because we can. The bigger driver turns out to be Software as a Service (SaaS), and the demand to support more applications on this basis. There is a growing recognition that many applications can be sourced on a SaaS basis (everything from simple email solutions to expense reporting), and whether provided in-house or not, the option to dynamically change the mix of operating systems running to best meet the applications’ demands at any time of day simplifies the whole data center acquisition process.

Barcelona may have been cloudy and overcast, and I may have missed sighting any exciting cars, but the fall-out from the presentations made at the recent Barcelona event could have more far-reaching ramifications than many data center managers initially thought. Their tasks have never been easy, and coming up with the right mix of hardware for the best price has never been simple. But who knows – the pendulum may have finally stopped swinging as wildly as it has been, and “the approach to provisioning for IT” may have become truly as easy as inserting another blade! It’s all very much in its infancy, but I see enough here to really convince me that the clouds are here, and here for good – and that they have nothing to do with rain!

Tuesday, May 6, 2008

A battle of tag lines – who are you going to call?

I am back in Boulder for a few days before heading back to Europe. This time I will be mixing in a little vacation time and I am looking forward to decompressing. The picture I have included here is of what “Springtime in the Rockies” looks like – with the Snowberry trees in full bloom. After so many months of lying dormant, it never ceases to amaze me how magical the place becomes in spring.

Earlier today, the Reuters agency reported that IBM senior executives Bill Zeitler, Senior VP and Group Executive, Systems and Technology Group, and Nick Donofrio, Executive VP, Innovation and Technology, would be retiring. Bob Moffat would be replacing Bill, but Nick’s position would not be filled. No replacement for the head of Innovation and Technology it appears – yet the folks over in the systems group (where the big iron comes from) will get a new boss immediately.

I surfed the web for other press reports and began to “google” for more information. But it wasn’t the stories themselves that caught my attention – the number of reports was pretty slim – as much as all the different tag lines and messages I found IBM using these days.

Why IBM Software? The message from IBM was “Open. Scalable. Secure. Industry focused. IBM Software products and solutions help you innovate and become more flexible, while making the most of current resources and controlling costs.”

Why IBM Systems and Servers? Again, the message from IBM was “IBM Systems can provide building blocks of simpler, more integrated infrastructure that can power innovation while helping you protect your current investments and dramatically improve the economies of IT.”

Apart from looking like the collective thoughts of multiple committees, it was the inclusion of “help you innovate” and then “can power innovation” that I found most interesting. Recall the TV commercials IBM ran that featured Innovation Man?

“I am Innovation Man! I am here to get you all fired up about Innovation! Are you fired-up? … I said, are you fired up?”
“Yes!”
“Why are you fired-up?”
“I haven’t any idea …”
“Aaahhhhhh …….”

And so we now read that the most senior IBM executive for Innovation and Technology is retiring and there will be no replacement – several of his direct reports will now report to Sam Palmisano, IBM’s Chairman and CEO. Curious – isn’t innovation a key driver within our industry? As much as I am interested in innovation, however, it is the changes at IBM’s Systems and Technology Group that interest me even more. This has been the heartland of the mainframe and today, more than 50 years after the mainframe first appeared, nothing gets IBM quite as excited as the deployment of a new mainframe.

But isn’t the mainframe business dying? In the months before IBM announced the z10 mainframe, didn’t Gartner report that mainframe server revenues were plunging? And doesn’t the mainframe stand apart within the IBM systems group – leveraging very little from the other platforms? At a time when IBM has elected to go with blades for both the System i and System p platforms, the mainframe remains untouched by the forces of commoditization sweeping through the industry, maintaining layers of complexity that only expensive IBM services folks seem to know how to orchestrate!

In InteropNews, an electronic newsletter primarily for the Windows, Linux, and Unix community, columnist Jeff Gould, in a piece titled “Will the mainframe ever die?”, asked “Why aren’t these cantankerous old beasts as dead as diplodocus and triceratops?” He then goes on to quote from the same Gartner report: “IBM’s mainframe hardware revenue plunged precipitously during the 1990s … to a measly $4 billion in 1999.”

But Gould then looks at the latest figures and begins to see the true impact of the mainframe on IBM. “A quick back-of-the-envelope calculation based on IBM’s breakout of its Q4 2007 numbers suggests that 40% of its software business and perhaps a third of its services revenues are ultimately driven by zSeries hardware. That would put IBM’s overall take from big iron at roughly $30 billion per year.” Whether this is accurate or not – the point is, IBM replaced its head of Systems and Technology group pretty quickly, as almost one third of all of its revenues are related to the success of its big iron mainframe platform.
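Gould’s arithmetic is easy to reproduce. Using rough, unaudited approximations for IBM’s FY2007 segments – on the order of $20 billion in software revenue and $54 billion in services revenue (my ballpark figures, not Gould’s) – the sketch below shows how an estimate of roughly $30 billion falls out:

```python
# Restating Gould's back-of-the-envelope estimate. The segment figures
# are rough approximations of IBM's FY2007 results, not audited numbers.
software_revenue = 20e9        # ~$20B total software revenue (approx.)
services_revenue = 54e9        # ~$54B total services revenue (approx.)
zseries_hardware = 4e9         # order of magnitude for big-iron hardware

big_iron_take = (software_revenue * 0.40   # "40% of its software business"
                 + services_revenue / 3    # "perhaps a third of its services"
                 + zseries_hardware)       # the hardware itself

print(f"~${big_iron_take / 1e9:.0f} billion per year")
```

Whatever the exact inputs, the structure of the estimate is the interesting part: most of the “big iron take” comes from software and services pulled along by the hardware, not from the hardware itself.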

Now contrast that with what we are seeing at HP, and with the influence the NonStop line is having on HP. But before going any further, what is HP’s tag line these days? “HP focuses on simplifying technology experiences for all of its customers – from individual consumers to the largest businesses.” As I contemplate a little down time, I can sympathize with those who seek simplicity, as I look forward to quiet strolls and long naps.

The original Tandem Computer was a radically innovative breakthrough technology – it changed the rules. Computers could be deployed into the most mission-critical areas of the business, where any downtime seriously impacted the business. In situations where customers had a choice, one factor that drove them to a competing offering was the dreaded “out of service” message. With the fault-tolerant technology within the Tandem Computer, no single point of failure would take it down. It would just run. Over the years, the configurations grew and the capabilities of the platform began to match the biggest big iron mainframe offerings. When the Cyclone appeared towards the end of 1989, its physical appearance was hard to overlook, and it had certainly grown bigger than any other midrange or front-end computer of the day. In hushed tones around the corridors of Cupertino, there was even speculation that Tandem had hatched a “pocket-mainframe” that was the equal of any mainframe “battleship” available from IBM.

How big an impact is NonStop having on HP? It is hard to ignore HP’s own use of the core NonStop technology (in Neoview guise) in its huge, company-wide server and application consolidation program, where it will be supporting petabytes of data. But this is just the starting point. Saving costs is only one side of the ledger – generating revenues is always the bigger issue. When you add up the revenues HP generates from the software on NonStop, the services being provided, its role in key industry segments such as Telco and Finance, and the impact of the arrival of a new application such as Neoview, the whole is considerably greater than the sum of the parts.

With its roots deep in OnLine Transaction Processing (OLTP), where availability, scalability and data integrity were paramount, its selection by companies needing the most available, mission-critical server came as no surprise. And it’s this ability to continuously exceed expectations in the mission-critical application space that keeps it relevant for many CIOs. But again, the argument always comes back to costs, and it’s here where the paths of IBM and HP may soon begin to diverge – with the advantage headed firmly HP’s way.

The soon-to-be-announced bladed architecture in support of all platforms, including NonStop, will only drive the product costs down. Commoditization is what it’s all about – and HP has elected not to keep the NonStop product line separate (as IBM continues to do, with the System z), and to include it as an integral part of the blades program.

Returning to Gould’s column, he observed that “IBM has bumbled along by persuading customers to surround legacy apps and data with expensive Java middleware implemented by even more expensive professional services. But inevitably the time will come – probably sooner rather than later – when this strategy chokes on its own complexity and cost.” Recognizing the changes that are coming, Gould adds that “it does seem that common sense is on the side of the ‘commodity’ platforms.”

Mission-critical applications will continue to demand ever faster processors, and as long as NonStop product costs continue to be driven down, HP will see even more growth from the HP NonStop platform. HP’s message is very simple – they are going to simplify technology! Tag lines may not always serve the buyer well, but in this case it certainly brings with it an expectation that through simplicity will come considerable cost reductions.

I’m heading off for some downtime and plan on doing very little of anything, and just decompressing. But can a major vendor, like IBM, ever ease up? IBM had been pushing innovation – but where will the leadership come from now? And if they don’t continue to push innovation, then what will they be pushing? Certainly, not simplification!

It’s probably way too early to herald success for everything HP is doing, but it will be an interesting year! I guess I shouldn’t plan on taking any serious downtime!