I have just spent the weekend in Barcelona. It was cloudy and overcast, and it rained. Not exactly what I had been looking forward to, but it was a good opportunity to unwind, take some time off, and relax. As I had never been to Barcelona before, I was looking forward to the visit, and the picture I have included here is of me outside the Sagrada Família church designed by Antoni Gaudí – symbolic of the Modernista architecture of Barcelona in the late 1800s. (Photo to follow ...)
Every time I visit a European city, I look first at the architecture and then at the cars. For me, Europe continues to be the heart of the auto industry, and it dictates many of the styles that find their way into the cars we all end up driving in the US. The Formula One (F1) event had been run only the weekend before, so memories of the city, as shown on television, were still fresh in my mind. But it turns out this part of Spain is a motorcycle haven – one of the F1 TV commentators suggested “the fine for speeding in Barcelona was to watch the F1 event” – and I was disappointed not to see anything of interest this time!
Having some time to myself, I had the opportunity to catch up on reading material I had been forced to put to one side. Among the papers I had brought with me were email exchanges with Scott Healy, ITUG Chairman, and Nina Buik, Encompass President. As of this writing, we now have the new Connect user community, with Scott as Past President and Nina as the new President – and both will be very active through the transition to the new organization, more of which will be unveiled, I’m sure, at the upcoming HP Technical Forum and Expo (HPTF&E) in Las Vegas. Scott and Nina had attended the HP Technology@Work event held in Barcelona just a few weeks earlier, and they reported on presentations made by key HP Business Critical Systems (BCS) executives.
The highlight was the presentation by Martin Fink, Senior VP of HP BCS, in which he talked about the move from monolithic to polymorphic computing. Scott emailed me to suggest that, after hearing Martin describe HP’s polymorphic approach to provisioning for IT, he felt this really did pave the way for new opportunities for NonStop and would be a good way to introduce NonStop to the rest of the HP user community – what Scott called the “Stop crowd!” Nina posted an update on the new HP community blog, where she observed, “simply put, lower cost of entry (Industry Standard Servers / Components) and higher pricing on software!”
I have been posting my own blog entries on the changes going on within the data center, and it looks to me like HP is about to initiate sweeping changes here, if it fully executes on all that’s being described at this high a level. In previous postings I have covered the “greening of the data center” and how the dual issues of power and cooling were driving many of us to rethink the traditional packaging of servers. I have also covered security and, for as long as the data center remains a cost center, outsourcing and the added pressures it brings to data center managers. But what Martin talks about could change the whole way we plan future hardware purchases.
In a follow-up email exchange with HP executives after the event in Barcelona, one of them explained it to me in terms of something I could grasp pretty quickly – cars. “What if you went to the store and purchased a generic vehicle, rather than a specific car? Every time you go out, your vehicle morphs to your need at that specific moment – a sports car to impress your girlfriend, or a pick-up to gather home improvement supplies. This is the power of polymorphism.”
For years I have been watching the pendulum swing back and forth – first it was centralization, then distributed computing, then back to servers and server farms, and then on to grids combining any server out on the internet. Could we be approaching a time when the pendulum stops swinging completely? Could we be coming to a point where the servers in our data center simply adapt and reconfigure according to the workload they encounter? Just because we have a new SAP application that requires a 2-socket computer with 48 GB of RAM, do we have to call up our supplier and order a new server? Or will the existing servers dynamically reconfigure and provide sufficient resource for the duration of the SAP requirement?
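To make that question concrete, here is a minimal sketch in Python – every name and number in it is hypothetical, not any HP product or API – of what “dynamically reconfigure” might mean: before anyone picks up the phone to a supplier, the pool of existing servers is asked whether it can carve out what the new application needs.

```python
# A hypothetical sketch, not an HP product: before ordering new hardware,
# ask the existing pool whether it can carve out the required resources.
from dataclasses import dataclass

@dataclass
class Server:
    free_sockets: int
    free_ram_gb: int

def place(sockets: int, ram_gb: int, pool: list[Server]) -> bool:
    """Carve the requirement out of the first server with room for it."""
    for server in pool:
        if server.free_sockets >= sockets and server.free_ram_gb >= ram_gb:
            server.free_sockets -= sockets
            server.free_ram_gb -= ram_gb
            return True        # the pool absorbed the workload
    return False               # only now would we call the supplier

# The SAP example from the paragraph above: 2 sockets, 48 GB of RAM.
pool = [Server(free_sockets=1, free_ram_gb=16),
        Server(free_sockets=4, free_ram_gb=64)]
print(place(2, 48, pool))      # True – no new server ordered
```

And releasing those resources when the SAP workload winds down would be just as simple – something no purchase order can offer.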
As the availability of HP’s bladed architecture nears, one aspect of this transition to the Intel-based “lower cost of entry (Industry Standard Servers / Components)” Nina mentioned is that these blades will begin to support everything from Windows to NonStop. Already, the first blades capable of supporting NonStop are showing up in the data centers of early adopters, and they are beginning to educate us all on how inexpensive yet highly flexible the hardware is becoming. For some time I have been speculating on a future where it is the arriving transaction mix that determines which operating system / middleware configuration processes each transaction. There will always be transactions so paramount to a business that they absolutely have to be processed without any loss or corruption of data, and the option to have a NonStop instance become available for this purpose is going to be a huge advantage for any business running the new HP bladed architecture.
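That speculation amounts to a routing rule. The sketch below is purely illustrative – the blade names and transaction types are mine, not HP’s – but it shows the shape of the idea: work that can never be lost is steered to a NonStop instance, and everything else to a commodity one.

```python
# Purely illustrative: route each arriving transaction to an operating
# system / middleware instance based on its type, not on a fixed,
# early-bound configuration. All names here are hypothetical.
CRITICAL = {"payment", "funds_transfer"}   # must never be lost or corrupted

def route(tx_type: str) -> str:
    """Critical work goes to a fault-tolerant NonStop instance."""
    return "nonstop-blade" if tx_type in CRITICAL else "linux-blade"

for tx in ("web_lookup", "payment", "report", "funds_transfer"):
    print(f"{tx:14s} -> {route(tx)}")
```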
After explaining the car example to me, the HP executive went on to add, “the idea is that you can disaggregate all computing resources (memory, CPU, I/O, networking, storage, etc.) and have them morph / adapt to whatever your workload needs are at a specific moment in time. So, rather than ‘early-bind’ your computer purchase decisions (I need a 2-socket computer with 48 GB of RAM to run my SAP application), all you do is load up a data center with discrete compute resources that you just throw workloads at. Those resources morph themselves into an aggregate that runs the workload.”
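As I read it, this is late binding of disaggregated resources. A toy model – assuming nothing about how HP would actually implement it – might look like this:

```python
# A toy model of late binding (nothing here is an HP API): the data center
# holds discrete resources, and a workload is bound to an aggregate of
# them only at the moment it arrives.
from collections import Counter

pool = Counter(cpu_sockets=16, ram_gb=512, nics=8)   # disaggregated resources

def bind(workload: dict) -> bool:
    """Morph free resources into an aggregate that runs this workload."""
    if all(pool[resource] >= amount for resource, amount in workload.items()):
        pool.subtract(workload)    # those resources now form the aggregate
        return True
    return False                   # not enough free resource at this moment

# The same SAP workload, bound at run time rather than at purchase time.
print(bind({"cpu_sockets": 2, "ram_gb": 48}))   # True
print(dict(pool))              # whatever is left, ready for the next workload
```

The point of the toy is the last line: what remains in the pool is simply raw resource, waiting to be morphed into whatever aggregate the next workload demands.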
And then it really hit me – will HP deliver the first “cloud” box? After years of talking about grid computing and, more recently, about cloud computing, will we see the option to have a cloud packaged in a single box? For nearly four decades I have been happily drawing clouds on configuration diagrams – whether on whiteboards, overhead foils, or in my latest PowerPoint presentation. My first use of these clouds was to simplify what was inside an IBM Systems Network Architecture (SNA) network – for my purposes, I never needed to include all of the modems, the SDLC lines, and so forth. As the industry turned to TCP/IP, IP clouds simply replaced the SNA clouds. Storage came to be drawn as clouds too, until finally any location with servers (including complete data centers) was represented by a cloud.
When data centers first moved from mainframes to distributed computing and began to deploy minicomputers into regional offices, a similar minicomputer was shipped to each office. They were purchased as commodity hardware, and we rolled them in as and when we could manage their deployment. Grids, where we assembled computing power – even supercomputing power – to tackle computationally intensive operations, were most often made up of multiples of the same computer, typically networks of Windows or Linux servers. But with cloud computing, any server type and model can be included – whether Windows, Linux, Unix, or even NonStop. Giving the computers within the cloud the ability to adapt to the application mix, and to support whatever operating system best matches the requirements, simplifies the whole nature of clouds, and of cloud computing, enormously.
We aren’t simply talking about clouds these days because we can. The bigger driver turns out to be Software as a Service (SaaS), and the demand to support more applications on this basis. There is a growing recognition that many applications can be sourced as SaaS (everything from simple email solutions to expense reporting), and whether provided in-house or not, the option to dynamically change the mix of operating systems running – to best meet the applications’ demands at any time of day – simplifies the whole data center acquisition process.
Barcelona may have been cloudy and overcast, and I may have missed sighting any exciting cars, but the fallout from the presentations made at the recent Barcelona event may have ramifications more far-reaching than many data center managers initially thought. Their tasks have never been easy, and coming up with the right mix of hardware for the best price has never been simple. But who knows – the pendulum may have finally stopped swinging as wildly as it has been, and “the approach to provisioning for IT” may have become truly as easy as inserting another blade! It’s all very much in its infancy, but I see enough here to convince me that the clouds are here, and here for good – and that they have nothing to do with rain!