I am now back in Simi Valley having spent another working weekend in Colorado. This time, I was in Denver with a group of IBM Mainframe colleagues, and we spent a lot of time discussing a variety of topics – from the systems we used to work with back in the ‘60s and ‘70s, when it was all punch cards and single-digit-megabyte disk drives, to a little sports and the occasional politics. If you can’t recall what a typical ‘70s programmer looked like, check out the photo!
And talking of politics, today I listened to the Sting song “Englishman in New York”, where he sings “I’m an alien, I’m a legal alien, I’m an Englishman in New York”, and it reminded me that this is an election year in the United States. While I happen to be a legal alien, I’m an Australian living in California – about as far from New York as you can get on the mainland. The predicament I face, along with many others, is that as legal aliens in the US who remain Australian citizens, we have no vote. Anywhere. We cannot vote in Australia as we are no longer Australian residents, and we cannot vote in the US as we are not US citizens. We have no opportunity to influence decisions anywhere in the world!
Essentially stateless, with only minimal acknowledgement by Australia (yes, we can enter the country via the Australian-only passport lanes but that’s about it), we develop no particular ties with any country. I have developed a network of friends from different countries through business as well as through my extended family, and see little value in identifying with one country over another. Yes, I get extremely loyal and vocal when it comes to global sporting events where my Australian heritage comes through, but at most other times I live in a kind of virtual world connected, as it were, by the email messages that are routinely exchanged. My country is not something I can look out onto anymore, but rather, is populated with experiences and memories spread across time and distance.
It is against this background that I had a lively conversation with my IBM colleagues about virtualization. According to an e-newsletter I received, Unisphere’s “5 Minute Briefing” (January 7, 2008), there were seven trends to watch in 2008 that would have the most influence in the shaping of the data center. Top of the list? More server and storage virtualization!
The systems I first worked on in the data center were mainframes – the IBM System 360 and then later, the IBM System 370. For many years I kept my original System 360 green card, as well as the System 370 yellow card that followed. In true Dilbert fashion, I had them inside my shirt’s pocket protector. I was also one of the first to buy a TI calculator with hexadecimal support. But two technologies really impressed me back in those days – Virtual Machine (VM) and Systems Network Architecture (SNA).
I have spent a lot of my life in and around SNA. In the mid ‘80s I was working for Netlink, an Australian start-up company that built SNA Protocol Converters and SNA Concentration Hubs. I later joined Tandem Computers where I became part of Product Management with responsibility for SNAX. And then, in the late ‘90s I was deeply involved with Insession’s ICE implementation of SNA on NonStop.
SNA gave the networking community a way to isolate an ever-changing physical network from the mission critical online applications of the day. Network managers gained a simpler way to administer all the network resources as SNA overlaid an easy-to-manage logical “virtual” network on top of the typically highly-volatile physical “real” network.
When SNA was first introduced, I recall many of these network managers were concerned about the overhead that the separation of logical and physical networks would bring, and how this added latency would prove to be unacceptable. But by the mid ‘80s, nearly every major network had switched to the new architecture, as the value provided by such virtual and real isolation outweighed any perceived latency penalty.
“In its ultimate manifestation”, according to PCMagazine, “network virtualization treats all servers and services in the network as a single pool of resources that can be rearranged and redeployed in real-time to meet changing user and transaction requirements”. Today we rely less on SNA and more on TCP/IP, but the results are the same – network managers can manipulate every IP-addressable resource, rearranging or redeploying it to best suit incoming transaction mixes.
VM, on the other hand, was as close to magic as anything I had seen up until then. With only a few commands, system programmers could initialize another “guest” machine. With other commands, they could dynamically add a string of disk drives and give guest machines almost instant access. It showed me that there really didn’t have to be any relationship between the hardware, and what operating systems I ran.
The enormous flexibility that came from the managed separation between each guest machine allowed many corporations to configure a number of test and development configurations right alongside production configurations. But again, just as with SNA, many system managers had concerns about the overhead that came with the VM hypervisor, but quickly put such issues aside as the systems became more powerful and as users came to value the flexibility.
“With virtualization, you can dynamically fire up and take down virtual servers (also known as virtual machines)”, according to Infoworld in a February 12, 2007 article, “each of which basically fools an operating system (and any applications that run on top of it) into thinking the virtual machine is actual hardware. Running multiple virtual machines can fully exploit a physical server’s compute potential — and provide a rapid response to shifting datacenter demands”. With the enormous increases in processor power we are seeing, it only makes sense to me to run more virtual machines per server and fully exploit that power. It makes little sense to map just a single operating system instance to today’s multi-core microprocessors.
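The consolidation argument can be sketched with a little arithmetic. Here is a toy first-fit packing of guest machines onto multi-core servers – all the figures are purely illustrative assumptions of mine, not drawn from any vendor’s specifications:

```python
# Toy illustration of server consolidation: first-fit packing of guest
# virtual machines onto multi-core physical servers.
# Loads are expressed in tenths of a core; all figures are illustrative.

def pack_guests(guest_loads, cores_per_server):
    """Place each guest on the first server with spare capacity,
    bringing up a new server only when nothing else fits."""
    capacity = cores_per_server * 10  # capacity in tenths of a core
    servers = []  # load currently placed on each physical server
    for load in guest_loads:
        for i, used in enumerate(servers):
            if used + load <= capacity:
                servers[i] = used + load
                break
        else:
            servers.append(load)  # no fit: fire up another server
    return servers

# Twelve lightly loaded guests that would otherwise occupy twelve boxes...
guests = [5, 7, 3, 6, 4, 8, 2, 5, 6, 3, 7, 4]
servers = pack_guests(guests, cores_per_server=4)
print(f"{len(guests)} guests consolidated onto {len(servers)} servers")
# → 12 guests consolidated onto 2 servers
```

Even this crude sketch shows why mapping one operating system instance per box wastes most of a multi-core machine’s capacity.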
For HP and the HP NonStop community, there is a lot of excitement building around the much-anticipated “bladed architecture” product suite. While the technical specifications of this new server have not been released, for anyone who has sat through any public product roadmap presentations it’s looking likely we will see real product sometime in 2008. And it would have to be a pretty safe bet to assume the underlying chip technology will be something a lot better than a single-core Itanium chip. And while I have no specific knowledge of HP’s virtualization plans, knowing the heritage of the BCS executives, it wouldn’t surprise me in the least to see some leverage being made of open source projects such as Xen.
Unisphere’s special report “Seven Trends Shaping the Data Center of 2008” identified Virtualization, SOA, Mashups and Web 2.0, Software as a Service, More IT Governance, More Vendor Consolidation, and More “More with Less” as the trends making the most impact on the data center as “end users and vendors alike seek to better integrate solutions that have greater impact on the business”. And as for me, I totally agree with Unisphere that virtualization will become the most important consideration in future data center decisions.
As with previous breakthrough technologies and architectures, there will be road bumps along the way and, as in the past, I have to believe there will be concerns over latency. Early adopters will need to be wary of under-configuring their systems and will take time to optimize their configurations, as “your mileage will vary based on driving conditions!” And it will not hurt to remember, just as we found when we first used Virtual Storage (VS) systems, that “to get good performance from virtual memory, make sure you give it lots of real memory!”
Reminiscing with my IBM friends certainly highlighted how far our technology has come. Virtualization, like everything before it, will not be a free ride. However, the sheer breadth of its capabilities, as it exploits today’s microprocessor technologies, as well as the flexibility it provides as it helps us consolidate our servers, makes it a very real bargain at any price.
Furthermore, it has opened my mind to the reality that any day now, we may be able to select applications completely independent of operating systems and hardware platforms. We could select the best solution that met our business requirements and deploy them on platforms best suited to the experience, knowledge, and vendor relationships that we had invested decades in nurturing.