Virtualization? A bargain at any price!

I am now back in Simi Valley having spent another working weekend in Colorado. This time, I was in Denver with a group of IBM Mainframe colleagues and we spent a lot of time discussing a variety of topics, from the systems we used to work with back in the ‘60s and ‘70s, when it was all punch cards and single-digit megabyte disk drives, to a little sports and the occasional politics. If you can’t recall what a typical ‘70s programmer looked like, check out the photo!

And talking of politics, today I listened to the Sting song “Englishman in New York” where he sings “I’m an alien, I’m a legal alien, I’m an Englishman in New York”, and it reminded me that this is an election year in the United States. I happen to be a legal alien myself: an Australian living in California, about as far from New York as you can get on the mainland. The predicament I face, along with many others who remain Australian citizens while living as legal aliens in the US, is that we have no vote. Anywhere. We cannot vote in Australia as we are no longer Australian residents, and we cannot vote in the US as we are not US citizens. We have no opportunity to influence decisions anywhere in the world!

Essentially stateless, with only minimal acknowledgement by Australia (yes, we can enter the country via the Australian-only passport lanes but that’s about it), we develop no particular ties with any country. I have developed a network of friends from different countries through business as well as through my extended family, and see little value in identifying with one country over another. Yes, I get extremely loyal and vocal when it comes to global sporting events where my Australian heritage comes through, but at most other times I live in a kind of virtual world connected, as it were, by the email messages that are routinely exchanged. My country is not something I can look out onto anymore, but rather, is populated with experiences and memories spread across time and distance.

It is against this background that I had a lively conversation with my IBM colleagues about virtualization. According to an e-newsletter I received, Unisphere’s “5 Minute Briefing” (January 7, 2008), there were seven trends to watch in 2008 that would have the most influence in the shaping of the data center. Top of the list? More server and storage virtualization!

The systems I first worked on in the data center were mainframes – the IBM System 360 and then later, the IBM System 370. For many years I kept my original System 360 green card, as well as the System 370 yellow card that followed. In true Dilbert fashion, I had them inside my shirt’s pocket protector. I was also one of the first to buy a TI calculator with hexadecimal support. But two technologies really impressed me back in those days – Virtual Machine (VM) and Systems Network Architecture (SNA).

I have spent a lot of my life in and around SNA. In the mid ‘80s I was working for Netlink, an Australian start-up company that built SNA Protocol Converters and SNA Concentration Hubs. I later joined Tandem Computers where I became part of Product Management with responsibility for SNAX. And then, in the late ‘90s I was deeply involved with Insession’s ICE implementation of SNA on NonStop.

SNA gave the networking community a way to isolate an ever-changing physical network from the mission critical online applications of the day. Network managers gained a simpler way to administer all the network resources as SNA overlaid an easy-to-manage logical “virtual” network on top of the typically highly-volatile physical “real” network.

When SNA was first introduced, I recall many of these network managers were concerned that the overhead of separating the logical and physical networks would add unacceptable latency. But by the mid ‘80s, nearly every major network had switched to the new architecture, as the value provided by such virtual and real isolation outweighed any perceived latency concerns.

“In its ultimate manifestation”, according to PC Magazine, “network virtualization treats all servers and services in the network as a single pool of resources that can be rearranged and redeployed in real-time to meet changing user and transaction requirements”. Today we rely less on SNA and more on TCP/IP but the results are the same – network managers can rearrange or redeploy every IP-addressable resource to best suit incoming transaction mixes.

VM, on the other hand, was as close to magic as anything I had seen up until then. With only a few commands, system programmers could initialize another “guest” machine. With other commands, they could dynamically add a string of disk drives and give guest machines almost instant access. It showed me that there really didn’t have to be any relationship between the hardware, and what operating systems I ran.

The enormous flexibility that came from the managed separation between guest machines allowed many corporations to configure a number of test and development configurations right alongside production configurations. But again, just as with SNA, many system managers had concerns about the overhead of the VM hypervisor, yet quickly put such issues aside as the systems became more powerful and as users came to value the flexibility.
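The idea is easy to sketch in modern terms. What follows is a purely illustrative toy model, with class and method names invented for this sketch rather than taken from any real hypervisor’s interface, of how a hypervisor carves one physical machine’s real memory into guests and lets disk volumes be attached on the fly:

```python
# Toy model of a hypervisor partitioning one physical host among guests.
# Illustrative only: Hypervisor, start_guest, and attach_disk are invented
# names for this sketch, not any real hypervisor's API.

class Hypervisor:
    def __init__(self, total_mem_mb):
        self.total_mem_mb = total_mem_mb
        self.guests = {}   # guest name -> allocated real memory (MB)
        self.disks = {}    # guest name -> list of attached virtual volumes

    def free_mem(self):
        return self.total_mem_mb - sum(self.guests.values())

    def start_guest(self, name, mem_mb):
        """Initialize another 'guest' machine, much as a few VM commands did."""
        if mem_mb > self.free_mem():
            raise MemoryError(f"not enough real memory for guest {name}")
        self.guests[name] = mem_mb
        self.disks[name] = []

    def attach_disk(self, name, volume):
        """Dynamically give a running guest access to another disk volume."""
        self.disks[name].append(volume)


hv = Hypervisor(total_mem_mb=4096)
hv.start_guest("test", mem_mb=1024)   # a test machine...
hv.start_guest("prod", mem_mb=2048)   # ...right alongside production
hv.attach_disk("test", "vol001")
print(hv.free_mem())                  # 1024 MB left for more guests
```

The point of the sketch is the same one VM made decades ago: the guests never see the real hardware, only the slices the hypervisor hands them.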

“With virtualization, you can dynamically fire up and take down virtual servers (also known as virtual machines)”, according to Infoworld in a February 12, 2007 article, “each of which basically fools an operating system (and any applications that run on top of it) into thinking the virtual machine is actual hardware. Running multiple virtual machines can fully exploit a physical server’s compute potential — and provide a rapid response to shifting datacenter demands”. With the enormous increases in processor power we are seeing, it only makes sense to me to cram more virtual machines onto them to fully exploit them. It makes little sense to map just a single operating system instance to today’s multi-core microprocessors.
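To make that packing argument concrete, here is a hypothetical first-fit sketch (function and workload names invented for illustration) of assigning VM instances to the cores of multi-core hosts, opening a new host only when a VM fits on none of the existing ones:

```python
# Toy first-fit packing of virtual machines onto multi-core hosts.
# Illustrative only: the workload data and function name are invented.

def pack_vms(vm_core_demands, cores_per_host):
    """Place each VM on the first host with enough spare cores;
    open a new host only when no existing host can take it."""
    hosts = []  # each host is a list of (vm_name, cores) assignments
    for vm, cores in vm_core_demands:
        for host in hosts:
            used = sum(c for _, c in host)
            if used + cores <= cores_per_host:
                host.append((vm, cores))
                break
        else:
            hosts.append([(vm, cores)])
    return hosts


demands = [("web", 2), ("db", 4), ("batch", 1), ("test", 1)]
hosts = pack_vms(demands, cores_per_host=8)
print(len(hosts))   # all four VMs fit on a single 8-core host
```

Even this crude packing shows why one operating system image per multi-core chip wastes capacity: four workloads that would otherwise claim four machines consolidate onto one.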

For HP and the HP NonStop community, there is a lot of excitement building around the much-anticipated “bladed architecture” product suite. While the technical specifications of this new server have not been released, for anyone who has sat through any public product roadmap presentations it’s looking likely we will see real product sometime in 2008. And it would have to be a pretty safe bet to assume the underlying chip technology will be something a lot better than a single-core Itanium chip. And while I have no specific knowledge of HP’s virtualization plans, knowing the heritage of the BCS executives, it wouldn’t surprise me in the least to see some leverage being made of open source projects such as Xen.

Unisphere’s special report “Seven Trends Shaping the Data Center of 2008” identified Virtualization, SOA, Mashups and Web 2.0, Software as a Service, More IT Governance, More Vendor Consolidation, and More “More with Less” as the trends making the most impact on the data center as “end users and vendors alike seek to better integrate solutions that have greater impact on the business”. And as for me, I totally agree with Unisphere that virtualization will become the most important consideration in future data center decisions.

As with previous breakthrough technologies and architectures, there will be road bumps along the way and, as in the past, I have to believe there will be concerns over latency. Early adopters will need to be wary of under-configuring their systems and will take time to optimize their configurations, as “your mileage will vary based on driving conditions!” And it will not hurt to remember that, just as we found when we first used Virtual Storage (VS) systems, “to get good performance from virtual memory, make sure you give it lots of real memory!”

Reminiscing with my IBM friends certainly highlighted how far our technology has come. Virtualization, like everything before it, will not be a free ride. However, the sheer breadth of its capabilities, as it exploits today’s microprocessor technologies, as well as the flexibility it provides as it helps us consolidate our servers, makes it a very real bargain at any price.


Furthermore, it has opened my mind to the reality that any day now, we may be able to select applications completely independently of operating systems and hardware platforms. We could select the best solutions that meet our business requirements and deploy them on the platforms best suited to the experience, knowledge, and vendor relationships we have invested decades in nurturing.
