Containers
certainly transformed the transportation industry and, even today, are being repurposed in interesting ways. But for mission-critical solutions running on
NonStop, virtualization beats containerization every time!
Moving house and resorting to temporary storage facilities made this past year a time when total disruption prevailed. While there was some anticipation over what was about to become our new home, there were still many anxious moments in the lead-up to that eventful day. However, the move itself was rather painless and when the van finally showed up, inside were three or four fiberboard containers holding all of our much-treasured belongings. It turned out that, as the movers held onto our belongings over the summer, it was easier for all parties to simply package everything up into containers to better facilitate the multiple transfers that took place.
You may recall seeing this picture before; that is, the one at the top of the page that features two 40’ containers repurposed as a bar here in Boulder. It follows on from the success enjoyed by another pair of 40’ containers deployed for similar purposes in Estes Park, just a little further up Highway 36. The Boulder bar opened for business even as we were in between homes and, over the summer months, there was more than one afternoon when you could have found us astride stools sipping on a local microbrewery libation.
At the time I first referenced these containers I was focused on the message of standardization; click on the label container and you can read about the numerous ways I have tied containers to standardization over the years, across half a dozen posts. More recently, though, I have turned my attention to containers in the world of technology, where so much is being discussed about containers and clouds. In so doing, I have embraced a theme for containers that invites a lot more discussion than standardization ever did.
In recent presentations given by major vendors it is hard to ignore charts depicting traditional IT and virtualized IT inside one box, with containerized IT placed in another box, and still other boxes labelled with the names of public clouds. The assumption here is that containerized IT is all we will find in private clouds and, conversely, that private clouds will be created wherever containerized IT is deployed. I guess I have to admit I missed the memo about virtualized IT being legacy.
When I look at how enterprise IT is progressing, there is a long way to go before anyone I know inside enterprise IT will step up and admit to their CIO that, well, yes, by embracing HyperConverged Infrastructure (HCI) to virtualize workloads they have, to the outside world, simply transformed from one legacy environment to another! Ummm …
On the other hand, what I can admit is that the PR machines representing the container lobby are working overtime to depict a future based on containerization. But not all applications benefit from containers! I am fully aware that not all applications benefit from virtual machines either, but the mission-critical applications we have running on NonStop suggest that the PR underway might be underselling the value of virtualized IT. Just another iteration of legacy or traditional IT? I don’t think so! Whenever you have middleware approximating what was once called a Transaction Processing Monitor (TP Monitor) – IBM’s CICS, HPE NonStop’s Pathway (since renamed TS/MP), or even Tuxedo – the benefits of embracing virtualized IT quickly become apparent.
On the contrary, try to run CICS or Pathway in a container and then look to monitor thousands of instances of each – it soon becomes apparent that little is to be gained from containers for this purpose. Likewise, for those running Java and Node.js supporting apps on mobile devices, it’s a whole different story. And even if the Java JVM sounds like a virtual machine, it is a process-level virtual machine with no hypervisor underneath; in that sense it sits closer to containerization than to machine virtualization.
What hypervisors are very good at is allowing virtual machines to run virtualized workloads, each with different OS requirements – yes, you can run an HPE NonStop workload, a Linux workload and even a Windows workload in three different virtual machines and the hypervisor isn’t at all bothered. Not surprisingly, this is what you often encounter within enterprise IT. Hypervisors run on host machines and support guest machines, where each guest machine includes everything an application may need, including the supporting OS.
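As a concrete sketch of that guest-per-OS model, here is what a minimal guest definition might look like under libvirt on a KVM host. Every value here (name, memory, vCPU count, disk path) is purely illustrative, and a neighbouring domain on the same host could just as easily boot Windows or a virtualized NonStop image:

```xml
<!-- Minimal, illustrative libvirt domain: one guest among several on a KVM host.
     All names, sizes and paths are placeholders, not from any real deployment. -->
<domain type='kvm'>
  <name>linux-guest-01</name>
  <memory unit='GiB'>8</memory>
  <vcpu>4</vcpu>
  <os>
    <!-- hvm = full hardware virtualization; the guest brings its own OS -->
    <type arch='x86_64' machine='q35'>hvm</type>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/linux-guest-01.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
```

Each guest carries its complete OS image on its own disk, which is exactly why the hypervisor is indifferent to what runs inside.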
Furthermore, hypervisors come in two styles: bare-metal (Type 1) hypervisors and hosted (Type 2) hypervisors that sit atop a general-purpose OS, and in practice the line between the two has blurred. With OpenStack / KVM, the KVM module effectively turns the Linux kernel itself into the hypervisor, while Microsoft’s Hyper-V, despite appearing to run atop Windows, is technically a bare-metal hypervisor with Windows operating in a privileged parent partition.
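For readers curious whether a given Linux host is already acting as a KVM hypervisor, a few common commands tell the story; these are an illustrative sketch that assumes standard distro and libvirt tooling is installed:

```shell
# Illustrative checks on a Linux host (assumes typical distro tooling):
lsmod | grep kvm      # is the kvm_intel or kvm_amd kernel module loaded?
ls -l /dev/kvm        # the device node KVM exposes to user space
virsh list --all      # guests defined under libvirt, running or not
```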
The point is, containers don’t let you initiate different OSs as part of the container – all containers on a host run atop the one OS kernel which, in one sense, makes them a much lighter-weight proposition with the potential to offer better performance. So yes, there are limitations when it comes to containerization. And yes, when it comes to vendors providing public clouds, the preferred way to support applications is with containers, as their task becomes simpler, but it’s all a trade-off. HPE has recently added support for HPE NonStop as a virtualized workload. When configured across two or more physical machines, the same level of availability can be achieved as was provided in the past on traditional systems.
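The shared-kernel point is easy to demonstrate wherever Docker happens to be installed: a container reports the host’s own kernel, whereas a guest VM reports whatever kernel its own OS image boots. A quick, illustrative check:

```shell
# A container shares the host kernel rather than booting its own:
uname -r                          # kernel release on the host
docker run --rm alpine uname -r   # reports the very same kernel release
# A KVM or VMware guest, by contrast, reports the kernel of its own guest OS.
```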
However, no such availability profile can be achieved with containers: the requirements for running NonStop in a true 24 x 7 x 365 environment, with NonStop as it is engineered today, cannot be realized no matter how you look at containers. Virtualized NonStop workloads must have access to the NonStop OS and this is only possible when running the complete stack in a guest machine. And yes, those guest machines need to be able to communicate with each other via RDMA over Converged Ethernet (RoCE), which is becoming much easier to find.
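Whether RoCE-capable fabric is present on a given Linux host can be checked with standard RDMA tooling; the commands below are an illustrative sketch and assume the libibverbs utilities and the iproute2 rdma tool are installed:

```shell
# Illustrative checks for RDMA / RoCE capability on a Linux host:
ibv_devices       # lists RDMA-capable devices, if any are present
rdma link show    # RoCE links appear bound to Ethernet netdev ports
```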
In a November 28, 2017, interview published in The Register, “Right, HPE. You've eaten your hyperconverged Simplivity breakfast. Will it blend?”, Paul Miller, VP of marketing in HPE’s software-defined and cloud group, said, “We see customers wanting to run both virtual machines and containers within their HCI environments.” Equally important, Miller also highlighted how, “HCI and composable are part of the software-defined infrastructure category. HCI provides simplicity, agility, elasticity, security and predictability for virtualized environments and composable provides the same for bare metal, virtual and containers.”
In other words, when it comes to HPE, enterprise customers embracing HCI no longer have to fear that admission to their CIOs. HCI with virtualized environments, and composable with container environments, represent just two sides of the same coin – the move to software-defined everything. And virtualization is definitely not a part of traditional IT, not in the way we have historically viewed running virtual machines.
On September 5, 2017, HPE published a paper, “Comparing composable infrastructure and hyperconverged systems,” in which the authors state that, “‘hyperconverged’ means any hardware solution that uses direct attached storage (DAS) and local compute plus clustering to implement resiliency of processing and data. Virtualization is assumed as the primary means of moving a computing workload from one host platform to another.” No indication here that virtualized IT is part of traditional IT.
In fact, quite the contrary: HPE portrays virtualization “as the primary means,” and this pulls NonStop into the picture. Any hardware / software infrastructure described as HCI – as long as it is based on the Intel x86 architecture, supports RoCE and has a hypervisor present (such as KVM, VMware and potentially, Hyper-V) – is arguably a potential target for running future virtualized NonStop workloads. Returning to the opening observation, it simply isn’t accurate to bundle virtualized IT with traditional IT, suggesting virtualized IT is little more than a variation of traditional IT.
In a world giving so much attention to transformation, hybrid IT, HCI, and virtualization, there is a need to recognize the important roles both virtualized IT and containerized IT play. Failure to do so will most definitely lead to some spectacular “unintended consequences” as expectations aren’t realized and, quite possibly, to escalating hardware costs and manageability uncertainty. Yes, there are reasons to run either virtualized IT or containerized IT. But when the topic of mission-critical online applications built atop a TP Monitor comes up, CIOs everywhere can relax; virtualized IT is the best option, and the news that virtualized NonStop is now coming to market will prove to be good news for all those who continue to demand true 24 x 7 x 365 operation.
Containerization transformed the freight business. It proved disruptive to the shipping industry even as it disrupted the stevedoring services needed in former times to work the docks. It brought a level of standardization (yes, like everything we touch, there is more than one standard when you look closely at the container industry) and, when it comes to technology, the containers we now reference provide much the same degree of isolation and standardization – you want a truck to carry your container? How about a train or a ship? However, for mission-critical online systems where scale-out is already well addressed, the option of virtualized IT prevails and will do so for many years to come.
And remember, if you still want to run containers, then load a virtualized Linux as a guest and run containers atop that virtualized environment – there really is no limit to your imagination, creativity or optimization once you have transformed to virtualized IT. With that, the future for virtualized NonStop is assured, even as it opens doors to rafts of new solutions coming to its fault-tolerant platform!
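That closing suggestion – containers inside a Linux guest machine – needs nothing exotic. As a sketch, from inside a Linux guest running under the same hypervisor as other virtualized workloads (the package name here assumes a Debian-family guest):

```shell
# Inside the Linux guest VM: install a container runtime and run a container.
sudo apt-get update
sudo apt-get install -y docker.io
sudo docker run --rm hello-world   # containers, running atop virtualized IT
```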
Comments
https://www.linkedin.com/pulse/containers-vs-virtualized-machines-how-clusters-frans-jongma/