I have been spending a lot of time lately inside car garages. I don’t mind admitting that it hasn’t all been as a happy observer, but the picture here is of the C6 Corvette having its exhaust system upgraded for better performance. The auto industry has fascinated me all my life, and I find the subject of cars, whether covered in films, books, or magazines, endlessly absorbing. As a youngster, my greatest joy at Christmas would be to get a model car, and I once watched the film Grand Prix six times over a single Christmas holiday.
Regular readers of this blog have watched as I took my first steps at driving school and shared the enjoyment that a weekend on a race track brought. Readers of my social blog http://www.buckle-up-travel.blogspot.com/ will also have caught up with the news that I have now supercharged the C6 Corvette and begun the process of learning how to drive all over again. After all, if General Motors is adding a supercharger to its latest iteration of the Corvette, why not go ahead and do the same myself?
It’s all about performance, and it’s a bit of a paradox that at a time when gas prices are going through the roof, many major car manufacturers are producing newer, more powerful models. In fact, there seems to be a power war raging as manufacturer after manufacturer ups the ante. Anyone following the fortunes of Mercedes-Benz will have read by now that a special edition of its popular SL sports car will be available in 2009 as the SL65 AMG Black Series, where, for the princely sum of US$235,775, you can get a two-seat roadster with 660 bhp (and 738 lb-ft of torque). Personally, I view anything much over 650 bhp as a tad excessive. Just a tad.
Performance, and the drive by manufacturers to give us more power, has been an essential part of the auto industry for decades. And it’s not just Mercedes-Benz: the new Corvette that General Motors is releasing shortly has been restricted. Speed limiting is not an uncommon practice among German manufacturers, where top speeds have long been capped at 155 mph, but it is almost unheard of in America.
And the new, restricted top speed on the Corvette? After due deliberation, the Chevrolet division, owner of the Corvette brand, decided to limit top speed and issued a public statement on the decision. According to this month’s issue (October 2008) of Road and Track, “it must be noted that for ‘social responsibility,’ said Chief Engineer Tadge Juechter, there is an electronic limiter set for 210 mph!”
Without proper instruction, and without the patience to progress in stages and take the time to correlate speed and distance, this has to be an invitation to disaster! You could argue that such speeds can only be attained on a race track – but which race track? I have driven on a few, and without exception, getting past 120 mph on them takes considerable skill. But on public roads, open to anyone, will we see the return of blood and corpses everywhere as recklessness takes over?
On this last point, I was paging through a recent issue of Fortune magazine (September 1, 2008) and looking at the stories in the Technology section when the lead story, “A Chip too Far?”, caught my eye. The writer comments that “the latest generation of chips, known as multicore, are so complex and so qualitatively different from their predecessors that they have flummoxed software developers.” He goes on to suggest that “if they want to get the full oomph out of multicore chips, their applications need to break tasks apart into chunks for each core to work on, a process known as parallel computing.” He then looks back to 2000, when “the videogame industry faced a similar challenge when Sony’s then-dominant PlayStation shifted to chips with multiple, different processors for the PS2 (and) the result, according to Neal Robison, Director of Software Vendor Relations at AMD, was ‘blood and corpses everywhere!’”
Dah? So here we have it – the opening shot in the campaign to stop the development of fast chips! Are we going to see blood and corpses on development floors? “But programming in parallel is simply too complex for the average code writer, who has been trained in a very linear fashion,” added Fortune magazine. Kunle Olukotun, a computer science professor at Stanford’s Pervasive Parallelism Lab, told Fortune that “if I were the computer industry, I would be panicked, because it’s not obvious what the solution is going to look like and whether we will get there in time for these new machines!”
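Before anyone panics, it’s worth seeing how modest the basic idea can be. Here is a minimal Java sketch – my own illustration, not anything from the Fortune article – of the “break tasks apart into chunks for each core” pattern the writer describes: a large array is split into one slice per available core, each slice is summed on its own thread, and the partial results are combined.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustration only: sum a large array by giving each core its own chunk.
public class ChunkedSum {
    public static void main(String[] args) throws Exception {
        long[] data = new long[10_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i;

        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        int chunk = (data.length + cores - 1) / cores;

        List<Future<Long>> partials = new ArrayList<>();
        for (int c = 0; c < cores; c++) {
            final int from = c * chunk;
            final int to = Math.min(from + chunk, data.length);
            // Each core works on its own slice; no shared, mutable state.
            partials.add(pool.submit((Callable<Long>) () -> {
                long sum = 0;
                for (int i = from; i < to; i++) sum += data[i];
                return sum;
            }));
        }

        long total = 0;
        for (Future<Long> f : partials) total += f.get();   // combine the partial results
        pool.shutdown();
        System.out.println("Sum across " + cores + " cores: " + total);
    }
}

The chunking itself is the easy part; the real difficulty the article is pointing at only shows up when the chunks are not independent and start sharing state.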
Oh really? By now, most readers can see that this article is aimed at the broader client industry where Microsoft resides. Perhaps developing a new game for a single user is going to present challenges for the developer, but even here, I am not all that sure. A new generation of applications will appear that fully taps into the power of multicore chip technology. However, as we move up the food chain and look at the server side of the industry, this technology becomes extremely compelling.
New Integrity servers with dual-core chips (and with quad-core and beyond just on the horizon as Intel continues with its roadmap for Itanium) are already shipping. And as I talk with users, not much has changed since the first Tandem computers began shipping in the late ’70s. For anyone who has lived with NonStop for any period of time, getting the best from many processors, or CPUs, operating in parallel has always been a job for the infrastructure; with the right infrastructure in place, processes can be cloned and executed as many times as you need, and the ability to scale up becomes a significant advantage.
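A rough, plain-Java sketch of that “clone the process and run as many copies as you need” model follows – again my own illustration, not NonStop code, and the worker.jar it launches is hypothetical. The point is simply that the unit of scaling is an identical, self-contained process, and the only thing that changes is how many of them you start.

import java.util.ArrayList;
import java.util.List;

// Illustration only: scale up by launching N identical copies of the same worker program.
public class CloneWorkers {
    public static void main(String[] args) throws Exception {
        int copies = args.length > 0 ? Integer.parseInt(args[0]) : 4;
        List<Process> workers = new ArrayList<>();

        for (int i = 0; i < copies; i++) {
            // Every clone runs the same, unchanged program; only the instance count varies.
            ProcessBuilder pb = new ProcessBuilder(
                "java", "-jar", "worker.jar", "--instance", String.valueOf(i));
            pb.inheritIO();
            workers.add(pb.start());
        }

        for (Process p : workers) p.waitFor();
    }
}

On NonStop, of course, it is the infrastructure that takes care of starting, balancing, and restarting those copies, while the application code itself stays blissfully single-threaded.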
For those who attended HPTF&E in June, you may have seen the stand in one corner that was simply labeled “Pathway is thriving in the 21st Century.” I have included a picture of it here, as I found it remarkable to run into Pathway after all this time. But it holds a key to how much advantage you will be able to take of the multicore chip technology arriving on HP Integrity NonStop servers. While other manufacturers are calling for virtualization down to the metal, or chip, this is beginning to appear unnecessary for the NonStop user.
“NonStop is already fully virtualized,” one senior manager pointed out to me recently, “from the bottom up.” You are shielded from what’s actually going on at the metal and OS interaction level. Indeed, it was another executive who told me that “putting a virtual machine (VM) under a Massively Parallel Processor (MPP) machine (as we have with NonStop), is actually not helpful. You start creating havoc with the definition of fault zones tied to a logical processor.”
And here’s the advantage for everyone with a NonStop server – programming it doesn’t change. We can benefit from this technology quickly and transparently at the application level. The same executive went on to suggest that at HP, “we view the definition of virtualization in the NonStop context as ‘Application Virtualization’, which is somewhat the reverse of machine virtualization traditionally associated with (other approaches, such as with) Integrity VM and VMware.”
Typically, VM implementations span all the hardware and hide how many processors may be pooled under their management. In so doing, they can then “fire up” as many OSs as needed – one per virtual machine – and so exploit the power of all the pooled processing resources. But this is what the NonStop Kernel (NSK) does today, and interacting with something like Pathway, it fully delivers on the promise of Application Virtualization!
And in case you think the programming model supported by Pathway may be antiquated, take a look at the technology being deployed within Pathway. As I was to find out at HPTF&E this year, “Pathway will continue to play a large role in the future as a strategic part of the NonStop application infrastructure, and not just for customers’ applications, since it’s Pathway that also provides much of the ‘NonStop-ness’ for some of our other middleware products (iTPWS, NSSOAP, NSJSP, NSCORBA, NSTuxedo, ...).”
There will never be a time when chip researchers announce that they have reached the limits of computing power! The graphs tracing Moore’s Law don’t top out anywhere! And there’s no emerging social discontent with computing that has chip manufacturers planning voluntary limits on performance. Virtualization will be part of the new landscape on NonStop. It will be at a different level, and achieved through different infrastructure, but the benefits from virtualization will be real enough for every NonStop user.
For me, there are many parallels between my interest in cars and my history with NonStop. When Fortune interviewed Sean Maloney, Intel’s Executive Vice President of Sales and Marketing, he said, “I accept parallel computing is a big problem.” He then asserted the problem will be solved “because economic benefits are huge,” adding, “whoever figures out how to take advantage of multicore first could wreak some serious economic damage on their competition.”
Performance, and the harnessing of performance, remains as important to the field of computing as it has always been for cars. It’s just great to realize that with NonStop, there are none of the problems others have to deal with. The infrastructure is all in place, and NonStop users have been taking advantage of it for years! No blood on the floor! No bodies scattered around the data center! No recklessness visible at all – just businesses quietly imposing damage on the competition.