Many businesses are already reaping the benefits of server virtualisation, with disaster recovery and centralised administration being two major advantages. The cost, energy and work required to maintain your business network are also significantly reduced, making server virtualisation a no-brainer for most businesses and educational establishments.
So what is holding some CIOs and CTOs back from moving to a fully virtualised desktop infrastructure?
Virtual desktop infrastructure (VDI) seems like the logical next step in the process of infrastructure virtualisation and consolidation, yet according to research firm TechTarget, only 13% of businesses polled at the end of 2011 had a VDI solution in place.
While full of benefits, VDI still has hurdles to clear before it can provide a significant ROI for all businesses, not to mention deliver the same kind of low-latency, stable environment as local infrastructure. Some of the key issues to be resolved are:
• Boot Storms, which occur when a high volume of users attempt to boot their devices at the same time; in most offices that means between 8am and 9am every morning.
• Random I/O Operations. The I/O generated in Windows environments can be up to 90% write operations, and because Windows services generate a huge number of input/output operations per second (IOPS), this random I/O can severely impact disk performance when multiplied across the many users of a virtualised environment (see the sizing sketch after this list).
• Desktop Re-composition. Updates and patches can be very disruptive, especially when they change the gold image or master template and are deployed to a large number of users simultaneously.
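To put the scale of these issues in numbers, here is a minimal back-of-envelope sketch in Python. The per-desktop IOPS figures are assumptions chosen for illustration (a booting Windows desktop is commonly reckoned to demand several times its steady-state I/O), not measurements from any real deployment:

```python
# Back-of-envelope boot-storm sizing. The per-desktop figures are
# illustrative assumptions, not measurements from a real deployment.

BOOT_IOPS_PER_DESKTOP = 150     # assumed peak IOPS while a desktop boots
STEADY_IOPS_PER_DESKTOP = 25    # assumed steady-state IOPS once logged in

def boot_storm_iops(concurrent_boots: int) -> int:
    """Aggregate IOPS the array must absorb during a boot storm."""
    return concurrent_boots * BOOT_IOPS_PER_DESKTOP

def steady_state_iops(users: int) -> int:
    """Aggregate IOPS once everyone is logged in and working."""
    return users * STEADY_IOPS_PER_DESKTOP

users = 1000
print(f"{users} desktops booting together: {boot_storm_iops(users):,} IOPS")
print(f"{users} desktops at steady state:  {steady_state_iops(users):,} IOPS")
```

Even with these modest assumptions, a 1,000-seat deployment spikes from roughly 25,000 IOPS at steady state to around 150,000 IOPS when everyone boots at once.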
Yet there is a common thread that ties these issues to one another: disk storage.
Unfortunately, conventional disk storage is one technology to which Moore's Law has failed to apply: performance has certainly not doubled every 18 months the way processor speeds and memory capacity have.
Capacity has increased dramatically with various disk innovations, yet performance remains stunted.
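The gap is easy to quantify. A 15K RPM enterprise HDD delivers on the order of 180 random IOPS whether it stores 36GB or 600GB, so a spinning-disk array must be sized by spindle count rather than capacity. A minimal sketch, using assumed planning figures rather than anything from this article:

```python
# Rough spindle arithmetic for HDD-backed VDI. All figures are assumed:
# ~180 random IOPS is a common planning number for a 15K RPM drive, and
# RAID write penalties are ignored to keep the sketch simple.

import math

HDD_RANDOM_IOPS = 180    # assumed random IOPS per 15K RPM spindle
IOPS_PER_USER = 25       # assumed steady-state IOPS per VDI user

def spindles_needed(users: int) -> int:
    """Spindles required to meet the aggregate IOPS demand."""
    return math.ceil(users * IOPS_PER_USER / HDD_RANDOM_IOPS)

for users in (100, 1000, 5000):
    print(f"{users:>5} users -> ~{spindles_needed(users):>4} HDD spindles")
```

At 5,000 users this comes out at roughly 695 spindles purely for performance, far more disk than the data itself requires.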
What is needed is a completely re-engineered storage solution that can deal with the demands of a modern-day virtualised infrastructure and deliver high IOPS, low latency and consistent performance.
Fortunately, that is exactly what one company, Violin, has done with its innovative new Flash Memory Arrays.
Flash has long been a promising solution to the problems with conventional storage disks.
However, most flash vendors package flash in the HDD form factor (the SSD) and fit it into a traditional storage array, and legacy systems retro-fitted with SSDs cannot scale to the high workloads generated by large-scale VDI installations.
The New Flash Array concept re-engineers storage subsystems and optimises them for flash storage.
This new architecture allows thousands of flash devices to operate as a single Flash Array, masking device-level issues and delivering reliable, sustained system-level performance.
This new concept has worked amazingly well: random writes complete at 60-100 microseconds of latency, with 1,000,000 sustained IOPS.
This delivers a simple and cost-effective solution to all the problems posed by implementing VDI on conventional disk storage.
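Taken at face value, those headline numbers go a long way. Here is a hedged back-of-envelope sketch of what they could mean for sizing; the per-user IOPS figure and the HDD latency are assumptions for illustration, not vendor data:

```python
# What the quoted figures could mean in practice. IOPS_PER_USER and the
# HDD latency are assumed planning numbers, not vendor measurements.

ARRAY_SUSTAINED_IOPS = 1_000_000   # quoted sustained IOPS
FLASH_LATENCY_US = 100             # upper end of the quoted 60-100 us range
HDD_LATENCY_US = 5_000             # assumed average HDD random-access latency
IOPS_PER_USER = 30                 # assumed peak per-user demand

print(f"~{ARRAY_SUSTAINED_IOPS // IOPS_PER_USER:,} concurrent desktops per array")
print(f"~{HDD_LATENCY_US // FLASH_LATENCY_US}x lower latency than spinning disk")
```

On those assumptions, a single array could absorb the I/O of tens of thousands of desktops while cutting latency by around fifty-fold.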
The key benefits of these Flash Arrays to VDI implementation are:
• Smooth boot, login and logoff even at peak times, regardless of the size of the VDI deployment, thanks to the 1 million sustained IOPS
• Fast boot-up and login due to the sub-200-microsecond latency
• Consistent end user experience thanks to performance improvements and spike-free latency
• By virtually eliminating I/O wait times caused by latency, Violin increases user density from 6-8 users per processor core to 12-15, reducing the number of processors required, and their total cost, by around 50% (see the worked example after this list).
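That density claim is easy to sanity-check. A minimal sketch, assuming a hypothetical 16-core host and taking the midpoints of the per-core figures quoted above; it lands close to the stated 50% saving:

```python
# Sanity check of the density claim in the last bullet. The 16-core host
# is hypothetical; the per-core densities are the ones quoted above.

import math

USERS = 2000
CORES_PER_HOST = 16   # hypothetical host size

def hosts_needed(users: int, users_per_core: int) -> int:
    """Hosts required at a given users-per-core density."""
    return math.ceil(users / (users_per_core * CORES_PER_HOST))

before = hosts_needed(USERS, 7)    # midpoint of 6-8 users per core
after = hosts_needed(USERS, 13)    # midpoint of 12-15 users per core
print(f"hosts before: {before}, after: {after}, "
      f"saving: {1 - after / before:.0%}")
```

For a 2,000-seat deployment this works out at 18 hosts before and 10 after, a saving in the region the vendor claims.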