Saturday, December 31, 2011

Array-based Thin Provisioning over VMware's Thin Provisioning

Over the last few years thin provisioning has steadily moved into the mainstream of storage management - so much so that it has found its way not only onto many leading storage systems but into operating systems as well. One of the largest endorsements of thin provisioning at the operating system level came last year when VMware announced its inclusion as an option within vSphere 4.

But with thin provisioning now available at both the OS and storage system levels, organizations need to determine at which of these two levels they derive the greater benefit from thin provisioning, or whether there are sufficient benefits at both levels to implement both storage and VMware thin provisioning.

A good place to start in understanding the benefits of each is a video that was embedded in a blog post on the StorageRap website on September 1, 2009. This video contrasts the benefits of using VMware's thin provisioning with those of the thin provisioning feature found on the 3PAR InServ Storage Server.

The initial scenario that the presenter, 3PAR's Michael Haag, illustrates is when the storage allocated to VMware is a fat volume (i.e. NOT thinly provisioned) from traditional storage. All of this storage capacity is discovered and allocated by the VMware file system (VMFS) and brought under its management.

It is in this scenario that the primary value of VMware's new thin provisioning feature comes into play. As new virtual machines and their associated VMDK files are created, VMFS allocates only as much physical space as each individual VMDK file actually needs, reserving just enough for the VMDK's data. This saves storage, enables vSphere to meet the storage requirements of the virtual machines it hosts more efficiently, and opens the door to hosting more VMs.

However, there is still a very real up-front storage cost with VMware's thin provisioning implementation: all VMFS capacity must be physically provisioned up front rather than as it is needed for each VM and VMDK.

So while VMware vSphere's thin provisioning feature certainly helps organizations get more value from "fat" storage with a thin methodology from VMFS to the VM, it still leaves the door open to increase storage utilization with array-based thin provisioning. Array-based thin provisioning associates physical capacity not only with the VMDK as writes occur, but with VMFS as well, so there is no up-front allocation or waste.
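To make the difference concrete, here is a minimal back-of-the-envelope sketch in PowerShell; the capacity figures are hypothetical and only illustrate where physical capacity gets consumed under each approach.

# Hypothetical example: 20 VMs, each given a 100 GB VMDK but writing only 30 GB.
$vmCount      = 20
$writtenGB    = 30     # data actually written per VM
$vmfsVolumeGB = 2500   # size of the VMFS datastore carved from the array

# Fat volume + thick VMDKs: the entire datastore is consumed on the array.
$fatConsumed = $vmfsVolumeGB

# Fat volume + VMware thin provisioning: VMDKs grow only as data is written,
# but the array still dedicates the whole VMFS volume up front.
$vmwareTpConsumed = $vmfsVolumeGB
$usedInsideVmfs   = $vmCount * $writtenGB

# Array-based thin provisioning: the array backs only what is actually written.
$arrayTpConsumed = $vmCount * $writtenGB

"Fat volume:           {0} GB of physical capacity" -f $fatConsumed
"VMware TP on fat LUN: {0} GB physical, only {1} GB used inside VMFS" -f $vmwareTpConsumed, $usedInsideVmfs
"Array-based TP:       {0} GB of physical capacity" -f $arrayTpConsumed

In this illustration, both "fat" approaches tie up the full 2,500 GB on the array, while array-based thin provisioning consumes only the 600 GB actually written.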

Consider this scenario that occurs every day in organizations of all sizes. It is only natural for an application administrator to request more storage capacity than the application needs. In this case, VMware's thin provisioning helps alleviate the over-provisioning request by enabling administrators to thinly provision VMFS's allocated storage capacity to the VM-based application, consuming space only as data is written.

However, the storage administrator is still forced to hedge his bets. While he may suspect the application administrator has overestimated how much storage capacity he needs, the storage administrator still has to over-provision the networked storage assigned to that vSphere server to account for the situations where the application owner's estimates are correct or even too low.

It is for this reason that the presenter in this video, as well as other virtualization experts not affiliated with 3PAR, recommends using thin provisioning at the array level at a minimum. VMware thin provisioning can be used in conjunction with array-based thin provisioning, but the array level is where it belongs first, because it provides the most efficient use of storage capacity for three reasons:
  • Storage capacity is only allocated and consumed (to VMFS and the VMDK) when data is actually written by the application on the VM to the attached storage system.
  • Storage capacity does not need to be over-provisioned.
  • Storage management can remain a function of the storage team for those enterprise organizations that have this separation of duties. However because of 3PAR's integration with VMware's vCenter, organizations can also monitor, manage and report on storage through this management console.
Finally, there are cases where it makes sense to use the two thin provisioning solutions together. The additional overhead of adding VMware TP on top of array-based TP can be beneficial where separate server and storage administrators want the flexibility to over-provision at their respective layers.

The growing acceptance of thin provisioning in the data center is leading to increased cost savings and efficiencies but, as this example with VMware vSphere illustrates, it is not always clear which is the best way to implement it.
 
The short answer is that if the storage system you are using now does not offer thin provisioning, using vSphere's thin provisioning feature is clearly the way to go. But where you have both VMware and a storage system from 3PAR available to you, and thin provisioning can only be used in one place, it should absolutely be done at the array level to maximize utilization and minimize up-front costs.
 
Administrators can then decide to deploy VMware thin provisioning on top of 3PAR thin provisioning if it provides enough management benefits to the VMware administrator to outweigh the need to manage thin provisioning in two places. 

Saturday, October 08, 2011

My Out of Office Message

I am currently out of the office on vacation.

I know I'm supposed to say that I'll have limited access to email and won't be able to respond until I return - but that's not true. I will be carrying my smartphone with me, which is connected 24/7 to the newly implemented email system, and I can respond if I need to. And I recognize that I'll probably need to interrupt my vacation from time to time to deal with something urgent.
That said, I promised my wife that I am going to try to disconnect, get away and enjoy our vacation as much as possible. So, I'm going to experiment with something new. I'm going to leave the decision in your hands:

If your email truly is urgent and you need a response while I'm on vacation, please resend it to interruptmyvacation_@_qadri.me and I'll try to respond to it promptly.

If you think someone else from my team might be able to help you, feel free to email my subordinates at the IT Infrastructure Department (HSAITInfrastructureTeam_@_felix-sap.com) and they will try to point you in the right direction.

Otherwise, I'll respond when I return…

Warm regards,
Mohiuddin Qadri

Wednesday, June 29, 2011

LUN configuration to boost virtual machine performance

Advanced virtual machine (VM) storage options can improve performance, but their benefits will go only so far if your physical logical unit numbers (LUNs) are not configured with best practices in mind.

Only when a LUN configuration meets the needs of your VM workloads can you significantly improve virtual machine performance. When it comes to LUN configuration, hardware choices, I/O optimization and VM placement are all important considerations.

Hardware and LUN configuration
The hardware on which you house your LUNs can make all the difference in VM performance. To avoid an overtaxed disk subsystem, choose hardware with resource levels similar to those of your host systems. It does no good to design a cluster of servers with two six-core processors and 128 GB of RAM and attach it to an iSCSI Serial ATA (SATA) storage area network (SAN) over a 1 Gb link. That arrangement can create a storage bottleneck at either the transport or disk-latency level.

As you set up the LUN configuration, correctly sizing your disk subsystem is the key to ensuring acceptable performance. Going cheaper on one component may save you money up front, but if a resulting bottleneck reduces overall VM storage performance or stability, it could ultimately cost you much more.

Disk type
To improve virtual machine performance, choose disk types for VM storage based on workload. Lower-speed, lower-duty-cycle, higher-latency drives such as SATA/FATA may be good for development environments. These drives usually range from 7,200 RPM to 10,000 RPM. For production workloads, or those with low-latency needs, various SCSI/SAS alternatives give a good balance of VM performance, cost and resiliency. These drives range from 10,000 RPM to 15,000 RPM.

Solid-state drives are also a realistic option. For most workloads, these kinds of drives may be overkill technically and financially, but they provide low latency I/O response.

I/O optimization
To ensure a stable and consistent I/O response, maximize the number of VM storage disks available. You can maximize the disk count in your LUN configuration whether you use local disks or SAN-based (iSCSI or Fibre Channel) disks. This strategy spreads disk reads and writes across multiple spindles at once, which reduces the strain on any single drive and allows for greater throughput and better response times. Controller and transport speeds affect VM performance, but maximizing the number of disks is what allows for faster reads and resource-intensive writes.
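As a rough illustration (the per-disk figure is an assumption, not a vendor specification), aggregate back-end I/O capability scales roughly linearly with the number of spindles behind the LUN:

# Hypothetical back-of-the-envelope: more spindles, more raw IOPS.
$iopsPerDisk = 180   # rough figure for a 15K RPM drive; SATA is closer to 80
foreach ($diskCount in 4, 8, 16, 32) {
    "{0,2} disks -> ~{1} raw back-end IOPS" -f $diskCount, ($diskCount * $iopsPerDisk)
}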

RAID level
The RAID level you choose for your LUN configuration can further optimize VM performance. But there’s a cost-vs.-functionality component to consider. RAID 0+1 and 1+0 will give you the best virtual machine performance but will come at a higher cost, because only 50% of the allocated disk capacity is usable.

RAID 5 will give you more gigabytes per dollar, but it requires you to write parity bits across drives. On large SANs, any VM performance drawback will often go unnoticed because of powerful controllers and large cache sizes. But in less-powerful SANs or on local VM storage, this resource deficit with RAID 5 can create a bottleneck.
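To put numbers on the trade-off, here is a minimal sketch; the disk count, disk size, per-disk IOPS and read/write mix are all assumptions chosen purely for illustration:

# Hypothetical comparison of usable capacity and write cost: RAID 10 vs. RAID 5.
$diskCount   = 8
$diskSizeGB  = 300
$iopsPerDisk = 180
$readPct     = 0.7    # 70/30 read/write workload mix

$raidLevels = @(
    @{ Name = 'RAID 10'; UsableGB = ($diskCount / 2) * $diskSizeGB; WritePenalty = 2 },
    @{ Name = 'RAID 5';  UsableGB = ($diskCount - 1) * $diskSizeGB; WritePenalty = 4 }
)

$rawIops = $diskCount * $iopsPerDisk
foreach ($raid in $raidLevels) {
    # Every front-end write turns into WritePenalty back-end I/Os.
    $frontEndIops = $rawIops / ($readPct + (1 - $readPct) * $raid.WritePenalty)
    "{0}: {1} GB usable, ~{2} front-end IOPS at a 70/30 mix" -f $raid.Name, $raid.UsableGB, [math]::Round($frontEndIops)
}

On the same eight 300 GB spindles, RAID 10 yields less usable space but noticeably more write headroom than RAID 5 - exactly the cost-vs.-functionality decision described above.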

Still, on many modern SANs, you can change RAID levels for a particular LUN configuration. This capability is a great fallback if you’ve over-spec’d or under-spec’d the performance levels your VMs require.

Transport
Whether the connectivity between host servers and LUNs is local, iSCSI or Fibre Channel, it can create resource contention. The specific protocol determines how quickly data can travel between the host and the disk subsystem. Fibre Channel and iSCSI are the most common transports used in virtual infrastructures, but even within these designations there are different classes, for example 1/10 Gb iSCSI and 4/8 Gb Fibre Channel.

Thin provisioning
Thin provisioning technologies do not necessarily increase virtual machine performance, but they allow for more efficient use of SANs, because only the data on each LUN counts toward total utilization. This method treats total disk space as a pool that’s available to all LUNs, allowing for greater space utilization on the SAN. With greater utilization comes greater cost savings.

Block-level deduplication
Block-level deduplication is still an emerging technology among most mainstream SAN vendors. Again, this technology does not improve virtual machine performance through the LUN configuration, but it does allow data to be stored only once on the physical disk. That means large virtual infrastructures can save many terabytes of capacity because of the similarities between VM workloads and the amount of blank space inherent in fixed-size virtual hard disks.

Number of VMs on a LUN
With the best LUN configuration for your infrastructure, you can improve virtual machine (VM) performance. Keep in mind disk types, I/O optimization, hardware, RAID level and more as you configure LUNs.

But the number of VMs you put on those LUNs depends on the size of your infrastructure and whether your environment is for testing and development.

Large number of VMs in production implementations
In medium or large infrastructures, with anywhere from 100 to 1,000 VMs, there are a few ways to get the best virtual machine performance. In most cases, you can get away with a RAID 5 configuration if you have a SAN with two to four controllers, a larger disk cache and a 10 Gb iSCSI or at least a 4 Gb Fibre Channel transport.

This VM storage strategy has proven to provide a good balance of virtual machine performance and cost for production workloads.

In my own infrastructure, which uses 4 Gb Fibre Channel to an HP EVA 8400 SAN with 300 GB 15K SCSI drives, I can get 20 to 25 VMs per 1 TB RAID 5 LUN. The VMs range in size and I/O resource demands, but in general they have one or two processors, 2 GB to 4 GB of RAM and between 25 GB and 60 GB of virtual disk.
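A simple sizing check along these lines can confirm whether a given LUN will comfortably hold that many VMs; the footprint and headroom figures below are assumptions for illustration, not measurements from my environment:

# Hypothetical check: how many VMs of a given footprint fit on a LUN while
# leaving headroom for snapshots, VM swap files and growth.
$lunSizeGB       = 1024
$avgVmDiskGB     = 40     # average virtual disk footprint per VM
$headroomPercent = 0.20   # keep 20% of the LUN free

$usableGB  = $lunSizeGB * (1 - $headroomPercent)
$vmsPerLun = [math]::Floor($usableGB / $avgVmDiskGB)
"A {0} GB LUN with {1}% headroom holds roughly {2} VMs of ~{3} GB each" -f $lunSizeGB, ($headroomPercent * 100), $vmsPerLun, $avgVmDiskGB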

As the number of VMs grows, the LUN configuration tends to scale out to many LUNs on multiple SANs. That improves virtual machine performance and provides greater resiliency.

Small number of VMs or test/development environments
Small or test environments, with fewer than 100 VMs and low I/O utilization, are where many admins tend to struggle with LUN configuration. Many fledgling virtual infrastructures grow faster than anticipated, or admins have unrealistic I/O performance expectations.

That said, any VM storage method -- local storage, a lower-end direct-attached SCSI or even an iSCSI SAN with SATA drives -- can work well. In small infrastructures, where controller CPUs and disk caches have lower resource usage, look to RAID 10 to provide an extra boost in virtual machine performance.

The ideal number of VMs for a LUN configuration in smaller infrastructures varies greatly. Anywhere from four to 15 VMs per LUN is possible, but if you assign more than your infrastructure can handle, disk I/O could be saturated quickly.

It’s also important to appropriately size your host resources to meet the expected performance of your VM storage subsystem. If you buy an excessive number of CPU cores or large amounts of memory, for instance, these resources will go to waste because disk I/O will be exhausted well before the others.

So what does this all mean? For optimum VM performance and cost savings, use a healthy combination of the previously mentioned options. Using the best possible resources and LUN configuration is ideal, but it’s not practical or necessary for the majority of virtual infrastructures.

Tuesday, May 03, 2011

Filling VMware vCenter Server management vOids

I love VMware vCenter and use it every day. But it isn't perfect.

VMware vCenter Server provides a centralized management console for vSphere hosts and virtual machines (VMs). Most of vSphere's advanced features, such as vMotion, Distributed Resource Scheduler and Fault Tolerance, require vCenter Server.

But some vCenter Server management features need improvement. And some vCenter Server management gaps are larger than others. So what should a VMware administrator do? How do you patch these virtual infrastructure cracks?

Every organization is different, as is every virtual infrastructure, so you may see different holes in vCenter than I do in mine. That said, here is my list of vCenter Server management vOids (get it?) and how to fill them:

vCenter Server management hole No. 1: Backup and recovery
You need a backup application that recognizes the virtual infrastructure and can interact with vCenter to determine the locations of VMs. Every edition of vSphere, starting with Essentials Plus, includes VMware Data Recovery (VDR), which is a good backup tool for virtual infrastructures with up to 100 VMs. (You can overcome this limitation with multiple VDR appliances.)

But VMware just doesn't promote VDR, nor does the company offer many new VDR revisions or features. And most organizations use third-party virtualization backup tools, regardless of how many VMs they have. These tools scale to thousands of VMs, and you'll never have to replace your VM backup tool because it ran out of gas at 100 VMs. Plus, these tools may offer additional features, such as data replication between data centers, instant restore and advanced verification. Two such virtualization backup tools are Veeam Backup and Replication 5 and Quest vRanger.

vCenter Server management hole No. 2: Mass changes
If you have only a handful of VMs, making a modification to VM properties isn't too difficult. But what if you want to disconnect the CD drive on 500 VMs? That's a problem. vCenter doesn't offer a way to automate this process.

Instead of performing mass changes in the vSphere Client, VMware recommends PowerCLI, which is a PowerShell interface with vSphere-specific cmdlets. On its own, PowerCLI isn't terribly enjoyable to use, but it can be enhanced with a nice toolset and a library of preconfigured scripts to jump-start your mass changes -- which is exactly what PowerGUI and the VMware Community PowerPack provide.
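To give a flavor of what a mass change looks like, here is a minimal PowerCLI sketch for the CD-drive example above; the vCenter server name is a placeholder for your own.

# Hypothetical sketch: disconnect the CD drive on every VM that has one connected.
Connect-VIServer -Server vcenter.example.com

Get-VM | Get-CDDrive |
    Where-Object { $_.ConnectionState.Connected } |
    Set-CDDrive -Connected:$false -Confirm:$false

Disconnect-VIServer -Server vcenter.example.com -Confirm:$false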

vCenter Server management hole No. 3: Alarm alerting
vCenter offers 42 default alarms. But if you aren't in the vSphere Client when an alarm goes off, it's likely that you won't find out about it until much later -- or when the CIO chews you out. And manually configuring an action to send an email on all 42 alarms is a pain.

Instead of configuring actions on the default alarms, you can exploit two quick, easy and free alarm tools that can send customizable alerts, whether or not the vSphere Client is running. XtraVirt's vAlarm and Nick Weaver's vSphere Mini Monitor solve this vCenter Server management issue.
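If you'd rather script the built-in alarms than install another tool, a PowerCLI loop along these lines can attach an email action to every default alarm definition; the recipient address is hypothetical, and vCenter's SMTP mail settings must already be configured for the messages to go out.

# Hypothetical sketch: add an email action to every alarm defined in vCenter.
Connect-VIServer -Server vcenter.example.com

Get-AlarmDefinition | ForEach-Object {
    New-AlarmAction -AlarmDefinition $_ -Email -To 'vmware-alerts@example.com' -Subject ("vCenter alarm: " + $_.Name)
}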

vCenter Server management hole No. 4: Capacity planning
vCenter does a decent job with performance charts and tracking history, such as logs, events and metrics. But performance monitoring could be easier. Plus, it has no built-in method for capacity planning or for easily identifying capacity bottlenecks.

VMware's vCenter CapacityIQ and third-party tools, such as vKernel's vOperations Suite, simplify capacity planning and answer the following capacity questions: How much excess VM capacity do you have? Where are the bottlenecks in the infrastructure? At the current growth rate, how long before you'll need to upgrade a server's RAM?
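Those products go much further than scripting can, but even a quick PowerCLI snapshot of datastore headroom gives you an early warning; the vCenter name below is a placeholder, and on older PowerCLI releases the properties are CapacityMB and FreeSpaceMB instead.

# Hypothetical sketch: list datastores with the least free space first.
Connect-VIServer -Server vcenter.example.com

Get-Datastore | Sort-Object FreeSpaceGB |
    Select-Object Name, CapacityGB, FreeSpaceGB,
        @{ Name = 'PercentFree'; Expression = { [math]::Round(100 * $_.FreeSpaceGB / $_.CapacityGB, 1) } } |
    Format-Table -AutoSize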

vCenter Server management hole No. 5: VM guest processes and files
vCenter does well in managing hosts and VMs, but it doesn't go into the guest operating system. In the vSphere Client, you can't view the running processes inside every VM, edit a VM guest OS file or run a script across all VMs.

The free VMware Guest Console, an experimental application created by VMware Labs, is a great tool for managing vSphere VM processes and files. You can view, sort and kill processes across all VMs, and you can also run a script on all Windows or Linux VMs. I hope VMware includes these experimental features in the vSphere Client soon!

vCenter Server management hole No. 6: Virtual network insight
The vSphere Client can indicate that VM network traffic is causing a 1 Gb Ethernet adapter to run at a 99% utilization rate. But strangely, it doesn't display what kind of traffic is going across the virtual networks, where it came from or where it's going.

To learn which traffic is going across a virtual network, there's another free tool for vSphere: Xangati for ESX, a virtual appliance that tracks conversations on the virtual network. It's great for troubleshooting any virtual network issue, analyzing virtual desktop infrastructure and correlating vCenter performance stats with virtual network stats.

Tuesday, April 12, 2011

Virtualizing Exchange Server on Hyper-V versus VMware

Some experts turn the virtualization debate between Microsoft Hyper-V versus VMware vSphere into an emotional, almost religious crusade. However, the discussion of which platform is best for an organization can be boiled down to a logical list of pros and cons.

No matter which platform you choose, virtualizing Exchange offers hardware independence and improved disaster recovery options. For this mission-critical application, being able to swap out a server on lease or quickly fail over to a compatible server when a disaster occurs makes virtualizing Exchange on any platform a wise decision.

Before directly comparing the pros and cons of Microsoft Hyper-V versus VMware vSphere, let's consider a few points:

  1. You are going to virtualize Exchange 2007 or Exchange 2010 and move the server from physical to virtual hardware.
  2. You plan to run virtualized Exchange on Windows Server 2008 SP2 or Windows Server 2008 R2.
  3. You will virtualize all Exchange server roles, except for unified messaging.
  4. A dedicated server will run only Exchange and will support either Hyper-V or vSphere virtualization.
  5. You have no bias toward either Microsoft Hyper-V or VMware vSphere. While this point may not seem that important, every organization has some predisposition about Microsoft versus VMware, especially if it already has virtualized applications from one vendor. For most companies, it makes sense to virtualize Exchange on the same virtualization platform as other enterprise servers.

Exchange Server virtualization: Hyper-V or vSphere?
There are a few questions to ask when weighing the pros and cons of virtualizing Exchange Server on either Hyper-V or vSphere. When trying to decide which product is best for your organization, consider these points:

Cost -- What costs are associated with each virtualization platform?

Microsoft Hyper-V -- You have two choices for virtualizing Exchange Server on Microsoft Hyper-V: Hyper-V Server 2008 R2, which is free, or Windows Server 2008 R2, which costs $1,029, plus System Center Virtual Machine Manager (SCVMM) Workgroup, which costs $505.

Windows Server’s virtual instance licensing lets you run the OS on one virtual machine (VM) with the Standard edition and on four VMs with the Enterprise edition. If you already have a Windows Server 2008 R2 license for Exchange, you can use it to license the Hyper-V host and cover the Exchange guest VM. However, that would be the only VM you could run on that server without purchasing another license.

VMware vSphere -- There are also two options for virtualizing Exchange Server on VMware vSphere: use ESXi, VMware’s free hypervisor, or purchase vSphere Essentials, which includes vCenter Essentials, for $611.

Exchange Server is an enterprise application, so you should research a platform with robust features and support. Therefore, you may want to consider higher-end editions of Hyper-V and vSphere.

A Windows Server 2008 R2 Enterprise license lets you run four Windows VMs at no additional cost, but it lacks the advanced features that vSphere offers. vSphere Enterprise Plus doesn't include any Windows guest OS licenses, though it does include several advanced features, such as hot-add virtual CPU (vCPU) and RAM, 8-way virtual symmetric multiprocessing (vSMP), storage and network I/O control, as well as several advanced memory-management techniques.

Windows Server 2008 R2 Enterprise costs $3,999 and vSphere Enterprise Plus costs $3,495. When you include the features and options in each package, the pricing is very similar.

Feature set -- Which virtualization platform offers the most, and best, features?

Hyper-V is missing a few key features. It lacks memory-optimization capabilities such as transparent page sharing, memory compression and memory ballooning. However, when run on Windows Server 2008 R2 SP1, Hyper-V has dynamic memory, which competes with VMware’s memory overcommitment -- although the two work very differently.

Additionally, Microsoft will not fully support a virtualized Exchange server running on Hyper-V if you want to:

  • Use snapshots;
  • Use VMotion, Quick Migration or Live Migration -- which also rules out VMware’s DRS clustering, since DRS depends on VMotion;
  • Use dynamic disks or thin-provisioned disks;
  • Have any other software installed on the host server except for backup, antivirus and virtualization management software;
  • Exceed a 2:1 virtual-to-physical (V2P) processor ratio.

In my opinion, these limitations prevent organizations from benefiting from some of the best virtualization features -- snapshots, VMotion and Quick or Live Migration.

vSphere offers numerous advanced features that you may want to take advantage of for other VMs running on the same server, like Distributed Resource Scheduler (DRS), Distributed Power Management (DPM), VMotion, Storage VMotion and VMware High Availability (HA).

You may not be able to use all of VMware's features on your Exchange VM; however, you generally can use them for other VMs running on the same server. In doing this, you'll be able to place more VMs on a single vSphere server. Don’t overdo this, though, as VM sprawl can negatively affect performance.

Note: The cost and feature comparisons above are not full comparisons of these two platforms. For more detailed, feature-by-feature comparisons of VMware vSphere and Microsoft Hyper-V, I recommend the following resources:

  • vSphere vs. Hyper-V Comparison
  • Choosing vSphere vs. Hyper-V vs. XenServer
  • How to run Exchange 2010 on Hyper-V R2
Resources and educational options -- How much information and support is readily available online about each platform? When you need help with your virtualized Exchange infrastructure, for which platform is there more information?

In my opinion, the VMware community offers more educational options, innumerable blog posts, certification options and additional guides. There also are more conferences and resources available that focus on VMware.

Third-party support and tools -- How many third-party tools are on the market for each virtualization platform? How many new tools are released, on average, in a month or a year? While there are quite a few free and paid tools out there for Hyper-V, it seems that there are more options for companies working with vSphere.

Scope of product line -- It’s important to consider product maturity, how much each vendor has invested in its virtualization platform and its plans for future growth.

Because Exchange Server and Hyper-V are both Microsoft products, you can use a single product suite -- System Center -- to manage them both (in one way or another). Although the suite has fewer features than VMware’s tools, there may be benefits to having your management layer all from the same vendor.

VMware offers more than 20 different virtualization products that fit into its overall vSphere product line. This is important because you may want to expand your company’s virtual infrastructure and will need more than what is offered natively.
