Friday, August 21, 2015

VCAP-DCD (VDCD550) - FINALLY NAILED IT

I am proud to inform you that I have passed my VMware Certified Advanced Professional - Data Center Design (VCAP-DCD) certification exam. It was my first attempt, and I can say I was lucky.

The VMware Certified Advanced Professional – Data Center Design exam tests candidates on their skills and abilities in designing and integrating multi-site, large enterprise virtualized environments. The exam is a core component of the VCAP-DCD certification.

The certification requires a passing score on the exam, and the candidate must be a VCP-DCV (VMware Certified Professional – Data Center Virtualization) to obtain this certification.

I have fulfilled all the prerequisites to attain this certification. It gives me the strength to move on to the next step, i.e. VMware Certified Advanced Professional – Data Center Administration (VCAP-DCA).

I have set a milestone for myself: achieving VMware Certified Design Expert (VCDX). Only 202 people worldwide have achieved this feat so far.

VCP (achieved on 18-Sep-14) —> VCAP-DCD (achieved on 05-Aug-15) —> VCAP-DCA (preparing) —> VCDX (target)

My Experience
Not a single multiple-choice question came up. The blueprint mentioned that there would be 46 questions, but I got only 22. I had briefly gone through http://www.elasticsky.co.uk, and I think that really helped. Even so, I was lucky to score 380, and it was only experience that got me through. I had 30 minutes left to review after answering all the questions, but I was so exhausted that I did not have the energy to revisit any of the design questions. I just pressed the end button with 30 minutes to spare!!!

All I can say:

  • Attempt the design questions first, and mark the others to answer later during review. 
  • Go through this site, which has a lot of important links for preparation: http://www.elasticsky.co.uk/vcap5-dcd/ 
  • There are a few practice designs available at http://www.elasticsky.co.uk/practice-questions/.
  • You may already know this, but the design questions are drag and drop and the tool is nothing like Visio. You need to practice, so get used to it with the demo from VMware: VCAP5-DCD Exam Demo [149330] at http://mylearn.vmware.com
All the best!!!

Wednesday, January 22, 2014

Impact of NUMA when Virtualizing Business Critical Applications

We tend to think of our physical servers as a sea of memory and CPU cores, but virtualization adds contention, with multiple cores working in a shared memory space. Unless the virtualization consultant designing your infrastructure has a firm grasp of Non-Uniform Memory Access (NUMA) node sizes and their impact on large virtual machines, this can lead to performance issues.

From the Nehalem chip onwards, memory handling sits inside the CPU
Intel changed its processor microarchitecture starting with the Nehalem chip, moving memory management inside the CPU package rather than leaving it in a separate chip on the Northbridge. In this architecture, a particular RAM dual inline memory module (DIMM) is attached to just one CPU socket. To get to RAM attached to another CPU socket, the RAM pages must be requested over the interconnect bus that joins the CPU sockets; the remote socket then accesses the DIMM and returns the data.

Access to remote DIMMs is slower than access to local DIMMs since it crosses an additional bus, making for a non-uniform memory architecture (NUMA). The combination of the CPU cores in one socket and the RAM that is local to that socket is called a NUMA node; the physical server's BIOS passes this information to the ESXi server at boot time. With the four-socket host in the example below, each NUMA node would have six CPU cores and 32 GB of RAM.



VMware vSphere tries to jockey VMs in a NUMA node
Let's use the following example: a modern virtualization host might have four sockets, each with six cores, and sixteen RAM DIMMs of eight gigabytes each. The whole host has 24 cores and 128 GB of RAM. This sounds like a great place to run a VM with four vCPUs and 40 GB of RAM, or one with eight vCPUs and 24 GB of RAM, but both of those VM configurations make life difficult for the VMkernel and lead to potentially odd performance.

The vSphere hypervisor has been NUMA-aware since ESX 3, and it tries to keep a virtual machine inside a single NUMA node to provide the best and most consistent performance. A large VM might not fit inside a single NUMA node. A VM with four vCPUs and 40 GB of RAM configured wouldn't fit in my example NUMA node. This large VM would need to be spread across two NUMA nodes into a NUMA-wide VM. Most of the RAM would be in one NUMA node but some would be in another node and, thus, slower to access. All of the CPUs would be inside the first node -- its home NUMA node -- so every vCPU would have the same speed of access to each page of RAM. The VM with eight vCPUs and 24 GB of RAM is also too wide; although its RAM fits into its home NUMA node, two of its vCPUs need to be scheduled on another node. For the vCPUs on the non-home node, all RAM is remote so they will run slower, although the VM wouldn't know why.
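
To make the fit test concrete, here is a minimal PowerShell sketch, not tied to any VMware API, that checks the two example VM configurations against the example host's NUMA node size of six cores and 32 GB of RAM; all figures are the ones used above.

    # NUMA node size from the example host: 6 cores and 32 GB RAM per node
    $coresPerNode = 6
    $gbPerNode    = 32

    function Test-NumaFit {
        param([int]$vCpus, [int]$MemoryGB)
        $cpuFits = $vCpus    -le $coresPerNode
        $memFits = $MemoryGB -le $gbPerNode
        [pscustomobject]@{
            vCPUs    = $vCpus
            MemoryGB = $MemoryGB
            WideVM   = -not ($cpuFits -and $memFits)
        }
    }

    Test-NumaFit -vCpus 4 -MemoryGB 40    # wide VM: the RAM spills onto a remote node
    Test-NumaFit -vCpus 8 -MemoryGB 24    # wide VM: two vCPUs land on another node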



Knowing your application workload and architecture is crucial here. If the wide VM can be split into two smaller VMs, each of which fits into a NUMA node, then you will likely get better performance. At the very least, you will get more consistent results, which will help when troubleshooting.

If your application is NUMA aware, even better. vSphere can create a NUMA-aware VM with vNUMA. The VM is divided into virtual NUMA nodes, each of which is placed onto a different physical NUMA node. Although the virtual machine is still spread across two NUMA nodes, the operating system and application inside the VM are aware of the NUMA split and can optimize their use of the resources. Among business-critical applications, Microsoft Exchange is not NUMA aware, whereas Microsoft SQL Server is.

VMware Design Experts and Consultants need to be aware of hardware configuration
Knowing the hardware is important. That means knowing the NUMA node size of the physical servers and fitting your VMs to that node size. It is also important to keep the clusters consistent, having the same NUMA node size on all the hosts in the cluster since a VM that fits in a NUMA node on eight-core CPUs may not fit so well on a host with six-core CPUs. This also affects the number of vCPUs you assign VMs; if you have several VMs with more than two vCPUs, make sure that multiple VMs fit into the NUMA core count. The example six-core NUMA nodes suit two- and three-vCPU VMs well, but multiple four-vCPU VMs per NUMA node might not perform so well since there isn't space for two of them at the same time on the NUMA node. Four vCPU virtual machines fit NUMA node sizes of four and eight cores far better.
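
The packing arithmetic in the previous paragraph is easy to sanity-check; a rough PowerShell sketch, using only the example node sizes and vCPU counts mentioned above:

    # How many VMs of a given vCPU count can run side by side in one NUMA node,
    # ignoring hyperthreading and any other load on the node
    function Get-VMsPerNumaNode {
        param([int]$CoresPerNode, [int]$vCpusPerVM)
        [math]::Floor($CoresPerNode / $vCpusPerVM)
    }

    Get-VMsPerNumaNode -CoresPerNode 6 -vCpusPerVM 3    # 2 three-vCPU VMs fit per node
    Get-VMsPerNumaNode -CoresPerNode 6 -vCpusPerVM 4    # only 1 four-vCPU VM fits per node
    Get-VMsPerNumaNode -CoresPerNode 8 -vCpusPerVM 4    # 2 four-vCPU VMs fit per node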

Like so many things that impact vSphere design, awareness is the key to avoiding problems. Small VMs don't need to know or care about NUMA but the large -- and therefore critical -- VMs in your enterprise may need NUMA awareness to perform adequately. Design your large VMs to suit the NUMA architecture of your hosts, and make sure your HA and DRS clusters have a consistent NUMA node size.

Wednesday, January 16, 2013

Oversized Virtual Machines


Recently we had an internal discussion about the overhead an oversized virtual machine generates on the virtual infrastructure. An oversized virtual machine is a virtual machine that consistently uses less capacity than its configured capacity. Many organizations follow vendor recommendations and/or provision virtual machines sized according to the wishes of the customer, i.e. more resources equals better performance. By oversizing the virtual machine you can introduce the following overheads or, even worse, decrease the performance of the virtual machine or of other virtual machines inside the cluster.

Note: This article does not focus on large virtual machines that are correctly configured for their workloads.

Memory overhead
Every virtual machine running on an ESX host consumes some memory overhead in addition to the current usage of its configured memory. This extra space is needed by ESX for internal VMkernel data structures such as the virtual machine frame buffer and the mapping table for memory translation, i.e. mapping virtual machine physical memory to machine memory.

The VMkernel calculates a static overhead for the virtual machine based on the number of vCPUs and the amount of configured memory. Static overhead is the minimum overhead required to start the virtual machine, and DRS and the VMkernel use this metric for admission control and vMotion calculations. If the ESX host cannot provide enough unreserved resources for the memory overhead, the VM will not be powered on; in the case of vMotion, the destination ESX host must be able to back the virtual machine reservation plus the static overhead, otherwise the vMotion will fail.

The following table displays common static memory overheads encountered in vSphere 5.1. For example, a 4 vCPU, 8 GB virtual machine will be assigned a memory overhead reservation of 413.91 MB regardless of whether it uses its configured resources or not.

Memory (MB)    2 vCPUs    4 vCPUs    8 vCPUs
2048           198.20     280.53     484.18
4096           242.51     324.99     561.52
8192           331.12     413.91     716.19
16384          508.34     591.76     1028.07

The VMkernel treats the virtual machine overhead reservation the same as a VM-level memory reservation: it will not reclaim this memory once it has been used, and memory overhead reservations are not shared by transparent page sharing.
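
For quick reference, here is the same table as a small PowerShell lookup; the values are simply the vSphere 5.1 example figures from the table above, not something the script derives.

    # Static memory overhead reservations (MB), keyed by configured memory (MB) and vCPU count
    $overheadMB = @{
        2048  = @{ 2 = 198.20; 4 = 280.53; 8 = 484.18 }
        4096  = @{ 2 = 242.51; 4 = 324.99; 8 = 561.52 }
        8192  = @{ 2 = 331.12; 4 = 413.91; 8 = 716.19 }
        16384 = @{ 2 = 508.34; 4 = 591.76; 8 = 1028.07 }
    }

    $overheadMB[8192][4]    # the 4 vCPU, 8 GB example from the text: 413.91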

Shares (size does not translate into priority)
By default each virtual machine is assigned a specific number of shares. The number of shares depends on the share level (low, normal or high), the number of vCPUs and the amount of memory.

Share Level      Low     Normal    High
Shares per CPU   500     1000      2000
Shares per MB    5       10        20

That is, a virtual machine configured with 4 vCPUs and 8 GB of memory at the normal share level receives 4000 CPU shares and 81920 memory shares. Because the number of shares is tied to the amount of configured resources, this “algorithm” indirectly implies that a larger virtual machine should receive a higher priority during resource contention. This is not necessarily true: some business-critical applications run perfectly well on virtual machines configured with small amounts of resources.
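
A quick PowerShell sketch of the default share arithmetic, using the per-CPU and per-MB share values from the table above:

    # Default share values per vCPU and per MB of configured RAM, by share level
    $cpuSharesPerVcpu = @{ Low = 500; Normal = 1000; High = 2000 }
    $memSharesPerMB   = @{ Low = 5;   Normal = 10;   High = 20 }

    function Get-DefaultShares {
        param([int]$vCpus, [int]$MemoryMB, [string]$Level = 'Normal')
        [pscustomobject]@{
            CpuShares    = $vCpus    * $cpuSharesPerVcpu[$Level]
            MemoryShares = $MemoryMB * $memSharesPerMB[$Level]
        }
    }

    Get-DefaultShares -vCpus 4 -MemoryMB 8192    # 4000 CPU shares, 81920 memory shares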

Oversized VMs on NUMA architecture
In vSphere 4.1, the CPU scheduler was optimized to handle virtual machines that contain more vCPUs than there are cores in one physical NUMA node. Such a virtual machine (a wide VM) is spread across the minimum number of NUMA nodes, but its memory locality is reduced because the memory is distributed among its home NUMA nodes. This means that a vCPU running on one NUMA node might need to fetch memory from another of the VM's NUMA nodes, leading to unnecessary latency and CPU wait states, which in turn can increase %ready time for other virtual machines in highly consolidated environments.

Wide-VM support is of great use when the virtual machine actually runs a load comparable to its configured size, and it reduces overhead compared to the 3.5/4.0 CPU scheduler, but it is still better to size the virtual machine equal to or smaller than the number of cores available in a NUMA node.

More information about CPU scheduling and NUMA architectures can be found on the VMware site.

Impact of memory overhead reservation on HA Slot size 
The VMware High Availability admission control policy “Host failures cluster tolerates” calculates a slot size to determine the maximum number of virtual machines that can be active in the cluster without violating failover capacity. This admission control policy determines the HA cluster slot size from the largest CPU reservation and the largest memory reservation plus its memory overhead reservation. If the virtual machine with the largest reservation (which could be an appropriately sized reservation) is oversized, its memory overhead reservation can still have a substantial impact on the slot size.

The HA admission control policy “Percentage of Cluster Resources Reserved” calculates the memory component of its mechanism by summing the reservation plus the memory overhead of each virtual machine. This allows the memory overhead reservation to have an even bigger impact on admission control than it has in the “Host failures cluster tolerates” calculation.
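
As a hedged illustration of how the memory side of this policy adds up, here is a small PowerShell sketch; the VM names, reservations and cluster size are invented, and only the formula (reservation plus memory overhead, summed per virtual machine) comes from the text.

    # Illustrative VM list: configured memory reservation plus static memory overhead (MB)
    $vms = @(
        @{ Name = 'VM1'; ReservationMB = 0;    OverheadMB = 413.91 },
        @{ Name = 'VM2'; ReservationMB = 2048; OverheadMB = 242.51 },
        @{ Name = 'VM3'; ReservationMB = 0;    OverheadMB = 716.19 }
    )

    # Memory component of the policy: sum of (reservation + overhead) per powered-on VM
    $reservedMB = ($vms | ForEach-Object { $_.ReservationMB + $_.OverheadMB } |
                   Measure-Object -Sum).Sum

    $clusterMemoryMB = 4 * 131072    # e.g. four hosts with 128 GB each
    $reservedPct     = [math]::Round(100 * $reservedMB / $clusterMemoryMB, 2)
    "Memory reserved by reservations plus overhead: $reservedMB MB ($reservedPct % of cluster memory)"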

DRS initial placement
DRS uses a worst-case scenario during initial placement. Because DRS cannot determine the resource demand of a virtual machine that is not running, it assumes that both the memory demand and the CPU demand equal the configured size. Oversizing virtual machines therefore reduces the options for finding a suitable host. If DRS cannot guarantee that the full 100% of the resources provisioned for the virtual machine can be used, it will vMotion other virtual machines away so that it can power on this single virtual machine. If there are still not enough resources available, DRS will not allow the virtual machine to be powered on.

Shares and resource pools
When a virtual machine is placed inside a resource pool, its shares are relative to the other virtual machines (and resource pools) inside that pool. Shares are relative to all the other components sharing the same parent; an easier way to put it is to call it a sibling share level. The numeric share values are therefore not directly comparable across pools, because they are children of different parents.



By default a resource pool is configured with a share amount equal to that of a 4 vCPU, 16 GB virtual machine. As mentioned above, shares are relative to the configured size of the virtual machine, implicitly stating that size equals priority.

Now let's look again at the scenario above: three virtual machines are re-parented to the cluster root, next to resource pool 1 and resource pool 2. Suppose they are all 4 vCPU, 16 GB machines; their share values are then interpreted in the context of the root pool, and each receives the same priority as resource pool 1 and resource pool 2. This is not only wrong but also dangerous in a denial-of-service sense, because a virtual machine running at the same level as resource pools can suddenly find itself entitled to nearly all cluster resources.
Because of this default share distribution, we always recommend avoiding placing virtual machines at the same level as resource pools. Unfortunately, a virtual machine can end up re-parented to the cluster root level when it is manually migrated using the GUI, because the current workflow defaults to the cluster root level instead of the VM's current resource pool. For this reason it is recommended to increase the number of shares on the resource pools to reflect their priority levels. More info about shares on resource pools can be found in Duncan's post on yellow-bricks.com and in the VMware vSphere ESX 4.0 Resource Management Guide.

Multiprocessor virtual machine
In most cases, adding more vCPUs to a virtual machine does not automatically increase the throughput of the application, because some workloads cannot take advantage of all the available CPUs. Sharing resources and scheduling these processes introduces additional overhead.
For example, a four-way virtual machine is not four times as productive as a single-CPU system. If the application is unable to scale, it will not benefit from the additional available resources.

Progress
Although relaxed co-scheduling reduces the VMkernel's need to schedule all vCPUs of the virtual machine simultaneously, periodically scheduling the unused or idle vCPUs is still necessary to keep the progress of each vCPU in the virtual machine acceptably synchronized.
Esxtop also shows scheduling stats for SMP virtual machines:
%CRUN: all vCPUs want to run at once. CRUN is the amount of time between when a pCPU is told to run a certain vCPU of an SMP VM and when it is actually able to run that vCPU. This should be almost 0.
%CSTOP: if one vCPU gets ahead of another vCPU of the same SMP VM, the faster vCPU is asked to stop until the other one can catch up. The time spent in this stopped state is CSTOP.

Single thread application
Only applications that use multiple threads which can be scheduled in parallel benefit from multiprocessor systems. A single-threaded application can only be scheduled on one CPU at a time and will not benefit from the multiple CPUs available. The guest OS may also migrate the thread between the available CPUs, introducing unnecessary overhead such as interrupts, context switches and cache misses.

Timer interrupts
In older guest operating systems, the unused virtual CPUs still take timer interrupts, which consumes a small amount of additional CPU. Please refer to the KB article “High CPU Utilization of Inactive Virtual Machines” (KB 1077).

Configured memory
Oversizing the memory configuration of a virtual machine can impact the performance of the virtual machine itself or, even worse, impact the other active virtual machines on the host and in the cluster. Using memory reservations on oversized virtual machines makes things go from bad to worse.

Application memory management 
Excess memory is a problem when the application uses it opportunistically, in other words when the application hoards memory. Java, SAP and often Oracle workloads assume they can use all the memory they detect. Because ESX cannot determine which memory is important to the virtual machine, it always backs the virtual machine's memory pages with physical pages. Besides creating a large memory footprint at the physical level, these kinds of applications add a third level of memory management as well.
Due to this additional management level, the guest OS does not understand which pages are important and which are not. And because the guest OS isn't aware, it cannot return inactive pages to the balloon driver when requested, thereby impacting the performance of the application during contention even more.
Setting a memory reservation at the virtual machine level will guarantee the availability of physical memory and will secure a certain level of application performance (if memory bound). However, memory reservations at the virtual machine level impact the virtual infrastructure, and the larger the memory reservation, the larger the impact. Visit Frank Denneman's post on the impact of memory reservations for more info.
To avoid these effects, it is recommended to monitor the behavior of the application over time and tune the configuration of the virtual machine and its reservation to get proper performance while limiting the impact of its configured memory and memory reservation.

NUMA node
If the virtual machines mentioned in the previous paragraph are configured with more memory than is available in their home NUMA node, the system needs to fetch the memory from remote NUMA nodes. Accessing memory on remote nodes introduces latency and generally reduces the throughput of the vCPU. ESX does not communicate any NUMA information to the guest OS, so both the guest OS and the application are unaware of the non-uniform latency characteristics of the underlying platform and are unable to prioritize which memory they use.

If a virtual machine uses all the available memory of a NUMA node, the other active virtual machines using that node end up with a higher degree of remote memory, leading to higher memory latencies and lower throughput for those virtual machines and eventually to an inter-node migration.
Try to configure virtual machines with less memory than is available in a single NUMA node.

Swap file
At power-on, a swap file is created that equals the virtual machine's configured memory minus its configured memory reservation. If no memory reservation is set, the virtual machine swap file (.vswap) equals the configured memory. Large virtual machines therefore generate an additional requirement for storing these large swap files, reducing the consolidation ratio of virtual machines per VMFS datastore.
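
The swap file arithmetic is simple enough to show directly; a minimal PowerShell sketch:

    # .vswap size = configured memory - memory reservation (per powered-on VM)
    function Get-SwapFileSizeMB {
        param([int]$ConfiguredMB, [int]$ReservationMB = 0)
        $ConfiguredMB - $ReservationMB
    }

    Get-SwapFileSizeMB -ConfiguredMB 16384                      # 16 GB VM, no reservation: 16 GB swap file
    Get-SwapFileSizeMB -ConfiguredMB 16384 -ReservationMB 8192  # half reserved: 8 GB swap file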

Bootstorms

“A bootstorm is the occurrence of powering on a multitude of virtual machines simultaneously.”

Virtual infrastructures running versions prior to ESX 4.1 can encounter memory contention when a bootstorm of Windows virtual machines occurs. Windows checks how much memory is available to the OS by zeroing out the pages it detects. Transparent page sharing will collapse these pages, but not immediately: transparent page sharing is a cycle-driven process that tries to make a pass over the virtual machine's memory within a timeframe of 3600 seconds, and the level of contention affects the speed of the TPS process. During a bootstorm, this zero-out behavior and the delayed TPS process can introduce contention. Usually this contention is short-lived. Unfortunately, during the startup phase of the guest OS the balloon driver is not yet loaded, and this situation can lead to compressing (10% of configured memory) and swapping useless data straight to disk.

ESXTOP will display swapped-out memory, but due to the nature of the data it will show little to no swap-in.
From ESX 4.1 onwards, ESX uses a new technique called zero-page sharing. An in-depth post about this new technique will follow shortly.

End-note
This post concludes the series about the impact of oversized virtual machines. The reason I wrote these articles is that many organizations still size their virtual machines for assumed peak loads expected somewhere in the (late) future of the service or application, often using the same policy or method they used for physical machines. The beauty of virtual machines is the flexibility an organization has in resizing a machine during its lifecycle. Leverage these mechanisms, incorporate them into your service catalog and daily operations, and size the virtual machine according to its current or near-future workload.

Saturday, October 06, 2012

Join The Movement To Stop Answering After-Hours Email


As often as email is proclaimed to be dead, and then not, one thing is for sure: knowledge workers - like me, and probably you - still have to deal with a whole lot of it. Some employees and their employers are starting to push back against the "always-on" mentality, advocating "email-free" after-hours time. But is this a realistic goal or a fool's errand? 
I struggle with this issue almost every day, so it's encouraging to see more and more people address the problem.
The most recent sighting of this trend was found recently in the Washington Post, which highlighted the experiments of some firms to limit or restrict use of email communications during non-business hours.
So far, the experiments (at least those mentioned by the Post) seem to be improving employees' moods. "At PBD Worldwide, an Atlanta-based shipping company, the mood among workers has been noticeably better since the company adopted a nights-and-weekends-free email policy. Work e-mails 'can wait,' said Lisa Williams, vice president of human relations. 'The world isn't going to end,'" the Post reported.

Changing Expectations Of Work

On the surface, the idea that people can put away their work and dedicate their lives to, you know, actually living is a very charming one. But in an age where many people - like me - work from home at least part of the time or are constantly connected to the Internet through a smartphone or tablet, it's hard to imagine that this is anything but an exception in a world where the norm remains that workers respond instantly.
The expectation of the working world is increasingly one of more hours for less pay, and as stressful as handling email after work can be, businesses are still looking at productivity levels that have shot up 254.3% between 1948 and 2011, even as real hourly compensation of production/nonsupervisory workers only grew 113.1% in the same period, according to the Economic Policy Institute. A similar disparity exists between productivity and real median family income.
While there are companies out there that are trying to ease their workers' burden, it's hard not to envision a pushback on a full stop on after-hours communication if there is a real or perceived decline in productivity.

Why Cutting Back Is Hard For Me

It is a trap that many of us can fall into. As a freelance IT consultant working mostly from home, the trap can be especially difficult to avoid. According to the latest numbers from TeleworkResearchNetwork.com, there are around three million telecommuting workers in the US, some 2.5% of the total workforce. It is not clear if that counts people like me, who own a business that happens to be located in a home office. For such home workers, the balance between work and not-work is always difficult, and email is a big part of the problem. With projects and articles, for instance, it can be easy to walk away for a while, unless there's some looming deadline. But email (and other communications) can be harder to avoid, since messages can and do show up at all hours on my smartphone and tablet.
Turning email off is of course possible; part of my work habit is to not check email for long periods of the day while I am writing, to avoid distractions. But even in the evening, the trap is there, because I don't want to miss a note from my partners or one of my clients asking for help.
Some days are better than others, but there is always the small underlying dread that something won't get caught in time. That dread leads to stress and a broader concern about a lack of productivity. 

How Productive Is Email, Really?

But does productivity really decline if email isn't diligently managed? It turns out the reverse may be true.
In a CBS interview in May, Judith Glaser, CEO and founder of Benchmark Communications, revealed the results of a study that indicated that workers who walked away from emails for even as little as five days were more focused and less stressed.
"This does make a difference. Changing how the heart works, changing how the brain works, to become less stressed, gives everybody greater productivity. And this shows up for the boss's benefit," Glaser stated.
Of course, taking such a vacation is not always easy. Glaser mentioned in the same interview that she had to coach one executive who required his employees to respond to all of his emails within seven minutes or else be regarded as a lower-level employee.
The move to cut back, or even dump email altogether, may be part of a reaction to this larger problem of rising expectations for workers. Even if we cut back on after-hours email, that won't effect real change unless employers moderate their requirements for instant response 24 hours a day. The genie has been let out of the bottle, and it is very common for employers (who know you have a smartphone because they are paying for it) to expect 24/7 responsiveness. Remote workers feel this pressure, perhaps in an effort to "stay productive" or just "look productive" to justify to their bosses why they should be allowed to not be in the office.
Unless the expectations of "always on" can be reduced to a reasonable level, the intrusiveness of work into our daily lives is never going to end. No matter how much we want it to.

Saturday, December 31, 2011

Array-based Thin Provisioning over VMware's Thin Provisioning

Over the last few years thin provisioning has steadily moved into the mainstream of storage management - so much so that it has found its way not only onto many leading storage systems but into operating systems as well. Clearly one of the largest endorsements of thin provisioning at the operating system level came when VMware announced its inclusion of thin provisioning as an option within vSphere 4.

But with thin provisioning now available at both the OS and storage system levels, organizations need to determine at which of these two levels they can derive the greatest benefit from thin provisioning, or whether, in fact, there are sufficient benefits at both levels to implement both storage and VMware thin provisioning.

As good a place as any to start in understanding the benefits of each is a video that was embedded in a blog post on the StorageRap website on September 1, 2009. This video contrasts the benefits of using VMware's thin provisioning with those of the thin provisioning feature found on the 3PAR InServ Storage Server.

The initial scenario that the presenter, 3PAR's Michael Haag, illustrates is when the storage allocated to VMware is a fat volume (i.e. NOT thinly provisioned) from traditional storage. All of this storage capacity is discovered and allocated by the VMware file system (VMFS) and brought under its management.

It is in this scenario that the primary value of VMware's new thin provisioning feature comes into play. As new virtual machines and their associated VMDK files are created, VMFS will only allocate as much physical space as each individual VMDK file actually needs, reserving only enough space for the VMDK's data. This results in storage savings and enables vSphere to more efficiently meet the storage requirements of the virtual machines it hosts as well as open the door to hosting more VMs.

However there is still a very real up front storage cost with VMware's thin provisioning implementation: all VMFS capacity must be provisioned (physically) up front instead of as it is needed for each VM and VMDK.

So while VMware vSphere's thin provisioning feature certainly helps organizations get more value from "fat" storage with a thin methodology from VMFS to the VM, it still leaves the door open to increase storage utilization with array-based thin provisioning. Array-based thin provisioning not only associates physical capacity with the VMDK only as writes occur, but does so with VMFS as well so there is no up front allocation or waste.
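
To see why the up-front VMFS allocation matters, here is a toy PowerShell comparison; the datastore size, VMDK count and written-data figures are invented purely for illustration.

    # Toy numbers: a 500 GB VMFS datastore holding four VMDKs provisioned at 100 GB each,
    # with only 30 GB of data actually written per VMDK
    $datastoreGB   = 500
    $vmdkCount     = 4
    $writtenGB     = 30

    # Fat array volume + thick VMDKs: everything is allocated up front
    $fatArrayThickVmdk = $datastoreGB

    # Fat array volume + VMware thin VMDKs: the array still backs the full LUN,
    # but unwritten VMDK space stays free inside the datastore
    $fatArrayThinVmdk  = $datastoreGB

    # Array-based thin provisioning: the array only backs what has been written
    $thinArray         = $vmdkCount * $writtenGB

    "Fat LUN, thick VMDKs : $fatArrayThickVmdk GB of physical capacity consumed"
    "Fat LUN, thin VMDKs  : $fatArrayThinVmdk GB consumed on the array (savings stay inside VMFS)"
    "Thin-provisioned LUN : $thinArray GB consumed on the array"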

Consider this scenario, which occurs every day in organizations of all sizes. It is only natural for an application administrator to request more storage capacity than the application needs. In this case, VMware's thin provisioning helps alleviate the over-provisioning request by enabling storage administrators to thinly provision VMFS's allocated storage capacity to the VM-based application as data is written.

However, the storage administrator is still forced to hedge his bets. While he may suspect the application administrator has overestimated how much storage capacity he needs, the storage administrator still has to over-provision the networked storage assigned to that vSphere server to account for the situations where the application owner's estimates are correct or even too low.

It is for this reason that the presenter in this video, as well as other virtualization experts not affiliated with 3PAR, recommends, at a minimum, using thin provisioning at the array level. VMware thin provisioning can be used in conjunction with array-based thin provisioning; however, thin provisioning at the array level should be used at a minimum because it provides the most efficient use of storage capacity for three reasons:
  • Storage capacity is only allocated and consumed (by VMFS and the VMDK) when data is actually written by the application on the VM to the attached storage system.
  • Storage capacity does not need to be over-provisioned.
  • Storage management can remain a function of the storage team for those enterprise organizations that have this separation of duties. However, because of 3PAR's integration with VMware's vCenter, organizations can also monitor, manage and report on storage through this management console.
Finally there are cases where it makes sense to use the two thin provisioning solutions together. The additional overhead of adding VMware TP on top of array-based TP can be beneficial in cases where separate Server and Storage Administrators want the flexibility to over-provision at their respective layer.

The growing acceptance of thin provisioning in the data center is leading to increased cost savings and efficiencies, but, as this example with VMware vSphere illustrates, it is not always clear which is the best way to implement it.
 
The short answer is that if the storage system you are using now does not offer thin provisioning, using vSphere's thin provisioning feature is clearly the way to go. But in circumstances where you have both VMware and a storage system from 3PAR available to you and if thin provisioning can only be used in one place, then it should absolutely be done at the array-based layer to maximize utilization and minimize upfront costs.
 
Administrators can then decide to deploy VMware thin provisioning on top of 3PAR thin provisioning if it provides enough management benefits to the VMware administrator to outweigh the need to manage thin provisioning in two places. 

Saturday, October 08, 2011

My Out of Office Message

I am currently out of the office on vacation.

I know I'm supposed to say that I'll have limited access to email and won't be able to respond until I return - but that's not true. I will be carrying my smartphone with me, which is connected 24/7 to the newly implemented email system, and I can respond if I need to. And I recognize that I'll probably need to interrupt my vacation from time to time to deal with something urgent.
That said, I promised my wife that I am going to try to disconnect, get away and enjoy our vacation as much as possible. So, I'm going to experiment with something new. I'm going to leave the decision in your hands:

If your email truly is urgent and you need a response while I'm on vacation, please resend it to interruptmyvacation_@_qadri.me and I'll try to respond to it promptly.

If you think someone else from my team might be able to help you, feel free to email my subordinates at the IT Infrastructure Department (HSAITInfrastructureTeam_@_felix-sap.com) and they will try to point you in the right direction.

Otherwise, I'll respond when I return…

Warm regards,
Mohiuddin Qadri

Wednesday, June 29, 2011

LUN configuration to boost virtual machine performance

Advanced virtual machine (VM) storage options can improve performance, but their benefits will go only so far if your physical logical unit numbers (LUNs) are not configured with best practices in mind.

Only when a LUN configuration meets the needs of your VM workloads can you significantly improve virtual machine performance. When it comes to LUN configuration, hardware choices, I/O optimization and VM placement are all important considerations.

Hardware and LUN configuration
The hardware on which you house your LUNs can make all the difference in VM performance. To avoid an overtaxed disk subsystem, choose hardware with resource levels similar to those of your host systems. It does no good to design a cluster of servers with two six-core processors and 128 GB of RAM each and attach it to an iSCSI storage area network (SAN) full of Serial ATA (SATA) disks over a 1 Gbps link. That arrangement can create a storage bottleneck at either the transport or the disk-latency level.

As you set up the LUN configuration, correctly sizing your disk subsystem is the key to ensuring acceptable performance. Going cheaper on one component may save you money up front, but if a resulting bottleneck reduces overall VM storage capacity or stability, it could ultimately cost you much more.

Disk type
To improve virtual machine performance, choose disk types for VM storage based on workload. Lower speed, lower duty cycle and higher latency drives such as SATA/FATA may be good for development environments. These drives usually range from 7,200 RPM to 10,000 RPM. For production workloads, or those with low latency needs, various SCSI/SAS alternatives give a good balance of VM performance, cost and resiliency. These drives range from 10,000 RPM to 15,000 RPM.

Solid-state drives are also a realistic option. For most workloads, these kinds of drives may be overkill technically and financially, but they provide low latency I/O response.

I/O optimization
To ensure a stable and consistent I/O response, maximize the number of VM storage disks available. You can maximize the disk number in your LUN configuration whether you use local disks or SAN-based (iSCSI or Fibre Channel) disks. This strategy enables you to spread disk reads and writes across multiple disks at once, which reduces the strain on a smaller number of drives and allows for greater throughput and response times. Controller and transport speeds affect VM performance, but maximizing the number of disks allows for faster reads and resource-intensive writes.

RAID level
The RAID level you choose for your LUN configuration can further optimize VM performance, but there's a cost-vs.-functionality component to consider. RAID 0+1 and 1+0 will give you the best virtual machine performance but come at a higher cost, because only 50% of the allocated disk capacity is usable.

RAID 5 will give you more gigabytes per dollar, but it requires you to write parity bits across drives. On large SANs, any VM performance drawback will often go unnoticed because of powerful controllers and large cache sizes. But in less-powerful SANs or on local VM storage, this resource deficit with RAID 5 can create a bottleneck.

Still, on many modern SANs, you can change RAID levels for a particular LUN configuration. This capability is a great fallback if you’ve over-spec’d or under-spec’d the performance levels your VMs require.
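
As a rough illustration of the cost-vs.-functionality trade-off, the sketch below compares usable capacity and the commonly cited back-end write penalties (about two I/Os per write for mirrored RAID, about four for RAID 5) for an eight-disk group; the disk count and size are assumptions, not figures from any particular SAN.

    # Usable capacity and commonly cited write penalties for an 8 x 300 GB disk group
    $disks  = 8
    $diskGB = 300

    $raid10UsableGB = ($disks / 2) * $diskGB    # mirroring keeps 50% usable
    $raid5UsableGB  = ($disks - 1) * $diskGB    # one disk's worth of capacity goes to parity

    $raid10WritePenalty = 2                      # each write hits two mirrored disks
    $raid5WritePenalty  = 4                      # read data, read parity, write data, write parity

    "RAID 10: $raid10UsableGB GB usable, write penalty $raid10WritePenalty"
    "RAID 5 : $raid5UsableGB GB usable, write penalty $raid5WritePenalty"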

Transport
Whether the connectivity between host servers and LUNs is local, iSCSI or Fibre Channel, it can create resource contention. The specific protocol determines how quickly data can traverse between the host and disk subsystem. Fibre Channel and iSCSI are the most common transports used in virtual infrastructures, but even within these designations there are different classes, for example 1/10 Gb iSCSI and 4/8 Gb Fibre Channel.

Thin provisioning
Thin provisioning technologies do not necessarily increase virtual machine performance, but they allow for more efficient use of SANs, because only the data on each LUN counts toward total utilization. This method treats total disk space as a pool that’s available to all LUNs, allowing for greater space utilization on the SAN. With greater utilization comes greater cost savings.

Block-level deduplication
Block-level deduplication is still an emerging technology among most mainstream SAN vendors. Again, this technology does not improve virtual machine performance through the LUN configuration, but it does allow data to be stored only once on the physical disk. That means large virtual infrastructures can save many terabytes of data because of similarities in VM workloads and the amount of blank space inherent with fixed-size virtual hard disks.

Number of VMs on a LUN
With the best LUN configuration for your infrastructure, you can improve virtual machine (VM) performance. Keep in mind disk types, I/O optimization, hardware, RAID level and more as you configure LUNs.

But the number of VMs you put on those LUNs depends on the size of your infrastructure and whether your environment is for testing and development.

Large number of VMs in production implementations
In medium or large infrastructures, with anywhere from 100 to 1,000 VMs, there are a few ways to glean the best virtual machine performance. In most cases, you can get away with a RAID 5 configuration if you have a SAN with two to four controllers, a larger disk cache and a 10 Gb iSCSI or at least a 4 Gb Fibre Channel transport.

This VM storage strategy has proven to provide a good balance of virtual machine performance and cost for production workloads.

In my own infrastructure, which uses 4 Gb Fibre Channel to an HP EVA 8400 SAN with 300 GB 15K drives, I can get 20 to 25 VMs per 1 TB RAID 5 LUN. The VMs range in size and I/O demands, but in general they have one or two vCPUs, 2 GB to 4 GB of RAM and between 25 and 60 GB of virtual disk.
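
A back-of-the-envelope PowerShell sketch of that sizing, using the figures above plus an assumed per-VM swap file and an assumed 20% free-space headroom (both assumptions are mine, not numbers from this article):

    # Rough VMs-per-LUN estimate: 1 TB LUN, VMDKs averaging ~40 GB,
    # ~3 GB of .vswap per VM, and ~20% of the datastore kept free
    $lunGB        = 1024
    $avgVmdkGB    = 40
    $avgSwapGB    = 3
    $freeHeadroom = 0.20

    $usableGB  = $lunGB * (1 - $freeHeadroom)
    $perVmGB   = $avgVmdkGB + $avgSwapGB
    $vmsPerLun = [math]::Floor($usableGB / $perVmGB)

    "Roughly $vmsPerLun VMs per $lunGB GB LUN at about $perVmGB GB each"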

As the number of VMs grows, the LUN configuration tends to scale out to many LUNs on multiple SANs. That improves virtual machine performance and provides greater resiliency.

Small number of VMs or test/development environments
Small or test environments, with fewer than 100 VMs and low I/O utilization, are where many admins tend to struggle with LUN configuration. Many fledgling virtual infrastructures grow faster than anticipated, or admins have unrealistic I/O performance expectations.

That said, any VM storage method -- local storage, a lower-end direct-attached SCSI or even an iSCSI SAN with SATA drives -- can work well. In small infrastructures, where controller CPUs and disk caches have lower resource usage, look to RAID 10 to provide an extra boost in virtual machine performance.

The ideal number of VMs for a LUN configuration in smaller infrastructures varies greatly. Anywhere from four to 15 VMs per LUN is possible, but if you assign more than your infrastructure can handle, disk I/O could be saturated quickly.

It’s also important to appropriately size your host resources to meet the expected performance of your VM storage subsystem. If you buy an excessive number of CPU cores or large amounts of memory, for instance, these resources will go to waste because disk I/O will be exhausted well before the others.

So what does this all mean? For optimum VM performance and cost savings, use a healthy combination of the previously mentioned options. Using the best possible resources and LUN configuration is ideal, but it’s not practical or necessary for the majority of virtual infrastructures.

Tuesday, May 03, 2011

Filling VMware vCenter Server management vOids

I love VMware vCenter and use it every day. But it isn't perfect.

VMware vCenter Server provides a centralized management console for vSphere hosts and virtual machines (VMs). Most of vSphere's advanced features, such as vMotion, Distributed Resource Scheduler and Fault Tolerance, require vCenter Server.

But some vCenter Server management features need improvement. And some vCenter Server management gaps are larger than others. So what should a VMware administrator do? How do you patch these virtual infrastructure cracks?

Every organization is different, as is every virtual infrastructure, so you may see different holes in vCenter than I do in mine. That said, here is my list of vCenter Server management vOids (get it?) and how to fill them:

vCenter Server management hole No. 1: Backup and recovery
You need a backup application that recognizes the virtual infrastructure and can interact with vCenter to determine the locations of VMs. Every edition of vSphere, starting with Essentials Plus, includes VMware Data Recovery (VDR), which is a good backup tool for virtual infrastructures with up to 100 VMs. (You can overcome this limitation with multiple VDR appliances.)

But VMware just doesn't promote VDR, nor does the company offer many new VDR revisions or features. And most administrators use third-party virtualization backup tools, regardless of how many VMs an organization has. These tools scale to thousands of VMs, and you'll never have to replace your VM backup tool because it ran out of gas at 100 VMs. Plus, these tools may offer additional features, such as data replication between data centers, instant restore and advanced verification. Two such virtualization backup tools are Veeam Backup and Replication 5 and Quest vRanger.

vCenter Server management hole No. 2: Mass changes
If you have only a handful of VMs, making a modification to the VM properties isn't too difficult. But what if you want to disconnect the CD drive on 500 VMs? That's a problem: vCenter doesn't offer a way to automate this process.

Instead of performing mass changes in the vSphere Client or vCenter, VMware recommends PowerCLI, which is a PowerShell interface with vSphere-specific additions. On its own, PowerCLI isn't terribly enjoyable to use, but it can be enhanced with a nice toolset and a library of preconfigured scripts to jump-start your mass changes, which is possible with PowerGUI and the VMware Community PowerPack.
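
As an example of the kind of mass change PowerCLI enables, the sketch below sets every VM's CD/DVD drive to no media; the cmdlets (Connect-VIServer, Get-VM, Get-CDDrive, Set-CDDrive) are standard PowerCLI as I recall them, and the vCenter name is a placeholder.

    # Disconnect the CD/DVD drive on every VM in the inventory (placeholder vCenter name)
    Connect-VIServer -Server vcenter.example.local
    Get-VM | Get-CDDrive | Set-CDDrive -NoMedia -Confirm:$false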

vCenter Server management hole No. 3: Alarm alerting
vCenter offers 42 default alarms. But if you aren't in the vSphere Client when an alarm goes off, it's likely that you won't find out about it until much later -- or when the CIO chews you out. And manually configuring an action to send an email on all 42 alarms is a pain.

Instead of configuring actions on the default alarms, you can exploit two quick, easy and free alarm tools that can send customizable alerts, whether or not the vSphere Client is running. XtraVirt's vAlarm and Nick Weaver's vSphere Mini Monitor solve this vCenter Server management issue.

vCenter Server management hole No. 4: Capacity planning
vCenter does a decent job with performance charts and tracking history, such as logs, events and metrics. But performance monitoring could be easier. Plus, it has no built-in method for capacity planning or for easily identifying capacity bottlenecks.

VMware's vCenter CapacityIQ and third-party tools, such as vKernel's vOperations Suite, simplify capacity planning and answer the following capacity questions: How much excess VM capacity do you have? Where are the bottlenecks in the infrastructure? At the current growth rate, how long before you'll need to upgrade a server's RAM?

vCenter Server management hole No. 5: VM guest processes and files
vCenter does well in managing hosts and VMs, but it doesn't go into the guest operating system. In the vSphere Client, you can't view the running processes inside every VM, edit a file in a VM's guest OS or run a script across all VMs.

The free VMware Guest Console, an experimental application created by VMware Labs, is a great tool for managing vSphere VM processes and files. You can view, sort and kill processes across all VMs, and you can also run a script on all Windows or Linux VMs. I hope VMware includes these experimental features in the vSphere Client soon!

vCenter Server management hole No. 6: Virtual network insight
The vSphere Client can indicate that VM network traffic is causing a 1 Gbps Ethernet adapter to run at a 99% utilization rate. But strangely, it doesn't display what kind of traffic is going across the virtual networks, where it came from or where it's going.

To learn which traffic is going across a virtual network, there's another free tool for vSphere: Xangati for ESX, a virtual appliance that tracks conversations on the virtual network. It's great for troubleshooting any virtual network issue, analyzing virtual desktop infrastructure and correlating vCenter performance stats with virtual network stats.

Tuesday, April 12, 2011

Virtualizing Exchange Server on Hyper-V versus VMware

Some experts turn the virtualization debate between Microsoft Hyper-V versus VMware vSphere into an emotional, almost religious crusade. However, the discussion of which platform is best for an organization can be boiled down to a logical list of pros and cons.

No matter which platform you choose, virtualizing Exchange offers hardware independence and improved disaster recovery options. For this mission-critical application, being able to swap out a server on lease or quickly fail over to a compatible server when a disaster occurs makes virtualizing Exchange on any platform a wise decision.

Before directly comparing the pros and cons of Microsoft Hyper-V versus VMware vSphere, let's consider a few points:

  1. You are going to virtualize Exchange 2007 or Exchange 2010 and move the server from physical to virtual hardware.
  2. You plan to run virtualized Exchange on Windows Server 2008 SP2 or Windows Server 2008 R2.
  3. You will virtualize all Exchange server roles, except for unified messaging.
  4. A dedicated server will run only Exchange and either virtualization platform, Hyper-V or vSphere.
  5. You have no bias toward either Microsoft Hyper-V or VMware vSphere. While this point may not seem that important, every organization has some pre-disposition about Microsoft versus VMware, especially if it already has virtualized applications from one vendor. For most companies, it makes sense to virtualize Exchange on the same virtualization platform as other enterprise servers.

Exchange Server virtualization: Hyper-V or vSphere?
There are a few questions to ask when weighing the pros and cons of virtualizing Exchange Server on either Hyper-V or vSphere. When trying to decide which product is best for your organization, consider these points:

Cost -- What costs are associated with each virtualization platform?

Microsoft Hyper-V -- You have two choices for virtualizing Exchange Server on Microsoft Hyper-V: Hyper-V Server 2008 R2, which is free, or Windows Server 2008 R2, which costs $1,029, plus System Center Virtual Machine Manager (SCVMM) Workgroup, which costs $505.

Windows Server's virtual instance licensing lets you run the OS on one virtual machine (VM) with the Standard edition and on four VMs with the Enterprise edition. If you already have a Windows Server 2008 R2 license for Exchange, you can use that license for the Hyper-V host and run Exchange in a guest VM. However, that would be the only VM you could run on that server without purchasing another license.

VMware vSphere -- There are also two options for virtualizing Exchange Server on VMware vSphere: use ESXi, VMware's free hypervisor, or purchase vSphere Essentials, which includes vCenter Essentials, for $611.

Exchange Server is an enterprise application, so you should research a platform with robust features and support. Therefore, you may want to consider higher-end editions of Hyper-V and vSphere.

A Windows Server Enterprise 2008 R2 license lets you run four Windows VMs at no additional cost, but lacks the advanced features that vSphere offers. vSphere Enterprise Plus doesn't include any Windows guest OS licenses, though it does include several advanced features, such as hot-add virtual CPU (vCPU) and RAM, 8-way virtual symmetric multiprocessing (vSMP), storage and network I/O control, as well as several advanced-memory management techniques.

Windows Server Enterprise 2008 R2 costs $3,999 and vSphere Enterprise Plus costs $3,495. When you include the features and options in each package, the pricing is very similar. (For specific, feature-by-feature comparisons of VMware vSphere and Microsoft Hyper-V, check out the additional resources listed in the note further down.)

  • Feature set -- Which virtualization platform offers the most, and best, features?
    Hyper-V is missing a few key features. It lacks memory optimization capabilities such as transparent page sharing, memory compression and memory ballooning. However, when run on Windows Server 2008 R2 SP1, Hyper-V has dynamic memory, which competes with VMware's memory overcommitment -- although the two work very differently.

    Additionally, Microsoft will not fully support a virtualized Exchange server running on Hyper-V if you want to:

    • Use snapshots;
    • Use VMotion or Quick or Live Migration (and because DRS relies on VMotion, this also rules out DRS clustering);
    • Use dynamic disks or thin-provisioned disks;
    • Have any other software installed on the host server except for backup, antivirus and virtualization management software; or
    • Exceed a 2:1 virtual-to-physical (V2P) processor ratio.

    In my opinion, these limitations prevent organizations from benefitting from some of the best virtualization features -- snapshots, VMotion and Quick or Live Migration.

    vSphere offers numerous advanced features that you may want to take advantage of for other VMs running on the same server, like distributed resource scheduler (DRS), distributed power management (DPM), VMotion, Storage VMotion and VMware high availability (HA).

    You may not be able to use all of VMware's features on your Exchange VM; however, you generally can use them for different VMs running on the same server. In doing this, you'll be able to place more VMs on a single vSphere server. Don’t overdo this, though, as VM sprawl can negatively affect performance.

  • Note: The cost and features comparisons above are not full comparisons of these two platforms. For more cost and feature comparisons, I recommend the following resources:
  • vSphere vs. Hyper-V Comparison
  • Choosing vSphere vs. Hyper-V vs. XenServer
  • How to run Exchange 2010 on Hyper-V R2
  • Resources and educational options -- How much information and support is readily available online about each platform? When you need help with your virtualized Exchange infrastructure, which platform is there more information on?

    In my opinion, the VMware community offers more educational options, innumerable blog posts, certification options and additional guides. There also are more conferences and resources available that focus on VMware.

  • Third-party support and tools -- How many third-party tools are on the market for each virtualization platform? How many new tools are released, on average, in a month or a year? While there are quite a few free and paid tools out there for Hyper-V, it seems that there are more options for companies working with vSphere.
  • Scope of product line -- It’s important to consider product maturity, how much each vendor has invested in its virtualization platform and its plans for future growth.

    Because Exchange Server and Hyper-V are both Microsoft products, you can use a single product suite -- System Center -- to manage them both (in one way or another). Although the suite has fewer features than VMware’s tools, there may be benefits to having your management layer all from the same vendor.

VMware offers more than 20 different virtualization pieces that fit into its overall vSphere product line. This is important because you may want to expand your company’s virtual infrastructure and will need more than what is offered natively.

Tuesday, August 24, 2010

Four things to remember about server virtualization security concerns

I've been studying virtualization and virtual server environments pretty carefully the last few years, so I'm always a little surprised when our clients who are looking to deploy virtual server farms in their data centers start getting confused about server virtualization security.

The reason is that virtualization changes nothing. No, really. Let me explain.

You have the same access control issues and the same systems. Nothing fundamentally changes when you roll out a virtual environment compared to an existing physical environment. What was important before is still important.

Of course, just because the big picture is the same doesn't mean that the details are the same. For example, some old security functions -- especially intrusion detection and prevention -- become more difficult in a virtual environment. When you get rid of 40 or 50 patch cords and turn that switch into a virtual switch split across multiple virtualization hosts, it's not so easy to find a place to jack in an IDS or to put an inline IPS.

Another security issue in virtualized environments is the unpredictability of location. When you virtualize within a data center, or even across data centers, you don't know which physical host any particular virtual machine will be running on at any given moment. On the network side, you are trading individual Ethernet ports for trunked VLANs. This means you may have to redesign your security topology to focus less on what systems are sitting in a particular rack and more on what functions are running on a particular VLAN or subnet.

At the same time, performance and management become issues we have to plan around. When we had lots of systems, it was simple to buy a lot of small, cheap firewalls that could split the load; it was also easy to define policy because each firewall only handled a small number of systems. With large virtualized clusters, your pile of firewalls may have to coalesce into a smaller number of larger devices, each capable of handling much higher loads. A more subtle issue is that most firewalls have poor facilities for managing large, multizone policies. I have found that many firewall vendors who have been good partners for a decade can't handle virtualization topologies without making you stand on your head when it comes to policy definition.

Four considerations for virtualization server security integration

As your virtualization project comes together, keep in mind the following important points to ease security integration:

  1. VLANs are king, and you will need to get used to bringing trunked interfaces into your switches and firewalls. Make sure you have at least 1 Gbps ports everywhere, and look to the day when 10 Gbps may be needed. If you're buying anything that only goes 100 Mbps, you're wasting your money.
  2. Putting more eggs in fewer baskets means paying more attention to high availability. Everything should come in pairs, and make sure you have two paths through the network. Any one component should be able to fail with absolutely no loss of connectivity or security.
  3. Traffic inspection tools such as IDS and IPS are harder to place in virtual environments. Running them in a virtual machine is almost never the right answer, but you may need special tools or hooks into your virtualization environment to get the traffic out where it can be inspected (see the sketch after this list).
  4. Look to your existing vendors to extend existing tools to support virtual environments, rather than buying a second set of tools just to handle virtualization. For example, it's better to have a single backup solution for both physical and virtual systems than trying to manage two separate backup solutions.
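
Getting packets out to an inspection point becomes an API exercise rather than a cabling one. As a concrete illustration of point 3 (and the VLAN trunking in point 1), here is a minimal pyVmomi sketch that creates a dedicated monitoring port group on a standard vSwitch with promiscuous mode enabled, so a virtual tap or capture appliance can see the traffic carried on that switch. The ESXi host object, vSwitch name and port group name are placeholders, and this is the crudest possible hook, not a recommended production design.

    # Minimal sketch: create a monitoring port group on a standard vSwitch so a
    # virtual tap/capture appliance can see traffic. Assumes pyVmomi; 'host' is a
    # vim.HostSystem object obtained elsewhere, names below are placeholders.
    from pyVmomi import vim

    def add_monitor_portgroup(host, vswitch='vSwitch0', name='ids-monitor'):
        spec = vim.host.PortGroup.Specification()
        spec.name = name
        spec.vswitchName = vswitch
        spec.vlanId = 4095                       # VGT: pass all VLANs carried on the trunk
        policy = vim.host.NetworkPolicy()
        policy.security = vim.host.NetworkPolicy.SecurityPolicy()
        policy.security.allowPromiscuous = True  # lets the tap VM see other ports' traffic
        spec.policy = policy
        host.configManager.networkSystem.AddPortGroup(portgrp=spec)

Whether you capture inside the host like this or haul the traffic out to a physical sensor, the point is the same: the inspection path has to be designed in, not patched in with a cable.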

Saturday, October 17, 2009

VMware Resources

There are plenty of resources on how to architect a virtual infrastructure and do it correctly. These resources will help you gain knowledge and experience with VMware so you can make sure your virtualization project is successful.

  • Documentation – VMware has excellent documentation and many individual documents for specific areas. You may not read all of them, but at least review the release notes, configuration maximums and installation guides before installing VMware. Then read the Server Configuration and Resource Management Guides.
  • Classes – A great way to kick-start your learning is to spend a week in a class on how to implement virtualization. You will learn from the material, can ask questions of the instructor and will have hands-on labs to practice what you learn. Classes are expensive, but attending one is a requirement if you want VMware Certified Professional (VCP) certification.
  • Books – VMware experts have written several books that share their knowledge and experience. Search your favorite book website for VMware and you will have plenty to choose from.
  • Websites – There are lots of great websites full of VMware information, news, tips, webcasts, videos and much more, such as TechTarget's SearchVMware.com and SearchServerVirtualization.com.
  • Blogs – Dozens of VMware- and virtualization-specific blogs provide a wealth of information from experienced VMware veterans. For a complete listing, including TechTarget's own Virtualization Pro blog, check out vLaunchpad.
  • VMworld – VMworld is the greatest annual virtualization show on the planet. So if you're serious about using VMware, you should attend. Besides more than 200 great technical sessions, there are hundreds of third-party vendors and partners at the show and thousands of customers, industry experts, VMware employees and more.
  • Webcasts/Podcasts – VMware has regular technical webcasts and podcasts that are a great way to learn about specific topics. TechTarget also has a large library of webcasts on both the SearchVMware.com and SearchServerVirtualization.com websites. If you miss a live one, you can access it in the archives.
  • VMware User Groups – VMware user groups, or VMUGs, are a great way to meet your local VMware crew, watch technical presentations from VMware, customers, partners and vendors, and connect with other local users. It's also a good way to share information and get answers to questions. Most large cities have a VMware user group, and groups typically meet every few months. You can view the upcoming schedule of VMUG meetings and sign up to attend at VMware's website.
  • Knowledgebase – When you think of a knowledgebase, you usually think of a repository of documents that cover problem causes and solutions. VMware's knowledgebase is a lot more than that, though; it is full of how-to and informational documents that go well beyond how to solve specific problems. If you have a question on any VMware-related subject, this is a good place to start looking for answers.
  • Virtual Infrastructure Operations – Virtual Infrastructure Operations, or VI:OPS, is a VMware community portal that contains great information from VMware employees, customers and partners. It includes proven practices, how-tos and other content focused on specific areas such as strategy, security, management and more.
  • Forums – Support forums such as VMware's VMTN forums and TechTarget's IT Knowledge Exchange are a fabulous way to get answers to questions, share ideas and experiences and learn from other experienced users. Even if you don't have a specific question, you can browse through the many thousands of posts or answer a fellow IT pro's question.
  • Social Networking – When you think of social networking tools like Twitter, you might think of users posting what they had for dinner or the weather. You might be surprised to learn that many Twitter users post questions, comments and experiences about virtualization-specific topics, and you'd be surprised what you can learn in 140 characters. So sign up for an account, and if you're looking for virtualization-related people to follow, try browsing the followers of people like John Troyer, Hannah Drake or Eric Siebert.


Practice makes perfect


Gaining knowledge is a great start, but gaining experience is what will really improve your virtualization skills. Knowledge and experience go hand in hand: you can learn only so much by reading. To become truly proficient, you need to take it to the next level by actually doing the things you read about, and to do that you'll need software and hardware.

Getting the software:

  • Free products – Products like VMware ESXi and VMware Server are great free products that you can install to start gaining experience with virtualization. While VMware Server installs on Windows/Linux systems and is more of a desktop product, ESXi installs on bare metal and is a true data center virtualization product. Both will install on a variety of server hardware (including older hardware) and are a way to gain experience before you invest in the more expensive editions of ESX and ESXi.
  • Evaluations – VMware offers 60-day evaluation copies of its full-featured VMware ESX and ESXi editions as well as its vCenter Server management application. This is a great way to try the higher-end products and gain experience configuring advanced features such as Distributed Resource Scheduler and Fault Tolerance.

Getting the hardware:

  • White-box and older hardware – Bare-metal products such as ESX and ESXi are officially supported only on specific hardware listed on VMware's Hardware Compatibility List (HCL), but fortunately they will run on a lot of hardware that isn't on the HCL. Not everyone has spare server hardware to use to learn virtualization, but white-box (generic) hardware and older name-brand server models (e.g. Hewlett-Packard G2 and G3 models) will do the job.

You can find many cheap older servers on auction sites like eBay, but be aware that they may not support some of the newer features, such as Fault Tolerance, that require the latest CPUs. Also, vSphere requires 64-bit hardware. New white-box hardware is a cheap alternative to buying new brand-name servers and will often support features such as Fault Tolerance. Additionally, you can find many cheap iSCSI/Network File System (NFS) network-based storage devices, such as the Iomega 1X2, so you can use some of the advanced features that require shared storage.
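
Before bidding on an old box, it's worth confirming that its CPUs are 64-bit and offer hardware-assisted virtualization at all. Below is a rough sketch you could run from a Linux live CD on the candidate machine; it only checks CPU flags in /proc/cpuinfo, so it says nothing about HCL status or Fault Tolerance compatibility, which depends on the specific processor family, and VT-x/AMD-V usually also has to be enabled in the BIOS.

    # Rough sketch: check a candidate white box for 64-bit and hardware virtualization support.
    # Run from a Linux live CD on the machine. This only reads /proc/cpuinfo flags --
    # it does not tell you whether the box is on the HCL or supports Fault Tolerance.
    flags = set()
    with open('/proc/cpuinfo') as f:
        for line in f:
            if line.startswith('flags'):
                flags.update(line.split(':', 1)[1].split())

    print('64-bit capable (lm):', 'lm' in flags)
    print('Intel VT-x (vmx):   ', 'vmx' in flags)
    print('AMD-V (svm):        ', 'svm' in flags)
    print('NX/XD bit (nx):     ', 'nx' in flags)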
