VDI Server Clusters: We're running VMware ESXi 4.1 on IBM x3850 X5 servers in an N+1 cluster configuration. We keep memory and CPU utilization below 70 percent across all host nodes so that if one node fails or needs to be taken out of the cluster for maintenance, we can continue to support the full load of the cluster on four hosts without any performance degradation. Each server contains four eight-core Nehalem processors and 256GB of RAM, and we're able to deploy at least 90 virtual desktops on each host. CPU used to be the most constrained resource in our clusters until we moved to Nehalem-based servers; memory is now the most constrained resource. We're about to put a fourth cluster into production and will continue to add servers and storage as our VDI environment grows.

VDI Storage: We're using a NetApp model 3140 NAS for back-end VDI Fibre Channel storage. The NAS is equipped with dual controllers and 256GB of Performance Acceleration Module (PAM) flash cards, which provide increased disk IOPS capacity for the 20TB of storage provisioned to accommodate up to 1,200 virtual desktops.

Data Center LAN: From a data center networking perspective, each VDI cluster server has redundant Emulex 4Gbps host bus adapters and eight 1Gbps Ethernet adapters (two teamed for the Service Console, two teamed for virtual machine traffic, two teamed for vMotion, and two spares). Each server connects to redundant Brocade SAN fabric switches and redundant Cisco 3750G top-of-cabinet network switches. Plans are now under way to replace our Cisco 6513 core switches with new Cisco Nexus 7009 switches to facilitate a move to 10Gbps and FCoE in the data center.

View Client Hardware: On the client side, we've deployed HP TC5740 thin clients running Windows Embedded Standard (WES) 2009. We just completed testing of the Samsung NC240 PCoIP zero client and are now deploying it as our standard moving forward. We expect to get at least five years of life out of the new zero-client hardware.

Virtual Desktop Configuration: We're running View 4.6 and leveraging linked clones. We're running a persistent desktop today, but we'll ultimately move to a nonpersistent desktop once all applications are virtualized and available for streaming to the desktop. Each virtual desktop is provisioned with one virtual CPU, 2GB of RAM and just under 25GB of disk (20GB for the OS, 2GB for user data and some scratch disk). Each user also has a mapped home directory drive where they can store additional data without any file size limitations. The virtual desktop disks reside on NetApp storage, which provides the benefit of data deduplication.
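The figures above are easy to sanity-check. The sketch below is a rough back-of-the-envelope calculation, not Foley's actual capacity model; the five-host cluster size is inferred from "full load on four hosts," and all of the variable names are illustrative.

# Back-of-the-envelope N+1 sizing check using the figures quoted above.
# Illustrative only; the five-host cluster size is an assumption inferred
# from the statement that four hosts must carry the full load.
HOSTS_PER_CLUSTER = 5      # N+1: four hosts plus one for failover/maintenance
HOST_RAM_GB = 256          # per IBM x3850 X5 host
DESKTOPS_PER_HOST = 90     # steady-state density per host
DESKTOP_RAM_GB = 2         # per-virtual-desktop memory allocation

cluster_desktops = HOSTS_PER_CLUSTER * DESKTOPS_PER_HOST        # 450 desktops
steady_ram_gb = DESKTOPS_PER_HOST * DESKTOP_RAM_GB              # 180GB per host
steady_util = steady_ram_gb / HOST_RAM_GB                       # ~0.70

# If one host fails, its desktops are redistributed across the remaining four.
failover_desktops = cluster_desktops / (HOSTS_PER_CLUSTER - 1)  # 112.5 per host
failover_util = failover_desktops * DESKTOP_RAM_GB / HOST_RAM_GB

print(f"Steady-state memory utilization per host: {steady_util:.0%}")
print(f"Memory utilization after one host failure: {failover_util:.0%}")

On these assumptions a host sits at roughly 70 percent of its 256GB in steady state and just under 90 percent after a failover, which is consistent with memory, rather than CPU, being the gating resource.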
View PCoIP Footprint and WAN QoS: Foley is using the PCoIP protocol. Because PCoIP is a dynamic protocol, it is important to configure appropriate bandwidth floor and ceiling, minimum and maximum image quality, frame rate, and audio bandwidth limit values so that there are some controls around how much bandwidth PCoIP can use. Our average View footprint is about 120Kbps, and we have standardized on the following PCoIP configuration settings:

• PCoIP Bandwidth Floor = 300Kbps
• PCoIP Bandwidth Ceiling = 3000Kbps
• PCoIP Minimum Image Quality = 50 percent
• PCoIP Maximum Initial Image Quality = 70 percent
• PCoIP Maximum Frame Rate = 14 fps
• PCoIP Audio Bandwidth Limit = 450Kbps

If you're running the PCoIP protocol across a WAN, it is critical that you assign your VDI traffic the highest possible QoS class of service; it makes a big difference from a video quality perspective. If you don't give PCoIP high priority, you'll experience what is referred to as "build to lossless" screen painting issues, whereby the screen image initially appears blurred and then comes into clear focus over a very noticeable one to two seconds. The user experience will be significantly degraded if you don't get this right; the Class of Service (CoS) AF31 marking works well in the Foley environment. Having failover redundancy in your WAN is obviously critical as well.
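For readers wiring up that policy, AF31 is a DiffServ code point (DSCP 26, which is 0x68 in the IP ToS byte). The snippet below is only a quick illustration of that mapping and of one way to mark a test socket on Linux; in production the marking is applied by the network gear, not by application code.

# AF31 = Assured Forwarding class 3, drop precedence 1 -> DSCP 26.
# The DSCP occupies the upper six bits of the IP ToS byte.
DSCP_AF31 = 26
TOS_AF31 = DSCP_AF31 << 2   # 0x68

print(f"AF31 -> DSCP {DSCP_AF31}, ToS byte 0x{TOS_AF31:02x}")

# Marking a test socket this way (on Linux) can help confirm that the WAN
# honors the class end to end; this is a test aid, not how View marks traffic.
import socket
probe = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
probe.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_AF31)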
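The floor, ceiling and average-footprint figures above also make it straightforward to rough out how much WAN bandwidth a remote office needs. The sketch below is a simplistic model rather than a VMware or Foley sizing formula: the 10 percent burst assumption and the helper name are invented for illustration, and real PCoIP usage should be validated with monitoring.

# Naive WAN sizing model built from the PCoIP settings above (illustrative).
AVERAGE_KBPS = 120    # observed average View footprint per session
FLOOR_KBPS = 300      # configured PCoIP bandwidth floor
CEILING_KBPS = 3000   # configured PCoIP bandwidth ceiling

def estimate_wan_kbps(sessions: int, burst_fraction: float = 0.10) -> float:
    """Assume most sessions hover near the average while a small fraction
    burst toward the ceiling (for example, video or large screen redraws)."""
    steady = sessions * AVERAGE_KBPS
    burst = sessions * burst_fraction * (CEILING_KBPS - AVERAGE_KBPS)
    # The link should also be able to honor the configured per-session floor.
    return max(steady + burst, sessions * FLOOR_KBPS)

for n in (10, 25, 50):
    print(f"{n} concurrent sessions: ~{estimate_wan_kbps(n) / 1000:.1f} Mbps")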