Wednesday, December 3, 2014

Configure HP Quality Center for Streamed XenDesktop 7.6

HP Quality Center is a web-based tool used by testers and developers (don't ask me for more detail than that :) ).


While working with HP QC, we must know that it requires:

  • MS Office 32-bit in the base image for now; early next year TCOE/HP will update UFT, after which we will be able to use MS Office 64-bit.
  • HP QC will work only in the 32-bit version of IE.
  • HP Quality Center downloads around 280 MB of plugins and stores them under C:\Users\<username>\AppData\Local\HP.

Now, with the above requirements in mind, if we are planning to deploy a desktop-class OS, we have multiple FlexCast models to choose from.


Challenge: We tried using UPM to synchronize C:\Users\<username>\AppData\Local\HP but couldn't get it to work. So we tried the Personal vDisk (PvD) route, but PvD requires a minimum amount of space, and spending that just for a 280 MB payload is a waste of valuable storage.

Solution: How about using a streamed static non-persistent desktop? Once a user logs in, the desktop is assigned to that user while the image is being streamed. We then need to find a way to redirect AppData\Local\HP. Remember, this is not straightforward, which is why we are discussing it here.

The user profile SID sits in HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList, which is non-persistent since the C:\ drive is streamed in read-only mode.

So first, redirect HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders:


HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders

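As a sketch, redirecting the Local AppData entry in both keys can be scripted; the persistent drive letter and path below (H:\AppData\Local) are assumptions for illustration, not the values from our environment:

```powershell
# Redirect "Local AppData" in both Shell Folders keys to a persistent
# location. H:\AppData\Local is a hypothetical persistent user drive.
$target = 'H:\AppData\Local'

Set-ItemProperty -Path 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders' `
    -Name 'Local AppData' -Value $target

# User Shell Folders normally holds REG_EXPAND_SZ values.
Set-ItemProperty -Path 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders' `
    -Name 'Local AppData' -Value $target -Type ExpandString
```

In practice we pushed the same values via GPO registry preferences rather than a logon script, but the effect is identical.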

When Quality Center runs for the first time it loads ALM-Platform-Loader.msi, and a copy along with associated files is placed in the AppData\Local\Microsoft\Windows\Temporary Internet Files\Content.IE5 folder. Because of the read-only nature of the image, and because UPM does not synchronize AppData\Local, these contents are lost when the VM is rebooted and the user logs in again. It also creates a folder under AppData\Local\Temp called TD_80; this folder disappears when the user reboots the VM.

To fix this we have to do two more redirections:

Under HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders, redirect the Cache value.

The TEMP and TMP variables also need to be redirected:

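A minimal sketch of the TEMP/TMP redirection, assuming the same hypothetical persistent drive (H:) as above:

```powershell
# Point the per-user TEMP and TMP variables at a persistent location.
# H:\Temp is a hypothetical persistent drive mapping.
foreach ($name in 'TEMP', 'TMP') {
    Set-ItemProperty -Path 'HKCU:\Environment' -Name $name `
        -Value 'H:\Temp' -Type ExpandString
}
```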

All the registry changes can be pushed via GPO along with UPM.

Credit: Thanks for sharing these inputs, my friend; you know who I mean. :)

Tuesday, December 2, 2014

Load balancing users across datacenter using XenDesktop 7.6

We quite often get requirements to load balance users across datacenters and provide DR with XenDesktop. There are tons of articles that will help you design that. What we are discussing here is load balancing users within a delivery group.

Requirement: Load balance a delivery group's users across datacenters. If a particular use case has 100 users, then 50 users should be directed to datacenter A and 50 users to datacenter B.

Challenge: It would have been easy if we only had to load balance users; we could have used GSLB and distributed users in round-robin fashion. But when it comes to delivery groups, this has its own challenges. To achieve this we require a single-farm architecture, and to build a single-farm architecture we require SQL availability across the locations. The challenge is the required bandwidth and the latency within India; in general, latency between two cities in India is around 60 ms.

Gotchas: Profile availability across datacenters. Microsoft does not support profile replication, so if profiles must follow the users, we have to use a separate store at each datacenter.

How to achieve this: To start with, I will put up a drawing to keep it simple.


Component configuration:

Two Delivery Controllers at each datacenter: A total of four Delivery Controllers will be part of the single XenApp/XenDesktop site. But here is the catch: the VMs at each datacenter will point to their local Delivery Controllers, so VMs will register only with the Delivery Controllers in their own datacenter.

Two StoreFront servers at each datacenter: The two StoreFront servers will be clustered at each datacenter. NetScaler will be used to load balance the StoreFront farms across the datacenters.

A separate PVS farm at each datacenter: Each farm will stream VMs in its respective datacenter.

Now coming to the important part: the SQL setup. There are many ways to set up SQL for database replication, and I am not going to explain those; you can refer to an article like this one to get that configured. What I will explain is how we did the setup in our environment: two SQL nodes were set up as a multi-subnet failover cluster, similar to what is explained here.


In our case we have one node at each site. WSFC has been set up with the following roles:

Under the server name there are two IPs, and these are used for the Availability Group listener in SQL:

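A two-IP listener like this can also be created from PowerShell with the SQLPS module; the listener name, subnets, and availability group path below are illustrative assumptions, not our production values:

```powershell
# Create an Availability Group listener with one static IP per subnet,
# so clients can reach SQL in whichever datacenter holds the primary.
# Listener name, IPs, node name, and AG name are all hypothetical.
Import-Module SQLPS -DisableNameChecking

New-SqlAvailabilityGroupListener -Name 'XDSQLListener' `
    -StaticIp '10.10.0.50/255.255.255.0', '10.20.0.50/255.255.255.0' `
    -Port 1433 `
    -Path 'SQLSERVER:\SQL\SQLNODE1\DEFAULT\AvailabilityGroups\XDAG'
```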

The WSFC resource properties are important to understand for failover.


This is what the listener group looks like:


We are replicating three databases, a) Site, b) Logging and c) Monitoring, using AlwaysOn High Availability. During site creation we pointed it at the listener and allowed Studio to create the databases. Once the databases were set up, they were moved into the AlwaysOn availability group.

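Moving a database into the availability group can be sketched as below; the database names, node names, and AG path are assumptions, and each database must already be backed up and restored WITH NORECOVERY on the secondary first:

```powershell
# Add each XenDesktop database to the AG on the primary replica,
# then join it on the secondary. All names and paths are hypothetical.
Import-Module SQLPS -DisableNameChecking

foreach ($db in 'CitrixSite', 'CitrixLogging', 'CitrixMonitoring') {
    Add-SqlAvailabilityDatabase `
        -Path 'SQLSERVER:\SQL\SQLNODE1\DEFAULT\AvailabilityGroups\XDAG' `
        -Database $db
    Add-SqlAvailabilityDatabase `
        -Path 'SQLSERVER:\SQL\SQLNODE2\DEFAULT\AvailabilityGroups\XDAG' `
        -Database $db
}
```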

Then we separated all the databases and moved them individually. Now time for testing? No; before we start testing we have to follow a few more steps to ensure the Delivery Controllers are multi-subnet aware and the logins are replicated. To do so I followed the Citrix blog and downloaded the scripts listed here. Now it's time for PowerShell magic: open PowerShell from Desktop Studio and check that you have all the scripts. We need to run Change_XD_TO_MultiSubnetFailover.ps1.


Once the script has executed, the updated connection strings will be uploaded:


After this, when we run Get-BrokerDBConnection it will show MultiSubnetFailover=True.
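Roughly, what the Citrix script automates is rewriting each service's connection string. A minimal sketch for the Broker service alone, assuming a hypothetical listener and database name, would look like this:

```powershell
# Rebuild the Broker connection string with MultiSubnetFailover=True.
# Listener and database names are hypothetical; the Citrix script
# repeats this for every XenDesktop service, not just the Broker.
Add-PSSnapin Citrix.Broker.Admin.V2

$cs = 'Server=XDSQLListener;Initial Catalog=CitrixSite;' +
      'Integrated Security=True;MultiSubnetFailover=True'

Set-BrokerDBConnection -DBConnection $null   # disconnect the service first
Set-BrokerDBConnection -DBConnection $cs     # reconnect with the new string
```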

Now make sure the logins for all the DDCs are created on the replica database.


The DDCs are ready for failover testing. Now we need to create a delivery group for datacenter A and map it to the corresponding catalog, and a separate delivery group for datacenter B mapped to the catalog for that datacenter.

Now we need to publish the application to both delivery groups:

Add-BrokerApplication -Name "Published App Name" -DesktopGroup "Delivery Group A"

Add-BrokerApplication -Name "Published App Name" -DesktopGroup "Delivery Group B"
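If you later want one datacenter preferred over the other, the Broker SDK's Add-BrokerApplication accepts a Priority value per delivery group mapping (lower wins). A sketch using the placeholder names from above:

```powershell
# Optional: prefer datacenter A (priority 0) and fail over to B (priority 1).
# App and group names are the placeholders used in this post.
Add-PSSnapin Citrix.Broker.Admin.V2

Add-BrokerApplication -Name "Published App Name" -DesktopGroup "Delivery Group A" -Priority 0
Add-BrokerApplication -Name "Published App Name" -DesktopGroup "Delivery Group B" -Priority 1
```

In our setup we deliberately left priority undefined so users are spread evenly across both datacenters.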

So what will be the end result? Users will hit GSLB, which will distribute them in round-robin fashion across the datacenters. A user will land on one of the load-balanced StoreFront servers and get access to the application. Users are load balanced in round-robin fashion but land on the same delivery group, since no failover priority is defined (failover priority can be defined per delivery group). The delivery group then distributes users across its VMs. In case one of the datacenters goes down, the SQL connection will fail over to the other site; this has to wait until the DNS update happens and the listener group IP changes to the other site. In the meantime we rely on the Connection Leasing feature of XenDesktop 7.6, which is similar to the Local Host Cache (LHC) of XenApp 6.5.

Drop a note in case you have questions.