
Thursday, March 26, 2020

A Primer On NSX Advanced Load Balancer (Avi Vantage) For Horizon And Workspace ONE

NSX Advanced Load Balancer, formerly called Avi Vantage, is a solution VMware secured through the acquisition of Avi Networks.  A fully software-defined load balancing solution and application delivery controller, Avi Vantage adds L4-L7 server load balancing to NSX, rounding out an already impressive SDN solution.  Overall, the Avi Vantage offering is a natural progression for VMware, a continuation of what the company has always been good at: replacing beefy, unwieldy, hardware-bound solutions with agile and efficient virtualization.

While the acquisition has been cause for VMware network geeks to rejoice, it's also a particularly exciting development for VMware's end user computing products, Horizon and Workspace ONE.  Traditionally these solutions have required the use of third-party load balancers, which has been fine, though it introduces a bit of complexity and another vendor to deal with.  So, to start, the Avi acquisition offers an opportunity to simplify the VMware EUC stack, along with the promise of a more tightly integrated load balancing solution.  In mid-March, the release of UAG 3.9 added, "Qualified support for the AVI Networks load balancer used in front-ending Unified Access Gateway for Horizon."  Earlier in the year, a Reference Architecture for Horizon leveraging Avi Networks was released.  Further, there's this step-by-step configuration guide, Configure Avi Vantage For VMware Horizon.  While these documents are quite exhaustive, I put together this post as a primer on Avi Vantage for Horizon admins.  The idea is to give folks a high-level overview of how Avi Vantage plugs into the Horizon/WS1 stack and why it's relevant.

Why I'm So Giddy About Avi Vantage And Horizon

When it comes to VDI and app publishing it's essentially a two-company game: VMware vs Citrix.  The competition and rivalry is intense to say the least.  Large fortunes and entire careers fuel fierce debate, endless FUD, mudslinging, hyped-up bake-offs and neurotic Excel spreadsheets filled with feature-by-feature comparisons.  Fear and loathing abounds, with otherwise genteel engineers staring out through dead shark eyes, broken half bottles in hand, ready to cut ya!  At times it feels more akin to identity politics, a fanatical sports rivalry or a downright Hatfields vs McCoys family feud.  As someone in the middle of this conflict I've always had to admit that NetScaler sounded like a pretty solid product.  For a while, the worst thing you heard about it was that it's too expensive and offers more functionality than Citrix customers actually need.  However, with its latest vulnerability, NetScaler's stature as unquestionably awesome has come under scrutiny.  Combined with the notoriously bad treatment and support customers receive from Citrix, folks are really starting to wonder if it's worth the trouble to rely on them for this critical functionality.

More notably, both Citrix and VMware customers, being techies, are always looking for more innovative and smarter ways of handling things.  In the field of load balancing there hasn't been a lot of innovation or change, so in that regard Avi Vantage really stands out.  We're not talking about just P2V-ing a load balancer and patting ourselves on the back.  With Avi Vantage we're talking about an elastic fabric that allows you to take advantage of the virtualization infrastructure you already have in place, whether it's across multiple data centers or even different cloud vendors.  Accordingly, Avi Vantage is a real shot in the arm for VMware's EUC stack in a couple of major ways.  First, by adding load balancing and application delivery controller capabilities to VMware's arsenal, it brings its EUC stack much closer to parity with what NetScaler and Citrix offer.  Second, while Avi Vantage might not be at complete parity with NetScaler, it does a lot that NetScaler can't.  In light of the current pandemic and its associated challenges, this differentiator has some real teeth.  When firing up a new data center in the middle of a crisis, do you want to wait on the purchase and shipment of new hardware?  Do you want to limp along with a virtual appliance that's a subpar version of the load balancer you normally work with?  Or would you rather walk through a few left clicks and right clicks on your Avi Controller, simply extending a fabric you already have in place?

No doubt, there will be plenty of debate over NetScaler + Citrix vs Avi Vantage + Horizon.  If reason and cooler heads prevail it won't be a simple debate, but instead a thought-provoking and interesting one.

Avi Vantage Overview

At a high level, Avi Vantage is a software-defined load balancing solution and application delivery controller that functions across an entire enterprise, including separate cloud environments like AWS, Azure or Google Cloud.  Most relevant for typical Horizon shops, it integrates quite impressively with traditional on-premises vSphere environments.  It all begins with a software-based Avi Controller, the brains of the operation where all load balancing policies are defined.  The Controller, or controller cluster, essentially binds to your vSphere environment(s).  In turn, the Avi Controller manages and controls the placement of virtual services across your vSphere infrastructure, deploying them onto what are referred to as Avi Service Engines.  Based on instructions received from the Avi Controller, the Service Engines, "perform load balancing and all client- and server-facing network interactions."  They also collect, "real-time application telemetry from application traffic flows."  The controller can automagically handle the setup and distribution of these Service Engines across the ESXi hosts within your vSphere environment, ensuring proper redundancy, capacity and workload distribution.

These different Service Engines laid out across the vSphere infrastructure are what endpoint clients actually connect to and interface with.  They're associated with the VIPs and handle traffic based on the virtual services and pools defined on the Avi Controller.  So essentially, you define the load balancing logic on the controller, then these Service Engines act as minions that execute the logic for incoming client connections.
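To make that controller/Service Engine split concrete, here's a minimal Python sketch of the division of labor. This is purely illustrative and not Avi's actual implementation or API: a pool definition plays the "controller" role, while a selection function plays the "Service Engine" role, executing the defined logic per incoming connection. The member names are hypothetical.

```python
from itertools import cycle

class Pool:
    """Controller side: a pool definition, i.e. a server list plus health state."""
    def __init__(self, servers):
        self.servers = list(servers)
        self.healthy = set(servers)      # a health monitor would update this set
        self._rr = cycle(self.servers)   # simple round-robin algorithm

    def mark_down(self, server):
        self.healthy.discard(server)

def select(pool):
    """Service Engine side: execute the pool's logic for an incoming connection."""
    for _ in range(len(pool.servers)):   # n consecutive cycle picks cover all n members
        candidate = next(pool._rr)
        if candidate in pool.healthy:
            return candidate
    raise RuntimeError("no healthy servers in pool")

uag_pool = Pool(["uag-01", "uag-02"])    # hypothetical member names
uag_pool.mark_down("uag-02")             # health monitoring pulls a failed member
assert select(uag_pool) == "uag-01"      # traffic only reaches healthy members
```

Real Avi pools support many more algorithms (least connections, consistent hash, etc.), but the shape is the same: policy defined centrally, executed at the edge.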

The end result is an elastic load balancing solution that avoids the efficiency challenges that plagued traditional hardware-based load balancing solutions.  The ability to automatically spin up Service Engines on the fly, scaling out VIPs horizontally as needed, allows for right-sizing.  Service Engines can be spun up or spun down in increments as small as 1 vCPU, 2 GB of RAM and 10 GB of storage.  Contrast this with redundant pairs of active/standby hardware appliances and this benefit of Avi Vantage becomes pretty compelling.

For more info check out this Architectural Overview for Avi Vantage.

Avi Vantage For UAG Appliances

The Reference Architecture for Horizon reviews 3 different methods for load balancing external traffic to UAGs. Factors such as the need for HIPAA compliance or whether you’ll have multiple clients behind a single NAT, at a remote site, determine which method is most appropriate. For this post, I’m going to review the first option, Single VIP with two virtual services.

Regardless of which option you go with, it all begins with a Horizon client communicating with a virtual service supported on Avi Service Engines.  Virtual services are comprised of IP and port combinations defined on the Avi Controllers.  The client traffic is passed by these services to the optimal UAG appliance based on pools that have also been defined on the Avi Controller.  Pools determine the ideal server to pass traffic to based on configurations like server lists, health monitoring, load balancing algorithms, etc.

To illustrate, below is a graphic detailing the anatomy of a typical Horizon Blast session through a UAG appliance.  Initially you have the primary Horizon protocol handling authentication through XML-structured messages over port 443.  Then you have the secondary Horizon protocol, Blast in this example, operating over 8443.  (For an excellent primer on UAG load balancing and Horizon protocols, check out this amazing post by Mark Benson.)

Accordingly, we have two virtual services to configure on Avi Vantage, one for the primary protocol and one for the secondary protocol.  Below is a screenshot from my own lab.  The virtual service Horizon_UAG_L7 is configured to accommodate the primary Horizon protocol operating over TCP 443, while Horizon_UAG_L4 is configured for both the PCoIP and Blast Extreme secondary protocols that operate over TCP/UDP 4172 and 8443 respectively.
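That layout can be captured as a small lookup, mapping port/protocol pairs to the virtual service that handles them. The sketch below is descriptive Python using the names and ports from my lab, not Avi's actual configuration schema:

```python
# Port assignments as described above; descriptive data only, not an Avi API object.
VIRTUAL_SERVICES = {
    "Horizon_UAG_L7": {(443, "TCP")},                  # primary protocol (authentication)
    "Horizon_UAG_L4": {(4172, "TCP"), (4172, "UDP"),   # PCoIP secondary protocol
                       (8443, "TCP"), (8443, "UDP")},  # Blast Extreme secondary protocol
}

def service_for(port, proto):
    """Return the virtual service that owns this port/protocol pair, if any."""
    for name, ports in VIRTUAL_SERVICES.items():
        if (port, proto) in ports:
            return name
    return None

assert service_for(443, "TCP") == "Horizon_UAG_L7"
assert service_for(8443, "UDP") == "Horizon_UAG_L4"
assert service_for(22443, "TCP") is None   # 22443 is UAG-to-desktop, not client-facing
```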

These virtual services in turn are associated with a pool that determines server selection for incoming traffic based on configurations such as load balancing algorithms, health monitoring and persistence profiles.

Finally, below is a screenshot of a custom Health Monitor that's created for Horizon.  The Health Monitor is associated with a pool and helps, "validate whether servers are working correctly and are able to accommodate additional workloads."
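Conceptually, a health monitor boils down to a probe plus a pass/fail rule. The sketch below is a generic stand-in rather than the exact monitor from the screenshot: the probe path and accepted response codes are illustrative assumptions, so check the reference architecture for the real values.

```python
PROBE_REQUEST = "GET /favicon.ico HTTP/1.1"  # hypothetical probe request
SUCCESS_CODES = {200}                        # illustrative pass criteria

def passes_health_check(response_code):
    """A pool member stays in rotation only while its probe responses pass."""
    return response_code in SUCCESS_CODES

assert passes_health_check(200)
assert not passes_health_check(503)  # e.g. a UAG in maintenance/quiesce mode
```

The point of a Horizon-specific monitor is exactly this: a server that answers at all but answers "wrong" gets pulled from the pool before users are sent to it.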

One of the key requirements of this entire setup is ensuring that users are routed to the same UAG appliance for both the primary and secondary protocols.  In a nutshell, we have to ensure the same UAG appliance that authenticates a user is used for the display protocol traffic as well.  For a single Horizon connection, you can't have authentication against one UAG appliance then display traffic flow over a separate UAG appliance.
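One common way to satisfy that requirement is source-IP persistence: derive the member deterministically from the client's IP, so the primary (443) and secondary (8443/4172) connections from the same client land on the same UAG. Here's a rough Python illustration of the idea, not Avi's actual persistence profile implementation, with hypothetical member names:

```python
import hashlib

def persistent_member(client_ip, members):
    """Deterministically map a client IP onto one pool member."""
    digest = hashlib.sha256(client_ip.encode()).digest()
    return members[int.from_bytes(digest[:4], "big") % len(members)]

uags = ["uag-01", "uag-02", "uag-03"]
# The auth connection and the later display-protocol connection hash identically:
first = persistent_member("203.0.113.7", uags)
second = persistent_member("203.0.113.7", uags)
assert first == second and first in uags
```

Note that plain source-IP persistence breaks down when many clients sit behind a single NAT, which is exactly why the reference architecture offers alternate methods for that scenario.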

This has been a very basic, high-level overview of what's involved in load balancing UAG appliances through Avi Vantage.  For more details and step-by-step guidance, check out the Reference Architecture For Horizon along with Configure Avi Vantage For VMware Horizon.  Again, three different methods to choose from, based on the specifics of your use case, are detailed in this documentation.

Horizon Connection Server 

Traditionally, load balancers have always been a requirement for Horizon Connection servers, with at least two Connection servers needed to ensure redundancy for a production-caliber deployment.  So for a typical Horizon deployment with UAG appliances, you'll need load balancing in front of both the UAG appliances and the Connection servers.  Below is a helpful image to illustrate:

As you might imagine, accommodating this model is pretty much a slam dunk for Avi Vantage.  Setting up load balancing for the Horizon Connection servers is very similar to that for the UAG appliances.  As with UAG appliances, you'll configure a virtual service (or two), a pool and a health monitor, then you're off to the races.  For detailed step-by-step instructions on configuring Avi Vantage for Horizon Connection servers, check out this section of the Reference Architecture For Horizon.

For those familiar with UAG's built-in load balancing-ish capability, referred to as High Availability, note that HA for UAG doesn't include load balancing for the Horizon Connection servers, just rudimentary load balancing for the UAG appliances.  This is a major advantage Avi Vantage offers over HA, though certainly not the only one.

Global Load Balancing For Always On Point Of Care Architecture

Always On Point Of Care is an architecture that's been around for about nine years now.  The basic idea is to provide a fully redundant, bulletproof Horizon deployment.  Essentially, you stand up two separate Horizon environments that share no interdependencies, so that theoretically you could lose an entire site but still have Horizon services available.  Key to this model is a global load balancing solution that sits in front of the two sites, routing client connections to the separate Horizon environments.  Historically, this functionality has been handled by our load balancing partners.

Nowadays, rather than leaning on a partner, we can leverage Avi Vantage for global load balancing.  The documentation refers to this global load balancing feature as Avi GSLB.  For more details on configuring Avi GSLB for Horizon, check out GSLB In Avi Vantage For Horizon.  Here's an awesome looking graphic on this deployment model for APOC that I stole from the Avi Networks website:

App Volumes

Avi Vantage also supports the Always On Point Of Care model by providing load balancing for App Volumes.  Load balancing has always been a requirement for App Volumes redundancy and scaling: you have multiple, essentially stateless App Volumes Managers that share a common database, sitting behind a load balancer.  Load balancing for App Volumes is briefly covered in the Avi Reference Architecture for Horizon.  For reference you can also check out the F5 guide, Load Balancing VMware App Volumes.

Client Connection Breakdown 

Depending on the deployment method you go with, Avi Vantage can offer a nifty little breakdown of the session health for individual connections.  It can distinguish latency between the remote client and the Avi Service Engine from latency between the Service Engine and the back-end server.  It can also account for fast or slow app server response times.  This promises to come in handy when trying to get to the bottom of latency encountered by your Blast connections through UAG.

WS1 Use Cases

With official support for Horizon access already in place, it seems like only a matter of time before there's official support for the WS1 UEM services on UAG, like Secure Email Gateway (SEG), VMware Tunnel and Content Gateway.  Further, the resources these services provide access to - email, intranet sites, SharePoint, etc. - are the more typical types of servers Avi Vantage has always been able to accommodate.  So just as with the Horizon use case, you'll have front-ending for the UAG appliances along with load balancing for on-premises resources.

vIDM Connector 

While it's kind of a niche scenario, there are situations that require load balancing for the vIDM Connector, such as when it's used for Kerberos authentication.  I'm not aware of any official support, but there's no reason to believe Avi Vantage can't provide load balancing for vIDM Connectors.


This is the most excited I've been about a VMware acquisition since AirWatch.  Along with all the practical capabilities that Avi Vantage brings to the EUC stack in the here and now, there's all the speculation about what it might be built to do in the future.  There are two or three different scenarios that consistently pop up when I speculate with old timers over what VMware might do with Avi Vantage to further enhance the EUC experience.  I'm not going to go into that here, but I'm confident I'll be writing about such enhancements in the future.

Tuesday, December 24, 2019

Using VMware's Horizon Performance Tracker For Rudimentary Blast Optimization

Recently updated for Horizon 7.10, the VMware Blast Extreme Optimization Guide focuses on, "two key configurable components: the transport protocol and display protocol codec."  To gain real-time insight into the configuration of these components, and Blast performance in general, the Horizon Performance Tracker is a natural fit.  Both free and built into the Horizon agent, it's a very accessible way to get started with rudimentary Blast optimization.  This article details the general principles behind Blast optimization and illustrates how Horizon Performance Tracker can assist in the fine-tuning of Blast protocol behavior.  It aims to provide context and guidance for tuning Blast's transport protocol, then moves on to codec and bandwidth considerations.  Along the way it will also review how the Horizon Help Desk Tool, also built into the Horizon solution, can further assist with Blast optimization.

The Basic Anatomy Of A Horizon Blast Session

The blog post, Load Balancing Across VMware Unified Access Gateway Appliances, contains one of my favorite descriptions of Horizon sessions.  Under the section titled, "Horizon Protocols," it details the distinction between a primary and secondary Horizon protocol.  The primary Horizon protocol is all about authenticating against the Horizon environment through XML over 443.  The secondary protocol is the display protocol itself, what translates and transmits pixels from within a virtual desktop OS to the display of an endpoint device.  This is what we're primarily concerned with when optimizing the Blast experience.  If you go with the default port of 8443 for Blast traffic, here's what the traffic flow looks like when remoting into a Horizon environment through a UAG appliance:

Typically, the primary protocol runs entirely over 443, between the Horizon client and UAG appliance as well as between the UAG appliance and Horizon Connection server.  For the secondary protocol, Blast Extreme in this example, traffic flows over 8443 between the client and the UAG appliance.  Then, from the UAG appliance to the virtual desktop or RDS host, traffic flows over 22443.

In the context of optimizing Blast for your environment, one of the first questions to ask about your Blast traffic is whether UDP or TCP is used for the transport protocol.  For most use cases UDP is preferable and is what the Blast protocol first attempts to leverage by default.  Accordingly, confirming that UDP is actually in use in your environment is a first step towards achieving an optimal Blast experience.

Observing The Transport Protocol In Use 

While you can look at Blast logs to determine what transport type is in use, the Horizon Performance Tracker offers a really, really, really easy and convenient way to determine this info.  While not installed by default, Horizon Performance Tracker is built right into the Horizon agent and is offered as an optional component during the agent install.  (Here's more official guidance on installing Horizon Performance Tracker.)  Once installed, from within an active session, launch Horizon Performance Tracker from your start menu.  When it's launched you're presented with the "At a Glance" tab.  While this initial screen is certainly interesting in its own right, things get particularly useful when you click on the icon with the grids in the right corner.  (Underlined in red in the image below.)

In the screenshot below, under the transport section, there's confirmation that UDP is leveraged for the transport protocol in both directions, the default behavior that Blast strives for.

If UDP were being blocked for some reason, you'd see something like this:

Again, for most use cases, UDP is the optimal transport, with the optimization guide stating that, with but two exceptions, "VMware recommends that you use UDP for the best user experience.  And if Blast Extreme encounters problems making its initial connection over UDP, it will automatically switch and use TCP for the session instead."  Accordingly, in most scenarios, if you see TCP in use as a transport protocol, something has gone wrong, and tuning Blast involves making adjustments to ensure UDP is leveraged instead.  Your first step is to determine if there are issues with UDP port connectivity for 8443 or 22443 along your Horizon session's network path.  (I've provided guidance on this process in a previous post, Troubleshooting Port Connectivity For Horizon's Unified Access Gateway 3.2 Using Curl And Tcpdump.)  If you find that UDP traffic is getting blocked while traversing a foreign network outside of your control, you can try to stack the deck in your favor by leveraging port 443 for external Blast traffic on your UAG appliance.
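For the TCP side of that check, a few lines of Python can confirm whether a handshake to 8443 or 22443 completes from a given vantage point. This is a generic reachability sketch, not a replacement for the curl/tcpdump workflow in the referenced post, and note that UDP reachability can't be confirmed this way since there's no handshake to observe.

```python
import socket

def tcp_port_open(host, port, timeout=3.0):
    """Return True if a TCP handshake to host:port completes within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:            # refused, timed out, unreachable, etc.
        return False

# e.g. tcp_port_open("uag.example.com", 8443)  # hypothetical UAG hostname
```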

Shifting External Blast Traffic To Port 443 On UAG

Configuring your UAG appliance to leverage 443 for external Blast traffic increases the likelihood that external networks will allow your Blast traffic to pass.  443 TCP access is pretty much a given everywhere, a slam dunk in most use cases.  While 443 UDP connectivity isn't as certain as 443 TCP connectivity, it certainly has better odds than 8443 and is worth a shot.  Further, as an added bonus, making this change will almost certainly increase your odds of TCP connectivity and having at least some kind of successful Blast connection.  Here's what the traffic flow will look like:

Shifting Blast traffic to 443 on your UAG appliance is a relatively simple process.  First, navigate to Horizon Edge services on the UAG appliance.   Here's what it looks like when the Blast External URL is configured for port 8443:

To change it to 443, simply append 443 instead of 8443 to the configured URL:

When configuring the Blast External URL, I like to imagine I'm sitting inside a Horizon endpoint client itself, looking for a path to forward Blast traffic to.  Think in terms of what's externally resolvable and accessible from the perspective of the endpoint.  Typically, it ends up being the VIP and associated DNS name on a load balancer.
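The edit itself is just a port swap on the URL. As a sanity check, here's a tiny helper showing the before/after shape of the Blast External URL; on a real appliance you make this change through the UAG admin UI as described above, not through code, and the hostname below is a placeholder for your externally resolvable VIP name.

```python
from urllib.parse import urlsplit

def with_port(blast_external_url, port=443):
    """Rewrite the port on a Blast External URL, e.g. 8443 -> 443."""
    parts = urlsplit(blast_external_url)
    return f"{parts.scheme}://{parts.hostname}:{port}"

assert with_port("https://view.example.com:8443") == "https://view.example.com:443"
```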

When To Use TCP For Your Transport Protocol

The optimization guide indicates that UDP is usually the optimal transport to leverage, with two exceptions.  First, you'd want to go with TCP if, "Traffic must pass through a UDP-hostile network service or device such as a TCP-based SSL VPN, which re-packages UDP in TCP packets."  Since the days of PCoIP dominance, TCP-based SSL VPNs have always been a challenge for Horizon.  The encapsulation of UDP traffic into TCP packets by such VPNs is a real downer, nullifying the performance benefits of UDP.  For Blast traffic it's best to stick to TCP when using these types of devices or when there's some other network challenge preventing UDP use.

The second reason to go with TCP instead of UDP is when, "WAN circuits are experiencing very high latency (250 milliseconds and greater)."  In regard to this second consideration, Horizon Performance Tracker can again be of assistance.  Round-trip latency is prominently displayed under the network section in real time.

In the above screenshot, with latency at 65 ms, it would seem that all is right with the world in terms of the transport selection of UDP.  However, if we were witnessing latency above 250 ms, something like below, we'd want to consider forcing TCP usage.

With latency above 250 ms and low packet loss, the optimization guide is pretty clear in its guidance to leverage TCP for the transport protocol.  However, if packet loss were also high, the decision wouldn't be as straightforward.  Since Blast's UDP stack handles packet loss better than its TCP stack, you might still want to stick with UDP as a transport protocol in a high latency situation.  Fortunately the Horizon Help Desk Tool can provide insight into whether or not there's packet loss, so we can make an informed decision.

Horizon Help Desk Tool 

The Horizon Help Desk Tool offers an even more useful view of network latency for a particular Horizon session.  It provides a breakdown of network latency for a specific session over the span of 15 minutes, giving you a better overall sense of what the latency looks like.  Below is a graph cranked out by the tool for a particularly challenged Horizon session that spikes to latencies above 1200 ms, certainly not the most ideal of scenarios.

A further benefit of the tool is its ability to report on packet loss within a session which, as previously mentioned, is relevant in determining the optimal transport protocol.  After looking up a user's session, from the details screen, expand the user metrics section and under Blast counters you'll see the packet loss.  For the session above, though there's high latency, there's no indication of packet loss.

With high latency and zero percent packet loss, we have network conditions better accommodated by the TCP transport.  However, had there been high packet loss, we'd have to make a choice between TCP's performance benefits in high latency environments and the UDP stack's ability to better handle packet loss.  To simulate such a situation in my lab I used a utility called clumsy on my remote endpoint.  After configuring the utility to create significant packet loss, the hit on network performance was clearly reflected in the Horizon Help Desk Tool.

In this situation, where packet loss is high, UDP might be the preferred transport to stick with, despite the high latency.  Both the VMware Blast Extreme Optimization Guide and the Blast Extreme Display Protocol In VMware Horizon 7 white paper indicate that UDP is the optimal transport under high packet loss conditions.  The white paper specifically states that, "UDP is better at handling packet loss than TCP.  UDP can deliver a good user experience in conditions of up to 20 percent packet loss."
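Pulling the guidance from this section together: prefer UDP; use TCP on UDP-hostile paths or when latency is very high and packet loss is low. The sketch below encodes that decision. The 250 ms and 20 percent figures come from the documents quoted above, while the exact cutoff separating "low" from "high" packet loss is my own illustrative assumption:

```python
def preferred_transport(latency_ms, packet_loss_pct, udp_hostile_path=False):
    """Rough decision helper based on the optimization guide's two exceptions."""
    if udp_hostile_path:                            # e.g. a TCP-based SSL VPN in the path
        return "TCP"
    if latency_ms >= 250 and packet_loss_pct < 5:   # the 5% cutoff is an assumption
        return "TCP"                                # high latency on an otherwise clean link
    return "UDP"                                    # default; handles up to ~20% loss well

assert preferred_transport(65, 0) == "UDP"        # the healthy lab session above
assert preferred_transport(1200, 0) == "TCP"      # high latency, no loss
assert preferred_transport(1200, 15) == "UDP"     # heavy loss: UDP copes better
```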

Fun Facts About Codecs

The optimization guide states that, "A codec is a computer program that can encode or decode a digital data stream for transmission. The word codec is a blend of the words coder-decoder."  As of today Blast offers a choice between three codecs, H.264, JPG/PNG and H.265, with H.264 being the default.

One of the H.264 codec's claims to fame is its ability to handle rapidly changing content.  Another is the ability to leverage the built-in H.264 chip of endpoint devices for hardware-based decoding, sparing the endpoint's CPU the trouble.  This both improves performance and extends the battery life of these endpoint devices.  When NVIDIA GRID cards are in the mix, things get even more exciting.  The encoding can be offloaded to the NVIDIA GPU, improving performance and relieving the server's CPU of the encoding work.  This offloading in turn improves user density and efficiency on the ESXi hosts.

JPG/PNG, sometimes referred to as the adaptive encoder, is the original codec used by Blast and does software-based encoding and decoding.  While H.264 is the default, Blast will fall back to JPG/PNG when H.264 isn't an option, such as when the HTML client is used from a non-Chrome browser.  It's also desirable when you have, "Images that require lossless compression," such as quality still images, complex fonts or medical imaging.  However, the optimization guide is pretty clear that it's not so great for rapidly moving content, something the H.264 codec excels at.

H.265, referred to as High Efficiency Video Coding (HEVC), is the bigger, badder successor to H.264.  While it introduces bandwidth improvements, it absolutely requires the use of NVIDIA GRID GPUs on your ESXi hosts.  It also requires clients with H.265 decode support, which is common nowadays but not guaranteed.

Finally, a new feature called Encoder Switcher allows Blast, "to dynamically switch between the JPG/PNG and H.264 codecs, depending on screen content type."

Using Horizon Performance Tracker To Observe Codec Usage

Regardless of which codec is best suited for your use case, Horizon Performance Tracker can provide visibility into which one your session is actually using.  To observe this in action, we can control the codec selection using the VMware Blast settings on the Horizon client.  Here's a screenshot of the codec settings from the Horizon client:

If you uncheck the option, "Allow H.264 decoding," you'll fall back to JPG/PNG and Performance Tracker will report, "adaptive", as the encoder.  (Note: The Blast Extreme Display Protocol in VMware Horizon 7 clarifies that, "JPG/PNG is referred to as the adaptive encoder.")

Whereas accepting the default of, "Allow H.264 decoding," under typical conditions, will cause Horizon Performance Tracker to report, "h264 4:2:0," as the encoder. 

Should you select the option, "Allow high color accuracy," and H.264 is successfully implemented, the tool will report back, "h264 4:4:4," as the encoder name. 

Further, if H.264 is enabled and there's an NVIDIA Grid card enabled for your VM, the tool reports back an encoder name of, "NVIDIA NvEnc H264."  Here's an example from a GPU enabled VM in VMware's TestDrive environment: 

Finally, to allow for use of H.265 when connecting to the same virtual desktop as detailed above, on the Horizon client I checked the box for, "Allow High Efficiency Video Decoding (HEVC)."  With my client supporting H.265, Horizon Performance Tracker reports back "NVIDIA NvEnc HEV" as the encoder in use.
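To summarize this section, here's the client-setting-to-reported-encoder mapping as a small function. The returned strings are the ones Performance Tracker displayed in my lab and in TestDrive, as quoted above, not an exhaustive or official list:

```python
def expected_encoder(h264=True, high_color=False, nvidia_gpu=False, hevc=False):
    """Map Horizon client Blast settings to the encoder name Performance Tracker showed."""
    if not h264:
        return "adaptive"                 # JPG/PNG fallback
    if nvidia_gpu and hevc:
        return "NVIDIA NvEnc HEV"         # H.265 path, as reported above
    if nvidia_gpu:
        return "NVIDIA NvEnc H264"        # GRID-offloaded H.264
    return "h264 4:4:4" if high_color else "h264 4:2:0"

assert expected_encoder(h264=False) == "adaptive"
assert expected_encoder() == "h264 4:2:0"
assert expected_encoder(high_color=True) == "h264 4:4:4"
```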

Observing Blast's Bandwidth Consumption In Real-Time

Roughly five and a half years ago I had the honor of meeting the great Cale Fogel, Breaker Of Chains, Knower Of Things And Talker Of Straight.  During some chit chat in the hallways of VMworld 2014 he summarized the situation with display protocols quite succinctly: "It's all about how much screen real estate you're dealing with, resolution, number of screens, versus the amount of changes on the screen.  The more changes that occur and the higher the resolution, the more pixels that have to cross the wire and get reordered on the endpoint."  So, if you have a single monitor with low resolution and a completely static screen, you'll have very few pixels to change and the protocol will gobble up very little in terms of compute resources and network bandwidth.  On the other hand, if you have multiple monitors at high resolution displaying a lot of actively changing content, compute consumption will be high and bandwidth usage will be high.
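Cale's "screen real estate times rate of change" framing is easy to put into numbers. The sketch below counts raw changed pixels per second before any codec compression, so the absolute figures mean little, but the ratio between scenarios shows why a static screen sips bandwidth while full-motion video gulps it:

```python
def changed_pixels_per_second(width, height, monitors, changed_fraction, fps):
    """Raw changed pixels per second crossing the wire, pre-compression."""
    return width * height * monitors * changed_fraction * fps

# A mostly static single 1080p screen vs. two 4K screens of full-motion video.
static = changed_pixels_per_second(1920, 1080, 1, 0.01, 30)
video = changed_pixels_per_second(3840, 2160, 2, 1.0, 30)
assert video > 100 * static   # orders of magnitude more pixels to encode and ship
```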

An easy way to see this firsthand in real time is through the Horizon Performance Tracker.  Along with the nifty info we've discussed so far, it details how much bandwidth the display protocol is currently gobbling up.  Under the encoder section, there's a field, "Bandwidth used."  Reduce the screen resolution and do nothing within the VM, and you'll see the bandwidth usage plummet.

Only 10k of traffic generated by the Blast protocol, woo-hoo!  However, don't get too excited, haole.  Within the same session, move the Horizon Performance Tracker utility itself around on the desktop, shaking it hard and violently like a chimpanzee on meth.  Bandwidth will temporarily spike.

Now, for some real fun, fire up YouTube, put on a trailer for Star Wars, increase the YouTube resolution to high definition and then take a look at Performance Tracker.

When it comes to Horizon's display protocols, I like to say, the only way through is through.   Lots of changes on the desktop translate to lots of compute and bandwidth usage.   Fundamentally, it's more of a math problem than anything else.   In the optimization guide, this dynamic is well articulated with the statement, "It is extremely important to recognize that optimizing for higher quality nearly always results in more system resources being used, not less. Except under very unique conditions, it is not possible to increase quality while limiting system resources."  It goes on to elaborate on the inverse relationship between quality experience and optimized resource usage,  stating, "Except in unique situations, optimizing quality increases bandwidth utilization, whereas optimizations for WANs require limiting quality to function over poor network conditions."  So, you're going to have to be honest with yourself and pick your poison.

More Advanced Tuning Covered By The Optimization Guide

The optimization guide goes on to cover additional Blast tuning settings such as Max Session Bandwidth, Minimum Session Bandwidth and Frames Per Second.  While Horizon Performance Tracker can assist with the configuration of these more advanced settings, before mucking around with them I'd circle your attention back to the VM, OS and underlying infrastructure.  This isn't to say that advanced Blast tuning methods are a waste of time.  It's just that, in the absence of other information about your use case, holistically speaking, you're more likely to have challenges with the user experience due to the VM and underlying infrastructure than due to a lack of advanced Blast tuning.  The optimization guide echoes this sentiment, recommending that, "Before tuning Blast Extreme, it is critical to properly size and optimize the virtual desktops, Microsoft RDSH servers, and supporting infrastructure."  Remember, key processes behind Blast, VMBlastS.exe, VMBlastW.exe and VMBlastP.exe, run WITHIN the OS of your virtual desktops.  So if those VMs are under-specced or starved for resources, your Blast processes will be starved and Blast performance is going to suck.  Further, if critical apps within your VM are starved for resources, no amount of tuning is going to make up for an app experience that's ruined before anything's even been remotely displayed.  Along those lines, after confirming your VMs are properly specced, optimized and supported by your infrastructure, I'd recommend taking a hard second look at profile configuration, critical apps and the network paths those apps rely on.  Often a poor user experience is the result of a deficiency outside the Horizon stack, with Horizon just being the messenger.  And we all know what folks love to do to messengers.

So, in summary, when it comes to Blast tuning, I'd begin by confirming you're getting the proper transport and codec selection.  I'd also recommend being honest with yourself about the bandwidth requirements, use case requirements and network limitations.  However, before doing a deep dive into the advanced tuning of Blast, I'd take a very long, hard second look at the rest of your environment.

Sunday, September 22, 2019

Wrapping Workspace ONE Goodness Around Office 365 - A Primer

In my first desktop support role I’d hop from cubicle to cubicle, hurling my plastic Microsoft Office 2000 disk like a ninja star at beige Dell towers. I’d take a seat at a user’s desk, pop in my little silver friend, punch in a memorized CD key and then, 10 to 15 minutes later, I’d assure the user it was no problem at all and walk on to the next cubicle.

Well, it’s 2019 and everything’s more demanding and complex. With Office 365 deployments we're aiming to make Office available to users from anywhere on pretty much any mobile device.  To fulfill this desire for ubiquitous Office access, engineers must design for a balance between convenience and security.  To think such a task will be easy or without challenges is about as reasonable as this:  

After the challenge of securing Office 365 is truly appreciated, the Workspace ONE solution becomes an incredibly compelling proposition.  Leveraging cloud based instances of Workspace ONE Access and UEM, within hours we can wrap WS1 security and convenience around Office 365 access.  Not only does this address Office 365 deployment challenges, but it also establishes a foundation for the delivery of other SaaS based solutions within a digital workspace. 

Recipe Overview

Key security benefits of Workspace ONE, such as SSO and conditional access based on device compliance, can be extended to Office 365 by federating an Azure tenant with Workspace ONE Access. (Workspace ONE Access is the artist formerly known as VMware Identity Manager, RIP.) In another post, Wrapping Workspace ONE Goodness Around Office 365 Access - A Quick And Dirty Recipe, I fully detail an integration between Office 365 and WS1.  The recipe calls for a federation directly between Azure and WS1, without the complexity of ADFS.  If you're interested in jumping right into this recipe, again, here's the link.  Otherwise, below is some overview and context on Office 365 and Workspace ONE.

The Standard Microsoft Options For Office 365 Access

If you want to leverage local AD accounts for Office 365 access, you start by standing up an instance of Azure AD Connect within your trusted network.   This component syncs your local AD users to your Azure tenant, which in turn allows you to entitle them to Office 365 licenses. Once these users are synchronized and enabled for Office 365 access, the next question is, "how do you authenticate these users against the local AD environment?" For that you have 3 basic options: ADFS, PHS (Password Hash Synchronization), and PTA (Pass-through Authentication). 

Password Hash Synchronization

Password hash synchronization is the default authentication method when Azure AD Connect is installed.   One of the more notable features of this option is that you don't need to poke any holes in any firewalls or set up any internet accessible infrastructure.  Local AD passwords, via Azure AD Connect, are hashed and stored in the Azure environment so that AD users can authenticate to Office 365 using their normal credentials.  

Pass-through Authentication

Similar to PHS, Pass-Through Authentication (PTA) allows you to authenticate against your on-premises AD environment without having to poke holes through firewalls or set up any internet accessible infrastructure.  However, no AD passwords are hashed in your Azure environment.  Instead, authentication against your local AD environment is handled by a special agent running on Azure AD Connect within your trusted environment.   This agent communicates with the Azure tenant over outbound port 443.   

Seamless Single Sign-On 

Regardless of whether you go with PHS or PTA, you can leverage seamless single sign-on for your on-premises users.   This capability makes PHS or PTA a very attractive option for replacing ADFS in situations where Office 365 is the only application you need access to. 


ADFS

ADFS is the original Microsoft solution for addressing authentication of on-premises AD users to Office 365.  Unlike PHS or PTA, if you want users to have access to Office from the external world, with the ADFS model you'll need to set up some internet-facing infrastructure.  In light of this requirement, PHS or PTA appear to be the path of least resistance.  However, if you're looking to integrate other SaaS solutions outside of Office, without the assistance of any other 3rd party IDPs, ADFS is still relevant.  

Utilizing Workspace ONE Access For Office 365

Workspace ONE Access Federation With ADFS

One option for integrating Workspace ONE with Office 365 involves federation with ADFS.  ADFS is federated with Azure, and then in turn is federated with Workspace ONE Access. This can involve setting up Workspace ONE Access as a 3rd party identity provider for ADFS or, vice versa, configuring ADFS as a 3rd party identity provider for Workspace ONE Access.   

Workspace ONE Federation With Another 3rd Party Identity Solution

Another option is to have some kind of federation between your Office 365 environment and another identity provider like Ping or Okta.  Then in turn, you can federate the 3rd party IDP with Workspace ONE Access, allowing the 3rd party IDP to leverage the device awareness of Workspace ONE UEM.  

Direct Federation Between Azure And Workspace ONE Access

The recipe detailed in my post, Wrapping Workspace ONE Goodness Around Office 365 - A Quick And Dirty Recipe, is based on a direct federation between Workspace ONE Access and an Azure tenant.  With this model, Workspace ONE Access becomes the primary identity provider for your Office 365 subscription.  A key capability that allows for this is configuring an on-premises vIDM Connector in outbound mode.   While Azure AD Connect continues to sync users to the Azure tenant, actual authentication is handled by a vIDM Connector in a manner very similar to Microsoft's Pass-Through Authentication model.   

The benefit of this deployment model is the simplicity of setting up PTA combined with the full breadth of Workspace ONE capabilities.  Most notably, we get the benefits of an integration with Workspace ONE UEM (the artist formerly known as AirWatch). Leveraging the device compliance policies of WS1 UEM (AirWatch), we can factor in device posture when implementing our conditional access policies.

Additional Resources

Wrapping Workspace ONE Goodness Around Office 365 - A Quick And Dirty Recipe:

Hybrid Identity And Directory Synchronization For Office 365:


Official VMware Guidance: 

Dean Flaming Elaboration: 

Peter Bjork Blog: 

Preparing a non-routable domain for directory synchronization: 

Configuring VMware Identity Manager As A Third Party IDP In AD FS:

VMware Identity Manager using Azure AD as 3rd party Identity Provider:

Wrapping Workspace ONE Goodness Around Office 365 - A Quick And Dirty Recipe

This recipe will detail a federation of Office 365 with a cloud instance of Workspace ONE Access, the product formerly known as VMware Identity Manager.  It's a very quick and dirty, all in approach to an integration between Workspace ONE and Office 365 that's ideal for POCs and lab environments. It's certainly not the only way to integrate Office 365 with Workspace ONE.  The ideal production deployment strategy very much depends on the specifics of your circumstances, like current WS1 licensing, ADFS requirements or other identity providers in the mix.  However, as far as standing something up quickly in a lab so that you can explore the options of an Office 365/WS1 integration, I think this deployment model is the cat's pajamas and I'm super excited to share it.  

With this recipe, Azure AD Connect is used to sync users from a local on-premises AD environment with an Azure tenant. While users will be synced using Azure AD Connect, actual authentication against the local AD environment will be handled through Workspace ONE Access via the deployment of an on-premises vIDM Connector. Doing this allows for a very straightforward federation of Office 365 with Workspace ONE, bypassing the need for ADFS.  For additional context and pretty pictures about this strategy, check out this primer post. Otherwise, below is a quick overview of the recipe, followed by the actual steps.

Basic Resource Requirements

     Azure AD/Office 365 Subscription
     Workspace ONE Access Cloud Tenant (The Artist Formerly Known As vIDM)
     1 Small On-Premises Server For AD Connect 
     1 Small On-Premises Server For vIDM Connector
     Elbow grease and access rights

Deployment Outline

     Preparing Office 365: 
          Spinning up an Office 365 environment
          Prepare non-routable domain for directory synchronization 
          Integrate with on-premises AD through Azure AD Connect

     Prepare Workspace One Access Environment:
          Deploy vIDM Connector 
          Bind to on-premises AD 
          Enable outbound mode for vIDM Connector  

     Integrate Office 365 with Workspace ONE Access: 
          Add Office 365 To Catalog 
          Federate Azure Tenant With Workspace ONE Access 
          Configure Conditional Access Policies

Preparing Office 365

After getting my hands on an Azure tenant for Office 365, my next step was adding the friendlier custom domain name of EvenGooder.com.  My on-premises AD environment uses a non-routable .local domain name, lab.local.  To enable these AD users to authenticate against this Office environment with their AD credentials, I next had to add the alternative UPN suffix of EvenGooder.com to the local AD domain.   After that, I deployed Azure AD Connect on a server located within my trusted network.   Deployed with the Pass-through Authentication option, initially Azure AD Connect not only synced users from the local AD domain to Azure, but could also authenticate these local AD users for access to the Office 365 environment.   Later on in the deployment, after the federation of Office 365 with WS1, Azure AD Connect continues to sync local AD users, while authentication of these local AD users shifts over to Workspace ONE Access and an on-premises vIDM Connector.  

Spinning Up An Office 365 Environment

To get access to an Azure/Office 365 subscription, I'm leveraging the Office 365 Developer Program through my MSDN subscription.  This provides year-round access to an Office 365 environment for development purposes.    For more information, you can check out this FAQ.  If you can get access to this program I highly recommend it.  Otherwise, you can get a free 30-day Office 365 eval from here.

Setting up my tenant was a fairly straightforward process.  Eventually I was prompted for a domain name to use for my subscription. 

This led to the creation of an Azure tenant environment for Office 365, with the domain name of evengooder.onmicrosoft.com. At this point, any account I created directly in my Azure environment would have the UPN suffix of evengooder.onmicrosoft.com.

In turn, any new account provisioned in Azure is automatically populated in the Office 365 portal, where an admin can entitle the user to an Office license. 

Using A Friendlier UPN Suffix

If you want a friendlier UPN suffix, one without the onmicrosoft.com, you can add a custom domain name. In my environment, an Office 365 E3 Developer wizard guided me through the process of adding a public domain name I already own, evengooder.com.

After providing evengooder.com as a domain to connect to, I was prompted for my GoDaddy credentials to prove I owned the domain.

After authorizing Microsoft to make changes to evengooder.com, I was walked through some necessary DNS changes. Once completed, I ended up with a 2nd custom domain name, evengooder.com.

At this point, when creating users directly in the Azure environment, they can have either the UPN suffix of evengooder.com or evengooder.onmicrosoft.com.


Preparing A Non-routable Domain For Directory Synchronization

Because my AD lab is leveraging a non-routable .local domain name, LAB.LOCAL, I had one more step before deploying Azure AD Connect. I needed to add evengooder.com as an alternative UPN suffix for members of the LAB.LOCAL domain. Fortunately, a brainy colleague of mine, Leonardo Valente, pointed me to a very relevant Microsoft article that provides guidance on this procedure. Here are the steps.

First, open Active Directory Domains And Trusts. Right-click on Active Directory Domains And Trusts and select Properties.

Next, add the friendly UPN suffix you want to use against your Office 365 environment. In my case, it was the public domain name evengooder.com.

Finally, navigate to the account properties of any AD account you want to have access to Office 365. Change their logon name to leverage the new UPN suffix.

At this point, when the user syncs to the Azure AD environment, they will show up as jj@evengooder.com and will be able to log into the Office environment accordingly. Before that happens though, Azure AD Connect needs to be stood up and configured.
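For folks who'd rather script these changes than click through the GUI, here's a minimal PowerShell sketch using the ActiveDirectory module. It assumes the lab.local forest and evengooder.com suffix from my environment, and the jj account is just an example; adjust for your own domain.

```powershell
# Add the routable UPN suffix at the forest level
# (equivalent to the Active Directory Domains And Trusts GUI step above)
Import-Module ActiveDirectory
Set-ADForest -Identity lab.local -UPNSuffixes @{Add="evengooder.com"}

# Change a user's logon name to leverage the new suffix
Set-ADUser -Identity jj -UserPrincipalName "jj@evengooder.com"
```

Run this from a domain-joined machine with the RSAT AD PowerShell tools installed, using an account with the appropriate rights.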

Integrate With On-Premises AD Using Azure AD Connect

To sync your on-premises AD users to an Azure tenant, you deploy Azure AD Connect. It can also provide authentication of your local AD users against your Azure tenant through features like Password Hash Synchronization (PHS) and Pass-through Authentication (PTA).

After downloading Azure AD Connect from my Azure admin portal, I executed the installer on my local server and then walked through a wizard. For the initial setup I went with Pass-through Authentication.

Next, I provided the wizard with global admin credentials for my Azure tenant. Then, when prompted for an on-premises directory to sync with, I selected LAB.LOCAL. 

Then I provided enterprise admin credentials for LAB.LOCAL.

Then I opted for the user principal name as an attribute to use for the Azure AD username.

I then nexted my way through the next 4 or 5 screens, essentially sticking with the defaults. Soon, the wizard completed and I found members from my local AD environment had been successfully synced with my Azure environment. Users who had the alternative UPN suffix configured in AD showed up accordingly under my Azure tenant users.

So, vditest2 and vditest3 are examples of AD accounts configured with the alternative UPN.  Vditest4 is an example of an account that didn't have the alternative UPN configured, so it defaulted to vditest4@evengooder.onmicrosoft.com during the import. Regardless, I could now log into the Office environment using these local AD account credentials and the appropriate UPN.

Prepare Workspace ONE Access Environment

Fortunately, prior to this integration I already had access to a Workspace ONE Access (vIDM) cloud hosted tenant.  If you reach out to your VMware sales rep, they can help you get access to a test tenant.  Also, if you register at VMware TestDrive, https://www.vmwdemo.com/, you can get access to a tenant there as well.  Once you have access to a Workspace ONE Access tenant, the next step is to integrate it with your local AD environment using the vIDM Connector.  This involves downloading and installing the connector, selecting the proper user attributes, and binding to the local AD domain.   Finally, you'll need to enable the connector in outbound mode so that folks outside your trusted network will be able to authenticate against the local AD environment.  

Deploy vIDM Connector

The latest version of vIDM Connector is 19.03, which you can download from here. For a small test deployment you need a Windows server with 2vCPU, 6 gigs of RAM and 50 gigs of storage. You need network connectivity to internal resources like AD and DNS.  Also, you need outbound 443 access from your vIDM Connector to the Workspace ONE Access tenant. Essentially, make sure it has internet access. For more specifics on system requirements, check out the official documentation here.

After executing the installer, you'll get the welcome window.

Next your way through the next few screens, accepting the defaults. Confirm you have the proper hostname for the vIDM Connector.

Run the connector service as a domain user.

Click next, then next on the following screens and then finally install. After a successful installation, you'll get redirected to port 8443 on the local host. This is where you'll complete the setup from.

Click next on the first screen.

After setting the admin password for this local connector instance, you'll get prompted for an activation code. You need to grab the code from your Workspace ONE Access tenant.

Log into your tenant environment. Navigate to Identity & Access Management --> Setup --> Connectors. You'll see the unactivated connector.

Click on the View Activation Code option. From there, generate an activation code, then copy and paste it into the wizard for the connector activation.

If things go well, you'll get the, "Setup is complete," message. Now under Connectors within the admin console you'll see more info populated about the connector.

Next, we have to associate this connector with a directory.

Configure The Required User Attributes

Prior to binding to your local domain, you need to ensure you have the required user attributes configured for Office 365 integration. You'll need the userPrincipalName and objectGUID attributes enabled. Navigate to Identity & Access Management --> Setup --> User Attributes. You should have something like this: 

Bind To On-Premises AD

After confirming your attributes are straight, proceed to Identity & Access Management --> Manage --> Directories and click Add Directory.

Select the option for, "Add Active Directory over LDAP/IWA."

Add the name of your directory. Ensure your vIDM Connector is selected as the Sync Connector. Choose Yes for, "Do you want this Connector to also perform authentication." Then, scroll down a bit and you'll get prompted for an account to bind with. Enter the bind account name in a user principal name format.

Hit Save & Next.

I then selected my lab.local domain.

I then nexted my way through the next several screens, providing DNs like cn=users,dc=lab,dc=local to filter out users and groups. After getting a summary of the changes the sync settings would trigger, I clicked on, "Sync Directory."

After a successful sync, you'll see a bunch of new users under the Users & Groups tab.

Authentication In Inbound Mode

By default, after creating a directory and associating it with our vIDM Connector, your connector can authenticate AD users in inbound mode, which involves users connecting directly to the vIDM Connector located on the trusted network.  Essentially, after pointing your browser to the Workspace ONE Access tenant in the cloud and selecting the local AD domain you want to authenticate against, you're redirected to the URL of the vIDM Connector. 

If you want folks to authenticate directly through the cloud tenant, rather than against the vIDM Connector, you can enable outbound mode.

Enabling Outbound Mode For vIDM Connector

We can enable outbound mode by associating our new Connector with the Built-In identity provider. Navigate to Identity And Access Management --> Manage --> Identity Providers.

Click on the hyperlink for Built-in. Select the relevant directory and network ranges. Then scroll down.

Under Connectors, select your new vIDM Connector. Then click on the, "Add Connector," button.

You'll now have the option to select Connector Authentication Methods. Select the option for, "Password (cloud deployment)."

After changing your access policy rules to use the Password (cloud deployment) authentication option, you'll have the ability to authenticate against the AD environment directly from your SaaS instances, without having your browser redirected to the vIDM Connector.

Integrate Office 365 Tenant With Workspace ONE Access

With both your Office 365 tenant and your Workspace ONE Access cloud tenant integrated with your local AD domain, you can now proceed with the federation of the two environments. The first step is to add your Office 365 environment as an application to your Access catalog as a service provider. Then, you enable your Workspace ONE tenant as the identity provider for Office 365 through a PowerShell command that federates the two environments.

Add Office 365 To Catalog

Fortunately, there’s already a preconfigured Office 365 integration wizard you can follow to guide you through the integration. Within your Workspace ONE Access environment, navigate to Catalog --> Web Apps.

Then click on the New button to create a new SaaS application.

Click on the option to browse from the catalog. Look for the preconfigured application called, "Office 365 with Provisioning."

Give it a name that will be displayed in your user's portals. I lack imagination, so I'm just going with Office 365.

Next, we have to populate some critical SAML information. Fortunately, a lot of it is already preconfigured for us. We essentially have only 3 pieces of information to add to the definition: a target URL, a tenant and an issuer. The target URL is where you get redirected to after the SAML request is accepted. I could go with a specific Office application, like https://www.office.com/launch/word or https://www.office.com/launch/excel, but for this integration I want to provide access to the entire Office suite, so I'm just going with office.com.

Next, I have to enter in a tenant and an issuer. For the tenant, I'm entering the tenant URL for my Office 365/Azure registered domain, evengooder.com. For the issuer, I'm going with the URL of my Workspace ONE Access (vIDM) tenant, justinjohnson.vmwareidentity.com.

Now, you’ll see an entry for Office 365 within the catalog.

To actually make this configured application work, we have to federate the Office 365 tenant with our Workspace ONE Access tenant. All this will take is some very ugly PowerShell commands.

Federate Azure Tenant With Workspace ONE Access 

With Office 365 added to your catalog, you can complete the integration through a somewhat intimidating PowerShell command.   The first step is adding the MSOnline module to your PowerShell environment.  On my desktop, in order to successfully install the MSOnline module, I first had to install the Microsoft Online Services Sign-In Assistant, which I downloaded from here. With this component installed, I was able to install the module with this command:

Install-Module -Name MSOnline

After installing the module, you should be able to run Connect-MsolService in order to connect to your Azure tenant. You’ll get prompted for your credentials.

Assuming your credentials are accepted, you’ll now have access to your tenant environment. Run Get-MsolDomain to observe the accessible domains within your environment.
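For reference, the whole connect-and-verify sequence from the PowerShell console looks something like this (it requires the MSOnline module and global admin credentials for your tenant):

```powershell
# Prompts for Azure global admin credentials
Connect-MsolService

# List the domains in the tenant along with their authentication mode
Get-MsolDomain | Select-Object Name, Status, Authentication
```

After the federation later in this recipe, the Authentication column for the federated domain should read Federated rather than Managed.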

Before running a really long ugly PowerShell command, you need to make sure the domain you're going to federate isn't the primary domain.  Accordingly, I navigated to the custom domain names section of the Azure Active Directory Admin Console and selected the evengooder.onmicrosoft.com domain.  After drilling into this custom domain name I clicked on the option, "Make Primary."  

After confirming the action I was able to proceed with the federation. 
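If you'd rather skip the portal for this step, the same primary domain check and change can be sketched with the MSOnline module as well (using the domain names from my lab; swap in your own):

```powershell
# Confirm which domain is currently the primary (default) domain
Get-MsolDomain | Where-Object { $_.IsDefault }

# Make the onmicrosoft.com domain primary,
# freeing up the custom domain for federation
Set-MsolDomain -Name "evengooder.onmicrosoft.com" -IsDefault
```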

With the domain properly configured and an active connection to the Azure tenant through Connect-MsolService, there are just two PowerShell commands left to federate the environments. To get the syntax correct, I leaned on a blog post by Peter Bjork titled, "VMware Identity Manager 2.8 – Office 365 User Provisioning and Federation."  Within the article, he includes a wonderfully convenient template to work from. Here's the template for the first command:

Set-MsolDomainAuthentication -DomainName < O365 registered Domain > -Authentication Federated -IssuerUri “<serviceportal.customer>” -FederationBrandName “<Customer.com>” -PassiveLogOnUri “https://< mycompany.vmwareidentity.com >/SAAS/API/1.0/POST/sso” -ActiveLogOnUri “https://< mycompany.vmwareidentity.com >/SAAS/auth/wsfed/activelogon” -LogOffUri “https://login.microsoftonline.com/logout.srf”

And for the second command, he provides this template:

Set-MsolDomainFederationSettings -DomainName < O365 registered Domain > -MetadataExchangeUri “< https:// mycompany.vmwareidentity.com SAAS/auth/wsfed/services/mex >” -SigningCertificate < X509Certificate >

To determine how to populate the different fields, I took advantage of a presentation by Dean Flaming, "VMware Identity Manager and Office 365 Integration." Here's a very relevant slide from that demo:

Accordingly, based on the particulars of my installation, my first command used the following values:

DomainName = evengooder.com
IssuerUri = justinjohnson.vmwareidentity.com
FederationBrandName = EvenGooder Inc
PassiveLogOnUri = https://justinjohnson.vmwareidentity.com/SAAS/API/1.0/POST/sso
ActiveLogOnUri = https://justinjohnson.vmwareidentity.com/SAAS/auth/wsfed/activelogon

Leading to a command of:

Set-MsolDomainAuthentication -DomainName evengooder.com -Authentication Federated -IssuerUri justinjohnson.vmwareidentity.com -FederationBrandName “EvenGooder Inc.” -PassiveLogOnUri https://justinjohnson.vmwareidentity.com/SAAS/API/1.0/POST/sso -ActiveLogOnUri https://justinjohnson.vmwareidentity.com/SAAS/auth/wsfed/activelogon -LogOffUri https://login.microsoftonline.com/logout.srf

Also, please note that this entire command must be on a single line.

Then, for the second command, I needed the signing certificate from my Workspace ONE Access tenant. Within the console, I navigated to Catalog --> Web Apps. From there I clicked on Settings, then selected SAML Metadata under SaaS Apps.

Following Mr. Flaming's instructions, I yanked off the quotes, along with -----BEGIN CERTIFICATE----- and -----END CERTIFICATE-----, so that there was only one long alphanumeric string for the certificate value.   My command ended up looking like this:

Set-MsolDomainFederationSettings -DomainName evengooder.com -MetadataExchangeUri https://justinjohnson.vmwareidentity.com/SAAS/auth/wsfed/services/mex -SigningCertificate MIIFHzCCAwegAwIB<REALLY_REALLY_REALLY_LONG_ALPHA_NUMERIC_STRING_REPRESENTING_THE_SIGNING_CERT>

Again, this command too must be on a single line.  
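If you'd rather not flatten the certificate by hand, that cleanup can be scripted too. Here's a small sketch, assuming you've saved the SAML metadata signing certificate to a file named signing.pem (a hypothetical file name for illustration):

```powershell
# Drop the BEGIN/END CERTIFICATE markers and join the
# remaining Base64 lines into a single long string
$flatCert = (Get-Content .\signing.pem |
    Where-Object { $_ -notmatch "CERTIFICATE" }) -join ""

# $flatCert is what gets pasted in as the -SigningCertificate value
$flatCert
```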

The resulting output from the PowerShell console was rather uneventful.

However, after executing the command, I got the magic I was looking for. Issuing the command Get-MsolDomainFederationSettings helped confirm a successful federation. 

More notably, signing into Office 365 from the Workspace ONE portal started working. With the default access policy in place, providing my AD credentials to a Workspace ONE authentication prompt was enough to get me access to Office 365. 

Configure Conditional Access Policies

With Workspace ONE Access now federated with Office 365, you can leverage conditional access policies to control access to Office 365.  This means any authentication methods enabled for Workspace ONE can be leveraged for Office 365 access.  For example, with your on-premises domain-joined desktops, you could SSO through Kerberos.   Here's a video recording of accessing Office 365 from an on-premises virtual desktop leveraging Kerberos:

And here's an example of Kerberos at work should the user try accessing Office 365 directly.  They're briefly redirected to WS1, then handed back to Office 365 completely authenticated.