
Wednesday, August 3, 2022

If You Can't Bring Your Virtual Desktop To The Cloud, Bring Cloud To Your Virtual Desktop

In late June of this year I had the honor of pre-recording a VMware Explore session with Todd Dayton and Cris Lau.  The session, "Can't Take Your Virtual Desktop To The Cloud? Bring Cloud To It," focuses on ways to enhance on-premises Horizon environments with VMware hosted services.  It stems from a recognition that shifting VDI capacity to the cloud is not quite feasible for many customers, at least not yet.  As Todd puts it, "VDI really isn’t an application workload itself. It’s a support system for Windows applications that typically can’t or wouldn’t be modernized….These Windows applications aren’t always a great cloud candidate."  So, sure, you can stuff any application in a cloud based desktop, but if it's too resource hungry, too latency sensitive, or generates too much ingress/egress traffic, there could be problems.  Performance or cost savings, or both, can take a serious hit.  For this and other reasons lots of customers have decided to keep virtual desktop workloads on-premises.  However, all is not lost.  There's still plenty to gain from slathering cloud services on top of existing on-premises Horizon environments, shifting management, monitoring, and security to VMware's SaaS offerings.

These VMware hosted services ease the burden of on-premises Horizon management while wrapping modern capabilities around traditional Windows workloads.  For day 2 operations the Horizon Control Plane, with features like the Universal Horizon Console, Help Desk Tool, and Assist for Horizon, enables effective support  from anywhere in the world.  Further, a subset of the Horizon Control Plane called the Cloud Monitoring Service (CMS) offers high level monitoring and reporting against Horizon from the cloud, capabilities recently improved upon through Workspace ONE Intelligence for Horizon.  Along with SaaS based support and monitoring there's the ability to enhance remote Horizon access with cloud based Workspace ONE and Carbon Black.   These services allow customers to wrap modern capabilities around Horizon sessions while facilitating adoption of 3rd party SaaS like Office 365, Okta, and ServiceNow.  The end result is a comprehensive remote access solution, an on-premises Horizon environment augmented with cloud based services to deliver a digital workspace for remote and hybrid workers. 


COVID-19 Brings Horizon Remote Access To The Foreground

Horizon is more relevant than ever given the spike in remote and hybrid work driven by the pandemic.  For nearly 15 years Horizon had been a relatively niche solution, adopted primarily by segments sensitive to security and regulations.  Despite this narrow vertical adoption, over the years Horizon progressively improved at remoting Windows through updates to its clients, agents and the Blast display protocol.  This finely tuned capability was an absolute godsend as customers scrambled to accommodate remote access in the early days of the pandemic.







While Citrix and Horizon are very similar solutions, a clear distinction emerges as one explores innovations for remote access.   For Citrix, remote access centers around hardware based versions of Citrix ADC, the artist formerly known as NetScaler.   You place these multipurpose network appliances in your DMZ and, as they are packed with impressive but for most customers largely extraneous features, they cost a small fortune.  In contrast, remote access for Horizon is handled by a free and flexible software based solution, a virtual appliance called Unified Access Gateway (UAG).  It's a mature bespoke technology for securing remote Horizon access with a proven track record integrating with 3rd party solutions to beef up security.  That said, it shines brightest when we combine it with the Workspace ONE suite to wrap functionality like identity and modern management around remote Horizon sessions.  This approach enhances remote access from the cloud while allowing customers to purchase germane technology a la carte. 

VMware Hosted Services Wrap Comprehensive Security And Management Around Remote Horizon Access

Over half a decade ago Workspace ONE UEM (AirWatch) was already shifting towards predominantly SaaS based adoption.  There are certainly exceptions, but generally speaking Workspace ONE UEM is a cloud first solution.  The same goes for Workspace ONE Access nowadays, as customers are entitled to a SaaS based tenant through their Horizon Universal subscriptions.  Offering a unique integration of identity and endpoint management capabilities, WS1 UEM and Access combined offer amazing enhancements to remote Horizon access like contextual authentication, endpoint management, and SSO.  This ideal model for remote and hybrid workers is further enhanced through Workspace ONE Intelligence.  Intelligence, along with providing advanced reporting capabilities, enables ruthless automation against WS1 UEM environments as well as any 3rd party solution supporting REST APIs.  Finally, Carbon Black, a VMware acquisition from 2019, provides cloud based next-gen antivirus for Windows 10 and macOS.  When these VMware hosted services are combined with Horizon you get a solution ideally suited for remote and hybrid workers, a superb remote access Horizon experience augmented with mature cloud based security and management.











These SaaS offerings wrap remote Horizon sessions in modern capabilities like Zero Trust, beefing up security for Windows applications that historically have been less than secure.   Further, while these services are a natural fit for remote endpoints, we can also use them to manage virtual desktop images themselves.  WS1 UEM can be used to manage persistent VDI and Carbon Black is supported on both Instant Clones and Full Clones.  Likewise, WS1 Access can be used to secure SaaS adoption both inside and outside the virtual desktop. 


Harnessing 3rd Party SaaS Based Solutions For An Enhanced Horizon Experience

When it comes to enhancing Horizon from the cloud it's not just about VMware hosted services, but also 3rd party SaaS like Office 365, Okta or ServiceNow.  For over a decade WS1 Access has made access to 3rd party SaaS easy and secure for Horizon users.  Within the virtual desktop it offers incredibly convenient consumption of SAML integrated applications through the WS1 portal or directly from any supporting Windows apps.  Outside the virtual desktop security can be fully addressed by WS1 Access and the rest of the Workspace ONE suite.  As with Horizon, we can use the Workspace ONE suite to enhance and secure access to these SAML integrated solutions. 




















In addition to enabling the adoption of cloud based service providers, there's the option to leverage solutions like Okta, Ping or Azure as identity providers.  By configuring these services as trusted IDPs we can leverage their authentication mechanisms for securing Horizon or any other Workspace ONE integrated application. It's a way to beef up the already impressive set of Workspace ONE security capabilities, another way of bringing cloud to the desktop. 










Finally, there are two very interesting ways in which Workspace ONE Intelligence facilitates cloud adoption.  First, through the Trust Network it can ingest threat events not only from Carbon Black, but also from other cloud based members of the Trust Network like Lookout.  Second, events collected in the Intelligence data lake can trigger actions through automation connectors.  Out of the box there are built-in connectors for WS1 UEM, Slack and ServiceNow; however, there's also an option to create custom connectors for any solution that offers a REST API.

These automation connectors represent an amazing opportunity to fine tune enhancement and support of Horizon environments from 3rd party cloud services.  Horizon admins are usually grizzled veterans when it comes to scripting within the desktops.   With Intelligence they can now turn their attention to scripting against SaaS, automating REST API calls to 3rd party cloud solutions that are becoming increasingly relevant.


The Horizon Control Plane Services 

Horizon Control Plane Services enable day 2 support for on-premises Horizon environments from the cloud.  Its Horizon Universal Console provides Horizon administration enterprise wide through a single web based URL while also providing global access to the Help Desk tool.  So a support team, wherever they are in the world, without the need for direct network access to Horizon environments, can look up real time session details for any Horizon user.  They'll also have the ability to troubleshoot through actions like killing processes or restarting VMs.  If necessary there's even an option to remote into a virtual desktop using Workspace ONE Assist for Horizon.  Finally, for more high level support and monitoring, "the big picture," there's the Cloud Monitoring Service (CMS).  CMS provides health, capacity, and usage metrics for any cloud connected Horizon environment.  (For example, if a certificate expires on a Horizon Connection Server, that challenge will trickle up to the Horizon Universal Console through CMS.)  The Universal Console, the Help Desk tool, Assist for Horizon and CMS all connect to on-premises environments through the Horizon Cloud Connector and its cloned worker node(s), which provide redundancy.

While CMS provides high level insight, Workspace ONE Intelligence for Horizon provides additional detail, granularity and customization in terms of monitoring and tracking the health of your on-premises Horizon environments.  This provides more in-depth support for day 2 operations while laying the groundwork for future Workspace ONE integration with Horizon.


Workspace ONE Intelligence For Horizon 


Workspace ONE Intelligence For Horizon was first announced during VMworld 2021 and as of July 28th, 2022 is generally available.  This rounds out the overall strategy of porting information from all VMware EUC components into Intelligence.  For someone who specializes in both Horizon and Workspace ONE this is welcome news.  Intelligence has been offering advanced reporting and automation for WS1 UEM for years now and it's great to see VMware extend this functionality to Horizon.




















This first iteration provides built-in dashboards, custom reports, and custom dashboards, expanding beyond the canned reporting capabilities of CMS.  We're talking boatloads of raw and relevant data regarding the health and performance of Horizon.  Just to give you a taste of how vast this dataset is, here are screenshots from Intelligence custom reports detailing visible attributes from Horizon PODs, Pools and VMs:


Even more impressive and overwhelming are the available, "Session Snapshot," attributes:


So yeah, there's a lot to work with here.  While this info is relevant for Horizon health and performance monitoring across the board, it certainly rounds out the already impressive model of supporting remote Horizon access with cloud based services.  When troubleshooting performance challenges with remote access it can provide critical network insight like display protocol packet loss and round trip latency, along with detailed information on virtual desktop resource usage.  You also get invaluable context regarding general POD health and performance.  Finally, you get the ability to slice and dice through this information with WS1 Intelligence's customizable dashboards and widgets, allowing you to easily zero in on and visualize relevant data.


The fact we get this info enterprise wide from a cloud based service is quite compelling and affords Horizon customers an opportunity to really up their game in terms of monitoring Horizon performance.  Further, as a cloud based service that leverages Horizon Cloud Connectors many customers already have in place, it's very accessible and easy to stand up.  (It took me less than 15 minutes to get it working for my lab.)  Finally, it comes standard with most of the new Horizon entitlements at no additional cost, so the price is right.  


A VMware Explore Session On Extending Cloud To The Virtual Desktop

Though not everyone is ready to move their VDI workloads to the cloud, all existing Horizon customers stand to benefit from the adoption of VMware hosted services.  These services, already available today, can be layered on top of existing Horizon environments easily and non-disruptively.  These are the main takeaways of the Explore session, "Can't Take Your Virtual Desktop To The Cloud? Bring Cloud To It."  It begins with an amazing introduction from Todd Dayton.  He elaborates on the benefits of cloud adoption, challenges with Windows workload migrations to the cloud, and the ideal compromise of shifting Horizon management to the cloud.  Then Cris Lau provides an impressive demo of the Horizon Universal Console, Help Desk tool, Assist for Horizon and Intelligence for Horizon.  Finally, I wrap things up reviewing ways we can enhance remote Horizon access with cloud based Workspace ONE and Carbon Black.



Also, one final anecdote.  Todd pointed out that even if you're confident your virtual desktop workloads will eventually get migrated to the cloud, there's absolutely nothing lost if you start off with these cloud based enhancements to your on-premises environment today.  It's not like you'd be burning any bridges or painting yourself into a corner.  In fact, arguably you'd be stacking the deck in your favor for a successful workload migration by already having cloud based management services configured, adopted and in place.  So there's really nothing to lose except the burden of managing on-premises resources.

Tuesday, May 3, 2022

Driving Horizon Automation With WS1 Intelligence, Postman, And The Horizon REST API

Last year I published, "Ruthless Automation With Workspace ONE Intelligence," an article highlighting the impressive automation capabilities of Intelligence.  Well, in this post I'm going to detail adaptations to WS1 Intelligence that provide even ruthlesser automation! Huzzah!  Using Postman webhooks and VMware's Unified Access Gateway you can amplify the sophistication and reach of Intelligence Custom Connectors.  While any solution supporting a REST API may benefit from either enhancement, a Horizon on-premises environment benefits from both, making it an ideal use case to demonstrate.  Traditionally Horizon has been out of reach from Intelligence automation but Postman webhooks and UAG's web reverse proxy capabilities combine to close the gap and enable the use of Custom Connectors for Horizon.










In the illustrated solution a REST API call is triggered by a defined event within the Intelligence data lake, as with any Custom Connector implementation.  However, the call made from Intelligence is to a Postman webhook Url rather than directly to the Horizon environment.  The webhook triggers an entire collection to run from the Postman cloud against the Horizon environment, an activity that's tracked and managed through a Postman Monitor.  This allows Intelligence to trigger much more sophisticated REST API calls that are chained together and build upon each other, shifting complex logic to the Postman cloud where it's executed and tracked for fractions of a penny.  Further, the reach of these calls from Intelligence is extended to an on-premises environment by using UAG as a web reverse proxy.  This is critical for providing access to the Horizon REST API from the Postman cloud.  The video below demonstrates both enhancements working in concert to integrate Intelligence and Horizon on-premises.


In the demo above actions against the Horizon environment are triggered manually using a test feature of the Custom Connector built for Horizon.  However, in the demo below actions against Horizon are triggered by Carbon Black malware detection on an endpoint device, as dictated by a configured Intelligence automation workflow.  


Again, both Postman webhooks and UAG's web reverse proxy capabilities have the potential to enable or enhance integration between Intelligence and any other REST API, not just Horizon's.  So a deeper understanding of these adaptations is useful beyond the Horizon use case and could be of interest to anyone looking to explore options for WS1 Intelligence Custom Connectors.

This post reviews in depth an integration between Horizon and Intelligence, starting with the Postman client and Horizon REST API. It explains the logic behind API calls executed from Postman, followed by a discussion on how UAG, acting as a reverse proxy, enables communication between the Postman cloud and on-premises Horizon environment. Further, it details the creation of webhooks in Postman as well as the configuration of Custom Connectors within Intelligence.  Finally, it wraps up with a few security considerations and final thoughts.


Getting Up To Speed On Postman

Creating the Custom Connector detailed in this post definitely requires familiarity with Postman and REST APIs.  Fortunately, the Postman website includes a Learning Center with incredibly helpful walk-throughs.  Within minutes of reviewing this site I got my hands dirty with what's essentially the, "hello world," of Postman requests, a GET against postman-echo.com/get.  This call leverages an open API server that doesn't require any kind of authentication, providing a very accessible introduction to REST API calls from the Postman client.
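For example, a first test against the echo service can be as small as this (the query parameter is just something for the service to reflect back):

    // Tests tab for GET https://postman-echo.com/get?foo=bar
    // The echo service simply reflects the request back as JSON.
    const echoed = pm.response.json();
    pm.test("echo returned my query param", function () {
        pm.expect(echoed.args.foo).to.eql("bar");
    });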














Along with the Learning Center itself, there's enablement available from Valentin Despa on YouTube. He has a 3 part video series called, "Introduction To APIs," providing an excellent overview of the how and the why of REST APIs and API clients like Postman. Then there's his 6 part, "Intro To Postman," series which I absolutely loved. After working through this series I found myself dangerous enough to start hacking together my desired solution.

The series teaches that accessing a REST API from Postman can be as simple as executing a request against a single URL.  However, for more complex operations you can chain multiple calls together in a collection.  This allows you to take output from one call, then distill and leverage it during the execution of subsequent calls.  Variables are passed from call to call, with JavaScript running within the Tests and Pre-request Scripts associated with each call.  In a nutshell, your collection is a series of calls executed in a specific order, with chunks of JavaScript potentially performed before and after each call.  Despa covers chaining in episode 5, "Chain Requests." 
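As a bare-bones illustration of that chaining pattern (the endpoint, field and variable names here are made up):

    // Tests tab of request A: pull a value out of A's response and stash it.
    const firstResponse = pm.response.json();
    pm.globals.set("userId", firstResponse.id);

    // Request B then references {{userId}} in its URL, e.g.
    //   GET https://api.example.com/users/{{userId}}/sessions
    // and can keep chaining from its own Tests or Pre-request Script.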

Finally, since Tests and Pre-request Scripts are written in JavaScript, well, there's a whole internet out there to help you work through that.  While I've executed Hello World in countless languages and have certainly gotten hot and heavy with VBScript and PowerShell, I had no prior experience with JavaScript.  However, through Google-fu I was introduced to foreach loops and if statements, along with some variable management, and that was enough for me to get cooking with JavaScript.  I think anyone with scripting experience could find themselves getting dangerous with Postman pretty quickly if they were motivated.


The Horizon REST API 

Info on the Horizon REST API is available directly from the Horizon Connection Server by pointing your browser to https://<Your-Connection-Server-FQDN>/rest/swagger-ui.html.  However, there's a must-see article available from VMware's Tech Zone, "Using The VMware Horizon Server REST API," written by Chris Halstead.  It provides an introduction to the Horizon REST API along with demonstrations on how to use its endpoints, "in combination to achieve your goals."  Along with tons of useful information, it includes a link to sample collections that can be directly imported into your Postman workspace.  The linked resource, available on VMware {code}, is called, "Postman Collection - Horizon REST API."  With Postman already open on my machine I clicked on the button, "Run In Postman," and voila, I had over 100 preconfigured calls to work with.











Dang!!!  Talk about making folks dangerous quick.  With a free Postman account you can import these samples and begin making calls against your local Horizon environment in a matter of minutes.  Just update a handful of collection level variables and you're off to the races.  These variables are required to successfully execute a call to the login endpoint on the Connection Server.  A successful call returns a token from the Horizon environment that is assigned to a global variable, which in turn is used by the rest of the sample calls for authorization.  While some sample calls require additional information/parameters, many are immediately available once you've executed the login call successfully, such as all the Monitor samples.  Other calls, arguably the more interesting ones, require additional info.  For instance, the disconnect endpoint requires an active session ID from the Horizon environment to target its action.  Chaining calls together to execute more complex actions like this is what we'll review next.


The Basic Logic Behind My Collections

All four collections associated with the Custom Connector detailed in this post follow the same basic logic, so we'll review just one of them in detail.  The collection, "Disconnect Horizon Session," is made up of 5 different calls to the Horizon REST API, each of which was copied from Halstead's samples.  The collection begins with a call to the login endpoint, and the token it returns authorizes the next 4 calls.  Based on an AD username fed to the collection - more to come on that a bit later - the second call retrieves a list of AD accounts from the Horizon environment, finds the matching AD username, then passes the associated user_id to the next call via a global variable.  This 3rd call retrieves a list of sessions from Horizon and finds the session associated with the targeted user_id.  The matching session yields a session id that's key to executing the final two calls to the send-message and disconnect endpoints.
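For reference, the token handling on that login call amounts to just a couple of lines in its Tests tab; a sketch (the exact global variable name used in Halstead's samples may differ):

    // Tests tab of the login call - stash the returned token so the
    // rest of the collection can reference it for authorization.
    const loginResponse = pm.response.json();
    pm.globals.set("access-token", loginResponse.access_token);
    // Subsequent calls then send "Authorization: Bearer {{access-token}}".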



The first 3 calls are the real workhorses of the collection, performing the critical task of locating the session ID to target.  All the logic happens in either the Pre-request Script or the Tests associated with each call.  For instance, here's the JavaScript used with the call to ad-users-or-groups:
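In rough form it looks something like this (a sketch using Postman's pm scripting API and the variable names described below, not a verbatim copy of my lab's script, and assuming the account name comes back in each object's "name" field):

    // Tests tab of the ad-users-or-groups call.
    const jsonData = pm.response.json();
    const targetUser = pm.globals.get("user");

    jsonData.forEach(function (account) {
        // Compare each returned AD account name with the target username.
        if (account.name === targetUser) {
            pm.globals.set("user_id", account.id);
        }
    });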










In a nutshell, we're taking the response of our call to the ad-users-or-groups endpoint and saving it to jsonData.  Then we're fetching the global variable, "user," and assigning that value to a local variable called targetUser.  Finally, using a foreach function, each object stored within jsonData is walked through while comparing its AD account name with the target username.  If there's a match, the ID associated with that matching AD account is copied to a global variable called user_id.  This user_id global variable is then consumed by the next call to the sessions endpoint.  The sessions endpoint call uses pretty much identical JavaScript logic.
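A rough sketch of that sessions script, under the same assumptions:

    // Tests tab of the sessions call - same pattern, different names.
    const jsonData = pm.response.json();
    const targetId = pm.globals.get("user_id");

    jsonData.forEach(function (session) {
        // Find the session belonging to the targeted user_id.
        if (session.user_id === targetId) {
            pm.globals.set("SessionHunt", session.id);
        }
    });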



Looks familiar, right? The names have changed, but the logic is identical. The response to the sessions endpoint call is copied to jsonData. Then each object returned is searched for a matching user_id. When a match is found, that object's session ID is copied to the global variable SessionHunt. And then the fun begins, with the session ID getting fed to the next call to the session-message endpoint.



And boom, you've got a message getting sent to your user's session.
 


Finally, there's the actual disconnect. Similar to the send-message call, the SessionHunt global variable is used to target the action.
 


And there you have it. Waka! Waka! 5 REST API calls, 2 foreach loops, two if statements, a handful of variables later and you've got yourself a sweet little collection for automating the task of messaging and disconnecting a specific user. An entire collection like this can be executed in sequence by right clicking on the collection and selecting the option, "Run Collection."



Now, to make these actions accessible from Workspace ONE Intelligence a first step is to make the Horizon REST API available to the outside world. While there are countless solutions for achieving this, I'm going to turn to one of my favorite and dearest pieces of technology, VMware's Unified Access Gateway.

Making Calls Remotely Against An On-Premises Horizon Environment Through UAG

While not the most popular of use cases, Unified Access Gateway (UAG) can act as a web reverse proxy.  It's been a feature for years now, originally developed to provide access to on-premises vIDM environments, but now available for any on-premises resource.  For my lab UAG plays the key role of making the Horizon REST API accessible to Postman, more specifically Postman Monitors that live in the cloud and are triggered by webhooks. 

Fortunately, the configuration as a reverse proxy is fairly straightforward.  The trickiest part is configuring the proxy pattern.   To narrow down the reverse proxy functionality to only the REST API destination URL I went with this for a proxy pattern:  (/rest(.*))











This prevents the reverse proxy from exposing the entirety of the Horizon Connection server to the outside world.  Instead, only access to the REST API is possible when hitting the UAG appliance with a URI path that's matched to the destination url for the Horizon REST API. 
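Treating that pattern as a plain regular expression (UAG proxy patterns are regex based), a quick sanity check of its reach looks like this; the paths are just illustrative:

    // Which URI paths the (/rest(.*)) pattern will and won't forward.
    const pattern = /^\/rest(.*)$/;
    console.log(pattern.test("/rest/login"));                  // true  - proxied to the Connection Server
    console.log(pattern.test("/rest/inventory/v1/sessions"));  // true  - proxied
    console.log(pattern.test("/admin"));                       // false - never leaves the internal network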










UAG's web reverse proxy capabilities provide Postman Monitors access to the Horizon Connection Server's REST API, allowing us to run collections against the Horizon environment whenever they're triggered by Intelligence.  With a collection configured in Postman and the reverse proxy in place, the next step is to create a webhook that triggers collection runs across the UAG appliance.


Creating the Webhook To Your Postman Collection  

While we can make calls directly to 3rd party REST APIs using a WS1 Intelligence Custom Connector, we can only make a single call at a time based on data already located within Intelligence.  There's no option to probe these 3rd party REST APIs, collect some input, then process it in additional follow up calls.   However, that's exactly what we need in order to do anything interesting with the Horizon REST API: chain multiple calls together.   For instance, with the collection I walked through earlier, we're executing 5 different calls, passing variables from the first 3 calls to the final 2.  To accommodate this challenge, we can leverage Postman webhooks to trigger a run of an entire collection stored in the Postman cloud.  

Creating a webhook generates a Url that can be called upon by a WS1 Custom Connector to trigger the collection associated with the webhook.  Further, we can pass variables from the Intelligence data lake to the collection in the process of making a call to the webhook.  In the case of the collection detailed earlier in this post, WS1 Intelligence passes an AD username to the collection through the webhook. While there's official documentation on webhooks in the Postman Learning Center, "Triggering Runs With Webhooks," I found this short and concise recorded presentation on YouTube, "Postman Webhooks," to be really helpful. (There's also a very interesting, though much longer, YouTube video on Postman webhooks called, "Automate All The Things With Webhooks.")

As you can see in the video, a webhook is created leveraging the Postman API and an endpoint called webhooks.  Making this call successfully requires a workspace ID, an API key for your Postman account, and a UID for the collection you want to trigger with the webhook.  Locating your workspace ID is easy enough, as you can see in the guidance provided here.  Generating an API key is fairly straightforward and is one of the first things covered in the official documentation for the Postman API.  Once you have this key generated and copied you can use it to obtain the required collection UID using the Postman API's collections endpoint.  To make a successful call against this endpoint you need to include the API key in the header, populating it as a value for the key, "x-api-key."



With the proper header key in place, executing the call generates a response with info about all your collections, including the UID for the specific collection you want to trigger with your webhook.  With the collection UID and workspace ID in hand you can create your webhook, populating the body of your request with the UID and adding the workspace ID as a parameter. (As with the call to the collections endpoint, you'll need to include the API key in the header.)  Successful execution will yield a webhook Url that can be called upon to trigger your collection.  In the example below, a webhook Url of https://newman-api.getpostman.com/run/13724510/69dbc0d3-0be9-4038-bf83-6c96da23dfe0 has been created and associated with the collection.
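Pieced together outside of Postman, that webhook-creation call looks roughly like the Node 18+ sketch below; the workspace ID, API key and collection UID are placeholders, and the request/response shape follows Postman's create-webhook documentation at the time:

    // Create a Postman webhook for a collection - a sketch, not gospel.
    (async () => {
        const res = await fetch(
            "https://api.getpostman.com/webhooks?workspace=<your-workspace-id>",
            {
                method: "POST",
                headers: {
                    "x-api-key": "<your-postman-api-key>",
                    "Content-Type": "application/json",
                },
                body: JSON.stringify({
                    webhook: {
                        name: "Disconnect Horizon Session",
                        collection: "<your-collection-uid>",
                    },
                }),
            }
        );
        const data = await res.json();
        // The generated Url, e.g. https://newman-api.getpostman.com/run/...,
        // should come back under webhook.webhookUrl.
        console.log(data.webhook.webhookUrl);
    })();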










When making a call to this webhook, behind the scenes you're leveraging Postman Monitors.  These provide the added bonus of a paper trail/tracking of collection execution.  For each webhook you create there'll be a corresponding Monitor within your Postman workspace.














When trying to figure out what went wrong with collection execution, or, more optimistically, what went right, you can drill into the events detailed under each monitor to get play by play action.  Below, you can see all the calls that were made as a result of the collection getting triggered by its associated webhook at 2:34pm.  

















You can also get more in-depth, play by play insight, by clicking on console log.   

















So, as if having the ability to trigger collections with a webhook Url wasn't enough, you also get the tracking and performance visibility normally afforded by Postman Monitors.  Next, we'll create a Custom Connector that makes a call to our Postman webhook, completing a circuit between the WS1 Intelligence cloud and the on-premises Horizon environment.  


Creating A Custom Connector To The Webhook













While WS1 Intelligence provides out-of-the-box integrations with UEM, ServiceNow and Slack, for years now it's offered the option of using Custom Connectors to integrate with any solution that supports a REST API. A Custom Connector can be set up to make calls to a Postman webhook by following the same guidance that's always applied to Custom Connector creation. Accordingly, useful guidance can be found in a post by Andreano Lanusse and Adam Hardy called, "Workspace ONE Custom Connector Samples." Along with providing incredibly useful samples, the article lays out the steps for creating your own Custom Connectors. The basic process is to craft an API call in Postman, save a successful result of the call, export the call as a JSON collection, then import the exported JSON into Intelligence while creating a Custom Connector. So I went to Postman and created a new collection called, "Disconnect Horizon Desktop - Execute webhook," placing in it a single call to the webhook Url that triggers the, "Disconnect Horizon Session," collection detailed earlier.






We can pass variables from WS1 Intelligence through a webhook.  In this example we're passing an AD username from Intelligence as a value for, "username2."  The triggered collection is designed to ingest this parameter and target its search accordingly.   Before exporting this collection, you need to execute this call successfully, then save the result as a sample.   
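For reference, on the collection side that ingestion can be a short Pre-request Script on the first call, something along these lines (assuming, per Postman's webhook documentation, that the JSON posted to the webhook is surfaced as a string in the previousRequest global):

    // Pre-request Script of the collection's first call - a sketch.
    // Parse the webhook payload and stash the username for the
    // ad-users-or-groups search later in the run.
    const payload = JSON.parse(pm.globals.get("previousRequest"));
    pm.globals.set("user", payload.username2);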










At this point, you're ready to export the collection by navigating to the collection, clicking on the 3 dots representing, "View more actions," and selecting export.
 








Go with the, "Collection v2.1," option and the exported JSON will download.  Next, go to the WS1 Intelligence console, navigate to Integrations --> Outbound Connectors, and select add custom connector.  For a base URL, you'll enter the base URL for your webhook, https://newman-api.getpostman.com.













Next, you're prompted to import your exported collection.  Consistently, I've run into challenges importing my own hand-made custom connectors at this point, with an error message of, "Invalid Postman JSON file: Content-Type header must be present."

















This pitfall is referenced in the sample custom connector guidance article, which cautions, "Note: Consider adding headers as Content-Type: application/json. If you do not add headers as the content type JSON, the APIs can default to XML and XML does not work with custom connections."  Accordingly, one way I've gotten around this challenge is by copying the header from the working samples and inserting it into my custom connectors.  So it's all about replacing the default header in these exported collections, from what's displayed here:


    "method": "POST",
    "header": [

                            ],
    "body": {


With this:

        
    "method": "POST", 

    "header": [ 

    

            "key": "Content-Type", 

            "name": "Content-Type",

            "value": "application/json",

            "type": "text" }

    ], 

    "body": {


Once I made this edit to my exported collections the imports completed successfully.  In the end, after following this entire process for each of the collections I'd created a webhook for, I had these actions available from my outbound connector within Intelligence:












While each action leverages a different collection, all actions traverse the same basic path:

Intelligence --> Postman webhook --> UAG --> Horizon REST API

To summarize, you have Intelligence triggering the Postman webhook based on reporting and automation configured within Intelligence.   The calls within the collection are executed from the Postman cloud, traversing the UAG web reverse proxy to the internal Horizon Connection Server.  Information about the environment is ascertained through a handful of initial calls and then leveraged by subsequent calls to target the automations within the internal Horizon environment.











Security Considerations

Exploring an option like this is destined to bring up security concerns. Below are a few I've run across as well as some relevant considerations.

Storing credentials in Postman:
Yes, scary indeed, particularly given that Horizon REST API credentials require root access for Horizon administration. However, any credentials stored in a Postman variable in your collections will be, "encrypted on the server-side before storage." Further, Postman has recently introduced support for MFA when you register using a Google based account. While both encryption and MFA take the edge off this concern, it should also be considered that the REST API credential account doesn't necessarily have any special AD rights.

Accepting Horizon Admin Credentials Through Public URL:
Having to open up an administrative REST API of your internal Connection Server to the external world is certainly a bit nerve-wracking. However, Professional and Enterprise Postman customers have the option to run their monitors with static IPs.  So, through firewall rules you can limit access to your UAG appliance to the public IPs used by Postman Monitors. That certainly reduces your risk. Also, while it hasn't been built yet, there are definitely Postman customers asking for the ability to leverage certificate auth for Postman Monitors. (I have seen client certificate authentication work through UAG for Postman requests from laptops, but it's not supported from Monitors yet.)

Triggering Administrative Actions Through Webhooks:
I'll forgive anyone for being nervous about raining down ruthless automation from the sky based on calls to webhooks. However, my understanding is that webhooks often rely on security by obscurity. The Postman webhook Urls are pretty long and ugly and I'm not sure how easily they could be ascertained. I've had monitors running for over a month now and I haven't seen a single unsolicited request. Further, these webhooks aren't exposing folks to any credentials or direct access to Horizon. Bad guys can make calls to them for cheap thrills or random maliciousness, but the chances of them getting any meaningful access to anything don't strike me as high. They're simply calls to perform administrative tasks, with impact that depends on what's been automated. In addition, all these calls are tracked through Postman Monitors, so you would have a paper trail. All that said, if push comes to shove, there definitely appear to be some do-it-yourself options for securing webhooks.

I'm Still Freaked Out:
Yeah, I get it. I think if security is a real sticking point for your team you could always develop your own full blown REST API. While developing your API would not be for the faint of heart, this post should provide a clear path forward to guide your development.

I certainly respect that there are security considerations and concerns to address before implementing these adaptations. However, I think the subject is much more debatable than it seems at first blush and for some folks the benefits could outweigh the risks. Is the juice worth the squeeze? Well, depending on your use case, the juice could be awfully sweet. (If everything about these Custom Connector adaptations sounds awesome to you, but security is a real gotcha, I'd love for you to leave some comments, particularly around what use cases you have in mind.)


Final Thoughts 

The integration between WS1 Intelligence and Horizon detailed in this article is complicated and a lot to take in.  In a cantankerous mood you might go so far as to say, "it's a hot mess."  But, you know what's often the case with hot messes?  They hot, and this solution is absolutely, utterly, freaking gorgeous!  Driving automation against a Horizon environment based on a data lake in the cloud?   Hot!!!  Further, there's potential for the adaptations leveraged to span far beyond the Horizon use case.  UAG can extend the reach of Intelligence to any REST API within an on-premises environment.  Postman webhooks can increase the sophistication of REST API calls made to any 3rd party solution.  Combined together these adaptations significantly expand the reach and efficacy of Intelligence Custom Connectors.



Finally, as complex as the Horizon integration is, the overall objective is very much in line with the trajectory of VMware's EUC stack. "If you can't bring the virtual desktop to the cloud, bring cloud to the desktop," seems to be the battle cry for the entire Horizon suite, with more and more functionality getting shifted to the cloud even if workloads must remain on-premises.  Past success with SaaS based EUC solutions like UEM, Access and Intelligence not only enhances Horizon security, but also represents a shift to cloud management VMware is striving to emulate for the Horizon stack.  For that matter, all of VMware seems to be charging in that direction, including vSphere itself.  In that light, the solution detailed in this article seems more like acceleration toward a very probable destiny rather than some off the wall innovation. While this seemingly destined future isn't here today, in the meantime, if you've got the will for this functionality there's a way.

Thursday, December 9, 2021

The Deprecation Of Basic Auth For Exchange And What It Means For VMware's Workspace ONE Customers

After several delays due to Covid-19, Microsoft has finally fixed a date for prohibiting Basic Auth in Exchange Online.  As of October 1st, 2022, Microsoft will begin disabling Basic Auth in all tenants, with short-term temporary disruptions for some customers beginning in early 2022.  This news is initially a bit unnerving given that historically a lot of AirWatch/Workspace ONE customers have leveraged Basic Auth within their ActiveSync profiles.  However, it is limited to Exchange Online customers, so on-premises Exchange customers, at least for now, need not worry.  Further, for existing Exchange Online WS1 customers leveraging Basic Auth there's a clear path forward through the adoption of Modern Authentication or other OAuth based alternatives.  This post begins with a quick overview of the ActiveSync Basic Auth deprecation and why it's relevant, then details the choice between Microsoft's Modern Auth and other OAuth based solutions for addressing the challenge.  Of all these OAuth based alternatives Workspace ONE Access is certainly my favorite, so I'll detail the magic that happens when you federate Azure AD with Workspace ONE Access and then introduce certificate based authentication with VMware's proprietary Mobile SSO solution.

A MEM Misnomer: Rumors Of ActiveSync's Death Are Greatly Exaggerated


About a year and a half ago I started hearing grumblings of impending doom for WS1 customers and Mobile Email Management (MEM) in general.  The rumor went something like this: ActiveSync is getting deprecated, which will lead to chaos in MEM everywhere, possibly triggering World War 3.  Making it somewhat believable was that ActiveSync hasn't been worked on for years now, with the latest version, 16.1, released in 2016.  Coupled with Microsoft's hyper focus on Graph APIs, in a bad mood, with your eyes squinted, it seemed possible ActiveSync could be going away.  However, the truth was more nuanced.  In August of 2020 I reached out to Martin Kniffin for guidance and he didn't fail to impress, providing me and a handful of colleagues excellent context.  First and foremost he pointed out that it's not ActiveSync that's getting deprecated, but Basic Auth within ActiveSync. (More specifically, it's Basic Auth that's being deprecated almost across the board, not just within ActiveSync.)  When Basic Auth is used with Exchange Online you have the mail client storing a user's typed-in credentials and then passing those credentials to Exchange, which in turn proxies those credentials to Azure AD.  These stored credentials on the endpoint device are constantly replayed against Exchange Online throughout the course of email access.
 
Basic Authentication - Image taken from, "Disable Basic Authentication In Exchange Online"
So it's not ActiveSync that's dying off but rather this very rudimentary Basic Auth model that's going away, initially only in Exchange Online environments, not on-premises.  This deprecation has been in the works for a while.  Plans to disable Basic Auth in Exchange Online were first announced in September of 2019 with a target date of October 2020.  However, in response to Covid-19 it was postponed till the second half of 2021.  Then in February of 2021 Microsoft indicated they would postpone until further notice.  At the same time they announced plans to begin disabling Basic Auth for tenants not currently using it.  Now, finally, in late September of 2021, it was announced that Basic Auth would be disabled on all tenants starting October 2022, with more formal guidance coming out in early November of this year.  So, this hasn't exactly been a meteor the size of Texas hurling towards earth from out of nowhere.  More like The Blob, a really, really, really slow moving blob that, nonetheless, needs to be addressed.


While ActiveSync payloads with Basic Auth have been wildly popular amongst Workspace ONE customers there's a clear path forward: leverage the OAuth ActiveSync payload setting for use with Microsoft's Modern Auth or a 3rd party federated IDP.    

Leveraging Microsoft Modern Auth With The ActiveSync OAuth Payload Setting

If your Office 365 tenant is purely leveraging Azure for identity, with no federation, both Basic Auth and Modern Auth are currently options for email access.   Modern Authentication is a Microsoft solution, "based on the Active Directory Authentication Library (ADAL) and Oauth 2.0."  With Modern Auth users authenticate with their AD credentials to Azure and then are issued a token granting access to Office 365.   So instead of having credentials stored within a mail client and proxied through Exchange Online, users are redirected to Azure at login.microsoftonline.com and upon successful authentication are issued a token that grants access to email, as well as the entire Office 365 suite.    
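At the header level the difference boils down to what the client presents on each request; a quick illustration (the account and token below are made up):

    // Basic Auth: the mail client stores the user's credentials and replays
    // them in the Authorization header of every ActiveSync request.
    const basic = "Basic " + Buffer.from("jdoe@example.com:Password123").toString("base64");

    // Modern Auth / OAuth: the client instead presents a short-lived token
    // issued by Azure AD after the redirect to login.microsoftonline.com.
    const accessToken = "<token issued by Azure AD>"; // placeholder
    const modern = "Bearer " + accessToken;

    console.log(basic, modern);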

Modern Authentication Workflow - Image Borrowed From Shehan Perera's Tech Blog

In the diagram above you have a representation of Modern Auth in the context of a hybrid identity model that merges on-premises AD environments with an Azure tenant, allowing users to leverage their on-premises AD credentials when authenticating to Azure.  It starts with an on-premises Azure AD Connect instance that syncs accounts from on-premises with Azure.  Then for authentication there's what's referred to as managed authentication, with a choice between password hash synchronization (PHS) and pass-through authentication (PTA).  With PHS, hashes of your AD passwords are synchronized from your on-premises AD environment to the cloud.

With PTA, instead of having hashed passwords stored in the cloud, validation occurs directly against your on-premises AD environment via an on-premises agent.


Either model is supported with Modern Auth and the ActiveSync, "Use OAuth," payload setting. It's just a matter of preference for the organization. With both models you're extending your on-premises authentication to Azure and either one can work with the OAuth payload. As far as the ActiveSync payload settings in WS1 go, all you have to do is check the box for, "Use OAuth," and your email users will start getting prompted for Modern Auth. The, "OAuth Sign In URL," and, "OAuth Token URL," fields are not mandatory and can be left blank.  When you leave these fields blank an autodiscovery process kicks in, one that first redirects to login.microsoftonline.com.

The redirect to login.microsoftonline.com creates a slightly different experience from the traditional Basic Auth workflow, but it's not insurmountable.  Below is a recording that compares and contrasts the two experiences with the built-in iOS mail client.

Also, there's certainly support for Modern Auth from most other mail clients as well, such as Boxer or Outlook. Here's what the process looks like for Boxer: 

 

Leveraging Workspace ONE Access With The ActiveSync OAuth Payload Setting

Along with Modern Auth, this, "Use OAuth," feature supports authentication against Workspace ONE Access, as well as various other federated IDPs such as ADFS, Okta or Ping.  When it's time to authenticate the user first hits login.microsoftonline.com, then based on their email address gets redirected to a federated IDP.  In this example, AD authentication occurs through an instance of WS1 Access that's been federated with Azure.  It's very similar to Microsoft's Modern Auth model, except there's a redirection to a WS1 Access tenant where credentials are manually entered.  Here's a demonstration:

For a more ideal experience you can accommodate authentication with Mobile SSO for iOS, an incredibly compelling proprietary VMware solution that combines WS1 UEM with WS1 Access to provide SSO for mobile apps.

First and foremost, VMware's Mobile SSO solution provides an incredibly convenient certificate based single sign-on experience.  It also lays the groundwork for the adoption of device compliance policies that allow us to factor in device enrollment and device posture while providing contextual authentication through conditional access policies.  Further, this solution extends device compliance security across the entire Office 365 suite, not just email access.  Even more exciting, since Mobile SSO for iOS or Android works for pretty much any mobile app that supports SAML, adopting this solution for Office 365 puts into place a capability for securing mobile SaaS adoption across the board.  Combine this with VMware's certificate based authentication for modern management and you have a complete solution for layering zero trust security on top of SaaS adoption across most conceivable device types.











One caveat to be aware of is that federation with an IDP like WS1 Access or other 3rd party solution is an all-or-nothing commitment.   You can't just have a subset of users handled by the federated IDP.  All of them will get initially redirected to the 3rd party IDP.    So before actually federating with another IDP you need to make sure that all your Office 365 users can be properly handled by it.  Further, federation will break Basic Auth, so you'd need to prepare accordingly.  

 

SEG For Office 365 Access

Many folks have quite a visceral response to the deployment model I'm about to mention. There are indeed some organizations that leverage SEG for Office 365 access.  I know, I know. While I can't thoroughly explain or exhaustively defend the design decision, to my understanding there are some use cases where this is a valid and legitimate option.  More customers than you'd imagine have needed it.

I only bring it up here in the context of this ActiveSync discussion because with this model there is some authentication against Exchange Online, so it's possible a subset of folks with this type of deployment could be using Basic Auth.  Fortunately, these users can migrate to OAuth access as well.  Here's a sample from my own lab:














The Only Way Through Is Through - Tick Tock, Tick Tock

In a nutshell, the deprecation of Basic Auth is forcing customers to fall back to Modern Auth/OAuth, or, more accurately, fall forward to Modern Auth/OAuth.  As easy as it's been to just leverage Basic Auth, we really should have already been marching away from it anyway, regardless of deprecation plans.  While I don't normally feel the need to defend a monster corporation like Microsoft, technically, it sounds like they're just forcing customers to do what they ought to do.  Regardless, Workspace ONE/AirWatch has helped customers navigate their mobile email management needs for over 10 years and is well positioned to assist with this challenge.

There's no doubt in my mind that some VMware customers may still have some planning to do.  As of the time of this writing, early December 2021, customers have about 9 and a half months to act.  Fortunately, Basic Auth is not dead yet, though the writing is certainly on the wall.