
Tuesday, May 3, 2022

Driving Horizon Automation With WS1 Intelligence, Postman, And The Horizon REST API

Last year I published, "Ruthless Automation With Workspace ONE Intelligence," an article highlighting the impressive automation capabilities of Intelligence.  Well, in this post I'm going to detail adaptations to WS1 Intelligence that provide even ruthlesser automation! Huzzah!  Using Postman webhooks and VMware's Unified Access Gateway you can amplify the sophistication and reach of Intelligence Custom Connectors.  While any solution supporting a REST API may benefit from either enhancement, a Horizon on-premises environment benefits from both, making it an ideal use case to demonstrate.  Traditionally Horizon has been out of reach from Intelligence automation but Postman webhooks and UAG's web reverse proxy capabilities combine to close the gap and enable the use of Custom Connectors for Horizon.










In the illustrated solution a REST API call is triggered by a defined event within the Intelligence data lake, as with any Custom Connector implementation.  However, the call made from Intelligence is to a Postman webhook URL rather than directly to the Horizon environment.  The webhook triggers an entire collection to run from the Postman cloud against the Horizon environment, an activity that's tracked and managed through a Postman Monitor.  This allows Intelligence to trigger much more sophisticated REST API calls that are chained together and build upon each other, shifting complex logic to the Postman cloud where it's executed and tracked for fractions of a penny.  Further, the reach of these calls from Intelligence is extended to an on-premises environment by using UAG as a web reverse proxy.  This is critical for providing access to the Horizon REST API from the Postman cloud.  The video below demonstrates both enhancements working in concert to integrate Intelligence and Horizon on-premises.


In the demo above actions against the Horizon environment are triggered manually using a test feature of the Custom Connector built for Horizon.  However, in the demo below actions against Horizon are triggered by Carbon Black malware detection on an endpoint device, as dictated by a configured Intelligence automation workflow.  


Again, both Postman webhooks and UAG's web reverse proxy capabilities have the potential to enable or enhance integration between Intelligence and any other REST API, not just Horizon's.  So a deeper understanding of these adaptations is useful beyond the Horizon use case and could be of interest to anyone looking to explore options for WS1 Intelligence Custom Connectors.

This post reviews in depth an integration between Horizon and Intelligence, starting with the Postman client and Horizon REST API. It explains the logic behind API calls executed from Postman, followed by a discussion on how UAG, acting as a reverse proxy, enables communication between the Postman cloud and on-premises Horizon environment. Further, it details the creation of webhooks in Postman as well as the configuration of Custom Connectors within Intelligence.  Finally, it wraps up with a few security considerations and final thoughts.


Getting Up To Speed On Postman

Creating the Custom Connector detailed in this post definitely requires familiarity with Postman and REST APIs.  Fortunately, the Postman website includes a Learning Center with incredibly helpful walk-throughs.  Within minutes of reviewing this site I got my hands dirty with essentially the, "hello world," of Postman requests: a GET to postman-echo.com/get.  This call leverages an open API server that doesn't require any kind of authentication, providing a very accessible introduction to REST API calls from the Postman client.
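For illustration, here's a minimal sketch of what that first request might look like once you add a couple of assertions to its Tests tab.  The query parameter and test names are just placeholders, not anything prescribed by the Learning Center.

    // Tests tab for: GET https://postman-echo.com/get?foo=bar
    // postman-echo simply echoes the request back, so we can confirm the round trip.
    pm.test("Status code is 200", function () {
        pm.response.to.have.status(200);
    });

    pm.test("Echo returned the query parameter we sent", function () {
        var jsonData = pm.response.json();
        pm.expect(jsonData.args.foo).to.eql("bar");
    });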














Along with the Learning Center itself, there's enablement available from Valentin Despa on YouTube. He has a 3-part video series called, "Introduction To APIs," providing an excellent overview of the how and the why of REST APIs and API clients like Postman. Then there's his 6-part, "Intro To Postman," series which I absolutely loved. After working through this series I found myself dangerous enough to start hacking together my desired solution.

The series teaches that accessing a REST API from Postman can be as simple as executing a request against a single URL.  However, for more complex operations you can chain multiple calls together in a collection.  This allows you to take output from one call, then distill and leverage it during the execution of subsequent calls.  Variables are passed from call to call, with JavaScript running within the Tests and Pre-request Scripts associated with each call.  In a nutshell, your collection is a series of calls executed in a specific order, with chunks of JavaScript potentially performed before and after each call.  Despa covers chaining in episode 5, "Chain Requests." 
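As a rough sketch of that handoff pattern, using a made-up value called machineId purely for illustration, chaining two calls looks something like this:

    // Tests tab of call #1: stash a value from the response for later calls.
    var jsonData = pm.response.json();
    pm.globals.set("machineId", jsonData.id);

    // Pre-request Script of call #2: read that value back before the request fires.
    var machineId = pm.globals.get("machineId");
    console.log("Targeting machine " + machineId);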

Finally, since the Tests and Pre-request Scripts are based on JavaScript, well, there's a whole internet out there to help you work through that.   While I've executed Hello World in countless languages and have certainly gotten hot and heavy with VBScript and PowerShell, I had no prior experience with JavaScript.  However, through Google-fu I was introduced to forEach loops and if statements, along with some variable management, and that was enough for me to get cooking with JavaScript.  I think anyone with scripting experience could find themselves getting dangerous with Postman pretty quickly if they were motivated.


The Horizon REST API 

Info on the Horizon REST API is available directly from the Horizon Connection Server by pointing your browser to https://<Your-Connection-Server-FQDN>/rest/swagger-ui.html.  However, there's a must-see article available from VMware's Tech Zone, "Using The VMware Horizon Server REST API," written by Chris Halstead.  It provides an introduction to the Horizon REST API along with demonstrations on how to use its endpoints, "in combination to achieve your goals."  Along with tons of useful information, it includes a link to sample collections that can be directly imported into your Postman workspace.  The linked resource, available on VMware {code}, is called, "Postman Collection - Horizon REST API."  With Postman already open on my machine I clicked on the button, "Run In Postman," and voila, I had over 100 preconfigured calls to work with.











Dang!!!  Talk about making folks dangerous quick.  With a free Postman account you can import these samples and begin making calls against your local Horizon environment in a matter of minutes.  Just update a handful of collection-level variables and you're off to the races.  These variables are required to successfully execute a call to the login endpoint on the Connection Server.  A successful call returns a token from the Horizon environment that is assigned to a global variable which in turn is used by the rest of the sample calls for authorization.  While some sample calls require additional information/parameters, many are immediately available once you've executed the login call successfully, such as all the Monitor samples.  Other calls, arguably the more interesting ones, require additional info. For instance, the disconnect endpoint requires an active session ID from the Horizon environment to target its action.  Chaining calls together to execute more complex actions like this is what we'll review next.
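As a hedged sketch of that login step, the token capture looks roughly like the snippet below; the exact field and variable names in Halstead's samples may differ, so treat this as illustrative rather than a copy of his script.

    // Tests tab of the login call (POST {{url}}/rest/login).
    // Grab the access token from the response and store it globally so the
    // remaining calls can present it for authorization.
    var jsonData = pm.response.json();
    pm.globals.set("access-token", "Bearer " + jsonData.access_token);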


The Basic Logic Behind My Collections

All four collections associated with the Custom Connector detailed in this post follow the same basic logic, so we'll review just one of them in detail.  The collection, "Disconnect Horizon Session," is made up of 5 different calls to the Horizon REST API, each of which was copied from Halstead's samples.  The collection begins with a call to the login endpoint, whose returned token authorizes the next 4 calls.  Based on an AD username fed to the collection - more to come on that a bit later - the second call retrieves a list of AD accounts from the Horizon environment, finds the matching AD username, then passes the associated user_id to the next call via a global variable.  This 3rd call retrieves a list of sessions from Horizon and finds the session associated with the targeted user_id.  The matching session yields a session ID that's key to executing the final two calls to the send-message and disconnect endpoints.



The first 3 calls are the real workhorses of the collection, performing the critical task of locating the session ID to target.  All the logic happens in either the Pre-request Script or the Tests associated with each call.  For instance, here's the JavaScript used with the call to ad-users-or-groups:










In a nutshell, we're taking the response of our call to the ad-users-or-groups endpoint and saving it to jsonData. Then we're fetching the global variable, "user," and assigning its value to a local variable called targetUser. Finally, a forEach loop walks through each object stored within jsonData, comparing its AD account name with the target username. If there's a match, the ID associated with that matching AD account is copied to a global variable called user_id. This user_id global variable is then consumed by the next call to the sessions endpoint. The sessions endpoint call uses pretty much identical JavaScript logic.
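Since the screenshot doesn't translate well to text, here's a sketch reconstructing that logic from the description above; property names like name and id are illustrative and may not match the Horizon response exactly.

    // Tests tab of the ad-users-or-groups call.
    var jsonData = pm.response.json();
    var targetUser = pm.globals.get("user");

    jsonData.forEach(function (adAccount) {
        // When the AD account name matches the username passed in from Intelligence,
        // hand its ID to the next call via a global variable.
        if (adAccount.name === targetUser) {
            pm.globals.set("user_id", adAccount.id);
        }
    });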



Looks familiar, right? The names have changed, but the logic is identical. The response to the sessions endpoint call is copied to jsonData. Then each object returned is searched for a matching user_id. When a match is found, that object's session ID is copied to the global variable SessionHunt. And then the fun begins, with the session ID getting fed to the next call to the send-message endpoint.
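Reconstructed the same way from the description, the sessions call's script is essentially:

    // Tests tab of the sessions call: same pattern, different names.
    var jsonData = pm.response.json();
    var targetId = pm.globals.get("user_id");

    jsonData.forEach(function (session) {
        // Match the session owned by our target user and remember its ID.
        if (session.user_id === targetId) {
            pm.globals.set("SessionHunt", session.id);
        }
    });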



And boom, you've got a message getting sent to your user's session.
 


Finally, there's the actual disconnect. Similar to the send-message call, the SessionHunt global variable is used to target the action.
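If you want to see how SessionHunt gets consumed, one way the final calls can pick it up is a small Pre-request Script that builds the request body.  The endpoint paths and field names below are my recollection of the swagger docs rather than a copy of Halstead's samples, so verify them against your own Connection Server.

    // Pre-request Script sketch for the send-message call
    // (POST {{url}}/rest/inventory/v1/sessions/action/send-message).
    var sessionId = pm.globals.get("SessionHunt");
    pm.variables.set("messageBody", JSON.stringify({
        message: "A security issue was detected. Your session will be disconnected.",
        message_type: "WARNING",
        session_ids: [sessionId]
    }));
    // The disconnect call (POST .../sessions/action/disconnect) just takes an
    // array of session IDs, so its body can simply be ["{{SessionHunt}}"].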
 


And there you have it. Waka! Waka! Five REST API calls, two forEach loops, two if statements, and a handful of variables later, you've got yourself a sweet little collection for automating the task of messaging and disconnecting a specific user. An entire collection like this can be executed in sequence by right-clicking on the collection and selecting the option, "Run Collection."



Now, to make these actions accessible from Workspace ONE Intelligence a first step is to make the Horizon REST API available to the outside world. While there are countless solutions for achieving this, I'm going to turn to one of my favorite and dearest pieces of technology, VMware's Unified Access Gateway.

Making Calls Remotely Against An On-Premises Horizon Environment Through UAG

While not the most popular of use cases, Unified Access Gateway (UAG) can act as a web reverse proxy.  It's been a feature for years now, originally developed to provide access to on-premises vIDM environments, but now available for any on-premises resource.  For my lab UAG plays the key role of making the Horizon REST API accessible to Postman, more specifically Postman Monitors that live in the cloud and are triggered by webhooks. 

Fortunately, the configuration as a reverse proxy is fairly straightforward.  The trickiest part is configuring the proxy pattern.   To narrow down the reverse proxy functionality to only the REST API destination URL I went with this for a proxy pattern:  (/rest(.*))











This prevents the reverse proxy from exposing the entirety of the Horizon Connection Server to the outside world.  Instead, only access to the REST API is possible when hitting the UAG appliance with a URI path that matches the destination URL for the Horizon REST API.
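Once the reverse proxy is in place, a quick sanity check from the Postman client confirms that only the /rest path is reachable.  The UAG FQDN below is a placeholder for your own appliance, and the exact response codes will depend on your configuration.

    // Matches the (/rest(.*)) proxy pattern, so UAG forwards it to the
    // Connection Server's REST API docs referenced earlier in this post.
    pm.sendRequest("https://uag.example.com/rest/swagger-ui.html", function (err, res) {
        console.log(err || res.code);
    });

    // Doesn't match the pattern, so UAG should refuse to proxy it rather than
    // expose the rest of the Connection Server.
    pm.sendRequest("https://uag.example.com/admin", function (err, res) {
        console.log(err || res.code);
    });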










UAG's web reverse proxy capabilities provide Postman Monitors access to the Horizon Connection Server's REST API, allowing us to run collections against the Horizon environment whenever they're triggered by Intelligence.  With a collection configured in Postman and a reverse proxy solution in place, the next step is to create a webhook that triggers collection runs across the UAG appliance.


Creating the Webhook To Your Postman Collection  

While we can make calls directly to 3rd party REST APIs using a WS1 Intelligence Custom Connector, we can only make a single call at a time based on data already located within Intelligence.  There's no option to probe these 3rd party REST APIs, collect some input, then process it in additional follow-up calls.   However, that's exactly what we need in order to do anything interesting with the Horizon REST API: chain multiple calls together.   For instance, with the collection I walked through earlier, we're executing 5 different calls, passing variables from the first 3 calls to the final 2.  To address this challenge, we can leverage Postman webhooks to trigger a run of an entire collection stored in the Postman cloud.

Creating a webhook generates a URL that can be called upon by a WS1 Custom Connector to trigger the collection associated with the webhook.  Further, we can pass variables from the Intelligence data lake to the collection in the process of making a call to the webhook.  In the case of the collection detailed earlier in this post, WS1 Intelligence passes an AD username to the collection through the webhook. While there's official documentation on webhooks in the Postman Learning Center, "Triggering Runs With Webhooks,"  I found this short and concise recorded presentation on YouTube, "Postman Webhooks," to be really helpful. (There's also a very interesting, though much longer, YouTube video on Postman webhooks called, "Automate All The Things With Webhooks.")

As you can see in the video a webhook is created leveraging the Postman API and an endpoint called webhooks.   Making this call successfully requires a workspace ID, an API key for your Postman account, and a UID for the collection you want to trigger with the webhook.   Locating your workspace ID is easy enough, as you can see in the guidance provided here.  Generating an API key is fairly straightforward and is one of the first things covered in the official documentation for the Postman API.   Once you have this key generated and copied you can use it to obtain the required collection UID using the Postman API's collections endpoint.  To make a successful call against this endpoint you need to include the API key in the header, populating it as a value for the key, "x-api-key."
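Written as a throwaway Postman script, that lookup might look like the following; the API key value is obviously a placeholder.

    // List your collections via the Postman API to find the UID of the
    // collection the webhook should trigger. Only the x-api-key header is needed.
    pm.sendRequest({
        url: "https://api.getpostman.com/collections",
        method: "GET",
        header: { "x-api-key": "PMAK-your-api-key-here" }
    }, function (err, res) {
        console.log(err || res.json().collections);
    });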



With the proper header in place, executing the call generates a response with info about all your collections, including the UID for the specific collection you want to trigger with your webhook.   With the collection UID and workspace ID in hand you can create your webhook, populating the body of your request with the UID and adding the workspace ID as a parameter. (As with the call to the collections endpoint you'll need to include the API key in the header.)    Successful execution will yield a webhook URL that can be called upon to trigger your collection.  In the example below, a webhook URL of, https://newman-api.getpostman.com/run/13724510/69dbc0d3-0be9-4038-bf83-6c96da23dfe0, has been created and associated with the collection.
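Here's a hedged sketch of that webhook creation call in the same style; the endpoint and body shape follow the "Triggering Runs With Webhooks" documentation, while the key, workspace ID, and collection UID values are placeholders.

    // Create a webhook tied to a specific collection within a specific workspace.
    pm.sendRequest({
        url: "https://api.getpostman.com/webhooks?workspace=your-workspace-id",
        method: "POST",
        header: {
            "x-api-key": "PMAK-your-api-key-here",
            "Content-Type": "application/json"
        },
        body: {
            mode: "raw",
            raw: JSON.stringify({
                webhook: {
                    name: "Disconnect Horizon Session",
                    collection: "your-collection-uid"
                }
            })
        }
    }, function (err, res) {
        // The response includes the webhook URL you'll hand to Intelligence.
        console.log(err || res.json());
    });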










When you make a call to this webhook, behind the scenes you're leveraging Postman Monitors.  These provide you the added bonus of a paper trail/tracking of collection execution.  For each webhook you create there'll be a corresponding Monitor within your Postman workspace.














When trying to figure out what went wrong with collection execution, or, more optimistically, what went right, you can drill into the events detailed under each monitor to get play-by-play action.  Below, you can see all the calls that were made as a result of the collection getting triggered by its associated webhook at 2:34pm.

















You can also get more in-depth, play-by-play insight by clicking on the console log.

















So, as if having the ability to trigger collections with a webhook URL wasn't enough, you also get the tracking and performance visibility normally afforded by Postman Monitors.  Next, we'll create a Custom Connector that makes a call to our Postman webhook, completing a circuit between the WS1 Intelligence cloud and the on-premises Horizon environment.


Creating A Custom Connector To The Webhook













While WS1 Intelligence provides out-of-the-box integrations with UEM, ServiceNow and Slack, for years now it's offered the option of using Custom Connectors to integrate with any solution that supports a REST API. A Custom Connector can be set up to make calls to a Postman webhook by following the same guidance that's always applied to Custom Connector creation. Accordingly, useful guidance can be found in a post by Andreano Lanusse and Adam Hardy called, "Workspace ONE Custom Connector Samples." Along with providing incredibly useful samples the article lays out the steps for creating your own Custom Connectors. The basic process is to craft an API call in Postman, save a successful result of the call, export the call as a JSON collection, then import the exported JSON into Intelligence while creating a Custom Connector. With that in mind, I went to Postman and created a new collection called, "Disconnect Horizon Desktop - Execute webhook," placing in it a single call to the webhook URL that triggers the, "Disconnect Horizon Session," collection detailed earlier.






We can pass variables from WS1 Intelligence through a webhook.  In this example we're passing an AD username from Intelligence as a value for, "username2."  The triggered collection is designed to ingest this parameter and target its search accordingly.   Before exporting this collection, you need to execute this call successfully, then save the result as a sample.   
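One way the triggered collection can pick that parameter up, assuming Postman's documented behavior of exposing the webhook payload as the previousRequest global, is a short Pre-request Script on the collection's first call:

    // The JSON body posted to the webhook arrives as the "previousRequest" global.
    // Lift username2 out of it and store it as the "user" global that the
    // ad-users-or-groups script searches against.
    var payload = JSON.parse(pm.globals.get("previousRequest"));
    pm.globals.set("user", payload.username2);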










At this point, you're ready to export the collection by navigating to the collection, clicking on the 3 dots representing, "View more actions," and selecting export.
 








Go with the, "Collection v2.1," option and the exported JSON will download.  Next, go to the WS1 Intelligence console, navigate to Integrations --> Outbound Connectors, and select Add Custom Connector.  For a base URL, you'll enter the base URL for your webhook, https://newman-api.getpostman.com.













Next, you're prompted to import your exported collection.   I've consistently run into challenges importing my own hand-made custom connectors at this point, hitting an error message of, "Invalid Postman JSON file: Content-Type header must be present."

















This pitfall is referenced in the sample custom connector guidance article, which cautions, "Note: Consider adding headers as Content-Type: application/json. If you do not add headers as the content type JSON, the APIs can default to XML and XML does not work with custom connections."  Accordingly, one way I've gotten around this challenge is by copying the header from the working samples and inserting it into my custom connectors.   So it's all about replacing the default header in these exported collections, changing it from what's displayed here:


    "method": "POST",
    "header": [

                            ],
    "body": {


With this:

        
    "method": "POST", 

    "header": [ 

    

            "key": "Content-Type", 

            "name": "Content-Type",

            "value": "application/json",

            "type": "text" }

    ], 

    "body": {


Once I made this edit to my exported collections the imports completed successfully.  In the end, after following this entire process for each collection I'd created a webhook for, I had these actions available from my outbound connector within Intelligence:












While each action leverages a different collection, all actions traverse the same basic path:

Intelligence --> Postman webhook --> UAG --> Horizon REST API

To summarize, you have Intelligence triggering the Postman webhook based on reporting and automation configured within Intelligence.   The calls within the collection are executed from the Postman cloud, traversing the UAG web reverse proxy to the internal Horizon Connection Server.  Information about the environment is ascertained through a handful of initial calls and then leveraged by subsequent calls to target the automations within the internal Horizon environment.











Security Considerations

Exploring an option like this is destined to bring up security concerns. Below are a few I've run across as well as some relevant considerations.

Storing credentials in Postman:
Yes, scary indeed, particularly given that Horizon REST API credentials require root access for Horizon administration. However, any credentials stored in a Postman variable in your collections will be, "encrypted on the server-side before storage." Further, Postman has recently introduced support for MFA when you register using a Google-based account. While both encryption and MFA take the edge off this concern, it should also be considered that the REST API credential account doesn't necessarily have any special AD rights.

Accepting Horizon Admin Credentials Through Public URL:
Having to open up an administrative REST API of your internal Connection Server to the external world is certainly a bit nerve-wracking. However, Professional and Enterprise Postman customers have the option to run their monitors with static IPs.  So, through firewall rules you can limit access to your UAG appliance to the public IPs used by Postman Monitors. That certainly reduces your risk. Also, while it hasn't been built yet, there are definitely Postman customers asking for the ability to leverage certificate auth for Postman Monitors. (I have seen client certificate authentication work through UAG for Postman requests from laptops, but it's not supported from Monitors yet.)

Triggering Administrative Actions Through Webhooks:
I'll forgive anyone for being nervous about raining down ruthless automation from the sky based on calls to webhooks. However, my understanding is that webhooks commonly rely on security through obscurity. The Postman webhook URLs are pretty long and ugly and I'm not sure how easily they could be ascertained. I've had monitors running for over a month now and I haven't seen a single unsolicited request. Further, these webhooks aren't exposing folks to any credentials or direct access to Horizon. Bad guys can make calls to them for cheap thrills or random maliciousness but the chances of them getting any meaningful access to anything don't strike me as high. They're simply calls to perform administrative tasks with impact that depends on what's been automated. In addition, all these calls are tracked through Postman Monitors so you would have a paper trail. All that said, if push comes to shove, there definitely appear to be some do-it-yourself options for securing webhooks.

I'm Still Freaked Out:
Yeah, I get it. I think if security is a real sticking point for your team you could always develop your own full-blown REST API. While developing your own API would not be for the faint of heart, this post should provide a clear path forward to guide your development.

I certainly respect that there are security considerations and concerns to address before implementing these adaptations. However, I think the subject is much more debatable than it seems at first blush and for some folks the benefits could outweigh the risks. Is the juice worth the squeeze? Well, depending on your use case, the juice could be awfully sweet. (If everything about these Custom Connector adaptations sounds awesome to you, but security is a real gotcha, I'd love for you to leave some comments, particularly around what use cases you have in mind.)


Final Thoughts 

The integration between WS1 Intelligence and Horizon detailed in this article is complicated and a lot to take in.  In a cantankerous mood you might go so far as to say, "it's a hot mess."  But, you know what's often the case with hot messes?  They hot, and this solution is absolutely, utterly, freaking gorgeous!  Driving automation against a Horizon environment based on a data lake in the cloud?   Hot!!!  Further, there's potential for the adaptations leveraged to span far beyond the Horizon use case.  UAG can extend the reach of Intelligence to any REST API within an on-premises environment.  Postman webhooks can increase the sophistication of REST API calls made to any 3rd party solution.  Combined, these adaptations significantly expand the reach and efficacy of Intelligence Custom Connectors.



Finally, as complex as the Horizon integration is, the overall objective is very much in line with the trajectory of VMware's EUC stack. "If you can't bring the virtual desktop to the cloud, bring the cloud to the desktop," seems to be the battle cry for the entire Horizon suite, with more and more functionality getting shifted to the cloud even if workloads must remain on-premises.  Past success with SaaS-based EUC solutions like UEM, Access and Intelligence not only enhances Horizon security, but also represents a shift to cloud management that VMware is striving to emulate for the Horizon stack.  For that matter, all of VMware seems to be charging in that direction, including vSphere itself.  In that light, the solution detailed in this article seems more like acceleration toward a very probable destiny rather than some off-the-wall innovation. While this seemingly destined future isn't here today, in the meantime, if you've got the will for this functionality there's a way.

Thursday, December 9, 2021

The Deprecation Of Basic Auth For Exchange And What It Means For VMware's Workspace ONE Customers

After several delays due to Covid-19 Microsoft has finally fixed a date for prohibiting Basic Auth in Exchange Online.  As of October 1st, 2022, Microsoft will begin disabling Basic Auth in all tenants, with short-term temporary disruptions for some customers beginning early 2022.  This news is initially a bit unnerving given that historically a lot of AirWatch/Workspace ONE customers have leveraged Basic Auth within their ActiveSync profiles.  However, it is limited to Exchange Online customers so on-premises Exchange customers, at least for now, need not worry.   Further, for existing Exchange Online WS1 customers leveraging Basic Auth there's a clear path forward through the adoption of Modern Authentication or other OAuth-based alternatives.  This post begins with a quick overview of the ActiveSync Basic Auth deprecation and why it's relevant, then details the choice between Microsoft's Modern Auth and other OAuth-based solutions for addressing the challenge.   Of all these OAuth-based alternatives Workspace ONE Access is certainly my favorite, so I'll detail the magic that happens when you federate Azure AD with Workspace ONE Access and then introduce certificate-based authentication with VMware's proprietary Mobile SSO solution.

A MEM Misnomer: Rumors Of ActiveSync's Death Are Greatly Exaggerated


About a year and a half ago I started hearing grumblings of impending doom for WS1 customers and Mobile Email Management (MEM) in general.    The rumor went something like this: ActiveSync is getting deprecated which will lead to chaos in MEM everywhere, possibly triggering World War 3.   Making it somewhat believable was that ActiveSync hasn't been worked on for years now, with the latest version, 16.1, released in 2016.   Coupled with Microsoft's hyper-focus on Graph APIs, in a bad mood, with your eyes squinted, it seemed possible ActiveSync could be going away.   However, the truth was more nuanced.  In August of 2020 I reached out to Martin Kniffin for guidance and he didn't fail to impress,  providing me and a handful of colleagues excellent context.   First and foremost he pointed out that it's not ActiveSync that's getting deprecated, but Basic Auth within ActiveSync. (More specifically, it's Basic Auth that's being deprecated almost across the board, not just within ActiveSync.)   When Basic Auth is used with Exchange Online you have the mail client storing a user's typed-in credentials and then passing those credentials to Exchange, which in turn proxies those credentials to Azure AD.  These stored credentials on the endpoint device are constantly replayed against Exchange Online throughout the course of email access.
 
Basic Authentication - Image taken from, "Disable Basic Authentication In Exchange Online"
So it's not ActiveSync that's dying off but rather this very rudimentary Basic Auth model that's going away, initially only in Exchange Online environments, not on-premises.   This deprecation has been in the works for a while. Plans to disable Basic Auth in Exchange Online were first announced in Sept of 2019 with a target date of Oct 2020.   However, in response to Covid-19 it was postponed till the second half of 2021. Then in February of 2021 Microsoft indicated they would postpone until further notice.   At the same time they announced plans to begin disabling Basic Auth for tenants not currently using it.  Now, finally in late September of 2021, it was announced that Basic Auth would be disabled on all tenants starting October 2022, with more formal guidance coming out early November this year.  So, this hasn't exactly been a meteor the size of Texas hurtling towards earth from out of nowhere.   More like The Blob, a really, really, really slow-moving blob that, nonetheless, needs to be addressed.


While ActiveSync payloads with Basic Auth have been wildly popular amongst Workspace ONE customers there's a clear path forward: leverage the OAuth ActiveSync payload setting for use with Microsoft's Modern Auth or a 3rd party federated IDP.    

Leveraging Microsoft Modern Auth With The ActiveSync OAuth Payload Setting

If your Office 365 tenant is purely leveraging Azure for identity, with no federation, both Basic Auth and Modern Auth are currently options for email access.   Modern Authentication is a Microsoft solution, "based on the Active Directory Authentication Library (ADAL) and Oauth 2.0."  With Modern Auth users authenticate with their AD credentials to Azure and then are issued a token granting access to Office 365.   So instead of having credentials stored within a mail client and proxied through Exchange Online, users are redirected to Azure at login.microsoftonline.com and upon successful authentication are issued a token that grants access to email, as well as the entire Office 365 suite.    

Modern Authentication Workflow - Image Borrowed From Shehan Perera's Tech Blog

In the diagram above you have a representation of Modern Auth in the context of a hybrid identity model that merges on-premises AD environments with an Azure tenant, allowing users to leverage their on-premises AD credentials when authenticating to Azure.  It starts with an on-premises Azure AD Connect instance that syncs accounts from on-premises with Azure. Then for authentication there's what's referred to as managed authentication, with a choice between password hash synchronization (PHS) and pass-through authentication (PTA).  With PHS, hashes of your AD passwords are synchronized from your on-premises AD environment to the cloud.

With PTA, instead of having hashed passwords stored in the cloud, validation occurs directly against your on-premises AD environment via an on-premises agent.


Either model is supported with Modern Auth and the ActiveSync, "Use OAuth," payload setting. It's just a matter of personal taste for the organization. With both models you're extending your on-premises authentication to Azure and either one can work with the OAuth payload. As far as the ActiveSync payload settings in WS1 go, all you have to do is check the box for, "Use OAuth," and your email users will start getting prompted for Modern Auth. The, "OAuth Sign In URL," and, "OAuth Token URL," fields are not mandatory and can be left blank.  When you leave these fields blank an autodiscovery process kicks in, one that first redirects to login.microsoftonline.com.

The redirect to login.microsoftonline.com creates a slightly different experience from the traditional Basic Auth workflow, but it's not insurmountable.  Below is a recording that compares and contrasts the two experiences with the built-in iOS mail client.

Also, there's certainly support for Modern Auth from most other mail clients as well, such as Boxer or Outlook. Here's what the process looks like for Boxer: 

 

Leveraging Workspace ONE Access With The ActiveSync OAuth Payload Setting

Along with Modern Auth, this, "Use OAuth," feature supports authentication against Workspace ONE Access, as well as various other federated IDPs such as ADFS, Okta or Ping.  When it's time to authenticate the user first hits login.microsoftonline.com, then based on their email address gets redirected to a federated IDP.    In this example, AD authentication occurs through an instance of WS1 Access that's been federated with Azure.   It's very similar to Microsoft's Modern Auth model, except there's a redirection to a WS1 Access tenant where credentials are manually entered.  Here's a demonstration:

For a more ideal experience you can accommodate authentication with Mobile SSO for iOS, an incredibly compelling proprietary VMware solution that combines WS1 UEM with WS1 Access to provide SSO for mobile apps.

First and foremost, VMware's Mobile SSO solution provides an incredibly convenient certificate-based single sign-on experience.  It also lays the groundwork for the adoption of device compliance policies that allow us to factor in device enrollment and device posture while providing contextual authentication through conditional access policies.  Further, this solution extends device compliance security to the entire Office 365 suite, not just email access.  Even more exciting, since Mobile SSO for iOS or Android works for pretty much any mobile app that supports SAML, adopting this solution for Office 365 puts into place a capability for securing mobile SaaS adoption across the board.  Combine this with VMware's certificate-based authentication for modern management and you have a complete solution for layering zero trust security on top of SaaS adoption across most conceivable device types.











One caveat to be aware of is that federation with an IDP like WS1 Access or other 3rd party solution is an all-or-nothing commitment.   You can't just have a subset of users handled by the federated IDP.  All of them will get initially redirected to the 3rd party IDP.    So before actually federating with another IDP you need to make sure that all your Office 365 users can be properly handled by it.  Further, federation will break Basic Auth, so you'd need to prepare accordingly.  

 

SEG For Office 365 Access

Many folks have quite a visceral response to the deployment model I'm about to mention. There are indeed some organizations that leverage SEG for Office 365 access.  I know, I know. While I can't thoroughly explain or exhaustively defend the design decision, to my understanding there are some use cases where this is a valid and legitimate option.  More customers than you'd imagine have needed it.

I only bring it up here in the context of this ActiveSync discussion because with this model there is some authentication against Exchange Online, so it's possible a subset of folks with this type of deployment could be using Basic Auth.  Fortunately, these users can migrate to OAuth access as well.  Here's a sample from my own lab:














The Only Way Through Is Through - Tick Tock, Tick Tock

In a nutshell, the deprecation of Basic Auth is forcing customers to fall back to Modern Auth/OAuth, or, more accurately, fall forward to Modern Auth/OAuth.  As easy as it's been to just leverage Basic Auth we really should have already been marching away from it anyway, regardless of deprecation plans.  While I don't normally feel the need to defend a monster corporation like Microsoft, technically, it sounds like they're just forcing customers to do what they ought to do.  Regardless, Workspace ONE/AirWatch has helped customers navigate their mobile email management needs for over 10 years and is well positioned to assist with this challenge.

There's no doubt in my mind that some VMware customers may still have some planning to do.  As of the time of this writing, early December 2021, customers have about 9 and a half months to act.  Fortunately, Basic Auth is not dead yet, though the writing is certainly on the wall.




Sunday, October 10, 2021

Securing Horizon With Cloud Hosted Workspace ONE And Carbon Black

For over a decade VMware's VDI solution has served up on-premises Windows desktops to remote Windows and Mac devices.  While the original solution at its core has stayed relatively the same, the ability to secure Horizon sessions through a tightly integrated SaaS stack represents a dramatic shift.  Using cloud instances of Workspace ONE Access, UEM, Intelligence and Carbon Black customers wrap comprehensive security around an already stellar remote Horizon user experience.   The cloudiness of these offerings means this security is easily layered onto existing Horizon environments non-disruptively, with minimal on-premises footprint. 

Base Image Stolen From Andreano "The Moose With The Juice" Lanusse












This ideal remote access scenario begins with Horizon making virtual desktops and published applications available to the external world through Unified Access Gateway.  Authentication for these Horizon sessions is brokered by a cloud instance of Workspace ONE Access that enforces contextual authentication requirements through conditional access policies.  Workspace ONE UEM informs these policies with device posture insight, while also actively managing and securing these remote endpoint devices.  Additionally, Carbon Black provides Next-Gen Antivirus protection not only for Win10 or macOS endpoint devices, but also for the virtual desktops or RDS hosts remotely accessed through Horizon.   Finally, WS1 Intelligence pulls these solutions together, enhancing automation while further calibrating conditional access policies with information regarding anomalous or risky behavior. 

This post is a primer on how cloud instances of Workspace ONE and Carbon Black are layered onto Horizon deployments to beef up security for remote access.  It starts with a brief overview of Horizon remote access, then elaborates on the security enhancements provided by these cloud services.  I'll essentially break down and explain the image above with, yet, more stolen images!  Yes, for this post I've gathered some of the best images I've ever stolen, modified or otherwise used and abused in the name of love and technical clarity.  After using these images to illustrate the security enhancements enabled for Horizon from the cloud, I’ll move on to review VMware’s Secure Access, a key component of the Anywhere Workspace offering.  VMware Secure Access offers an interesting alternative to Horizon, one that extends the benefits of SD-WAN and SASE to a less centralized remote access deployment. 


Delivering Windows Desktops Or Published Applications Through Horizon


Stolen From Todd Dayton










The above graphic presents a rudimentary but conceptually useful breakdown of VMware Horizon.  To begin with, you have a desktop or RDSH image living within a VM, supported on the same vSphere technology used for traditional server workloads.  A Horizon Connection Server, very much the brains of a Horizon deployment, has full admin access to this vSphere environment, using those rights for provisioning and inventory purposes.  This Connection Server also acts as a broker for incoming connections, routing users to their assigned desktops or RDS hosts after they've been authenticated.  Users eventually view and remotely control their desktops or published applications through display protocols like Blast or PCoIP.

So, to extend vSphere goodness to desktops we've had to bring the desktops to the vSphere infrastructure, with the desktop OS and supported apps shifting locality from the endpoint to the datacenter.  From there, the Windows desktop is essentially converted into a service that can be consumed from pretty much any device that has network connectivity to the Horizon environment.  The benefits of this model really start to pop when folks are mobile or shifting across various devices.  While your device and network location may change, your virtual desktop stays the same, maintaining the Windows desktop session state.  This leads to a consistent and reliable user experience often referred to as a "Follow-Me" desktop, a concept that's been breaking hearts and taking names in healthcare for over a decade. 













With doctors and nurses highly mobile within the walls of a hospital this "Follow-Me" desktop experience really shines, especially when combined with a badge access solution like Imprivata.  As a 13 year veteran from the mean streets of non-profit healthcare IT, I'd say this user experience is impossible to beat when supporting clinicians and is what drives a lot of VDI adoption in healthcare.  Here's a quick demo: 


High mobility, along with the need to share work areas, make clinicians uniquely suited to benefit from this model.  That said, if you're an office worker with a dedicated cubicle and a dedicated workstation tethered to it, and all your work is done within that cubicle, then the "Follow-Me" desktop lacks wow factor.  However, as soon as you throw in any kind of mobility, even if it's just between cubicles, the question of, "Why bother with Horizon?" starts to melt away.  Throw in remote access from home, possibly in a BYOD scenario, and the question is completely obliterated.  In those scenarios, a "Follow-Me" desktop, one that follows you from work to home, then back, makes for the most ideal Windows user experience imaginable. 

Original Image From: Using Horizon To Access Physical Machines



















The path these remote Horizon sessions take to your trusted network from users' homes is provided and secured through Unified Access Gateway.



Providing Remote Access To Your Horizon Service


Remote Horizon access is enabled through Unified Access Gateway (UAG), a Linux virtual appliance that's typically deployed in a DMZ.  It acts as a gateway for your external Horizon users, ensuring all traffic from the remote endpoint device to the virtual desktop or RDS host is on behalf of a strongly authenticated user.  Below is a depiction of Blast, Horizon's display protocol of choice, as it traverses a UAG appliance after successful authentication.  Encryption of this traffic is handled end to end for the entire session through the Blast protocol itself. 













Now, as far as the initial authentication goes, there are various options with UAG.  The default authentication method is passthrough against Horizon's local AD environment by typing in an AD username and password.  However, when Workspace ONE mode is enabled on Horizon Connection Servers, UAG passes SAML traffic for authentication instead, ensuring all Horizon Blast traffic passing through the UAG appliance is for users that have been authenticated according to conditional access policies defined in Access.  Leveraging WS1 Access in this fashion provides admins with the most flexibility and widest range of options when it comes to securing remote access to Horizon.


Brokering Authentication For Horizon Using Workspace ONE Access


Conversations around Workspace ONE Access typically focus on the portal and SSO experience it provides for Horizon and 3rd party SaaS apps.  What's often neglected is how WS1 Access acts as a broker for different authentication methods as someone initially logs into the portal or accesses a specific app.  Through conditional access policies admins enforce contextual authentication against the various security solutions WS1 Access has been integrated with.  Auth requirements for any particular app will be determined by the specifics of these policies and a user's current context.  App access may be a simple SSO experience or as complex as MFA from a fully enrolled and compliant device.  For a deeper dive on conditional access policies and SAML check out this overview on YouTube.


Base Image Stolen From Peter Bjork

























Several of the inbound authentication options detailed above are made possible through the deployment of a Workspace ONE Access connector in the customer's trusted network.  This connector is key to an integration with an AD environment, syncing AD users to Workspace ONE Access and providing the ability to authenticate to AD.  It also enables your tenant's integration with on-premises resources such as your Horizon environments or security solutions that support RADIUS. Depending on the specifics of your deployment these WS1 Access Connectors may be the only additional on-premises resources required for securing Horizon from the cloud.


























Now when it comes to integrating WS1 Access with 3rd party security solutions, SAML chaining allows for integration with popular names like Okta, Ping, Azure, as well as any other solutions that support SAML.   After configuring these 3rd party solutions as trusted IDPs for WS1 Access we can  leverage their authentication mechanisms for applications managed through Access.  Below is an example of this process for an Okta integration, something I'm seeing a lot of nowadays.  With a fully documented process for configuring Okta as an IDP, "Integrating VMware Workspace ONE With Okta,"  it's a very accessible option for Workspace ONE customers who already leverage Okta for MFA.  





WS1 Access is basically integration goo, allowing you to integrate Horizon, or any other SAML compliant apps, with whatever security solutions you already have in place.  By linking up with these 3rd party solutions we enjoy a richer set of conditional access policies, as we pick and choose amongst various auth requirements for Horizon across different use cases and scenarios.  This ability to integrate with the security solutions customers are already using to protect their environments makes WS1 Access truly compelling.  You end up with something a bit motley and Frankenstein-ish, or pickle-Rick-ish if you will,  but arguably that's sort of unavoidable when you're stitching together disparate solutions from across your enterprise.  
























A WS1 Access deployment is only as interesting as the solutions it's been integrated with.  While support for SAML and RADIUS integrations with 3rd parties offers many alternatives,  where things get really exciting is with the built-in support for Workspace ONE UEM.  When looking at the Inbound/Outbound graphic above, mechanisms like, "Certificate," "Mobile SSO For Android," "Mobile SSO For iOS," and "Device Compliance," result from the integration between Workspace ONE Access and Workspace ONE UEM.



Informing Conditional Access Policies With Device Status Insight From UEM


When Workspace ONE Access and UEM are integrated Horizon access can be predicated on enrollment or even device compliance. This leads to a much more discerning, richer set of conditional access policies.  Essentially, we're taking WS1 Access conditional access policies and juicing them with UEM insight, leading to more informed policies to drive contextual authentication.

Stolen Image From Andreano Lanusse

This progression towards zero trust begins with the various certificate-based authentication options supported by the integration of UEM and WS1 Access.   Going back to the Inbound/Outbound graphic of the previous section, the arrows for, "Mobile SSO for iOS," "Mobile SSO for Android," and "Certificates," for Win10 and macOS, are enabled through the integration of WS1 UEM and Access.   While these methods are enforced through Access, the certificates are delivered through UEM, effectively mandating device enrollment in UEM for access to Horizon.  Further, "Device Compliance," can only work in conjunction with one of these authentication methods.  So, in the case of modern management, we're talking about a combination of certificate auth through WS1 Access, certs delivered through UEM, as well as UEM device compliance policies for Win10 and macOS.















While device compliance policies wonderfully highlight the ability to interrogate devices with UEM, WS1 UEM enrollment actually MAKES devices more secure.  It's not just about interrogation, but also the ability to help the device course-correct and achieve a secure posture.  The nitty-gritty, under-appreciated work that is, nonetheless, absolutely critical to security, like patching, firewall configuration, device encryption and general configuration management, falls right in the wheelhouse of WS1 UEM.  So along with vouching for the state of the device it's also literally making it more secure.  This management and control is further extended through an integrated deployment of Carbon Black, a Next-Gen Antivirus solution for Win10 and macOS.


Carbon Black


While UEM management addresses security concerns from the perspective of system configuration and maintenance, Carbon Black addresses security head-on when it comes to fighting off hackers, malware and ransomware.  Core to the suite is cloud-based Next-Gen antivirus and behavioral EDR, with an option to fall back to more traditional signature protection.  Carbon Black's NGAV and EDR entail the application of machine learning and AI against data aggregated from millions of customer endpoints. We're talking over 500 TB of endpoint data, over 1 trillion events a day, getting reported to and processed in the Carbon Black cloud.  This insight is then brought to bear when controlling behavior on endpoint devices.













While cloud is core to Carbon Black's security insight, it has the added benefit of making Carbon Black easier to deploy and manage.  For a typical customer there's zero on-premises infrastructure to be concerned with.  You have a cloud tenant to configure and an agent to deploy to your Win10 or macOS devices and that's the extent of your concern.   For Horizon VDI environments you can simply add the agent to your gold images and you're off to the races. For endpoint devices Workspace ONE UEM itself can easily distribute the agent to managed endpoints.  

Workspace ONE Intelligence and VMware Carbon Black: Automating Device Quarantine Feature Walkthrough

Even more exciting is the ability to trigger actions in Workspace ONE UEM based on threats detected by Carbon Black on managed endpoint devices.  So, for example, if a threat is detected on a device not only can Carbon Black respond, but additional measures can be automatically executed through WS1 UEM to remediate the endpoint.   This is made possible by WS1 Intelligence and the ruthless automation it can enable for  WS1 environments.  


Workspace ONE Intelligence - Gelling It Together Even Further

For this ideal remote access Horizon scenario, Workspace ONE Intelligence introduces ruthless automation while also informing conditional access policies with User Risk Scores and Login Risk Scores.  As mentioned above, we can trigger automated workflows within Intelligence based on threats detected by Carbon Black.  We can also trigger this automation based on device info gathered from WS1 UEM, which includes over 200 data points.  Should you require data not collected by UEM out of the box, you can collect additional attributes using custom Sensors for your modern management scenarios.  Sensors enable this extensibility using PowerShell scripts on Win10 or Bash, Python and Zsh scripts on macOS.










The data collected within the Intelligence data lake drives ruthless automation that ensures Win10 and macOS devices are properly configured.   This data is also leveraged to generate User Risk Scores and Login Risk Scores ingested by conditional access policies.  In this manner, WS1 Intelligence Risk Analytics enable WS1 Access to calibrate contextual authentication with data regarding anomalous or risky behavior.   


VMware Anywhere Workspace 



Cloud instances of WS1 and Carbon Black offer existing Horizon customers a clear path forward for enhancing security.  However, if Horizon isn't viable but you still have a remote use case you'd like to enhance with the security capabilities  discussed so far, then you probably want to check out VMware's Anywhere Workspace. Workspace ONE and Carbon Black are core to the Anywhere Workspace solution and can enhance VMware Secure Access with some of the same benefits they lend to Horizon. Secure Access marries together Workspace ONE with VMware's SASE solution based on SD-WAN by VeloCloud, offering an alternative that overlaps with remote Horizon access but, more notably, enhances connectivity and security for remote endpoints from the cloud. 


Where Secure Access first differs from Horizon is that instead of providing remote connectivity to a desktop or RDS host back in the datacenter, you're running applications locally on your modern-managed Win10 or macOS devices. Workspace ONE UEM can provision these applications as well as provide them remote access back to your trusted network through Workspace ONE UEM's Per-App VPN.  With this model a TLS session is automatically established back to your trusted network for specific applications based on device compliance policies.  This is ideal for a traditional client/server application running locally on your endpoint or perhaps a browser hitting an internal site.  Per-App VPN has always distinguished itself from traditional VPN solutions by limiting VPN connectivity to specific defined apps, rather than the whole device.  Further, it simplifies access because there's no need for a user to manually launch a VPN client. Instead, a TLS session is automagically established on behalf of the user when the enabled app is launched.

Deploying VMware Workspace ONE Tunnel: Workspace ONE Operational Tutorial

Per-App VPN has been part of the AirWatch portfolio for over half a decade, supporting modern management use cases for years.  Secure Access innovates by delivering this Per-App VPN capability through VMware's SASE offering, merging WS1 with VMware's SD-WAN solution (VeloCloud). With this model, instead of supporting Per-App VPN through VMware Tunnel on a UAG appliance sitting in the customer's DMZ, the VMware Tunnel Service is hosted on behalf of the customer within SASE PoPs.   In a nutshell, VMware Tunnel is hosted as a service, in containers, simultaneously across various SASE PoPs.  Per-App connections are routed from the Tunnel app on endpoint devices to the closest SASE PoP, with most users able to find one within 10 milliseconds of latency.  Once traffic hits this PoP the benefits of VeloCloud SD-WAN are extended to this VPN access, with optimized connectivity to corporate data centers as well as SaaS and cloud service providers.




Along with enhancing network connectivity we're getting security enhancement from within the SASE PoP through Cloud Web Security.  This new offering introduces features like SSL inspection, URL filtering and content filtering.  So with the VMware Secure Access model you're not only farming out management of VPN concentrators or UAG instances, you're also moving traditional security services from on-premises to the cloud.  Running these services within the SASE PoPs circumvents the need for hairpinning internet traffic back through your on-premises network for inspection, certainly a boon for remote performance.

Though there's overlap between Horizon and VMware Secure Access capabilities, they are very different solutions with different strengths and caveats.  If you're looking to offer a highly curated Windows experience, particularly one that supports a traditional client/server app hosted internally, Horizon is compelling.  All that nitty-gritty, unsexy, and persnickety Windows management, in particular customization of Windows applications, is centrally handled and managed by Horizon in a model that's over a decade old. Further, with Horizon itself supporting SAML, the full breadth of WS1 Access capabilities extends to protecting legacy Windows applications. That said, VMware Secure Access is certainly an intriguing proposition, offering optimized connectivity to corporate networks and the cloud while moving security services closer to remote users.  Ideally, as a customer I'd want Horizon around for the more meticulous Windows requirements, while leveraging Secure Access for everything else.

 

Final Thoughts


A couple months ago I presented this best-case scenario for remote Horizon access to a session full of jaded, cynical and curmudgeonly IT veterans.  As we digested the current state of the entire VMware EUC stack regarding remote access, I think our collective experience was similar to a parent who has just realized, "holy cow, my baby has grown up and baby is bad!"  While VDI over the last 5 years, at its core, has stayed largely the same, albeit with tons of polish and stability enhancements, the methods for securing its remote consumption have very much changed and evolved. A decade ago it was all about, "slap a Horizon client on whatever you want, no data will be at rest on that remote device, so, don't worry, be happy." Fast forward to 2021, we can now ensure that a device remotely accessing Horizon is absolutely secure and virus-free, while authenticating a user from that device according to a wide range of contextual authentication options. This is all achieved leveraging mature and proven solutions delivered from the cloud, services that not only radically improve remote Horizon security but are also the foundation of VMware's new Anywhere Workspace offering.



VMworld 2021 Announcements

Several VMworld 2021 announcements regarding futures certainly shore up the already impressive story covered in this post.  Continuous authentication, enhanced conditional access policies and support for Horizon on SASE show a lot of promise and further reinforce overall confidence in the VMware remote access vision.  Additionally, cloud-driven enhancements for simplifying on-premises Horizon management further elucidate a general trend of, "if you can't bring desktops to the cloud, bring cloud services to the desktops."  I only failed to mention these announcements till now because, as a drearily sane engineer from healthcare IT, if a technology wasn't at least 6 months old, I just couldn't take it seriously.  Along those lines,  everything covered in this post up till this section is grounded in the here and now of what is GA'd and available.  Yes, there are some shiny improvements on the way, but there's plenty to be accomplished with the stack as it stands today.