Last year I published, "Ruthless Automation With Workspace ONE Intelligence," an article highlighting the impressive automation capabilities of Intelligence. Well, in this post I'm going to detail adaptations to WS1 Intelligence that provide even ruthlesser automation! Huzzah! Using Postman webhooks and VMware's Unified Access Gateway, you can amplify the sophistication and reach of Intelligence Custom Connectors. While any solution supporting a REST API may benefit from either enhancement, a Horizon on-premises environment benefits from both, making it an ideal use case to demonstrate. Traditionally, Horizon has been out of reach of Intelligence automation, but Postman webhooks and UAG's web reverse proxy capabilities combine to close the gap and enable the use of Custom Connectors for Horizon.
In the illustrated solution a REST API call is triggered by a defined event within the Intelligence data lake, as with any Custom Connector implementation. However, the call made from Intelligence is to a Postman webhook URL rather than directly to the Horizon environment. The webhook triggers an entire collection to run from the Postman cloud against the Horizon environment, an activity that's tracked and managed through a Postman Monitor. This allows Intelligence to trigger much more sophisticated REST API calls that are chained together and build upon each other, shifting complex logic to the Postman cloud where it's executed and tracked for fractions of a penny. Further, the reach of these calls from Intelligence is extended to an on-premises environment by using UAG as a web reverse proxy. This is critical for providing access to the Horizon REST API from the Postman cloud. The video below demonstrates both enhancements working in concert to integrate Intelligence and Horizon on-premises.
Getting Up To Speed On Postman
Creating the Custom Connector detailed in this post definitely requires familiarity with Postman and REST APIs. Fortunately, the Postman website includes a Learning Center with incredibly helpful walk-throughs. Within minutes of reviewing this site I got my hands dirty with essentially the, "hello world," of Postman requests, postman-echo/get. This call leverages an open API server that doesn't require any kind of authentication, providing a very accessible introduction to REST API calls from the Postman client.
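If you want to give it a spin yourself, the request is just a GET against https://postman-echo.com/get (you can tack on a query parameter like ?foo1=bar1), and a couple of lines in the Tests tab give you a first taste of the scripting side. A minimal sketch:
// Tests tab for GET https://postman-echo.com/get?foo1=bar1
pm.test("Echo responds with a 200", function () {
    pm.response.to.have.status(200);
});
pm.test("Echo hands back the query parameter", function () {
    const jsonData = pm.response.json();
    pm.expect(jsonData.args.foo1).to.eql("bar1");
});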
Along with the Learning Center itself, there's enablement available from Valentin Despa on YouTube. He has a 3-part video series called, "Introduction To APIs," providing an excellent overview of the how and the why of REST APIs and API clients like Postman. Then there's his 6-part, "Intro To Postman," series, which I absolutely loved. After working through this series I found myself dangerous enough to start hacking together my desired solution.
The series teaches that accessing a REST API from Postman can be as simple as executing a request against a single URL. However, for more complex operations you can chain multiple calls together in a collection. This allows you to take output from one call, then distill and leverage it during the execution of subsequent calls. Variables are passed from call to call, with JavaScript running within the Tests and Pre-request Scripts associated with each call. In a nutshell, your collection is a series of calls executed in a specific order, with chunks of JavaScript potentially performed before and after each call. Despa covers chaining in episode 5, "Chain Requests."
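To make that concrete, here's the basic mechanic in miniature, with purely illustrative names: the Tests script of one request stashes a value, and the next request references it with Postman's double curly brace syntax.
// Tests tab of request #1: pull a value out of the response and stash it
const body = pm.response.json();
pm.collectionVariables.set("item_id", body.id);

// Request #2 can then reference that value right in its URL:
// GET {{baseUrl}}/api/items/{{item_id}}/details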
Finally, since the Tests and Pre-request Script scripting is based on JavaScript, well, there's a whole internet out there to help you work through that. While I've executed Hello World in countless languages and have certainly gotten hot and heavy with VBScript and PowerShell, I had no prior experience with JavaScript. However, through Google-fu I was introduced to forEach loops and if statements, along with some variable management, and that was enough for me to get cooking with JavaScript. I think anyone with scripting experience could find themselves getting dangerous with Postman pretty quickly if they were motivated.
The Horizon REST API
Info on the Horizon REST API is available directly from the Horizon Connection Server by pointing your browser to https://<Your-Connection-Server-FQDN>/rest/swagger-ui.html. However, there's a must-see article available from VMware's Tech Zone, "Using The VMware Horizon Server REST API," written by Chris Halstead. It provides an introduction to the Horizon REST API along with demonstrations on how to use its endpoints, "in combination to achieve your goals." Along with tons of useful information, it includes a link to sample collections that can be directly imported into your Postman workspace. The linked resource, available on VMware {code}, is called, "Postman Collection - Horizon REST API." With Postman already open on my machine I clicked on the button, "Run In Postman," and voila, I had over 100 preconfigured calls to work with.
Dang!!! Talk about making folks dangerous quick. With a free Postman account you can import these samples and begin making calls against your local Horizon environment in a matter of minutes. Just update a handful of collection-level variables and you're off to the races. These variables are required to successfully execute a call to the login endpoint on the Connection Server. A successful call returns a token from the Horizon environment that is assigned to a global variable, which in turn is used by the rest of the sample calls for authorization. Many samples are immediately available once you've executed the login call successfully, such as all the Monitor samples. Other calls, arguably the more interesting ones, require additional information or parameters. For instance, the disconnect endpoint requires an active session ID from the Horizon environment to target its action. Chaining calls together to execute more complex actions like this is what we'll review next.
The Basic Logic Behind My Collections
All four collections associated with the Custom Connector detailed in this post follow the same basic logic, so we'll review just one of them in detail. The collection, "Disconnect Horizon Session," is made up of 5 different calls to the Horizon REST API, each of which was copied from Halstead's samples. The collection begins with a call to the login endpoint; the token it returns authorizes the next 4 calls. Based on an AD username fed to the collection - more to come on that a bit later - the second call retrieves a list of AD accounts from the Horizon environment, finds the matching AD username, then passes the associated user_id to the next call via a global variable. This 3rd call retrieves a list of sessions from Horizon and finds the session associated with the targeted user_id. The matching session yields a session ID that's key to executing the final two calls to the send-message and disconnect endpoints.
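The login call itself is straightforward: its Tests script just stashes the returned token so the later calls can present it for authorization. As a sketch (the access_token property and the variable name here are assumptions based on the sample collection):
// Tests tab for POST {{url}}/rest/login
const jsonData = pm.response.json();
// Save the bearer token so subsequent calls can send it as
// an Authorization: Bearer {{horizon-api-token}} header
pm.globals.set("horizon-api-token", jsonData.access_token);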
The first 3 calls are the real workhorses of the collection, performing the critical task of locating the session ID to target. All the logic happens in either the Pre-request Script or the Tests associated with each call. For instance, here's the JavaScript used with the call to ad-users-or-groups:
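A simplified sketch of that script (the endpoint path and the name and id property names are assumptions that may vary by Horizon version):
// Tests tab for GET {{url}}/rest/external/v1/ad-users-or-groups
const jsonData = pm.response.json();
const targetUser = pm.globals.get("user");

// Walk the returned AD accounts and grab the ID of the one we're targeting
jsonData.forEach(function (account) {
    if (account.name === targetUser) {
        pm.globals.set("user_id", account.id);
    }
});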
In a nutshell, we're taking the response of our call to the ad-users-or-groups endpoint and saving it to jsonData. Then we're fetching the global variable, "user," and assigning its value to a local variable called targetUser. Finally, using a forEach loop, we walk through each object stored within jsonData, comparing its AD account name with the target username. If there's a match, the ID associated with that matching AD account is copied to a global variable called user_id. This user_id global variable is then consumed by the next call to the sessions endpoint. The sessions endpoint call uses pretty much identical JavaScript logic.
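Here's the equivalent sketch for the sessions call, again with the endpoint path and property names assumed:
// Tests tab for GET {{url}}/rest/inventory/v1/sessions
const jsonData = pm.response.json();
const targetId = pm.globals.get("user_id");

// Find the session belonging to our target user and stash its session ID
jsonData.forEach(function (session) {
    if (session.user_id === targetId) {
        pm.globals.set("SessionHunt", session.id);
    }
});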
Looks familiar, right? The names have changed, but the logic is identical. The response to the sessions endpoint call is copied to jsonData. Then each object returned is searched for a matching user_id. When a match is found, that object's session ID is copied to the global variable SessionHunt. And then the fun begins, with the session ID getting fed to the next call, to the send-message endpoint.
Finally, there's the actual disconnect. Similar to the send-message call, the SessionHunt global variable is used to target the action.
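For both of those final calls the heavy lifting is already done; {{SessionHunt}} just gets dropped into the request body. Assuming the body shapes from the sample collection, send-message takes something like:
{
    "message_type": "WARNING",
    "message": "Your session will be disconnected shortly. Please save your work.",
    "session_ids": [ "{{SessionHunt}}" ]
}
While the disconnect body is simply an array of session IDs:
[ "{{SessionHunt}}" ]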
And there you have it. Waka! Waka! 5 REST API calls, 2 forEach loops, 2 if statements, and a handful of variables later, you've got yourself a sweet little collection for automating the task of messaging and disconnecting a specific user. An entire collection like this can be executed in sequence by right-clicking on the collection and selecting the option, "Run Collection."
Now, to make these actions accessible from Workspace ONE Intelligence, a first step is to make the Horizon REST API available to the outside world. While there are countless solutions for achieving this, I'm going to turn to one of my favorite and dearest pieces of technology, VMware's Unified Access Gateway.
Making Calls Remotely Against An On-Premises Horizon Environment Through UAG
While not the most popular of use cases, Unified Access Gateway (UAG) can act as a web reverse proxy. It's been a feature for years now, originally developed to provide access to on-premises vIDM environments but now available for any on-premises resource. For my lab, UAG plays the key role of making the Horizon REST API accessible to Postman, more specifically to Postman Monitors that live in the cloud and are triggered by webhooks.
Fortunately, the configuration as a reverse proxy is fairly straightforward. The trickiest part is configuring the proxy pattern. To narrow down the reverse proxy functionality to only the REST API destination URL I went with this for a proxy pattern: (/rest(.*))
This prevents the reverse proxy from exposing the entirety of the Horizon Connection Server to the outside world. Instead, only access to the REST API is possible when hitting the UAG appliance with a URI path that matches the destination URL for the Horizon REST API.
UAG's web reverse proxy capabilities provide Postman Monitors access to the Horizon Connection Server's REST API, allowing us to run collections against the Horizon environment whenever they're triggered by Intelligence. With a collection configured in Postman and the reverse proxy solution in place, the next step is to create a webhook to trigger collection runs that traverse the UAG appliance.
Creating the Webhook To Your Postman Collection
While we can make calls directly to 3rd party REST APIs using a WS1 Intelligence Custom Connector, we can only make a single call at a time, based on data already located within Intelligence. There's no option to probe these 3rd party REST APIs, collect some input, then process it in additional follow-up calls. However, that's exactly what we need in order to do anything interesting with the Horizon REST API: chain multiple calls together. For instance, with the collection I walked through earlier, we're executing 5 different calls, passing variables from the first 3 calls to the final 2. To get around this limitation, we can leverage Postman webhooks to trigger a run of an entire collection stored in the Postman cloud.
Creating a webhook generates a URL that can be called upon by a WS1 Custom Connector to trigger the collection associated with the webhook. Further, we can pass variables from the Intelligence data lake to the collection in the process of making a call to the webhook. In the case of the collection detailed earlier in this post, WS1 Intelligence passes an AD username to the collection through the webhook. While there's official documentation on webhooks in the Postman Learning Center, "Triggering Runs With Webhooks," I found this short and concise recorded presentation on YouTube, "Postman Webhooks," to be really helpful. (There's also a very interesting, though much longer, YouTube video on Postman webhooks called, "Automate All The Things With Webhooks.")
As you can see in the video, a webhook is created by leveraging the Postman API and an endpoint called webhooks. Making this call successfully requires a workspace ID, an API key for your Postman account, and a UID for the collection you want to trigger with the webhook. Locating your workspace ID is easy enough, as you can see in the guidance provided here. Generating an API key is fairly straightforward and is one of the first things covered in the official documentation for the Postman API. Once you have this key generated and copied, you can use it to obtain the required collection UID using the Postman API's collections endpoint. To make a successful call against this endpoint you need to include the API key in the header, populating it as a value for the key, "x-api-key."
With this header key in place, executing the call generates a response with info about all your collections, including the UID for the specific collection you want to trigger with your webhook. With the collection UID and workspace ID in hand you can create your webhook, populating the body of your request with the UID and adding the workspace ID as a parameter. (As with the call to the collections endpoint, you'll need to include the API key in the header.) Successful execution will yield a webhook URL that can be called upon to trigger your collection. In the example below, a webhook URL of https://newman-api.getpostman.com/run/13724510/69dbc0d3-0be9-4038-bf83-6c96da23dfe0 has been created and associated with the collection.
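If it helps to see all the pieces in one place, here's the webhook creation expressed as a single script you could paste into any request's Pre-request Script tab; in practice you'd more likely build it as a normal request in the client. The workspace ID, API key, collection UID, and webhook name below are placeholders:
// Create a Postman webhook that triggers a collection run (sketch)
const createWebhook = {
    url: 'https://api.getpostman.com/webhooks?workspace=YOUR-WORKSPACE-ID',
    method: 'POST',
    header: {
        'x-api-key': 'YOUR-POSTMAN-API-KEY',
        'Content-Type': 'application/json'
    },
    body: {
        mode: 'raw',
        raw: JSON.stringify({
            webhook: {
                name: 'Disconnect Horizon Session webhook',
                collection: 'YOUR-COLLECTION-UID'
            }
        })
    }
};

pm.sendRequest(createWebhook, function (err, res) {
    // The response includes the webhook URL you'll hand to Intelligence
    console.log(err ? err : res.json());
});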
When making a call to this webhook, behind the scenes you're leveraging Postman Monitors. These provide you the added bonus of a paper trail for tracking collection execution. For each webhook you create there'll be a corresponding Monitor within your Postman workspace.
When trying to figure out what went wrong with collection execution, or, more optimistically, what went right, you can drill into the events detailed under each monitor to get the play-by-play action. Below, you can see all the calls that were made as a result of the collection getting triggered by its associated webhook at 2:34pm.
You can also get more in-depth, play-by-play insight by clicking on console log.
So, as if having the ability to trigger collections with a webhook URL wasn't enough, you also get the tracking and performance visibility normally afforded by Postman Monitors. Next, we'll create a Custom Connector that makes a call to our Postman webhook, completing a circuit between the WS1 Intelligence cloud and the on-premises Horizon environment.
Creating A Custom Connector To The Webhook
While WS1 Intelligence provides out-of-the-box integrations with UEM, ServiceNow and Slack, for years now it's offered the option of using Custom Connectors to integrate with any solution that supports a REST API. A Custom Connector can be set up to make calls to a Postman webhook by following the same guidance that's always applied to Custom Connector creation. Useful guidance can be found in a post by Andreano Lanusse and Adam Hardy called, "Workspace ONE Custom Connector Samples." Along with providing incredibly useful samples, the article lays out the steps for creating your own Custom Connectors. The basic process is to craft an API call in Postman, save a successful result of the call, export the call as a JSON collection, then import the exported JSON into Intelligence while creating a Custom Connector. Accordingly, I went to Postman and created a new collection called, "Disconnect Horizon Desktop - Execute webhook," placing in it a single call to the webhook URL that triggers the, "Disconnect Horizon Session," collection detailed earlier.
We can pass variables from WS1 Intelligence through a webhook. In this example we're passing an AD username from Intelligence as a value for, "username2." The triggered collection is designed to ingest this parameter and target its search accordingly. Before exporting this collection, you need to execute this call successfully, then save the result as a sample.
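One detail worth spelling out is how the triggered collection actually picks that value up. Per the Postman webhooks documentation, the body you POST to the webhook URL is surfaced to the collection run as a global named previousRequest, so a Pre-request Script along these lines on the collection's first call (a sketch; double-check the current docs for the exact mechanics) can hand the value off to the "user" global the later calls search on:
// Pre-request Script on the collection's first request
// Postman exposes the webhook payload to the run as the "previousRequest" global
const payload = JSON.parse(pm.globals.get("previousRequest"));

// Intelligence passes the AD username as "username2"; hand it to the
// "user" global that the ad-users-or-groups script compares against
pm.globals.set("user", payload.username2);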
When you export, go with the, "Collection v2.1," option and the exported JSON will download. Next, go to the WS1 Intelligence console, navigate to Integrations --> Outbound Connectors, and select Add Custom Connector. For the base URL, you'll enter the base URL of your webhook, https://newman-api.getpostman.com.
Next, you're prompted to import your exported collection. I've consistently run into challenges importing my own hand-made custom connectors at this point, with an error message of, "Invalid Postman JSON file: Content-Type header must be present."
This pitfall is referenced in the sample custom connector guidance article, which cautions, "Note: Consider adding headers as Content-Type: application/json. If you do not add headers as the content type JSON, the APIs can default to XML and XML does not work with custom connections." Accordingly, one way I've gotten around this challenge is by copying the header from the working samples and inserting it into my custom connectors. So it's all about replacing the default header in these exported collections, from what's displayed here:
"method": "POST",
"header": [
],
"body": {
With this:
"method": "POST",
"header": [
{
"key": "Content-Type",
"name": "Content-Type",
"value": "application/json",
"type": "text" }
],
"body": {
Once I made this edit to my exported collections, the imports completed successfully. In the end, after following this entire process for each of the collections I'd created a webhook for, I had these actions available from my outbound connector within Intelligence:
While each action leverages a different collection, all actions traverse the same basic path:
Intelligence --> Postman webhook --> UAG --> Horizon REST API
To summarize, you have Intelligence triggering the Postman webhook based on reporting and automation configured within Intelligence. The calls within the collection are executed from the Postman cloud, traversing the UAG web reverse proxy to the internal Horizon Connection Server. Information about the environment is ascertained through a handful of initial calls and then leveraged by subsequent calls to target the automations within the internal Horizon environment.
Security Considerations
Exploring an option like this is destined to bring up security concerns. Below are a few I've run across, as well as some relevant considerations.
Storing Credentials In Postman: Yes, scary indeed, particularly given that Horizon REST API credentials require root access for Horizon administration. However, any credentials stored in a Postman variable in your collections will be, "encrypted on the server-side before storage." Further, Postman has recently introduced support for MFA when you register using a Google-based account. While both encryption and MFA take the edge off this concern, it should also be considered that the REST API credential account doesn't necessarily have any special AD rights.
Accepting Horizon Admin Credentials Through A Public URL: Having to open up an administrative REST API of your internal Connection Server to the external world is certainly a bit nerve-wracking. However, Professional and Enterprise Postman customers have the option to run their monitors with static IPs. So, through firewall rules you can limit access to your UAG appliance to the public IPs used by Postman Monitors. That certainly reduces your risk. Also, while it hasn't been built yet, there are definitely Postman customers asking for the ability to leverage certificate auth for Postman Monitors. (I have seen client certificate authentication work through UAG for Postman requests from laptops, but it's not supported from Monitors yet.)
Triggering Administrative Actions Through Webhooks: I'll forgive anyone for being nervous about raining down ruthless automation from the sky based on calls to webhooks. However, my understanding is that webhooks are often known to rely on security by obscurity. The Postman webhook URLs are pretty long and ugly and I'm not sure how easily they're ascertained. I've had monitors running for over a month now and I haven't seen a single unsolicited request. Further, these webhooks aren't exposing folks to any credentials or direct access to Horizon. Bad guys can make calls to them for cheap thrills or random maliciousness, but the chances of them getting meaningful access to anything don't strike me as high. They're simply calls to perform administrative tasks, with impact that depends on what's been automated. In addition, all these calls are tracked through Postman Monitors, so you would have a paper trail. All that said, if push comes to shove, there definitely appear to be some do-it-yourself options for securing webhooks.
I'm Still Freaked Out: Yeah, I get it. I think if security is a real sticking point for your team, you could always develop your own full-blown REST API. While developing your own API would not be for the faint of heart, this post should provide a clear path forward to guide your development.
I certainly respect that there are security considerations and concerns to address before implementing these adaptations. However, I think the subject is much more debatable than it seems at first blush, and for some folks the benefits could outweigh the risks. Is the juice worth the squeeze? Well, depending on your use case, the juice could be awfully sweet. (If everything about these Custom Connector adaptations sounds awesome to you, but security is a real gotcha, I'd love for you to leave some comments, particularly around what use cases you have in mind.)
Final Thoughts
The integration between WS1 Intelligence and Horizon detailed in this article is complicated and a lot to take in. In a cantankerous mood you might go so far as to say, "it's a hot mess." But, you know what's often the case with hot messes? They hot, and this solution is absolutely, utterly, freaking gorgeous! Driving automation against a Horizon environment based on a data lake in the cloud? Hot!!! Further, there's potential for these adaptations to span far beyond the Horizon use case. UAG can extend the reach of Intelligence to any REST API within an on-premises environment. Postman webhooks can increase the sophistication of REST API calls made to any 3rd party solution. Together, these adaptations significantly expand the reach and efficacy of Intelligence Custom Connectors.
Finally, as complex as the Horizon integration is, the overall objective is very much in line with the trajectory of VMware's EUC stack. "If you can't bring the virtual desktop to the cloud, bring the cloud to the desktop," seems to be the battle cry for the entire Horizon suite, with more and more functionality getting shifted to the cloud even if workloads must remain on-premises. Past success with SaaS-based EUC solutions like UEM, Access and Intelligence not only enhances Horizon security, but also represents a shift to cloud management VMware is striving to emulate for the Horizon stack. For that matter, all of VMware seems to be charging in that direction, including vSphere itself. In that light, the solution detailed in this article seems more like acceleration toward a very probable destiny rather than some off-the-wall innovation. While this seemingly destined future isn't here today, in the meantime, if you've got the will for this functionality, there's a way.