What's Changed?
Admins can deploy Tunnel Container to their Linux distribution of choice, so long as it's running Docker and is SSH accessible. It's a significant shift away from the Unified Access Gateway (UAG) appliance, the preferred Tunnel deployment mechanism for years now. UAG is a hardened Linux appliance, currently AlmaLinux based, that includes support for all Omnissa edge services. Though it's an incredibly robust and reliable solution, there are some drawbacks. It doesn't support in-place upgrades, with version changes requiring the deployment of new virtual appliances. Further, to maintain supportability, any modification directly to the appliance's system state is strictly forbidden. So, no additional OS hardening is allowed and there's no support for installing additional agents.
At the other end of the spectrum is Tunnel Container, a model where the Tunnel service is decoupled and abstracted away from the underlying OS using Docker. With Tunnel Container we lose the safety and predictability of a preconfigured, hardened and tested appliance, but gain the freedom to use our Linux distribution of choice and harden the OS any way we see fit.
Unlike Unified Access Gateway, with Tunnel Container deployments we're no longer leveraging PowerShell, ini files and appliance images. Further, there are no dependencies on underlying hypervisors. Instead, we're leveraging the cross-platform Tunnel CLI, dux, to deploy Tunnel Container. Based on the configuration defined in a ts_manifest.yml file, dux connects directly to the target Linux hosts via SSH to remotely stand up and configure the Tunnel gateway service. There's no need for WS1 admins to understand Docker or Kubernetes, with the dux CLI handling this complexity on the admin's behalf.
Though Tunnel Container adoption introduces significant change, as far as the WS1 Tunnel deployment as a whole goes, lots of things stay exactly the same.
What's The Same?
Once deployed, Tunnel Container provides the same old Tunnel gateway service we've been working with for over a decade. The networking requirements are almost exactly the same, save for the port connectivity required for administration. Communication and security between end user devices and the Tunnel service is exactly the same as it has been. Further, as with UAG based instances of Tunnel gateway, the Tunnel Container based service communicates with the UEM tenant through outbound 443 traffic to retrieve basic configuration instructions and status updates regarding trusted endpoints. Finally, as with UAG based deployments, there are Basic or Cascade Mode deployments to choose from.
![Basic mode deployment with Tunnel Container]()
From the perspective of a WS1 admin solely working through the UEM console, there's absolutely no difference between a Tunnel gateway instance delivered through UAG or Tunnel Container. In fact, you could have both a UAG based instance and a Tunnel Container based instance of the Tunnel service working side by side under the same Tunnel Configuration. Accordingly, if you handled load balancing properly, you could deploy Tunnel Container based instances of the Tunnel service to your existing environments, then decommission UAG based instances without downtime, a non-disruptive blue-green migration.
![Tunnel Configuration within the WS1 UEM console]()
While Tunnel Container may seem rather exotic and new to traditional WS1 admins, most of their prior experience with Tunnel is still very relevant and germane. If I were introducing Tunnel Container to a Workspace ONE newbie, I'd say the most challenging thing about Tunnel Container isn't understanding container management, but getting up to speed on the way Workspace ONE Tunnel has functioned for nearly a decade and a half now. On the other hand, if you're a grizzled WS1 admin that's worked with Tunnel for years, shoot, Tunnel Container adoption is going to be fairly straightforward and, at the risk of sounding really nerdy, pretty fun and interesting.
A Quick Demo Of The Deployment
Before walking through the nuts and bolts of a Tunnel Container deployment, here's a quick video demonstration of what it looks like.
Next, we'll walk through the setup requirements on AlmaLinux.
Preparing AlmaLinux 10.1 for Tunnel Container
Preparing AlmaLinux with the bare minimum requirements to host Tunnel Container boils down to four tasks: basic OS setup, SSH key setup, Docker installation and firewall configuration.
Basic OS Setup
After downloading the 10.1 iso from https://almalinux.org/get-almalinux/, I uploaded it to my vSphere environment. Then I created a new VM with the iso attached through the virtual CD/DVD drive.
Upon powering up there was an option to boot from the ISO and begin the installation.
Eventually, I found my way to the Installation Summary menu where I could go on to define key configuration for the OS setup.
The most involved steps were regarding networking and the configuration of the default ethernet controller. From the Installation Summary menu I clicked on, "Network and Hostname," then highlighted the default ethernet controller and clicked, "Configure." First, I disabled IPv6.
Then I navigated to IPv4 Settings and configured a static IP address along with other network settings.
Navigating back to the Installation Summary screen, I went with the option to enable the root account. Then, to simplify the administration required to support SSH sessions from dux, I enabled the option, "Allow root SSH login with password."
SSH Key Setup
After the OS install was complete, I confirmed I could log in through SSH using the root account:
ssh root@10.0.0.190
In anticipation of Tunnel Container deployment requirements, from the macOS machine I intended to run dux from, I generated an SSH key using the following command:
ssh-keygen -t ed25519 -C "root@10.0.0.190"
Then I copied this SSH key to the AlmaLinux instance from my macOS machine using ssh-copy-id:
ssh-copy-id root@10.0.0.190
At this point my SSH sessions started working against the AlmaLinux instance without requiring a password. While this automatic login using an SSH key is lovely in and of itself, it's critical to automation when deploying Tunnel Container through dux.
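Since dux depends on this non-interactive login, it's worth sanity checking it before moving on. Here's a small sketch of how you might do that (a hypothetical helper, not part of dux; BatchMode makes ssh fail outright rather than fall back to a password prompt):

```shell
# Verify key-based SSH works without a password prompt.
# BatchMode=yes makes ssh fail instead of prompting interactively.
check_ssh_key_auth() {
    if ssh -o BatchMode=yes -o ConnectTimeout=5 "$1" true; then
        echo "key auth OK for $1"
    else
        echo "key auth FAILED for $1"
    fi
}
```

For this lab that would be `check_ssh_key_auth root@10.0.0.190`. If it reports a failure, fix key-based auth before attempting any dux deployment.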
Docker Installation
Next, I installed Docker from Docker's official repository with the following commands:
dnf --refresh update
dnf upgrade
dnf install yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
dnf install docker-ce docker-ce-cli containerd.io docker-compose-plugin
Then, I started and enabled the Docker service with:
sudo systemctl start docker
sudo systemctl enable docker
Last, I confirmed it was running with:
sudo systemctl status docker
Firewall Configuration
In anticipation of the port connectivity requirements for Tunnel gateway, I opened up TCP port 8443 with the following commands:
sudo firewall-cmd --zone=public --add-port=8443/tcp --permanent
sudo firewall-cmd --reload
sudo firewall-cmd --list-ports
Setting Up dux for Command And Control
Dux is a cross-platform command line interface for deploying Tunnel Container. It's currently supported on macOS or RHEL-based Linux distributions, like AlmaLinux, Rocky or CentOS. For my lab I leveraged macOS and the brew-based install option detailed in the official Tunnel Container documentation. To install Homebrew (brew) you can follow the instructions detailed on the Homebrew home page.
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
Then, to actually install dux:
brew tap wsonetunnel/tunnel
brew install dux
A good way to test a successful install is the version command for dux:
dux version
To progress with the deployment, the next step was to execute the dux init command to create a sample template, a ts_manifest.yml file that defines the Tunnel Container deployment.
dux init
The template is created within the dux directory created by the dux install. The directory path varies based on dux version and macOS type but is confirmed as part of the dux init execution.
After executing dux init, I navigated to /usr/local/var/opt/omnissa/dux and found the freshly created manifest file.
When executing deployments with dux, the dux utility will parse values from this YAML file and deploy Tunnel Container accordingly. This ts_manifest.yml file is very similar to the ini files traditionally used for UAG deployments.
Red Is The New Black And YAML Is The New INI
UAG appliances can be deployed automatically via PowerShell, an appliance OVA file and an ini configuration file. Tunnel Containers, on the other hand, are deployed through the dux CLI, the Tunnel Container image and ts_manifest.yml. In a nutshell, for UAG veterans, YAML is the new ini. The ts_manifest.yml file is configured similarly to the UAG ini files, with settings like UEM instance, group ID, and authentication settings for the UEM tenant. However, since it's a container deployment model, there's no need to define configuration settings associated with standing up an appliance, like gateway addresses, DNS servers or subnet masks. For context, here's the ts_manifest.yml for the deployment demonstrated in the video above and detailed in the next section. I've pulled out all the comments to make it easier to understand.
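Since the file itself isn't reproduced here, this is a rough sketch of its shape based on the settings discussed below. Treat the exact key names and nesting as assumptions for illustration, not a verbatim copy; the commented template generated by dux init is the authoritative reference.

```yaml
# Hypothetical sketch of ts_manifest.yml; key names are assumptions
# drawn from this article's discussion, not actual dux output.
uem:
  api_server_url: https://cnXXXX.awmdm.com   # your UEM tenant URL
  group_id: mylab
  oauth_client_id: <client ID from OAuth Client Management>
  oauth_client_secret: <client secret>
tunnel_server:
  image: <Tunnel Container image filename from Customer Connect>
  ssh_login_credentials:
    username: root
    ssh_key_path: ~/.ssh/id_ed25519
  host:
    - ip: 10.0.0.190
      server_role: 0   # 0 = Basic mode
  perf_tune: 1         # run the dux-generated performance tuning script
```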
As you can see, the configuration starts with some basic tenant info, in this case the UEM tenant URL and group_id. Then, for credentials to authenticate to the tenant, I'm leveraging the new OAuth option offered through Tunnel Container. This involves generating an OAuth client under Groups And Settings -> Configurations -> OAuth Client Management.
Under the tunnel_server parent key I've provided the name of the Tunnel Container image downloaded from Customer Connect and placed in the "images" subdirectory of the dux folder at /usr/local/var/opt/omnissa/dux. As far as the ssh_login_credentials go, this section is based on the SSH key generation I performed when standing up AlmaLinux earlier. These credentials are required for dux to reach out to the target Linux instance and set up the container. The actual Linux host to connect to is specified under the host section, where I've provided the IP address of the AlmaLinux instance while also defining its server role as Basic by setting it to 0. Finally, I've enabled the option to run the performance tuning script, generated by dux init, by setting the value of perf_tune to 1.
For more information on the configuration of ts_manifest.yml, there's the official documentation on Tunnel Container. Even more relevant are the comments included within ts_manifest.yml when it's first generated with dux init. While I pulled the comments out for this overview, they're very relevant and should be the first place to look for guidance.
Deploying A Tunnel Instance Through dux
To get Tunnel Container delivered and configured on a Linux endpoint, you leverage the dux deploy command. Before pushing the container out, you can check that the manifest is correctly configured using the dry run option, -d.
dux deploy -d
With a successful dry run, you can skip ahead to the actual deployment with confidence. To rid yourself of pesky yes prompts, you can leverage the -y option.
dux deploy -y
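For repeat deployments, the dry run and the real deployment can be chained so the deploy only fires when validation passes. A minimal sketch (a hypothetical wrapper of my own, not part of dux; it assumes dux is on the PATH and ts_manifest.yml is already configured):

```shell
# Hypothetical wrapper: run the real deployment only if the dry run succeeds.
deploy_if_valid() {
    if dux deploy -d; then
        dux deploy -y
    else
        echo "Dry run failed; fix ts_manifest.yml before deploying." >&2
        return 1
    fi
}
```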
You can also confirm the expected port is open on the endpoint. In the case of this deployment we're looking for port 8443. We can confirm this with:
ss -tulpn
Knowing for a fact that port 8443 is now open on the Linux host we can run a curl command against that port from the macOS endpoint.
curl -v telnet://10.0.0.190:8443
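If curl or telnet aren't handy on the machine you're testing from, bash itself can perform the same reachability check through its built-in /dev/tcp pseudo-device. A small sketch (the host and port below are just this lab's values):

```shell
# Check whether a TCP port accepts connections using bash's /dev/tcp.
# Usage: port_open <host> <port>  ->  prints "open" or "closed"
port_open() {
    local host=$1 port=$2
    if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
        echo "open"
    else
        echo "closed"
    fi
}
```

For this deployment, `port_open 10.0.0.190 8443` should print "open".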
Next, we can validate network communication between the new Tunnel instance and WS1 UEM by logging into the WS1 UEM console and navigating to the Tunnel Configuration. From the Tunnel Configuration page, you can click on the "Test Connection" option.
While this "Test Connection" option validates connectivity between the Tunnel gateway service and the UEM tenant, it doesn't confirm that the Tunnel gateway service will be accessible from endpoint devices attempting to reach out to the Tunnel hostname. So, theoretically, you could have a successful test connection but a Tunnel hostname and port that aren't even accessible to the outside world. To test the accessibility of the Tunnel host URL from the outside world, we could rely on good old-fashioned telnet commands or network troubleshooting tools of choice to validate things like DNS or port connectivity. Or there's my favorite option, a tool I built myself at troubleshoot.evengooder.com.
Troubleshoot.evengooder.com will not only confirm port connectivity from the outside world, but it will also report attributes from the SSL cert used on the VPN port Tunnel leverages. This SSL cert is pulled down from the WS1 UEM tenant to the Tunnel instance and has tell-tale attributes like an expected SSL thumbprint, expiration date and "Acceptable Client Cert CA Name," which typically indicates the WS1 UEM tenant.
For an overview of using UAG Troubleshooter to test Tunnel, check out the article, Tunnel Edge Scan For Workspace ONE UEM.
Ongoing Maintenance And Support
Dux is going to be your primary mechanism for monitoring, troubleshooting and lifecycle management. To see the command line options available, you can leverage the built-in help function of dux.
dux -h
There's also the official Tunnel Container documentation as well as the readme file for the dux cli.
Of all the commands, the dux status command is probably the most relevant one to start off with.
dux status -j
For collecting logs from the Tunnel gateway instances remotely, quite a nifty feature, there's the dux logs command.
dux logs
As you can see, it copies the logs directly to the machine you're running dux on, a nice little improvement over how log collection is handled on UAG appliances.
Adapting This Process To Ubuntu Server 24.04 LTS
To show off the versatility of this solution, I'm going to walk through adapting this process for Ubuntu. If we configure Ubuntu similarly to the AlmaLinux instance, there will be no need to change ts_manifest.yml at all. Like the AlmaLinux setup, I'd break down the Ubuntu setup process into three tasks: basic OS setup, SSH access and the Docker install.
Basic OS Setup
The version I'm leveraging is Ubuntu 24.04 LTS, downloaded from the Ubuntu website. After uploading the iso image to the vSphere data store I configured a VM to connect to the iso at power up. At boot up I was presented with an option to install Ubuntu Server.
For type of installation, I went with the default of Ubuntu Server.
I skipped the option to upgrade to Ubuntu Pro, then went on to configure the server's name and a user account name. I also went with the defaults for storage configuration. As with the AlmaLinux setup, one of the most involved actions during setup was the network configuration.
Next, I went with the option to Install OpenSSH server so that I would have the ability to ssh into the Ubuntu instance.
SSH Access
Ubuntu doesn't set a root password by default, so first I set one:
sudo passwd root
Then, to provide the root account with ssh access, I made an edit to /etc/ssh/sshd_config, changing PermitRootLogin to yes.
sudo nano /etc/ssh/sshd_config
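If you'd rather script this edit than open nano, the same change can be made non-interactively with sed. A small sketch (a hypothetical helper of mine that takes the config file path as an argument; in practice you'd point it at /etc/ssh/sshd_config with root privileges):

```shell
# Set PermitRootLogin to yes in the given sshd_config file.
# Handles both commented (#PermitRootLogin ...) and uncommented lines.
enable_root_login() {
    sed -i 's/^#\?[[:space:]]*PermitRootLogin.*/PermitRootLogin yes/' "$1"
}
```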
Then I restarted ssh with the following command:
sudo systemctl restart ssh
Next, I generated the ssh key with:
ssh-keygen -t ed25519 -C “root@10.0.0.190”
Then copied it to the Ubuntu host with:
ssh-copy-id root@10.0.0.190
At this point, I was successfully able to ssh into the Ubuntu instance as root using the ssh key.
Docker Install
Fortunately, installing Docker on Ubuntu is really easy, especially when using the convenience script option detailed within the Docker documentation titled, Install Docker Engine on Ubuntu. With this option, the install boils down to two commands:
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
I Hate Change But This Model Is Pretty Damn Cool
I'm old, don't like change and, in fact, didn't like change even when I was young. So, leave my things alone and please don't mess with my things, and get off my lawn! When I first heard about Tunnel Container I curmudgeonly shrugged it off, while mumbling something about UAG being good enough. However, after working with Tunnel Container firsthand, I am now a believer. Yeah, there are definitely customers who may continue to stick with the UAG based deployment model for Tunnel. But for folks interested in advanced Tunnel features or the flexibility to use their Linux distribution of choice, there's little reason not to give Tunnel Containers a try.
Additional Resources
Here are some incredibly useful additional resources: