Citrix ADC on AWS – Deploying HA Setup

Reading Time: 9 minutes

Hello everybody, and sorry for being a little quiet on the blog recently, but COVID-19 happened and there is still a ton of work to do. I guess you can relate because we are all sitting in the same boat 🙂 At the beginning of this year I jumped into a cool project where I had to migrate a customer's F5 ICA proxy to a Citrix ADC high availability pair in AWS. I had done this before on Microsoft Azure and learned the hard way that things work differently in a public cloud. Shout-out to Daniel Weppeler & Ben Splittberger: that was some “fun” until everything was running 😉 I expected that building the HA setup on AWS with the knowledge I had collected during the Azure deployment would be easier, but unfortunately everything is different on AWS, and it really took me some time to put the pieces together. While writing this blog post we have already migrated all users to the Citrix Gateway running on AWS (across two zones) and everything is running smoothly (fail-over included). I guess some readers are now thinking: “Why the hell are you doing high availability in a public cloud? It's so much easier to do GSLB across two zones!” Well, that was out of the question because the customer had already bought the VPX licenses (Standard > No GSLB), and in addition they didn't want to maintain two dedicated appliances whose configuration is not synchronized across both nodes. Alright, let's get started on how to deploy HA, which is a no-brainer for on-premises deployments but so much different when running your instances on AWS.

Overview

If we take a look at the architecture diagram from the Citrix documentation, we can see that the VPX instance has three network interfaces attached. Don't try a one-arm deployment; it will not work! Many people like one-arm configurations because they are the easiest option and you don't need to think about routing or maybe even make use of PBR. On AWS, a three-arm configuration is the recommended way. This means we need to create the following networks and attach them to the instance:

1.) Management
2.) VIP (Client Facing)
3.) SNIP (Backend Communication)

From a technical perspective it would be enough to work with two interfaces (management & data traffic), but it's recommended to segregate between public and private networks. I never tried it and focused on the three-arm configuration. In addition, it's important to understand that there is a huge difference between:

a.) deploy high availability in the same zone
b.) deploy high availability across different zones

In this post we are going to focus on option b. Keep in mind that before firmware 13.0 build 41.x the fail-over was handled differently, with ENI (Elastic Network Interface) migration on the AWS side. Make sure you are running a newer build because this method is deprecated. The ENI is nothing more than a network interface attached to the VM; just picture a vNIC on VMware or any other hypervisor. If you are considering working with private IP addresses (VIP), you need firmware 13.0 build 67.39 or later. Don't get confused yet, more on that later.

Ways of Deployment

Before we can start configuring the ADC we need to provision the instances in our AWS VPC. If you have never heard of a VPC, it stands for “Virtual Private Cloud” and is a logically isolated section where you can run your virtual machines. Let's assume our VPC is located in the segment “10.161.69.0/24”. This means we can have subnets within that IP address range. Since we strive for a deployment across two zones, we need the following subnets in our VPC.

Name | IPv4 CIDR | IP Range | Description
Management (Private-A) | 10.161.69.32/28 | 10.161.69.33 – 10.161.69.46 | NSIP
Services (Public-A) | 10.161.69.0/27 | 10.161.69.1 – 10.161.69.30 | VIPs
Transfer (Private-A) | 10.161.69.48/28 | 10.161.69.49 – 10.161.69.62 | SNIP
Citrix Infrastructure (Private-A) | 10.161.69.64/26 | 10.161.69.65 – 10.161.69.126 | VDAs

Management (Private-B) | 10.161.69.160/28 | 10.161.69.161 – 10.161.69.174 | NSIP
Services (Public-B) | 10.161.69.128/27 | 10.161.69.129 – 10.161.69.158 | VIPs
Transfer (Private-B) | 10.161.69.176/28 | 10.161.69.177 – 10.161.69.190 | SNIP
Citrix Infrastructure (Private-B) | 10.161.69.192/26 | 10.161.69.193 – 10.161.69.254 | VDAs
Please be aware that you cannot use the first four IP addresses (and the last one) in a subnet because they are reserved by AWS!
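
To make this a bit more tangible, here is a minimal boto3 sketch of how the zone-A subnets from the table could be created. The VPC ID is a placeholder, and the zone-B subnets would be created the same way in the second availability zone.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

VPC_ID = "vpc-0123456789abcdef0"   # placeholder, use your own VPC ID

# Zone-A subnets from the table above (zone B works the same way in eu-central-1b)
subnets_zone_a = {
    "Management (Private-A)":            "10.161.69.32/28",
    "Services (Public-A)":               "10.161.69.0/27",
    "Transfer (Private-A)":              "10.161.69.48/28",
    "Citrix Infrastructure (Private-A)": "10.161.69.64/26",
}

for name, cidr in subnets_zone_a.items():
    subnet = ec2.create_subnet(
        VpcId=VPC_ID,
        CidrBlock=cidr,
        AvailabilityZone="eu-central-1a",
    )["Subnet"]
    # Tag the subnet so it shows up with a readable name in the console
    ec2.create_tags(Resources=[subnet["SubnetId"]],
                    Tags=[{"Key": "Name", "Value": name}])
    print(name, subnet["SubnetId"])
```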

Regarding the provisioning of the instances you have several options. I am not naming all of them, but these are the most common ones:

I am not going into detail about the provisioning process itself, but if your goal is to deploy an HA pair across two AWS zones you need to work with HA INC mode. If you have never heard of INC mode, let me give you a short summary: if your NSIPs are located in different subnets, you need to enable INC (Independent Network Configuration) mode when creating the high availability setup. With INC mode enabled, not everything in the configuration gets synced across both nodes; exceptions are, for example:

  • Subnet IPs
  • Routes
  • VLANs
  • Route Monitors

Since our primary and secondary nodes are hosted in different data centers, you need to work with INC. When building an HA pair in a single AWS zone you will not need INC. Just make sure that you do not mix up the order of the instance interfaces. A sketch of the HA-INC pairing itself follows after the interface overview below.

Working

VPX Instance #1

eth1: Management (NSIP)
eth2: Frontend (VIP)
eth3: Backend (SNIP)

VPX Instance #2

eth1: Management (NSIP)
eth2: Frontend (VIP)
eth3: Backend (SNIP)

Not-Working

VPX Instance #1

eth1: Management (NSIP)
eth2: Frontend (VIP)
eth3: Backend (SNIP)

VPX Instance #2

eth1: Management (NSIP)
eth2: Backend (SNIP)
eth3: Frontend (VIP)
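
Once the instances are provisioned with the correct interface order, the HA-INC pairing itself is just one step per node. As a hedged sketch, here is how it could look via the NITRO REST API (the NSIPs and credentials are placeholders; verify the field names against the NITRO reference for your firmware build):

```python
import requests

NSIP_A = "10.161.69.36"    # NSIP of the node in zone A (placeholder)
NSIP_B = "10.161.69.164"   # NSIP of the node in zone B (placeholder)
HEADERS = {
    "X-NITRO-USER": "nsroot",
    "X-NITRO-PASS": "<password>",
    "Content-Type": "application/json",
}

def add_ha_peer(local_nsip: str, peer_nsip: str) -> None:
    """Add the peer as HA node 1 with INC enabled (NSIPs live in different subnets)."""
    payload = {"hanode": {"id": 1, "ipaddress": peer_nsip, "inc": "ENABLED"}}
    resp = requests.post(
        f"https://{local_nsip}/nitro/v1/config/hanode",
        json=payload,
        headers=HEADERS,
        verify=False,   # lab only - the default management cert is self-signed
    )
    resp.raise_for_status()

# Run against both nodes so each one knows its peer.
add_ha_peer(NSIP_A, NSIP_B)
add_ha_peer(NSIP_B, NSIP_A)
```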

Prerequisites

  • Make sure that you create the needed IAM role with the required permissions (a sketch follows after this list). This is mandatory; without it the fail-over in AWS will simply not work. Attach the created role to both ADC instances.
  • The NSIP address of each instance must be configured on the default (primary) ENI (Elastic Network Interface).
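
As a sketch of what the role could look like with boto3: the role name is hypothetical, and the action list below only reflects the EC2 permissions commonly documented for EIP and route migration, so please verify it against the Citrix documentation for your firmware build.

```python
import json
import boto3

iam = boto3.client("iam")

ROLE_NAME = "citrix-adc-ha-role"   # hypothetical name

# Allow EC2 instances to assume the role
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# EC2 actions the ADC needs to move EIPs and rewrite routes on fail-over.
# Verify this list against the Citrix documentation for your build.
ha_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "ec2:DescribeInstances",
            "ec2:DescribeAddresses",
            "ec2:AssociateAddress",
            "ec2:DisassociateAddress",
            "ec2:DescribeRouteTables",
            "ec2:CreateRoute",
            "ec2:DeleteRoute",
            "ec2:ModifyNetworkInterfaceAttribute",
        ],
        "Resource": "*",
    }],
}

iam.create_role(RoleName=ROLE_NAME,
                AssumeRolePolicyDocument=json.dumps(trust_policy))
iam.put_role_policy(RoleName=ROLE_NAME, PolicyName="citrix-adc-ha",
                    PolicyDocument=json.dumps(ha_policy))

# To attach the role to the instances, wrap it in an instance profile and
# assign that profile to both VPX instances.
iam.create_instance_profile(InstanceProfileName=ROLE_NAME)
iam.add_role_to_instance_profile(InstanceProfileName=ROLE_NAME,
                                 RoleName=ROLE_NAME)
```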

High Availability with Elastic IP Addresses

You will need to work with Elastic IP addresses (EIP) if you want to publish a VIP to the Internet. This will be our Citrix Gateway vServer, which should be reachable from any location around the globe. An Elastic IP address is a reserved (static) public IPv4 address that is associated with an EC2 instance. They are called elastic because you can detach them and attach them to another instance, which is exactly what we need when a fail-over is initiated and the secondary node is promoted to primary. To make the flow more visual I created the following diagram of the architecture.

(1) A client accesses the Citrix Gateway service. The user browses to “apps.corp.com”, which resolves to the AWS Elastic IP address 18.158.10.199.

(2) The virtual IP hosting the Citrix Gateway service is located in the private IP range of the VPC subnet “Services (Public-A)”. In this case: 10.161.69.13.

(3) After successful authentication the user launches a published application. The ICA/HDX connection to the VDA happens over the SNIP “10.161.69.53”, which is located in the network “Transfer (Private-A)”.

(4) If you take a closer look at the VIP (10.161.69.13) you can see that the virtual IP is the same in both zones. To make the routing work we need to work with IP sets on the ADC. The IPset allows us to use an additional IP address for this service. From an AWS perspective you will have the IP “10.161.69.13” on the network interface in eu-central-1a and the IP “10.161.69.148” on the network interface in eu-central-1b.

If an HA fail-over happens, the EIP is migrated to the instance in zone B. We ran several tests and the HDX session (TCP-based) reconnects in under 10 seconds.
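
Under the hood the appliance talks to the EC2 API for this. Conceptually, the EIP move boils down to a single call like the following boto3 sketch (all IDs are placeholders; the ADC performs this itself through the attached IAM role, so you never run it manually):

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

# Re-associate the Elastic IP with the client-facing ENI of the new primary.
ec2.associate_address(
    AllocationId="eipalloc-0a1b2c3d4e5f67890",   # the Elastic IP (18.158.10.199), placeholder ID
    NetworkInterfaceId="eni-0fedcba9876543210",  # client-facing ENI of the new primary in zone B, placeholder
    PrivateIpAddress="10.161.69.148",            # the zone-B VIP bound via the IPset
    AllowReassociation=True,                     # implicitly detaches the EIP from the old primary
)
```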

In this example our backend servers and VDAs are located in the networks “10.161.69.64/26” and “10.161.69.192/26”. Make sure that you create the needed routing entries on the ADC. Since we are working with HA-INC, this needs to happen independently in each zone. The gateway IP will always be the gateway address of your transfer network in the respective zone. You can see there are a lot of additional networks in the routing table because most of the VDAs are still running in the customer's on-premises datacenter.
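
As a hedged sketch, the VDA routes could be added per node via the NITRO API like this (NSIPs are placeholders; the gateway is the first usable address of the transfer subnet in that zone, which AWS reserves for its router):

```python
import requests

HEADERS = {"X-NITRO-USER": "nsroot", "X-NITRO-PASS": "<password>",
           "Content-Type": "application/json"}

def add_route(nsip: str, network: str, netmask: str, gateway: str) -> None:
    """Add a static route on the node reachable at the given NSIP."""
    payload = {"route": {"network": network, "netmask": netmask, "gateway": gateway}}
    requests.post(f"https://{nsip}/nitro/v1/config/route",
                  json=payload, headers=HEADERS, verify=False).raise_for_status()

# Zone-A node: routes to both VDA subnets via the Transfer (Private-A) gateway.
add_route("10.161.69.36", "10.161.69.64",  "255.255.255.192", "10.161.69.49")
add_route("10.161.69.36", "10.161.69.192", "255.255.255.192", "10.161.69.49")

# Zone-B node: same destinations via the Transfer (Private-B) gateway.
add_route("10.161.69.164", "10.161.69.64",  "255.255.255.192", "10.161.69.177")
add_route("10.161.69.164", "10.161.69.192", "255.255.255.192", "10.161.69.177")

# Any on-premises networks hosting VDAs need equivalent routes as well.
```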

Routing Table in Zone A

Now let's take a closer look at the configuration so you know what the final setup looks like.

  • Here we can see the IP mapping of the Elastic IP address to the VIP
  • Summary of the ADC Instance in Zone A. You can see the assigned private IPv4 addresses
  • Summary of the ADC Instance in Zone B. You can see the assigned private IPv4 addresses
  • Overview of the Citrix Gateway Virtual Servers
  • Handling of IPset

Hints:

  • If you are having issues with the fail-over process, there is a dedicated log file available under: /var/log/cloud-ha-daemon.log
  • If you do not configure an IPset, the configuration will not synchronize. Make sure to configure this before checking the secondary node (see the sketch below).
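
For reference, here is a hedged sketch of the IPset part via the NITRO API, using the zone-B VIP from the diagram (10.161.69.148) and a hypothetical IPset name. Resource and field names should be double-checked against the NITRO reference for your build:

```python
import requests

NSIP_A = "10.161.69.36"    # NSIP of the current primary (placeholder)
HEADERS = {
    "X-NITRO-USER": "nsroot",
    "X-NITRO-PASS": "<password>",
    "Content-Type": "application/json",
}
BASE = f"https://{NSIP_A}/nitro/v1/config"

def post(resource: str, payload: dict) -> None:
    requests.post(f"{BASE}/{resource}", json=payload,
                  headers=HEADERS, verify=False).raise_for_status()

# 1. Add the zone-B address as an additional VIP on the appliance.
post("nsip", {"nsip": {"ipaddress": "10.161.69.148",
                       "netmask": "255.255.255.224",
                       "type": "VIP"}})

# 2. Create the IPset and bind the additional VIP to it.
post("ipset", {"ipset": {"name": "ipset_gateway"}})
post("ipset_nsip_binding", {"ipset_nsip_binding": {"name": "ipset_gateway",
                                                   "ipaddress": "10.161.69.148"}})

# 3. Reference the IPset on the Citrix Gateway vServer (the "-ipset" parameter
#    on the vpn vserver) so the service answers on the VIP of both zones.
```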

High Availability with Private IP Addresses

In the previous chapter I covered how to publish a VIP to the Internet with the help of Elastic IP addresses. In some use cases you might just want to load balance a service and access it from inside the VPC, or maybe even from another subnet which is connected via AWS Direct Connect or a site-to-site VPN. Let's assume our requirement is to not publish the StoreFront load balancer to the Internet; in that case we can work with private IP addresses as well. This can be done on the same ADC appliances, but you will need firmware 13.0 build 67.39 or higher. I assume high availability in INC mode is already configured and the IAM role is applied to both instances. Before we can create an internal private VIP we need to determine a subnet which can be used for our vServers. The requirement is that this network does not overlap with the VPC network. Translated to our example: IP addresses from the range “10.161.69.0/24” cannot be used. We need to create a “dummy” network.

Name | IPv4 CIDR | IP Range | Description
LB-Internal | 10.161.47.192/26 | 10.161.47.193 – 10.161.47.254 | VIPs for Internal Load Balancing

SNIP: 10.161.47.254

It is important to add a SNIP for this LB-Internal network, otherwise the internal communication will break! Do not miss this step, because it is not mentioned in the documentation. If you skip this part and create a session profile for Citrix Gateway with the StoreFront LB VIP, you will never be able to reach the StoreFront service. You will receive error messages like “internal server error” after the authentication process.
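
A minimal NITRO sketch of adding that SNIP (NSIP and credentials are placeholders; since HA-INC does not synchronize subnet IPs, repeat this on the second node):

```python
import requests

NSIP = "10.161.69.36"   # NSIP of the primary node (placeholder)
HEADERS = {"X-NITRO-USER": "nsroot", "X-NITRO-PASS": "<password>",
           "Content-Type": "application/json"}

# Add the SNIP for the LB-Internal dummy network (10.161.47.192/26).
payload = {"nsip": {"ipaddress": "10.161.47.254",
                    "netmask": "255.255.255.192",
                    "type": "SNIP"}}
requests.post(f"https://{NSIP}/nitro/v1/config/nsip",
              json=payload, headers=HEADERS, verify=False).raise_for_status()
```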

After the network range is defined we need to create a route inside the VPC which points to the primary ADC instance.
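
With boto3 this route could look like the following sketch (route table and ENI IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

# Point the LB-Internal dummy network at the Transfer ENI of the current primary instance.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",          # placeholder route table of the VPC
    DestinationCidrBlock="10.161.47.192/26",       # the dummy network for internal VIPs
    NetworkInterfaceId="eni-0a1b2c3d4e5f67890",    # Transfer (Private-A) ENI of the primary, placeholder
)
```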

The last configuration on the AWS side is to disable Source/Dest. Check on the ENI of the primary instance which should be used to access the VIPs. We are going to disable this on the ENI of the network “Transfer (Private-A)”. If you are asking yourself why this only needs to be done on the primary instance: during a fail-over this is configured automatically on the secondary (new primary) node, and the routing table is modified as well (IAM role needed).
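
This is a single API call as well; here is a boto3 sketch with a placeholder ENI ID:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

# Disable Source/Dest. Check on the Transfer (Private-A) ENI of the primary
# instance so it may forward traffic for the LB-Internal network.
ec2.modify_network_interface_attribute(
    NetworkInterfaceId="eni-0a1b2c3d4e5f67890",   # placeholder
    SourceDestCheck={"Value": False},
)
```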

With the prerequisites in place, we can now create our first load balancing vServer with a private IP address. This can be done without adding IPsets, so there is nothing else to take care of.
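
A hedged NITRO sketch of such a vServer, using a hypothetical name and a VIP from the LB-Internal range (field names should be verified against the NITRO reference for your build):

```python
import requests

NSIP = "10.161.69.36"   # NSIP of the primary node (placeholder)
HEADERS = {"X-NITRO-USER": "nsroot", "X-NITRO-PASS": "<password>",
           "Content-Type": "application/json"}

# A StoreFront load balancing vServer on a private VIP from the dummy network.
payload = {"lbvserver": {"name": "lb_vsrv_storefront",   # hypothetical name
                         "servicetype": "SSL",
                         "ipv46": "10.161.47.200",        # VIP from LB-Internal (example)
                         "port": 443}}
requests.post(f"https://{NSIP}/nitro/v1/config/lbvserver",
              json=payload, headers=HEADERS, verify=False).raise_for_status()
```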

After creating the LB vServer you can already access the VIP from inside the VPC. For networks outside the VPC, for example a client network in the HQ or a branch office, you need to point your routing for the “LB-Internal” network to AWS. Contact the networking team if you are not the one in control of everything.

Summary

First of all I want to say thank you to Farhan Ali and Arvind Kandula from Citrix, who helped me to get the final configuration working. I hope this post helps people out there to understand the deployment architecture on AWS and gives a more realistic view of the deployment steps alongside the Citrix documentation. In the end it's not that hard to configure, but you need to know how everything plays together. If you have any questions or improvements please feel free to contact me.
