Always On VPN IKEv2 Load Balancing with KEMP LoadMaster

Internet Key Exchange version 2 (IKEv2) is an IPsec-based VPN protocol with configurable security parameters that allows administrators to ensure the highest level of security for Windows 10 Always On VPN clients. It is the protocol of choice for deployments that require the best possible protection for communication between remote clients and the VPN server. IKEv2 has some unique requirements when it comes to load balancing, however. Because it uses UDP on multiple ports, configuring the load balancer requires some additional steps for proper operation. This article demonstrates how to enable IKEv2 load balancing using the KEMP LoadMaster load balancer.

IKEv2 and NAT

IKEv2 VPN security associations (SAs) begin with a connection to the VPN server that uses UDP port 500. During this initial exchange, if it is determined that the client, server, or both are behind a device performing Network Address Translation (NAT), the connection switches to UDP port 4500 and the connection establishment process continues.

IKEv2 Load Balancing Challenges

Since UDP is connectionless, there’s no guarantee that, when the conversation switches from UDP 500 to UDP 4500, the load balancer will forward the request to the same VPN server on the back end. If the load balancer forwards the UDP 500 session from a VPN client to one real server, then forwards the UDP 4500 session to a different VPN server, the connection will fail. The load balancer must therefore be configured to ensure that both UDP 500 and 4500 traffic from the same VPN client is always forwarded to the same real server.

Port Following

To meet this unique requirement for IKEv2 load balancing, it is necessary to use a feature on the KEMP LoadMaster load balancer called “port following”. Enabling this feature ensures that a VPN client using IKEv2 will always have its UDP 500 and 4500 sessions forwarded to the same real server.
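
To illustrate the requirement conceptually, the short Python sketch below models a persistence table keyed only on the client’s source IP address, which is effectively what port following provides. The server names and hashing scheme are illustrative only, not LoadMaster internals.

```python
# Conceptual sketch (not LoadMaster code): why UDP 500 and 4500 must share a
# persistence table. Server names and the hash scheme are illustrative only.
import hashlib

REAL_SERVERS = ["vpn1.example.com", "vpn2.example.com"]  # assumed back-end pool
persistence_table = {}  # client source IP -> chosen real server

def pick_server(client_ip: str, dest_port: int) -> str:
    """Return the real server for this client, reusing any prior decision.

    Without port following, the UDP 500 and UDP 4500 virtual services keep
    separate persistence tables, so the second flow may land on a different
    server and the IKEv2 negotiation fails. Sharing one table keyed only on
    the client source IP models what port following provides.
    """
    if client_ip not in persistence_table:
        digest = hashlib.md5(client_ip.encode()).hexdigest()
        persistence_table[client_ip] = REAL_SERVERS[int(digest, 16) % len(REAL_SERVERS)]
    return persistence_table[client_ip]

# The initial IKE_SA_INIT exchange (UDP 500) and the NAT-T follow-up (UDP 4500)
# both reach the same real server because the lookup ignores the port.
print(pick_server("203.0.113.10", 500))
print(pick_server("203.0.113.10", 4500))
```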

Load Balancing IKEv2

Open the web-based management console and perform the following steps to enable load balancing of IKEv2 traffic on the KEMP LoadMaster load balancer.

Create the Virtual Server

  1. Expand Virtual Services.
  2. Click Add New.
  3. Enter the IP address to be used by the virtual server in the Virtual Address field.
  4. Enter 500 in the Port field.
  5. Select UDP from the Protocol drop-down list.
  6. Click Add this Virtual Service.
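
For administrators who prefer to script the configuration, the LoadMaster also exposes a web API. The sketch below shows roughly what creating the UDP 500 virtual service might look like in Python; the hostname, credentials, and addresses are placeholders, and the addvs command and parameter names are assumptions that should be verified against the API documentation for your firmware version.

```python
# Hedged sketch: create the UDP 500 virtual service via the LoadMaster web API.
# Hostname, credentials, and addresses are placeholders; the command and
# parameter names are assumptions and may differ by firmware version.
import requests

LOADMASTER = "https://lm.example.com"   # assumed management address
AUTH = ("bal", "password")              # assumed API credentials

resp = requests.get(
    f"{LOADMASTER}/access/addvs",
    params={"vs": "203.0.113.50", "port": "500", "prot": "udp"},
    auth=AUTH,
    verify=False,  # only if the management interface uses a self-signed certificate
)
print(resp.status_code, resp.text)
# Repeat with port 4500 to create the second virtual service described below.
```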

Add Real Servers

  1. Expand Real Servers.
  2. Click Add New.
  3. Enter the IP address of the VPN server in the Real Server Address field.
  4. Click Add This Real Server.
  5. Repeat the steps above for each VPN server in the cluster.
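
A rough scripted equivalent, again assuming the LoadMaster web API and using placeholder addresses and credentials, might look like this:

```python
# Hedged sketch: add each RRAS server to the UDP 500 virtual service via the
# LoadMaster web API. Command and parameter names are assumptions; adjust to
# match your firmware's API reference. Repeat for the UDP 4500 virtual service.
import requests

LOADMASTER = "https://lm.example.com"
AUTH = ("bal", "password")
VIP, PORT, PROT = "203.0.113.50", "500", "udp"   # identifies the virtual service
REAL_SERVERS = ["10.0.0.11", "10.0.0.12"]        # assumed RRAS server addresses

for rs in REAL_SERVERS:
    resp = requests.get(
        f"{LOADMASTER}/access/addrs",
        params={"vs": VIP, "port": PORT, "prot": PROT, "rs": rs, "rsport": PORT},
        auth=AUTH,
        verify=False,
    )
    print(rs, resp.status_code)
```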

Repeat all the steps above to create another virtual server using UDP port 4500.

Enable Layer 7 Operation

  1. Click View/Modify Services below Virtual Services in the navigation tree.
  2. Select the first virtual server and click Modify.
  3. Expand Standard Options.
  4. Uncheck Force L4.
  5. Check Transparency (additional configuration may be required – details here).
  6. Select Source IP Address from the Persistence Options drop-down list.
  7. Choose an appropriate value from the Timeout drop-down list.
  8. Choose an appropriate setting from the Scheduling Method drop-down list.
  9. Click Back.
  10. Repeat these steps on the second virtual server.
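
If scripting these settings, the same options can likely be applied through the LoadMaster web API’s modvs command. The parameter names below (forcel4, transparent, persist, persisttimeout, schedule) are assumptions based on that API and may differ between firmware releases.

```python
# Hedged sketch: apply the Layer 7 settings to both IKEv2 virtual services via
# the LoadMaster web API. Parameter names and values are assumptions; confirm
# them against the API reference for your firmware version.
import requests

LOADMASTER = "https://lm.example.com"
AUTH = ("bal", "password")

for port in ("500", "4500"):                  # both IKEv2 virtual services
    resp = requests.get(
        f"{LOADMASTER}/access/modvs",
        params={
            "vs": "203.0.113.50", "port": port, "prot": "udp",
            "forcel4": "0",            # uncheck Force L4
            "transparent": "1",        # enable Transparency
            "persist": "src",          # Source IP Address persistence
            "persisttimeout": "300",   # assumed 5-minute timeout
            "schedule": "rr",          # assumed round robin scheduling
        },
        auth=AUTH,
        verify=False,
    )
    print(port, resp.status_code)
```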

Enable Port Following

  1. Click View/Modify Services below Virtual Services in the navigation tree.
  2. Select the first virtual server and click Modify.
  3. Expand Advanced Properties.
  4. Select the companion IKEv2 virtual server from the Port Following drop-down list (for example, when modifying the UDP 500 virtual server, select the UDP 4500 virtual server, and vice versa).
  5. Click Back.
  6. Repeat these steps on the second virtual server.
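
A scripted version of this step might look something like the sketch below. The followvsid parameter name and the use of numeric virtual service indexes are assumptions; confirm both against the LoadMaster API reference before using.

```python
# Hedged sketch: point each IKEv2 virtual service at its companion via port
# following. The followvsid parameter name and the numeric virtual service
# indexes are assumptions, not confirmed LoadMaster API syntax.
import requests

LOADMASTER = "https://lm.example.com"
AUTH = ("bal", "password")

# Assumed index pairs: virtual service 1 is UDP 500, virtual service 2 is UDP 4500.
for vs_index, follow_index in ((1, 2), (2, 1)):
    resp = requests.get(
        f"{LOADMASTER}/access/modvs",
        params={"vs": str(vs_index), "followvsid": str(follow_index)},
        auth=AUTH,
        verify=False,
    )
    print(vs_index, resp.status_code)
```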

Demonstration Video

The following video demonstrates how to enable IKEv2 load balancing for Windows 10 Always On VPN using the KEMP LoadMaster Load Balancer.

Summary

With the KEMP LoadMaster load balancer configured to use port following, Windows 10 Always On VPN clients using IKEv2 can be assured that their connections will always be delivered to the same back-end VPN server, resulting in reliable load balancing for IKEv2 connections.

Additional Information

Windows 10 Always On VPN Certificate Requirements for IKEv2

Windows 10 Always On VPN Protocol Recommendations for Windows Server RRAS

Always On VPN Multisite with Azure Traffic Manager

Eliminating single points of failure is crucial to ensuring the highest levels of availability for any remote access solution. For Windows 10 Always On VPN deployments, the Windows Server 2016 Routing and Remote Access Service (RRAS) and Network Policy Server (NPS) servers can be load balanced to provide redundancy and high availability within a single datacenter. Additional RRAS and NPS servers can be deployed in another datacenter or in Azure to provide geographic redundancy if one datacenter is unavailable, or to provide access to VPN servers based on the location of the client.

Multisite Always On VPN

Unlike DirectAccess, Windows 10 Always On VPN does not natively include support for multisite. However, multisite geographic redundancy can be implemented using Azure Traffic Manager.

Azure Traffic Manager

Traffic Manager is part of Microsoft’s Azure public cloud solution. It provides Global Server Load Balancing (GSLB) functionality by resolving DNS queries for the VPN public hostname to an IP address of the most optimal VPN server.

Advantages and Disadvantages

Using Azure Traffic Manager has some benefits, but it is not without some drawbacks.

Advantages – Azure Traffic Manager is easy to configure and use. It requires no proprietary hardware to procure, manage, and support.

Disadvantages – Azure Traffic Manager offers only limited health check options. Today, only the HTTP, HTTPS, and TCP protocols can be used to perform endpoint health checks. There is no option to use UDP or ICMP (ping), making monitoring for IKEv2 a challenge.

Note: This scenario assumes that RRAS with Secure Socket Tunneling Protocol (SSTP) or another third-party TLS-based VPN server is in use. If IKEv2 is to be supported exclusively, it will still be necessary to publish an HTTP- or HTTPS-based service for Azure Traffic Manager to monitor site availability.

Traffic Routing Methods

Azure Traffic Manager provides the following methods for routing traffic.

Priority – Select this option to provide active/passive failover. A primary VPN server is defined to which all traffic is routed. If the primary server is unavailable, traffic will be routed to another backup server.

Weighted – Select this option to provide active/active failover. Traffic is routed to all VPN servers equally, or unequally if desired. The administrator defines the percentage of traffic routed to each server.

Performance – Select this option to route traffic to the VPN server with the lowest latency. This ensures VPN clients connect to the server that responds the quickest.

Geographic – Select this option to route traffic to a VPN server based on the VPN client’s physical location.

Multivalue – Select this option to return multiple healthy endpoints in a single DNS response. Endpoints must be specified as IPv4 or IPv6 addresses.

Subnet – Select this option to map defined ranges of client source IP addresses to specific endpoints.

Configure Azure Traffic Manager

Open the Azure management portal and follow the steps below to configure Azure Traffic Manager for multisite Windows 10 Always On VPN.

Create a Traffic Manager Resource

  1. Click Create a resource.
  2. Click Networking.
  3. Click Traffic Manager profile.

Create a Traffic Manager Profile

  1. Enter a unique name for the Traffic Manager profile.
  2. Select an appropriate routing method (described above).
  3. Select a subscription.
  4. Create or select a resource group.
  5. Select a resource group location.
  6. Click Create.
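
The same profile can also be created programmatically. The sketch below uses the azure-mgmt-trafficmanager Python SDK; the subscription ID, resource group, and profile name are placeholders, and model or field names may vary slightly between SDK versions.

```python
# Hedged sketch: create a Traffic Manager profile with the Azure Python SDK
# (azure-identity and azure-mgmt-trafficmanager). Names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.trafficmanager import TrafficManagerManagementClient
from azure.mgmt.trafficmanager.models import Profile, DnsConfig, MonitorConfig

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"   # placeholder
client = TrafficManagerManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

profile = client.profiles.create_or_update(
    resource_group_name="aovpn-rg",                        # assumed resource group
    profile_name="aovpn-tm",                               # assumed profile name
    parameters=Profile(
        location="global",                                 # Traffic Manager profiles are global
        traffic_routing_method="Priority",                 # or Weighted/Performance/Geographic
        dns_config=DnsConfig(relative_name="aovpn-tm", ttl=30),
        monitor_config=MonitorConfig(protocol="HTTPS", port=443, path="/"),
    ),
)
print(profile.dns_config.fqdn)   # the <name>.trafficmanager.net FQDN
```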

Important Note: The name of the Traffic Manager profile cannot be used by VPN clients to connect to the VPN server, since a TLS certificate cannot be obtained for the trafficmanager.net domain. Instead, create a CNAME DNS record that points to the Traffic Manager FQDN and ensure that name matches the subject or a Subject Alternative Name (SAN) entry on the VPN server’s TLS and/or IKEv2 certificates.

Endpoint Monitoring

Open the newly created Traffic Manager profile and perform the following tasks to enable endpoint monitoring.

  1. Click Configuration.
  2. Select HTTPS from the Protocol drop-down list.
  3. Enter 443 in the Port field.
  4. Enter /sra_%7BBA195980-CD49-458b-9E23-C84EE0ADCD75%7D/ in the Path field.
  5. Enter 401-401 in the Expected Status Code Ranges field.
  6. Update any additional settings, such as DNS TTL, probing interval, tolerated number of failures, and probe timeout, as required.
  7. Click Save.
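
To confirm what Traffic Manager will see when it probes a VPN server, a quick check of the SSTP listener can be run from any machine with Internet access. A healthy RRAS server should answer this path with HTTP 401, which is why the expected status code range is set to 401-401. The hostname in the sketch below is a placeholder.

```python
# Hedged sketch: simulate the Traffic Manager health probe against the RRAS
# SSTP listener. A healthy endpoint should return 401 (authentication required).
import requests

SSTP_PATH = "/sra_%7BBA195980-CD49-458b-9E23-C84EE0ADCD75%7D/"
url = "https://vpn1.example.com" + SSTP_PATH      # placeholder VPN server FQDN

resp = requests.get(url, timeout=10)
print(resp.status_code)                           # expect 401 from a healthy SSTP listener
```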

Endpoint Configuration

Follow the steps below to add VPN endpoints to the Traffic Manager profile.

  1. Click Endpoints.
  2. Click Add.
  3. Select External Endpoint from the Type drop-down list.
  4. Enter a descriptive name for the endpoint.
  5. Enter the Fully Qualified Domain Name (FQDN) or the IP address of the first VPN server.
  6. Select a geography from the Location drop-down list.
  7. Click OK.
  8. Repeat the steps above for any additional datacenters where VPN servers are deployed.
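
Endpoints can also be added with the azure-mgmt-trafficmanager Python SDK. In the sketch below, the endpoint names, FQDNs, and Azure locations are placeholders, and the endpoint type string should be confirmed against your SDK version.

```python
# Hedged sketch: add an external endpoint for each VPN server to the Traffic
# Manager profile. Names, FQDNs, and locations are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.trafficmanager import TrafficManagerManagementClient
from azure.mgmt.trafficmanager.models import Endpoint

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"   # placeholder
client = TrafficManagerManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

vpn_sites = [
    ("vpn-us-west", "vpn1.example.com", "West US"),
    ("vpn-us-east", "vpn2.example.com", "East US"),
]

for name, fqdn, location in vpn_sites:
    client.endpoints.create_or_update(
        resource_group_name="aovpn-rg",
        profile_name="aovpn-tm",
        endpoint_type="ExternalEndpoints",
        endpoint_name=name,
        parameters=Endpoint(
            type="Microsoft.Network/trafficManagerProfiles/externalEndpoints",
            target=fqdn,
            endpoint_location=location,
        ),
    )
```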

Summary

Implementing multisite by placing VPN servers in multiple physical locations will ensure that VPN connections can be established successfully even when an entire datacenter is offline. In addition, active/active scenarios can be implemented, where VPN client connections are routed to the most optimal datacenter based on a variety of parameters, including current server load or the client’s current location.

Additional Information

Windows 10 Always On VPN Hands-On Training Classes

Deployment Considerations for DirectAccess on Amazon Web Services (AWS)

Organizations are rapidly deploying Windows server infrastructure with public cloud providers such as Amazon Web Services (AWS) and Microsoft Azure. With traditional on-premises infrastructure now hosted in the cloud, DirectAccess is also being deployed there more commonly.

Supportability

Interestingly, Microsoft has expressly stated that DirectAccess is not formally supported on their own public cloud platform, Azure. However, there is no formal statement of non-support for DirectAccess hosted on other non-Microsoft public cloud platforms. With supportability for DirectAccess on AWS unclear, many companies are taking the approach that if it isn’t unsupported, then it must be supported. I’d suggest proceeding with caution, as Microsoft could issue formal guidance to the contrary in the future.

DirectAccess on AWS

Deploying DirectAccess on AWS is similar to deploying on premises, with a few notable exceptions, outlined below.

IP Addressing

It is recommended that an IP address be exclusively assigned to the DirectAccess server in AWS.

Prerequisites Check

When first configuring DirectAccess, the administrator will encounter the following warning message.

“The server does not comply with some DirectAccess prerequisites. Resolve all issues before proceed with DirectAccess deployment.”

The warning message itself states that “One or more network adapters should be configured with a static IP address. Obtain a static address and assign it to the adapter.”

IP addressing for virtual machines is managed entirely by AWS. This means the DirectAccess server will have a DHCP-assigned address, even when an IP address is specified in AWS. Assigning static IP addresses in the guest virtual machine itself is also not supported. However, this warning message can safely be ignored.

No Support for Load Balancing

It is not possible to create load-balanced clusters of DirectAccess servers for redundancy or scalability on AWS. This is because enabling load balancing for DirectAccess requires the IP address of the DirectAccess server be changed in the operating system, which is not supported on AWS. To eliminate single points of failure in the DirectAccess architecture or to add additional capacity, multisite must be enabled. Each additional DirectAccess server must be provisioned as an individual entry point.

Network Topology

DirectAccess servers on AWS can be provisioned with one or two network interfaces. Using two network interfaces is recommended, with the external network interface of the DirectAccess server residing in a dedicated perimeter/DMZ network. The external network interface must use either the Public or Private Windows firewall profile. DirectAccess will not work if the external interface uses the Domain profile. For the Public and Private profile to be enabled, domain controllers must not be reachable from the perimeter/DMZ network. Ensure the perimeter/DMZ network cannot access the internal network by restricting network access in EC2 using a Security Group, or on the VPC using a Network Access Control List (ACL) or custom route table settings.

External Connectivity

A public IPv4 address must be associated with the DirectAccess server in AWS. There are several ways to accomplish this. The simplest way is to assign a public IPv4 address to the virtual machine (VM). However, a public IP address can only be assigned to the VM when it is deployed initially and cannot be added later. Alternatively, an Elastic IP can be provisioned and assigned to the DirectAccess server at any time.
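
For reference, allocating and associating an Elastic IP can be scripted with boto3. The region and instance ID below are placeholders.

```python
# Hedged sketch: allocate an Elastic IP and associate it with the DirectAccess
# server's EC2 instance using boto3. Region and instance ID are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")   # assumed region

allocation = ec2.allocate_address(Domain="vpc")       # allocate a new Elastic IP
ec2.associate_address(
    InstanceId="i-0123456789abcdef0",                 # assumed DirectAccess instance
    AllocationId=allocation["AllocationId"],
)
print(allocation["PublicIp"])
```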

An ACL must also be configured for the public IP that restricts access from the Internet to only inbound TCP port 443. To provide additional protection, consider deploying an Application Delivery Controller (ADC) appliance like the Citrix NetScaler or F5 BIG-IP to enforce client certificate authentication for DirectAccess clients.
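
A minimal boto3 sketch for the inbound rule is shown below; the security group ID is a placeholder, and any existing broader rules must still be removed separately.

```python
# Hedged sketch: allow only inbound TCP 443 (IP-HTTPS) from the Internet on the
# security group protecting the DirectAccess server. Group ID is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",                   # assumed security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "IP-HTTPS from the Internet"}],
    }],
)
```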

Network Location Server (NLS)

If an organization is hosting all of its Windows infrastructure in AWS and all clients will be remote, Network Location Server (NLS) availability becomes much less critical than with traditional on-premises deployments. For cloud-only deployments, hosting the NLS on the DirectAccess server is a viable option. It eliminates the need for dedicated NLS servers, reducing costs and administrative overhead. If multisite is configured, ensure that the NLS is not using a self-signed certificate, as this is unsupported.

However, for hybrid cloud deployments where on-premises DirectAccess clients share the same internal network with cloud-hosted DirectAccess servers, it is recommended that the NLS be deployed on dedicated, highly available servers following the guidance outlined here and here.

Client Provisioning

All supported DirectAccess clients will work with DirectAccess on AWS. If the domain infrastructure is hosted exclusively in AWS, provisioning clients can be performed using Offline Domain Join (ODJ). Provisioning DirectAccess clients using ODJ is only supported in Windows 8.x/10. Windows 7 clients cannot be provisioned using ODJ and must be joined to the domain using another form of remote network connectivity such as VPN.

Additional Resources

DirectAccess No Longer Supported in Microsoft Azure

Microsoft Server Software Support for Azure Virtual Machines

DirectAccess Network Location Server (NLS) Guidance

DirectAccess Network Location Server (NLS) Deployment Considerations for Large Enterprises

Provisioning DirectAccess Clients using Offline Domain Join (ODJ)

DirectAccess SSL Offload and IP-HTTPS Preauthentication with Citrix NetScaler

DirectAccess SSL Offload and IP-HTTPS Preauthentication with F5 BIG-IP

Planning and Implementing DirectAccess with Windows Server 2016 Video Training Course

Implementing DirectAccess with Windows Server 2016 Book
