Always On VPN Load Balancing for RRAS in Azure

Previously I wrote about Always On VPN options for Microsoft Azure deployments. In that post I indicated that running Windows Server with the Routing and Remote Access Service (RRAS) role for VPN was an option to be considered, even though it is not a formally supported workload. Despite the lack of support by Microsoft, deploying RRAS in Azure works well and is quite popular. In fact, I recently published some configuration guidance for RRAS in Azure.

Load Balancing Options for RRAS

Multiple RRAS servers can be deployed in Azure to provide failover and redundancy or to increase capacity. While Windows Network Load Balancing (NLB) can be used on-premises for RRAS load balancing, NLB is not supported and does not work in Azure. Instead, there are several other options for load balancing RRAS in Azure, including DNS round robin, Azure Traffic Manager, the native Azure load balancer, the Azure Application Gateway, and a dedicated load balancing virtual appliance.

DNS Round Robin

The easiest way to provide load balancing for RRAS in Azure is to use round robin DNS. However, using this method has some serious limitations. Simple DNS round robin can lead to connection attempts to a server that is offline. In addition, this method doesn’t accurately balance the load and often results in uneven distribution of client connections.
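For reference, DNS round robin is nothing more than multiple A records sharing the same name. A minimal sketch using the Windows Server DnsServer PowerShell module is shown below; the zone name, host name, and addresses are hypothetical, and the same records can be created with any public DNS provider if the zone is not hosted on Windows DNS.

# Two A records with the same name; resolvers hand out the addresses in rotation.
Add-DnsServerResourceRecordA -ZoneName "example.com" -Name "vpn" -IPv4Address "203.0.113.10"
Add-DnsServerResourceRecordA -ZoneName "example.com" -Name "vpn" -IPv4Address "203.0.113.11"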

Azure Traffic Manager

Using Azure Traffic Manager is another alternative for load balancing RRAS in Azure. In this scenario each VPN server will have its own public IP address and FQDN for which Azure Traffic Manager will intelligently distribute traffic. Details on configuring Azure Traffic Manager for Always On VPN can be found here.
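As a rough sketch (not a complete walkthrough), a Traffic Manager profile with external endpoints for two VPN servers might be created with Azure PowerShell along these lines. All names and FQDNs below are hypothetical, and the TCP 443 monitor assumes SSTP or IKEv2 with a listener on that port.

# Create a Traffic Manager profile that probes TCP 443 on each endpoint.
$profile = New-AzTrafficManagerProfile -Name "aovpn-tm" -ResourceGroupName "rg-vpn" `
    -TrafficRoutingMethod Performance -RelativeDnsName "aovpn-contoso" -Ttl 30 `
    -MonitorProtocol TCP -MonitorPort 443

# Add each RRAS server's public FQDN as an external endpoint.
Add-AzTrafficManagerEndpointConfig -EndpointName "vpn1" -TrafficManagerProfile $profile `
    -Type ExternalEndpoints -Target "vpn1.example.com" -EndpointStatus Enabled
Add-AzTrafficManagerEndpointConfig -EndpointName "vpn2" -TrafficManagerProfile $profile `
    -Type ExternalEndpoints -Target "vpn2.example.com" -EndpointStatus Enabled

Set-AzTrafficManagerProfile -TrafficManagerProfile $profile

Clients would then connect to the Traffic Manager FQDN (for example, aovpn-contoso.trafficmanager.net) or a vanity CNAME pointing to it.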

Azure Load Balancer

The native Azure load balancer can be configured to provide load balancing for RRAS in Azure. However, it has some serious limitations. Consider the following.

  • Supports Secure Socket Tunneling Protocol (SSTP) only; does not work with IKEv2.
  • Basic health check functionality (port probe only).
  • Limited visibility.
  • Does not support TLS offload for SSTP.

More information about the Azure Load Balancer can be found here.
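For SSTP-only deployments, the configuration amounts to little more than a TCP port 443 probe and load balancing rule. The Azure PowerShell sketch below illustrates the idea; resource names, location, and SKU are placeholders, the backend VMs still need to be added to the pool, and the probe confirms only that TCP 443 is listening (the basic health check noted above).

$pip   = New-AzPublicIpAddress -Name "vpn-lb-pip" -ResourceGroupName "rg-vpn" -Location "eastus" -AllocationMethod Static -Sku "Standard"
$fe    = New-AzLoadBalancerFrontendIpConfig -Name "vpn-fe" -PublicIpAddress $pip
$pool  = New-AzLoadBalancerBackendAddressPoolConfig -Name "vpn-pool"
$probe = New-AzLoadBalancerProbeConfig -Name "sstp-probe" -Protocol Tcp -Port 443 -IntervalInSeconds 15 -ProbeCount 2
# Source IP affinity is optional but keeps a client pinned to one VPN server.
$rule  = New-AzLoadBalancerRuleConfig -Name "sstp-rule" -Protocol Tcp -FrontendPort 443 -BackendPort 443 `
    -FrontendIpConfiguration $fe -BackendAddressPool $pool -Probe $probe -LoadDistribution SourceIP
New-AzLoadBalancer -Name "vpn-lb" -ResourceGroupName "rg-vpn" -Location "eastus" -Sku "Standard" `
    -FrontendIpConfiguration $fe -BackendAddressPool $pool -Probe $probe -LoadBalancingRule $rule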

Azure Application Gateway

The Azure Application Gateway can be used for load balancing RRAS SSTP VPN connections where advanced capabilities such as enhanced health checks and TLS offload are required. More information about the Azure Application Gateway can be found here.

Load Balancing Appliance

Using a dedicated Application Delivery Controller (ADC) or load balancer is a very effective way to eliminate single points of failure for Always On VPN deployments hosted in Azure. ADCs provide many advanced features and capabilities to ensure full support for all RRAS VPN protocols. In addition, ADCs offer much better visibility and granular control over VPN connections. Many solutions are available as virtual appliances in the Azure Marketplace that can be deployed to provide RRAS load balancing in Azure.

Summary

Deploying Windows Server RRAS in Azure for Always On VPN can be a cost-effective solution for many organizations. Although not a formally supported workload, I’ve deployed it numerous times and it works quite well. Consider using a dedicated ADC to increase scalability or provide failover and redundancy for RRAS in Azure whenever possible.

Additional Information

Windows 10 Always On VPN Options for Azure Deployments

Windows 10 Always On VPN and RRAS in Microsoft Azure

Windows 10 Always On VPN with Microsoft Azure Gateway

DirectAccess Manage Out with ISATAP and NLB Clustering

DirectAccess connections are bidirectional, allowing administrators to remotely connect to clients and manage them when they are out of the office. DirectAccess clients use IPv6 exclusively, so any communication initiated from the internal network to remote DirectAccess clients must also use IPv6. If IPv6 is not deployed natively on the internal network, the Intrasite Automatic Tunnel Addressing Protocol (ISATAP) IPv6 transition technology can be used to enable manage out.

ISATAP Supportability

According to Microsoft’s support guidelines for DirectAccess, using ISATAP for manage out is only supported for single server deployments. ISATAP is not supported when deployed in a multisite or load-balanced environment.

“Not supported” is not the same as “doesn’t work,” though. For example, ISATAP can easily be deployed in single site DirectAccess deployments where load balancing is provided using Network Load Balancing (NLB).

ISATAP Configuration

To do this, you must first create DNS A resource records for the internal IPv4 address of each DirectAccess server, as well as for the internal virtual IP address (VIP) assigned to the cluster.

Note: Do NOT use the name ISATAP. This name is included in the DNS query block list on most DNS servers and will not resolve unless it is removed. Removing it is not recommended either, as it will result in ALL IPv6-enabled hosts on the network configuring an ISATAP tunnel adapter.
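As an example, one possible layout registers all three addresses under a single, non-ISATAP name on a Windows DNS server. The name, zone, and addresses below are hypothetical (two server dedicated IPs plus the internal cluster VIP).

# One A record per DirectAccess server internal (dedicated) IP address, plus one for the internal VIP.
Add-DnsServerResourceRecordA -ZoneName "corp.example.com" -Name "da-manageout" -IPv4Address "172.16.1.11"
Add-DnsServerResourceRecordA -ZoneName "corp.example.com" -Name "da-manageout" -IPv4Address "172.16.1.12"
Add-DnsServerResourceRecordA -ZoneName "corp.example.com" -Name "da-manageout" -IPv4Address "172.16.1.10"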

Once the DNS records have been added, you can configure a single computer for manage out by opening an elevated PowerShell command window and running the following command:

Set-NetIsatapConfiguration -State Enabled -Router [ISATAP FQDN] -PassThru

Once complete, an ISATAP tunnel adapter network interface with a unicast IPv6 address will appear in the output of ipconfig.exe, as shown here.

Running the Get-NetRoute -AddressFamily IPv6 PowerShell command will show routes to the client IPv6 prefixes assigned to each DirectAccess server.

Finally, verify network connectivity from the manage out host to the remote DirectAccess client.
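A quick way to do this is with Test-NetConnection from the manage out host. The client hostname below is hypothetical; name resolution should return the client’s IPv6 address.

# ICMP reachability test; the resolved address should be the DirectAccess client's IPv6 address.
Test-NetConnection -ComputerName "client01.corp.example.com"

# Optionally confirm a specific management port, for example RDP.
Test-NetConnection -ComputerName "client01.corp.example.com" -Port 3389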

Note: There is a known issue with some versions of Windows 10 and Windows Server 2016 that may prevent manage out using ISATAP from working correctly. There’s a simple workaround, however. More details can be found here.

Group Policy Deployment

If you have more than a few systems on which to enable ISATAP manage out, using Active Directory Group Policy Objects (GPOs) to distribute these settings is a much better idea. You can find guidance for creating GPOs for ISATAP manage out here.
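As a rough sketch only, the same two settings applied by the manual command shown earlier (ISATAP state and router name) can be pushed via GPO. The registry-based policy values below are assumptions based on the IPv6 transition technology administrative templates, so verify them against the linked guidance; the GPO name, OU, and router FQDN are hypothetical.

# Create and link a GPO for the manage out hosts (adjust the target OU).
New-GPO -Name "ISATAP Manage Out" | New-GPLink -Target "OU=ManageOut,DC=corp,DC=example,DC=com"

# Assumed policy registry values for "Set ISATAP State" and "Set ISATAP Router Name".
Set-GPRegistryValue -Name "ISATAP Manage Out" -Key "HKLM\SOFTWARE\Policies\Microsoft\Windows\TCPIP\v6Transition" `
    -ValueName "ISATAP_State" -Type String -Value "Enabled"
Set-GPRegistryValue -Name "ISATAP Manage Out" -Key "HKLM\SOFTWARE\Policies\Microsoft\Windows\TCPIP\v6Transition" `
    -ValueName "ISATAP_RouterName" -Type String -Value "da-manageout.corp.example.com"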

DirectAccess Client Firewall Configuration

Simply enabling ISATAP on a server or workstation isn’t all that’s required to perform remote management on DirectAccess clients. The Windows firewall running on the DirectAccess client computer must also be configured to securely allow remote administration traffic from the internal network. Guidance for configuring the Windows firewall on DirectAccess clients for ISATAP manage out can be found here.
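As an illustration only (the linked guidance is authoritative), an inbound rule allowing Remote Desktop from an internal IPv6 prefix might look like the following when distributed to DirectAccess clients. The display name and prefix are placeholders; substitute the prefix used in your environment.

# Allow RDP inbound on DirectAccess clients, scoped to a corporate/ISATAP IPv6 prefix.
# The rule must apply to the Public and Private profiles to be effective for roaming clients.
New-NetFirewallRule -DisplayName "DirectAccess Manage Out - RDP" -Direction Inbound `
    -Protocol TCP -LocalPort 3389 -RemoteAddress "2001:db8:1::/48" `
    -Profile Public,Private -Action Allow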

ISATAP Manage Out for Multisite and ELB

The configuration guidance in this post will not work if DirectAccess multisite is enabled or external load balancers (ELB) are used. However, ISATAP can still be used. For more information about enabling ISATAP manage out with external load balancers and/or multisite deployments, fill out the form below and I’ll provide you with more details.

Summary

Once ISATAP is enabled for manage out, administrators on the internal network can remotely manage DirectAccess clients wherever they happen to be. Native Windows remote administration tools such as Remote Desktop, Windows Remote Assistance, and the Computer Management MMC can be used to manage remote DirectAccess clients. In addition, enterprise administration tools such as PowerShell remoting and System Center Configuration Manager (SCCM) Remote Control can also be used. Further, third-party remote administration tools such as VNC, TeamViewer, LogMeIn, GoToMyPC, Bomgar, and many others will also work with DirectAccess ISATAP manage out.

Additional Information

ISATAP Recommendations for DirectAccess Deployments

DirectAccess Manage Out with ISATAP Fails on Windows 10 and Windows Server 2016 

DirectAccess Client Firewall Rule Configuration for ISATAP Manage Out

DirectAccess Manage Out and System Center Configuration Manager (SCCM)

Contact Me

Interested in learning more about ISATAP manage out for multisite and external load balancer deployments? Fill out the form below and I’ll get in touch with you.

DirectAccess Manage Out and System Center Configuration Manager (SCCM)

The seamless and transparent nature of DirectAccess makes it wonderfully easy to use. In most cases, it requires no user interaction at all to access internal corporate resources while away from the office. This enables users to be more productive. At the same time, it offers important connectivity benefits for IT administrators and systems management engineers as well.

Always Managed

DirectAccess clients are automatically connected to the corporate network any time they have a working Internet connection. Having consistent corporate network connectivity means they receive Active Directory group policy updates on a regular basis, just as on-premises systems do. Importantly, they check in with internal management systems such as System Center Configuration Manager (SCCM) and Windows Server Update Services (WSUS) servers, enabling them to receive updates in a timely manner. Thus, DirectAccess clients are better managed, allowing administrators to more effectively maintain the configuration state and security posture of all their managed systems, including those that are predominantly field-based. This is especially crucial considering the prevalence of WannaCry, CryptoLocker, and a variety of other types of ransomware.

DirectAccess Manage Out

When manage out is configured with DirectAccess, hosts on the internal network can initiate connections outbound to remote connected DirectAccess clients. SCCM Remote Control and Remote Desktop Connection (RDC) are commonly used to remotely connect to systems for troubleshooting and support. With DirectAccess manage out enabled, these and other popular administrative tools such as VNC, Windows Remote Assistance, and PowerShell remoting can also be used to manage remote DirectAccess clients in the field. In addition, enabling manage out allows for the proactive installation of agents and other software on remote clients, such as the SCCM and System Center Operations Manager (SCOM) agents, third-party management agents, antivirus and antimalware software, and more. A user does not have to be logged on to their machine for manage out to work.

IPv6

DirectAccess manage out requires that connections initiated by machines on the internal network to remote-connected DirectAccess clients must be made using IPv6. This is because DirectAccess clients use IPv6 exclusively to connect to the DirectAccess server. To enable connectivity over the public IPv4 Internet, clients use IPv6 transition technologies (6to4, Teredo, IP-HTTPS), and IPv6 translation components on the server (DNS64 and NAT64) enable clients to communicate with internal IPv4 resources. However, DNS64 and NAT64 only translate IPv6 to IPv4 inbound. They do not work in reverse.

Native or Transition?

It is recommended that IPv6 be deployed on the internal network to enable DirectAccess manage out. This is not a trivial task, however, and many organizations can’t justify the deployment for this one specific use case. As an alternative, an IPv6 transition technology can be used, specifically the Intrasite Automatic Tunnel Addressing Protocol (ISATAP). ISATAP functions as an IPv6 overlay network, allowing internal hosts to obtain IPv6 addresses and routing information from an ISATAP router to support manage out for DirectAccess clients.

ISATAP

When DirectAccess is installed, the server is automatically configured as an ISATAP router. Guidance for configuring ISATAP clients can be found here. Using ISATAP can be an effective approach to enabling DirectAccess manage out for SCCM when native IPv6 is not available, but it is not without its drawbacks.

• Using the DirectAccess server for ISATAP is only supported with single server DirectAccess deployments.
• Using the DirectAccess server for ISATAP does work when using Network Load Balancing (NLB) with some additional configuration, but it is not supported.
• Using the DirectAccess server for ISATAP does not work when an external load balancer is used, or if multisite is enabled.

ISATAP with Load Balancing and Multisite

It is technically possible to enable DirectAccess manage out for SCCM using ISATAP in load-balanced and multisite DirectAccess deployments, however. It involves deploying a separate ISATAP router and some custom configuration, but once in place it works perfectly. I offer this service to my customers as part of a consulting engagement. If you’re interested in restoring DirectAccess manage out functionality to support SCCM remote control, RDC, or VNC in load-balanced or multisite DirectAccess deployments, fill out the form below and I’ll provide you with more information.

Additional Resources

ISATAP Recommendations for DirectAccess Deployments
DirectAccess Manage Out with ISATAP Fails on Windows 10 and Windows Server 2016
DirectAccess Client Firewall Rule Configuration for ISATAP Manage Out
Video: Windows 10 DirectAccess in action (includes manage out demonstration)

Migrating DirectAccess from NLB to External Load Balancer


Introduction

Multiple DirectAccess servers can be deployed in a load-balanced cluster to eliminate single points of failure and to provide scalability for the remote access solution. Load balancing can be enabled using the integrated Windows Network Load Balancing (NLB) or an external physical or virtual load balancer.

NLB Drawbacks and Limitations

NLB is often deployed because it is simple and inexpensive. However, NLB suffers from some serious drawbacks that limit its effectiveness in all but the smallest deployments. For example, NLB uses network broadcasts to communicate cluster heartbeat information. Each node in the cluster sends out a heartbeat message every second, which generates a lot of additional network traffic on the link and reduces performance as more nodes are added. Scalability is limited with NLB too, as only 8 nodes are supported, although the practical limit is 4 nodes. Further, NLB supports only round-robin connection distribution.

External Load Balancer

A better alternative is to implement a dedicated physical or virtual load balancing appliance. A purpose-built load balancer provides additional security, greater scalability (up to 32 nodes per cluster), improved performance, and fine-grained traffic control.

Migrate from NLB to ELB

It is possible to migrate to an external load balancer (ELB) after NLB has already been configured. To do this, follow the guidance provided in my latest blog post on the KEMP Technologies blog entitled “Migrating DirectAccess from NLB to KEMP LoadMaster Load Balancers”.

Additional Resources

DirectAccess Deployment Guide for KEMP LoadMaster Load Balancers

Migrating DirectAccess from NLB to KEMP LoadMaster Load Balancers

Load Balancing DirectAccess with KEMP LoadMaster Load Balancers

DirectAccess Load Balancing Tips and Tricks Webinar with KEMP Technologies

DirectAccess Single NIC Load Balancing with KEMP LoadMaster Load Balancer

Configuring the KEMP LoadMaster Load Balancer for DirectAccess NLS

Enable DirectAccess Load Balancing Video

Implementing DirectAccess with Windows Server 2016 Book

Deploying DirectAccess in Microsoft Azure

Introduction

Many organizations are preparing to implement DirectAccess on Microsoft’s public cloud infrastructure. Deploying DirectAccess in Azure is fundamentally no different than implementing it on premises, with a few important exceptions (see below). This article provides essential guidance for administrators to configure this unique workload in Azure.

Important Note: There has been much confusion regarding the supportability of DirectAccess in Azure. Historically it has not been supported. Recently, it appeared briefly that Microsoft had reversed their earlier decision and was in fact going to support it. However, the Microsoft Server Software Support for Microsoft Azure Virtual Machines document has once again been revised to indicate that DirectAccess is indeed no longer formally supported on Azure. More details can be found here.

Azure Configuration

The following is guidance for configuring network interfaces, IP address assignments, public DNS, and network security groups for deploying DirectAccess in Azure.

Virtual Machine

Deploy a virtual machine in Azure with sufficient resources to meet expected demand. A minimum of two CPU cores should be provisioned; four cores are recommended. Premium SSD storage is optional, as DirectAccess is not a disk-intensive workload.
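For illustration, a four-core VM could be provisioned with Azure PowerShell along these lines. The resource group, names, size, and image below are placeholders; adjust them for your environment.

# Prompt for the local administrator credentials for the new VM.
$cred = Get-Credential

# Create a four-core VM on an existing (or new) virtual network and subnet.
New-AzVM -ResourceGroupName "rg-directaccess" -Name "DA1" -Location "eastus" `
    -Image "Win2016Datacenter" -Size "Standard_D4s_v3" -Credential $cred `
    -VirtualNetworkName "corp-vnet" -SubnetName "da-subnet"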

Network Interfaces

It is recommended that an Azure VM with a single network interface be provisioned for the DirectAccess role. This differs from on-premises deployments where two network interfaces are preferred because deploying VMs in Azure with two NICs is prohibitively difficult. At the time of this writing, Azure VMs with multiple network interfaces can only be provisioned using PowerShell, Azure CLI, or resource manager templates. In addition, Azure VMs with multiple NICs cannot belong to the same resource group as other VMs. Finally, and perhaps most importantly, not all Azure VMs support multiple NICs.

Internal IP Address

Static IP address assignment is recommended for the DirectAccess VM in Azure. By default, Azure VMs are initially provisioned using dynamic IP addresses, so this change must be made after the VM has been provisioned. To assign a static internal IP address to an Azure VM, open the Azure management portal and perform the following steps:

  1. Click Virtual machines.
  2. Select the DirectAccess server VM.
  3. Click Network Interfaces.
  4. Click on the network interface assigned to the VM.
  5. Under Settings click IP configurations.
  6. Click Ipconfig1.
  7. In the Private IP address settings section choose Static for the assignment method.
  8. Enter an IP address for the VM.
  9. Click Save.
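The same change can be made with Azure PowerShell. A minimal sketch follows, assuming a hypothetical NIC name, resource group, and address.

$nic = Get-AzNetworkInterface -Name "da1-nic" -ResourceGroupName "rg-directaccess"
$nic.IpConfigurations[0].PrivateIpAllocationMethod = "Static"
$nic.IpConfigurations[0].PrivateIpAddress = "10.0.0.10"    # must fall within the subnet's address range
Set-AzNetworkInterface -NetworkInterface $nic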

Public IP Address

The DirectAccess VM in Azure must have a public IP address assigned to it to allow remote client connectivity. To assign a public IP address to an Azure VM, open the Azure management portal and perform the following steps:

  1. Click Virtual machines.
  2. Select the DirectAccess server VM.
  3. Click Network Interfaces.
  4. Click on the network interface assigned to the VM.
  5. Under Settings click IP configurations.
  6. Click Ipconfig1.
  7. In the Public IP address settings section click Enabled.
  8. Click Configure required settings.
  9. Click Create New and provide a descriptive name for the public IP address.
  10. Choose an address assignment method.
  11. Click Ok and Save.
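Alternatively, a public IP address can be created and associated with the NIC using Azure PowerShell. Names, location, and allocation method below are placeholders.

# Create a static public IP address and attach it to the VM's NIC.
$pip = New-AzPublicIpAddress -Name "da1-pip" -ResourceGroupName "rg-directaccess" `
    -Location "eastus" -AllocationMethod Static
$nic = Get-AzNetworkInterface -Name "da1-nic" -ResourceGroupName "rg-directaccess"
$nic.IpConfigurations[0].PublicIpAddress = $pip
Set-AzNetworkInterface -NetworkInterface $nic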

Public DNS

If the static IP address assignment method was chosen for the public IP address, create an A resource record in public DNS that resolves to this address. If the dynamic IP address assignment method was chosen, create a CNAME record in public DNS that maps to the public hostname for the DirectAccess server. To assign a public hostname to the VM in Azure, open the Azure management portal and perform the following steps:

  1. Click Virtual machines.
  2. Select the DirectAccess server VM.
  3. Click Overview.
  4. Click Public IP address/DNS name label.
  5. Under Settings click Configuration.
  6. Choose an assignment method (static or dynamic).
  7. Enter a DNS name label.
  8. Click Save.
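If the public zone happens to be hosted in Azure DNS, the CNAME could be created as shown below; the zone, record, and target names are hypothetical. With any other DNS provider, create the equivalent A or CNAME record there.

# CNAME mapping the public DirectAccess name to the Azure DNS name label.
New-AzDnsRecordSet -Name "da" -RecordType CNAME -ZoneName "example.com" -ResourceGroupName "rg-dns" `
    -Ttl 3600 -DnsRecords (New-AzDnsRecordConfig -Cname "da1.eastus.cloudapp.azure.com")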

Note: The subject of the SSL certificate used for the DirectAccess IP-HTTPS listener must match the name of the public DNS record (A or CNAME) entered previously. The SSL certificate does not need to match the Azure DNS name label entered here.

Network Security Group

A network security group must be configured to allow IP-HTTPS traffic inbound to the DirectAccess server on the public IP address. To make the required changes to the network security group, open the Azure management portal and perform the following steps:

  1. Click Virtual machines.
  2. Select the DirectAccess server VM.
  3. Click Network interfaces.
  4. Click on the network interface assigned to the VM.
  5. Under Settings click Network security group.
  6. Click the network security group assigned to the network interface.
  7. Click Inbound security rules.
  8. Click Add and provide a descriptive name for the new rule.
  9. Click Any for Source.
  10. From the Service drop-down list choose HTTPS.
  11. Click Allow for Action.
  12. Click Ok.
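The equivalent change with Azure PowerShell is sketched below. The NSG and rule names are placeholders, and the final removal line addresses the default RDP rule mentioned in the note that follows (skip it if you still need that rule).

$nsg = Get-AzNetworkSecurityGroup -Name "da1-nsg" -ResourceGroupName "rg-directaccess"

# Allow IP-HTTPS (TCP 443) inbound from any source.
Add-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name "Allow-IPHTTPS" `
    -Direction Inbound -Access Allow -Protocol Tcp -Priority 100 `
    -SourceAddressPrefix "*" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange 443

# Optionally remove the default RDP rule if it is not needed.
Remove-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name "default-allow-rdp"

Set-AzNetworkSecurityGroup -NetworkSecurityGroup $nsg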

Note: It is recommended that the default-allow-rdp rule be removed if it is not needed. At a minimum, scope the rule to allow RDP only from trusted hosts and/or networks.

DirectAccess Configuration

When performing the initial configuration of DirectAccess using the Remote Access Management console, the administrator will encounter the following warning message.

“One or more network adapters should be configured with a static IP address. Obtain a static address and assign it to the adapter.”

This message can safely be ignored because Azure infrastructure handles all IP address assignment for hosted VMs.

The public name of the DirectAccess server entered in the Remote Access Management console must resolve to the public IP address assigned to the Azure VM, as described previously.

Additional Considerations

When deploying DirectAccess in Azure, the following limitations should be considered.

Load Balancing

It is not possible to enable load balancing using Windows Network Load Balancing (NLB) or an external load balancer. Enabling load balancing for DirectAccess requires changing static IP address assignments in the Windows operating system directly, which is not supported in Azure. This is because IP addresses are assigned dynamically in Azure, even when the option to use static IP address assignment is chosen in the Azure management portal. Static IP address assignment for Azure virtual machines is functionally similar to using DHCP reservations on premises.

Note: Technically speaking, the DirectAccess server in Azure could be placed behind a third-party external load balancer for the purposes of performing SSL offload or IP-HTTPS preauthentication, as outlined here and here. However, load balancing cannot be enabled in the Remote Access Management console and only a single DirectAccess server per entry point can be deployed.

Manage Out

DirectAccess manage out using native IPv6 or ISATAP is not supported in Azure. At the time of this writing, Azure does not support IPv6 addressing for Azure VMs. In addition, ISATAP does not work due to limitations imposed by the underlying Azure network infrastructure.

Summary

For organizations moving infrastructure to Microsoft’s public cloud, DirectAccess in Azure can still be an attractive option, even though it is not a formally supported workload. Implementing DirectAccess in Azure is similar to on-premises deployments, with a few crucial limitations. By following the guidelines outlined in this article, administrators can configure DirectAccess in Azure to meet their secure remote access needs with a minimum of trouble.

Additional Resources

Implementing DirectAccess in Windows Server 2016
Fundamentals of Microsoft Azure 2nd Edition
Microsoft Azure Security Infrastructure
DirectAccess Multisite with Azure Traffic Manager
DirectAccess Consulting Services

DirectAccess Load Balancing Tips and Tricks Webinar

Enabling load balancing for DirectAccess deployments is crucial for eliminating single points of failure and ensuring the highest levels of availability for the remote access solution. In addition, enabling load balancing allows DirectAccess administrators to quickly and efficiently add capacity in the event more processing power is required.

DirectAccess includes support for load balancing using integrated Windows Network Load Balancing (NLB) and external load balancers (physical or virtual). External load balancers are the recommended choice as they provide superior throughput, more granular traffic distribution, and greater visibility. External load balancers are also more scalable, with support for much larger DirectAccess server clusters of up to 32 nodes. NLB is formally limited to 8 nodes, but because it operates at layer 2 in the OSI model and relies on broadcast heartbeat messages, it is effectively limited to 4 nodes.

The KEMP Technologies LoadMaster load balancer is an excellent choice for load balancing the DirectAccess workload. To learn more about configuring the LoadMaster with DirectAccess, join me for a free live webinar on Tuesday, August 16 at 10:00AM PDT where I’ll discuss DirectAccess load balancing in detail. I will also be sharing valuable tips, tricks, and best practices for load balancing DirectAccess.

Don’t miss out. Register today!

Additional Resources

DirectAccess Load Balancing Overview

Load Balancing DirectAccess with the KEMP Loadmaster Load Balancer

Maximize your investment in Windows 10 with DirectAccess and the KEMP LoadMaster Load Balancer

KEMP LoadMaster DirectAccess Deployment Guide

DirectAccess Load Balancing Video

Configuring load balancing in DirectAccess is essential for eliminating single points of failure and ensuring the highest level of availability for the solution. The process of enabling load balancing for DirectAccess can be confusing though, as it involves the reassignment of IP addresses from the first server to the virtual IP address (VIP) for the cluster.

In this video I demonstrate how to enable DirectAccess load balancing and explain in detail how IP address assignment works for both Network Load Balancing (NLB) and external load balancers (ELB).

DirectAccess and Citrix NetScaler Webinar

Updated 5/2/2016: The webinar recording is now available online here.

Join me on Tuesday, April 26 at 11:00AM EDT for a live webinar to learn more about integrating the Citrix NetScaler Application Delivery Controller (ADC) with Microsoft DirectAccess. During the webinar, which will be hosted by Petri IT Knowledgebase, you will learn how to leverage the NetScaler to enhance and extend native high availability and redundancy capabilities included with DirectAccess.

Eliminating single points of failure is crucial for enterprise DirectAccess deployments. DirectAccess includes technologies such as load balancing for high availability and multisite for geographic redundancy, but they are somewhat limited. DirectAccess supports integration with third-party solutions like NetScaler to address these fundamental limitations.

NetScaler is an excellent platform that can be configured to improve upon native DirectAccess high availability and redundancy features. It provides superior load balancing compared to native Windows Network Load Balancing (NLB), with more throughput and better traffic visibility, while at the same time reducing resource utilization on the DirectAccess server.

For multisite DirectAccess deployments, the NetScaler can be configured to provide enhanced geographic redundancy, providing more intelligent entry point selection for Windows 8.x and Windows 10 clients and granular traffic control such as weighted request distribution and active/passive site failover.

In addition, the NetScaler can be configured to serve as the DirectAccess Network Location Server (NLS), providing essential high availability for this critical service and reducing supporting infrastructure requirements.

Click here to view the recorded webinar.

Configuring Multicast NLB for DirectAccess

Introduction

DirectAccess in Windows Server 2012 R2 includes support for load balancing using either Windows Network Load Balancing (NLB) or an external physical or virtual load balancer. There are advantages and disadvantages to each, but NLB is commonly deployed due to its cost (free!) and relative ease of configuration. NLB has three operation modes – Unicast, Multicast, and IGMP Multicast. It may become necessary to change the NLB operation mode depending on the environment where DirectAccess is deployed. This article describes when and how to make those changes.

Default Configuration

When NLB is first configured, the default cluster operation mode is set to Unicast. In this configuration, all nodes in the NLB cluster share the same MAC address. The NLB kernel mode driver prevents the switch from learning the MAC address of any node in the cluster by masking it on the wire. When a frame is delivered to the switch where the NLB cluster resides, the switch has no MAC address to switch port mapping, so the frame is delivered to all ports on the switch. This induces switch flooding, which is by design; it is required so that all nodes in the cluster can “see” all traffic. The NLB driver then determines which node will handle the request.

NLB on Hyper-V

Unicast NLB typically works without issue in most physical environments. However, enabling NLB when the DirectAccess server is running on a virtual machine requires some additional configuration. For Hyper-V, the only thing that is required is to enable MAC Address Spoofing on the virtual network adapter as I discussed here. No other changes are required.
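For reference, MAC address spoofing can be enabled on the Hyper-V host with a single PowerShell command. The VM name below is hypothetical; if the VM has more than one network adapter, target the correct adapter.

# Enable MAC address spoofing on the DirectAccess VM's network adapter.
Set-VMNetworkAdapter -VMName "DA1" -MacAddressSpoofing On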

NLB on VMware

For VMware environments, it will be necessary to change the cluster operation mode from unicast to multicast. This is because the VMware hypervisor proactively informs the virtual switch of the virtual machine’s MAC address on startup and during other virtual networking events. When this occurs, all traffic for the NLB Virtual IP Address (VIP) will be delivered to a single node in the cluster. In multicast operation mode, all nodes in the NLB cluster retain their original MAC address and a unique MAC address is assigned to the cluster VIP. As such, there’s no need to prevent the switch from learning the virtual machine’s MAC address.

Configuring Multicast NLB

To enable Multicast NLB, first enable load balancing for DirectAccess using the Remote Access Management console as usual. DO NOT perform the initial configuration of NLB outside of the Remote Access Management console! Before adding another member to the array, open the Network Load Balancing Manager, right-click the cluster and choose Cluster Properties. Select the Cluster Parameters tab and change the Cluster operation mode to Multicast.
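If you prefer PowerShell over the NLB Manager GUI, the operation mode can also be changed with the NLB cmdlets, as sketched below. Given the local NLB Manager caveat described next, run this from one of the cluster hosts or a management station with the NLB tools installed; the host name is hypothetical.

Import-Module NetworkLoadBalancingClusters

# Change the cluster operation mode from unicast to multicast.
Get-NlbCluster -HostName "da1.corp.example.com" | Set-NlbCluster -OperationMode Multicast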

When opening the Network Load Balancing Manager locally on the DirectAccess server, you may receive the following error message:

“Running NLB Manager on a system with all networks bound to NLB might not work as expected. If all interfaces are set to run NLB in “unicast” mode, NLB manager will fail to connect to hosts.”

If you encounter this error message, it will be necessary to run the NLB Manager on another host. You can install the NLB Manager on a Windows Server 2012 R2 system by using the following PowerShell command.

Install-WindowsFeature RSAT-NLB

Optionally you can download and install the Windows Remote Server Administration Tools (RSAT) on a Windows desktop client and manage NLB remotely.

Once this change has been made you can add additional DirectAccess servers to the array using the Remote Access Management console.

Additional Configuration

If you cannot communicate with the cluster VIP from a remote subnet, but can connect to it while on the same subnet, it might be necessary to configure static ARP entries on any routers for the subnet where the NLB cluster resides. Often this is required because routers will reject ARP responses that come from a host with a unicast IP address but carry a multicast MAC address.

DirectAccess and the Free Kemp Technologies LoadMaster

Beginning with Windows Server 2012, DirectAccess includes native support for external load balancers. Where high availability is required (which is most deployments!) the use of an external load balancer (physical or virtual) has many advantages over Windows Network Load Balancing (NLB).

While NLB is easy to configure, it is not without serious drawbacks. NLB relies on network broadcasts, which limits its effectiveness in some environments. In addition, NLB supports only a single load distribution mode, which is round robin. NLB also lacks a convenient monitoring interface.

A dedicated load balancing solution provides more robust load balancing and better, more granular traffic control than NLB. Along with this greater control comes increased traffic visibility, with most solutions providing details and insight into node health, status, and performance. Many solutions also offer Global Server Load Balancing (GSLB) support, which enhances geographic redundancy and improves automatic site selection in multisite deployments.

Often the barrier to adoption for a dedicated external load balancer is cost. Many of the leading solutions are incredibly powerful and feature-rich, but come with a substantial price tag. The Kemp Technologies LoadMaster load balancer is an excellent, cost-effective alternative and works quite well providing load balancing support for DirectAccess. And to make things even more interesting, Kemp recently announced a completely FREE version of their commercial load balancing product.

The Free Kemp Technologies LoadMaster Load Balancer is fully functional and supported for use in production environments. It provides full layer 4-7 support and includes reverse proxy, edge security, web application firewall (WAF) functionality, and GSLB. It can be installed on most major virtualization platforms including Microsoft Hyper-V, VMware, and more. The free LoadMaster is also available on the Microsoft Azure public cloud platform, as well as the VMware and Amazon public cloud platforms.

The free LoadMaster does have some restrictions, however. For example, you cannot create high availability clusters of LoadMasters. Also, the free LoadMaster is limited in terms of network throughput (20Mbps) and SSL/TLS transactions per second (50, using 2048-bit keys). There is also a limit on the number of virtual servers you can create (1000). The free LoadMaster must also have access to the Internet, as it must be able to call home to validate its license every 30 days. You can find a complete model comparison matrix between the free and commercial Kemp LoadMasters in the Kemp LoadMaster Comparison Chart.

As the free version of the Kemp LoadMaster does not support clustering, technically you still have a single point of failure. However, it can still deliver a net improvement in stability and uptime, as the LoadMaster is a purpose-built platform that requires much less servicing and maintenance than a typical Windows server.

For detailed information about configuring the Kemp LoadMaster to provide load balancing services for DirectAccess, be sure to download the DirectAccess Deployment Guide for Kemp LoadMaster Load Balancers. And if you end up liking the free Kemp LoadMaster load balancer (and I’m confident you will!) you can always upgrade to the full commercial release at any time.

For more information about the free Kemp LoadMaster load balancer, visit the Free Kemp LoadMaster Load Balancer page.
