
HCX Migration Event Details

As mentioned in my previous post, I have updated my HCX setup to the very latest version, 4.0. Now it's time to test some of the new features, including the Mobility Migration Event Details.

Introduction

Before we jump into a demonstration let’s quickly recap what the Mobility Migration Event Details feature provides:

  • It provides detailed migration workflow information: it shows the state of a migration, how long a migration remains in a given state, and how long ago a migration succeeded or failed.
  • It provides detailed information for the individual Virtual Machines being migrated from the source to the destination site. It works for Bulk Migration, HCX Replication Assisted vMotion (RAV), and OS Assisted Migration.

So now let’s see how the procedure works.

Procedure and Tests

I decided to migrate two VMs back from my on-prem lab to VMC on AWS. The two VMs are deb10-App01 and deb10-web01.

As you can see, I have already created a site pairing between my source on-prem lab and my VMC on AWS datacenter.

I have also established a Service Mesh between my on-premises lab environment and one of my lab SDDCs.

The Service Mesh is established between my lab and VMC. I have enabled Application Path Resiliency and TCP Flow Conditioning.

The Service Mesh is a construct that associates two Compute Profiles.

Now let's select the VMs for migration. For this example, I have chosen to migrate two VMs with the Bulk Migration option.

In order to launch the migration, I have to click on Migrate.

Next I am going to select the VMs I want to migrate.

I clicked the add button to move VM to a Mobility Group.

I now have to provide the information for Transfer and Placement. I have selected the only possible resource pool (Compute-ResourcePool) and datastore (WorkloadDatastore). I have also switched the migration profile to Bulk Migration and set the folder to VMs migrated to Cloud.

The next step is to validate the migration options by selecting the two VMs and clicking the Validate button.

The result displays only a few warnings, related to the VMware Tools I have installed in my Debian 10 VMs, but the validation is successful.

So I am going to go ahead and start the migrations by clicking the green Go button.

and confirm it by clicking Validate.

At the beginning it shows only 0% Base Sync as the process starts.

We can click on the group info to see more information.

If I click on the group itself, I can see the list of Virtual Machines being migrated.

After a few seconds, the first events start to appear in the window. If I click on an individual VM, I can see the detailed events happening as the migration takes place.

In the lower part of the window, there is a separate section that provides the event information.

This section is divided into multiple parts. Currently we see the Transfer Events section. A specific color coding distinguishes tasks running on-premises from those running at the destination; the darker blue shows the information collected on the target site.

The list of events can be refreshed at any time by clicking on EVENTS.

Once the Base Sync is initiated, we can see the time remaining to transfer the virtual machine. This is really handy for large VMs, as it tells you how long is left before the transfer completes.

When the transfer events finish, meaning the transfer of the VM is completed, a Switch Over Events section appears. This is visible for all of the Virtual Machines.

We can confirm that the switch over is ongoing from the first line.

After the switchover is finished, the last events are Cleanup Events.

If I go back to the Group Info, it shows that one migration is finished and the other one is still ongoing.

All the event details are now listed in all sections.

All my Virtual Machines are now migrated, and we have seen the detailed events and the real state of the migration for individual VMs.

This concludes this post, thank you for reading.

HCX 4.0 is now available.

A new version of my favorite solution for migrating to the Cloud has recently been released and I can't wait to test it.

Let’s have a look at it…

What’s new in HCX 4.0

First of all, let's look at the list of new features for this version. There are multiple great enhancements around Migration, Network Extension, Service Mesh configuration, and Usability.

Migration Enhancements

  • Mobility Migration Event Details: The HCX Migration interface will display detailed event information with a timeline of events from the start of the migration operation.
  • NSX Security Tag Migration: Transfers any NSX Security tags associated with the source virtual machine when selected as an Extended Option for vSphere to vSphere migrations. See Additional Migration Settings.
  • Real-time Estimation of Bulk Migration – HCX analyzes migration metrics and provides an estimation of the time required to complete the transfer phase for every configured Bulk migration. The estimate is shown in the progress bar displayed on the Migration Tracking and Migration Management pages for each virtual machine migration while the transfer is underway. For more information, see Monitoring Migration Progress for Mobility Groups.
  • OS Assisted Migration Scaling – HCX now supports 200 concurrent VM disk migrations across a four Service Mesh scale out deployment. A single Service Mesh deploys one Sentinel Gateway (SGW) and its peer Sentinel Data Receiver (SDR), and continues to support up to 50 active replica disks each. In this Service Mesh scale out model for OSAM, the HCX Sentinel download operation is presented per Service Mesh. See OS Assisted Migration in Linux and Windows Environments.
  • Migrate Custom Attributes for vMotion  – The option Migrate Custom Attributes is added to the Extended Options selections for vMotion migrations. 
  • Additional Disk Formats for Virtual Machines – For Bulk, vMotion, and RAV migration types, HCX now supports these additional disk formats: Thick Provisioned Eager Zeroed, Thick Provisioned Lazy Zeroed. 
  • Force Power-off for In-Progress Bulk Migrations – HCX now includes the option to Force Power-off in-progress Bulk migrations, including the later stages of migration.

Network Extension Enhancements

  • In-Service Upgrade – The Network Extension appliance is a critical component of many HCX deployments, not only during migration but also after migration in a hybrid environment. In-Service upgrade is available for Network Extension upgrade or redeploy operations, and helps to minimize service downtime and disruptions to on-going L2 traffic. See In-Service Upgrade for Network Extension Appliances
  • Note: This feature is currently available for Early Adoption (EA). The In-Service mode works to minimize traffic disruptions from the Network Extension upgrade or redeploy operation to only a few seconds or less. The actual time it takes to return to forwarding traffic depends on the overall deployment environment.
  • Network Extension Details – HCX provides connection statistics for each extended network associated with a specific Network Extension appliance. Statistics include bytes and packets received and transferred, bit rate and packet rate, and attached virtual machine MAC addresses for each extended network. See Viewing Network Extension Details.

Service Mesh configuration Enhancements

HCX Traffic Type Selection in Network Profile – When setting up HCX Network Profiles, administrators can tag networks for a suggested HCX traffic type: Management, HCX Uplink, vSphere Replication, vMotion, or Sentinel Guest Network. These selections then appear in the Compute Profile wizard as suggestions of which networks to use in the configuration. See Creating a Network Profile.

Usability Enhancements

  • HCX now supports scheduling of migrations in DRAFT state directly from the Migration Management interface.
  • All widgets in the HCX Dashboard can be maximized to fit the browser window.
  • The topology diagram shown in the Compute Profile now reflects when a folder is selected as the HCX Deployment Resource.
  • In the Create/Edit Network Profile wizard, the IP Pool/Range entries are visually grouped for readability. 

Upgrading from 3.5.3 to 4.0

The process is straightforward. You just have to go to the HCX console and check the System Updates.

Click the version in the list of available versions to start the update. This will begin by updating the HCX Manager.

Once you have upgraded the HCX Manager, the HCX Network Extension appliances and Interconnect appliances have to be upgraded.

For that, we have to switch to the Service Mesh and select the Appliances tab. In the 'Available Version' column on the right, you will see the latest available build number, 4.0.0-17562724, with a small NEW! flag.

Now you just have to select the appliances (don't select WAN Optimization, as it is updated differently) and click UPDATE APPLIANCE.

I selected the Interconnect (IX) and chose UPDATE APPLIANCE.

A confirmation window is displayed with a warning message recapping the process. Just select Force Update appliance and click the UPDATE button.

A message confirming the launch of the upgrade is displayed.

This message confirms that the process has started and that the remote appliances will be upgraded as well.

After a few seconds, the process starts updating the appliance and you can see the successive tasks and operations in the vCenter Recent Tasks.

First, it deploys the new appliance from the OVF.

Right after, you will see that the appliance is reconfigured.

Step 2 is reconfiguring the appliance

Next the system will finish the update job and power on the newly deployed appliance.

The Tasks tab of the Service Mesh in the HCX Console also details, in real time, all the steps the system follows to upgrade the appliance.

This window shows the Network Extension tasks.

At the end of the operation, you can confirm that all tasks completed successfully if they show a green arrow on the left.

The final result for the IX appliance (Interconnect) with all successful steps

A confirmation that the process is finished successfully will also appear in the console.

You can confirm the process has been successful by reading the Upgrade complete message

Keep in mind that when you start updating your appliances, the appliances at the remote peer sites are upgraded as well (whether the peer is a cloud target like VMC on AWS or AVS, or a target vCenter environment like VCF).

All my appliances are now upgraded and the tunnels show as Up in the console.

Now that I have finalised the upgrades, it's time to evaluate the cool new features. I invite you to discover them in my next post!

Introducing Multi-Edge SDDC on VMC on AWS

The latest M12 release of the SDDC (version 1.12) came with a lot of interesting storage features, including vSAN compression for i3en and TRIM/UNMAP (I will cover it in a future post), as well as new networking features like SDDC Groups, VMware Transit Connect, time-based scheduling of DFW rules, and many more.

One that particularly stands out for me is the Multi-Edge SDDC capability.

Multi-Edge SDDC (or Edge Scaleout)

By default, any SDDC is deployed with a single default Edge (actually a pair of VMs) whose size is based on the SDDC sizing (Medium by default). This Edge can be resized to Large when needed.

Each Edge has three logical connections to the outside world: Internet (IGW), Intranet (TGW or DX Private VIF), and Provider (Connected VPC). These connections share the same host Elastic Network Adapter (ENA) and its limits.

With the latest M12 version, VMC on AWS adds Multi-Edge capability to the SDDC. This gives customers the ability to add capacity for north-south network traffic by simply adding additional Edges.

The goal of this feature is to allow multiple Edge appliances to be deployed, therefore removing some of the scale limitations by:

  • Using multiple host ENAs to spread network load for traffic in/out of the SDDC,
  • Using multiple Edge VMs to spread the CPU/Memory load.
The Edge scale-out feature consists of creating pairs of additional Edges to which specific traffic types can be steered.

To enable the feature, additional network interfaces (ENAs) are provisioned in the AWS network and additional compute capacity is created.

It's important to mention that you need additional hosts in the management cluster of the SDDC to support it, so this feature comes with an additional cost.

Multi-Edge SDDC – Use Cases

The deployment of additional Edges allows for higher network bandwidth for the following use cases:

  • SDDC to SDDC connectivity
  • SDDC to native VPCs
  • SDDC to on-premises via Direct Connect
  • SDDC to the Connected VPC

Keep in mind that for the first three use cases, VMware Transit Connect is mandatory to benefit from the increased network capacity offered by the additional Edges. As a reminder, Transit Connect is a high-bandwidth, low-latency, and resilient connectivity option for SDDC-to-SDDC communication within an SDDC Group. It also enables high-bandwidth connectivity to SDDCs from native VPCs. If you need more information, my colleague Gilles Chekroun has an excellent blog post here.

Multiple Edges allow you to steer specific traffic sets by leveraging Traffic Groups.

Traffic Groups

Traffic Groups are a new concept, similar in a way to source-based routing. Source-based routing selects which route (next hop) to follow based on the source IP address, which can be an individual IP or a complete subnet.

With this new capability, customers can now choose to steer specific traffic sets to a specific Edge.

When you create a traffic group, an additional active Edge (with a standby Edge) is deployed on a separate host. All Edge appliances are deployed with an anti-affinity rule to ensure only one Edge per host, so there need to be 2N+2 hosts in the cluster (where N is the number of traffic groups); for example, a single traffic group requires four hosts.

Each additional Edge then handles traffic for its associated network prefixes. All remaining traffic is handled by the default Edge.

Source-based routing is configured with prefixes defined in prefix lists that can be set up directly in the VMC on AWS Console.

To ensure proper ingress routing from the AWS VPC to the right Edge, the shadow VPC route tables are also updated with the prefixes.

Multi-Edge SDDC requirements

The following requirements must be met in order to leverage the feature:

  • SDDC M12 version is required
  • Transit Connect for SDDC-to-SDDC, SDDC-to-VPC, or SDDC-to-on-prem connectivity
  • SDDC resized to Large
  • Enough capacity in the management cluster

A Large SDDC means that the management appliances and Edges are scaled up from Medium to Large. This is now a customer-driven option that no longer involves technical support, as it's possible to upsize an SDDC directly from the Cloud Console.

A Large SDDC means a higher number of vCPUs and more memory for the management components (vCenter, NSX Manager, and Edges), and there is a minimum of one hour of downtime for the upscaling operation to finish, so it has to be planned during a maintenance window.

Enabling a Multi-Edge SDDC

This follows a three-step process.

First of all, we must define a Traffic Group, which creates the new Edges (in pairs). Each Traffic Group creates an additional active/standby Edge pair. Remember also that the "Traffic Group" Edges always use the Large form factor.

You will immediately see that two additional Edge nodes are being deployed. The new Edges have a name suffix containing "tg".

The next step is to define a Prefix List with specific prefixes. It will contain the source IP addresses of the SDDC virtual machines that will use the newly deployed Edge.

After a few minutes, you can confirm that the Traffic Group is ready:

NB: NSX-T configures source-based routing with the prefixes you define in the prefix list on the CGW as well as on the Edge routers, to ensure symmetric routing within the SDDC.

You just need to click on Set to enter the prefix list. Enter the CIDR range; it could be a /32 if you just want to use a single VM as the source IP.

NB: Up to 64 prefixes can be created in a Prefix List.

When you are done entering the subnets in the prefix list, click Apply and save the Prefix List to create it.

The last step is to associate the Prefix List with the Traffic Group using an Association Map. To do so, click on Edit.

Basically, we now need to tell the traffic group which prefix list to use. Click on ADD IP PREFIX ASSOCIATION MAP:

Then we need to enter the Prefix List and give a name to the Association Map.

Going forward, any traffic that matches the prefix list will use the newly deployed Edge.

Monitoring a Multi-Edge SDDC

Edge nodes cannot be monitored from the VMC Console, but you can always visualise the network rate and consumption through the vCenter web console.

When we look at the list of Edges in vCenter, the default Edge has no "-tg" in its name, so NSX-Edge-0 is the default. Once we add the new traffic group, the new Edge takes over the additional traffic and relieves the load on this default Edge.

NSX-Edge-0-tg-xxx is the new one, and we can see its traffic consumption increase as the new traffic starts to flow over it:

Also, once the new scale-out Edge is deployed, the prefixes in the prefix list use the new Edge as their next hop going forward. This is propagated to the internal route tables of the default Edge as well as the CGW route table.

All of these features are exposed through the API Explorer: the Traffic Group definitions are in the NSX AWS VMC Integration API, and the prefix list definitions are in the NSX VMC Policy API.

In conclusion, remember that a Multi-Edge SDDC does not increase Internet, VPN, or NAT capacity. Also keep in mind that there is a cost associated with it because of the additional hardware requirements.

Configure VPN from VMC to WatchGuard™ Firebox Cloud – Part 4 (Final)

In the previous posts of this series on configuring a VPN connection from the WatchGuard™ Firebox to VMC, I showed you how to set up the Firebox and how to establish a VPN with a native VPC.

In this last post, I will attach the SDDC to the WatchGuard Firebox instance with an IPsec route-based VPN leveraging BGP to allow for dynamic route exchange.

With this configuration, any compute and management segments created inside the SDDC will be advertised over the BGP session established with the Firebox in the transit VPC.

Firebox to SDDC IPSec VPN configuration

Phase 1 – VPC's VPN Configuration

First of all, I need to collect the public IP address of my SDDC. This is possible by logging in to the VMC Console, going to the Networking & Security tab, and selecting the Overview window:

The VPN endpoint IP is displayed as VPN Public IP. This single public IP is used for every VPN established with the SDDC.

N.B.: You can also request additional public IP addresses to assign to workload VMs to allow access to these VMs from the internet. VMware Cloud on AWS provisions the IP address from AWS.

Next I'll collect the BGP local ASN number of the SDDC. Just like IP addresses, ASNs (Autonomous System Numbers) have to be unique on the Internet, and the SDDC uses two numbers: one for the route-based VPN and one for Direct Connect.

To do that, I click on the Edit Local ASN option in the VPN window:

Clicking EDIT LOCAL ASN displays the Local ASN of the SDDC as shown here:

The local ASN of any brand-new SDDC is 65000 by default. You can change it to a value in the range 64521 to 65535 (or 4200000000 to 4294967294).

N.B.: Keep in mind that the remote BGP ASN number needs to be different.

Now it’s time to create a new Customer Gateway and map it to the SDDC settings.

Create a New Customer Gateway

For that, I need to go back to the AWS console, go to the VPC Dashboard, and select Customer Gateways under the VIRTUAL PRIVATE NETWORK menu on the left.

I click Create Customer Gateway, choose Dynamic as the routing option, and enter the SDDC VPN public IP address.

I also need to set the BGP ASN to the SDDC value (65000 by default). Note that it has to be different from the BGP ASN of the Firebox in the transit VPC.
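If you prefer to script this step, here is a minimal boto3 sketch of the same call; the region and the IP address are placeholders for illustration, not values from my lab:

import boto3

# Create the Customer Gateway that represents the SDDC VPN endpoint.
ec2 = boto3.client("ec2", region_name="us-west-2")   # assumed region

response = ec2.create_customer_gateway(
    BgpAsn=65000,               # the SDDC local ASN (65000 by default)
    PublicIp="203.0.113.10",    # placeholder for the SDDC VPN Public IP
    Type="ipsec.1",             # the only supported type
)
print(response["CustomerGateway"]["CustomerGatewayId"])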

Phase 2 – FireBox’s VPN Configuration

Now I have to set up the VPN configuration on the Firebox itself. For this, I connect back to the Fireware Web UI:

  1. Open a web browser and go to the public IP address for your instance of Firebox Cloud at: https://<eth0_public_IP>:8080
  2. Log in with the admin user account. Make sure to specify the passphrase you set in the Firebox Cloud Setup Wizard.

Select VPN > BOVPN Virtual Interfaces on the left and click the lock icon to open the settings window.

Enter a name for the interface (e.g. BoSddc) and switch the Remote Endpoint Type to Cloud VPN or Third-Party Gateway.

In Gateway Settings > Credential Method, select Use Pre-Shared Key and enter a key (note it down, as you will have to use it in the SDDC setup):

In the Gateway Settings–>Gateway Endpoint–>Click ADD.

Select Local Gateway–>Interface: Select Physical: External

Specify the gateway ID for tunnel authentication: Select By IP address: 34.210.196.xxx (this is the public Elastic-IP of the Watchguard Firebox)

Select Remote Gateway–>Specify the remote gateway IP address for a tunnel: the Static IP Address has to be set to the public IP address of the SDDC:

Next, select the Advanced tab and click OK.

Configure Phase 1 of IPSEC Proposal

Check ‘Start Phase1 tunnel when it is inactive‘ and Keep the ‘Add this tunnel to the BOVPN-Allow policies‘ checked.

The Phase 1 Settings should be as follows:
1. Version: IKEv1
2. Mode: Main
3. Uncheck NAT Traversal

N.B.: NAT Traversal is enabled by default but if your WatchGuard device is not behind a NAT/PAT device, please deselect NAT Traversal.

Dead Peer Detection:
a. Traffic idle timeout: 10
b. Max retries: 3

Transform Settings–>Click ADD:

1. Authentication: SHA1
2. Encryption: AES(128-bit)
3. SA Life: 8 hours
4. Key Group: Diffie-Hellman Group 2

Click OK.

Remove any pre-existing Phase 1 Transform Settings eg. SHA1-3DES.

Configure Phase 2 of IPSEC Proposal

Go to VPN–>Phase2 Proposals–>Click ADD

Name: AWS-ESP-AES128-SHA1
Description: AWS Phase 2 Proposal
Type: ESP
Authentication: SHA1
Encryption: AES(128-bit)
Force Key Expiration: Select 'Time' -> 1 hour

Click SAVE.

Go to VPN–>BOVPN Virtual Interfaces–>Select BoSddc–>Click EDIT

Phase 2 Settings–>Perfect Forward Secrecy:

Check ‘Enable Perfect Forward Secrecy’: Diffie-Hellman Group 2
IPSec Proposals–>Click on existing proposal–>Click REMOVE
Select ‘AWS-ESP-AES128-SHA1’ from the drop-down menu–>Click ADD

Click SAVE.

Configure BGP dynamic routing.

Go to VPN–>BOVPN Virtual Interfaces–>Select BoSddc–>Click EDIT

For the VPN Routes settings, I keep ‘Assign virtual interface IP addresses‘ option checked for the Interface option.

I then set the Local IP address to 169.254.85.186 and the Peer IP address or netmask to 255.255.255.252.

Then I click SAVE.

Go to Network–>Dynamic Routing and Check ‘Enable Dynamic Routing’.

Click on ‘BGP’: Check ‘Enable’

I have to add the following BGP dynamic routing configuration commands in the box:

router bgp 65001    N.B.: Use this command only once at the beginning of the BGP config, as this is the local ASN number that the Firebox will use for all of its VPNs.

Now it's time to add the configuration for the second BGP neighbor, the one we need for the SDDC:

neighbor 169.254.85.185 remote-as 65000
neighbor 169.254.85.185 activate
neighbor 169.254.85.185 timers 10 30

Click SAVE.

Phase 3 – VMC on AWS SDDC’s VPN Configuration

VMC on AWS allows up to four IPsec route-based VPN tunnels to be established between the Firebox/VPC and your SDDC. To create the VPN on the SDDC side, you first have to connect to the SDDC console.

Then you need to Go to the Networking & Security tab.

Select Network -> VPN and Click on the Route Based tab.

Click ADD VPN.

Next, you have to enter the following configuration settings:

  • First give a name to the IPSec VPN (eg. TOFirebox).

As the Local IP Address, select Public IP1 of the SDDC: this is the public IP address of the SDDC. As the Remote Public IP, select the Elastic IP that was assigned to the public interface of the WatchGuard Firebox. The Remote Private IP is filled in automatically.

For the BGP Local IP/Prefix Length, choose the following: 169.254.85.185/30.

The BGP Remote IP is the Local IP configured previously in the VPN Routes of the BOVPN Virtual interfaces: 169.254.85.186.

The BGP Neighbor ASN has to be the remote ASN of the WatchGuard Firebox: 65001.

  • Tunnel Encryption: AES128
  • Digest Algorithm: SHA-1
  • PFS: Enabled
  • Diffie-Hellman: Group 2
  • IKE Encryption: AES128
  • IKE Digest:  SHA-1
  • IKE Type: V1

After a few seconds, we can see that the VPN is up!

Configure VPN from VMC to WatchGuard™ Firebox Cloud – Part 3

In this part, I will show you how to configure an IPsec VPN from the "spoke" native VPC to the Firebox instance deployed in the transit VPC. This allows the WatchGuard firewall instance in the transit VPC to be used as a filtering device for any traffic coming from outside (SDDC, spoke VPC, on-prem).

Phase 1 – VPC’s VPN Configuration

In order to configure the VPN in the VPC, I need to do some preparation in the native VPC, which consists of creating a Customer Gateway and a Virtual Private Gateway and associating them together.

To do so, let's first connect to the AWS console again!

Select IAM user and enter the ID of your AWS account.

Log in with a user account that has administrative privileges on this account.

Create a Customer Gateway

I have to go to the VPC Dashboard and select Customer Gateways under the VIRTUAL PRIVATE NETWORK menu on the left.

I click Create Customer Gateway, choose Dynamic as the routing option, and add the public Elastic IP address of the firewall. I set the BGP ASN to a value different from the one used by its peer.

Create a Virtual Private Gateway

I’ll now create a brand new Virtual Private Gateway and attach it to the spoke VPC created earlier.

The VGW appears as detached:

I select it, and from the Actions drop-down menu, I choose the Attach to VPC option:

It now shows as attached:

Create a VPN Connection

Now I will create the VPN Connection by associating the VGW with the Customer Gateway I have created:

Once the VPN connection is available, I select Download Configuration. This opens the following window:

I select WatchGuard, Inc. as the vendor and click the Download button. A file containing the full configuration is created; I am going to use it to configure the Firebox now.
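For reference, here is a minimal boto3 sketch of the same workflow (creating the VGW, attaching it to the spoke VPC, and creating the dynamic VPN connection); all IDs and the region are placeholders for illustration:

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")   # assumed region

# Create the Virtual Private Gateway and attach it to the spoke VPC.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(
    VpcId="vpc-0123456789abcdef0",             # placeholder spoke VPC ID
    VpnGatewayId=vgw["VpnGatewayId"],
)

# Create the VPN connection between the Customer Gateway (the Firebox EIP)
# and the VGW; dynamic routing means StaticRoutesOnly stays False.
vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId="cgw-0123456789abcdef0",  # placeholder Customer Gateway ID
    VpnGatewayId=vgw["VpnGatewayId"],
    Options={"StaticRoutesOnly": False},
)
print(vpn["VpnConnection"]["VpnConnectionId"])

The device configuration file itself is still downloaded from the console, as described above.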

Phase 2 – FireBox’s VPN Configuration

First, I need to connect to the Fireware Web UI by opening a web browser to the public IP address of the Firebox Cloud instance:
https://<eth0_public_IP>:8080

I log in with the admin user account and I make sure to specify the passphrase I have set in the Firebox Cloud Setup Wizard.

Then I Select VPN, BOVPN Virtual Interfaces on the left and click the lock icon.

I start by following the instructions in the VPN configuration file downloaded earlier.

So I enter the interface name and switch the Remote Endpoint Type to Cloud VPN or Third Party Gateway.

In Gateway Settings–>Credential Method, I enter the pre-shared key stated in the file:

In Gateway Settings–>Gateway Endpoint, I click ADD and then select Local Gateway–>Interface.

I now need to specify the gateway ID for tunnel authentication. I select By IP address and enter 34.210.196.xxx (this is the public Elastic IP of the Firebox).

Now I have to select Remote Gateway–>Specify the remote gateway IP address for a tunnel, set it to Static IP Address, and enter the remote tunnel IP address given in the downloaded configuration file.

Select Advanced–>Click OK

I have checked ‘Start Phase1 tunnel when it is inactive‘ and kept the ‘Add this tunnel to the BOVPN-Allow policies‘ checked.

We need to select the following for Phase 1 Settings:

1. Version: IKEv2
2. Mode: Main
3. Uncheck NAT Traversal

NAT Traversal is enabled by default but if your WatchGuard device is not behind a NAT/PAT device, please deselect NAT Traversal.

For the Dead Peer Detection, choose the following values:

a. Traffic idle timeout: 10
b. Max retries: 3

Next, we have to change the Transform Settings by clicking ADD and setting the following values:

1. Authentication: SHA1
2. Encryption: AES(128-bit)
3. SA Life: 8 hours
4. Key Group: Diffie-Hellman Group 2

Click OK and Remove any pre-existing Phase 1 Transform Settings (eg. SHA1-3DES).

Now we need to configure Phase 2 of the IPsec proposal.

I need to Go to VPN–>Phase2 Proposals–>Click ADD:

  • Name: AWS-ESP-AES128-SHA1
  • Description: AWS Phase 2 Proposal
  • Type: ESP
  • Authentication: SHA1
  • Encryption: AES(128-bit)
  • Force Key Expiration: Select 'Time' -> 1 hour

Click SAVE.

  1. Go to VPN–>BOVPN Virtual Interfaces–>Select vpn-054bfd003f8ac9d2d-1–>Click EDIT

Phase 2 Settings–>Perfect Forward Secrecy:

Check ‘Enable Perfect Forward Secrecy’: Diffie-Hellman Group 2
IPSec Proposals–>Click on existing proposal–>Click REMOVE
Select ‘AWS-ESP-AES128-SHA1’ from the drop-down menu–>Click ADD

Click SAVE.

Phase 3 – Configure BGP Routing

It’s now time to configure BGP dynamic routing.

  1. Go to VPN–>BOVPN Virtual Interfaces–>Select vpn-054bfd003f8ac9d2d-1–>Click EDIT
  2. VPN Routes:

In the Interface window, keep ‘Assign virtual interface IP addresses‘ option checked:

Click SAVE.

Go to Network–>Dynamic Routing

Check ‘Enable Dynamic Routing’

Click on ‘BGP’ tab:

Check 'Enable'.

Add the BGP dynamic routing configuration commands in the box as seen above.

We have to add the line router bgp 65001, but only once at the beginning of the BGP config (it is the local ASN the Firebox uses for all of its VPNs).

Click SAVE.

Phase 4 – Check tunnel is established

Go back to the AWS Console to check that the VPN tunnels are established:

AWS allows a second tunnel to be established between the spoke VPC and the Firebox instance. To create the second VPN session, create a second tunnel by following the same instructions as above, with the parameters described in the configuration file downloaded earlier.
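You can also check the tunnel status from the API; here is a minimal boto3 sketch, where the region is assumed and the VPN connection ID is a placeholder based on the one used in this post:

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")   # assumed region

# Print the telemetry (status) of each tunnel of the VPN connection.
vpn = ec2.describe_vpn_connections(
    VpnConnectionIds=["vpn-054bfd003f8ac9d2d"]        # placeholder connection ID
)["VpnConnections"][0]

for tunnel in vpn["VgwTelemetry"]:
    print(tunnel["OutsideIpAddress"], tunnel["Status"], tunnel["StatusMessage"])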

That concludes Part 3 of this series. In the next and final part, I will show you how to establish a VPN from the SDDC to the Firebox instance in the transit VPC.

Configure VPN from VMC to WatchGuard™ Firebox Cloud – Part 2

Phase 2 – Deploy the WatchGuard Firebox instance

In Part 1 of this series, we deployed a new transit VPC with two subnets and a route table configured accordingly.

Now it's time to deploy a WatchGuard Firebox Cloud EC2 instance in the transit VPC. This is possible from the EC2 Dashboard (a scripted equivalent is shown after the steps below):

  • After logging on the AWS Console with my personal AWS account, I have selected Services > EC2.
  • In the EC2 Dashboard, I can easily launch a new instance by clicking on Launch Instance (easy :-)).
  • I selected AWS Marketplace, typed 'firebox' in the search window, and decided to pick the WatchGuard Firebox Cloud (Hourly) AMI.
  • You will get the pricing details; click Continue.
  • Select the smallest available instance type (the free-tier t2.micro) and click Next: Configure Instance Details.
  • The Configure Instance Details step opens.
  • From the Network drop-down list, select your transit VPC:
  • From the Subnet drop-down list, select the public subnet to use for eth0.
    The subnet you select appears in the Network Interfaces section for eth0.
  • To add a second interface, in the Network interfaces section, click Add Device.
    Eth1 is added to the list of network interfaces.
  • Click Next: Add Storage
  • Use the default storage size (5 GB). 
  • Click Next: Add Tags
  • Click Next: Configure Security Group. By default, the instance uses a security group that functions as a basic firewall. This security group allows only the following ports: HTTPS (TCP 8080), SSH, and TCP 4118 (WatchGuard firewalls may allow remote management using WSM (WatchGuard System Manager) over TCP ports 4117 and 4118).
  • Click Review and Launch.
    The configured information for your instance appears.
  • Click Launch.
    The key pair settings dialog box opens.
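For those who prefer automation, here is a minimal boto3 sketch of an equivalent launch with two network interfaces; the AMI ID, key pair, security group, and subnet IDs are placeholders, and both subnets must be in the same Availability Zone:

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")    # assumed region

# Launch the Firebox instance with eth0 in the public subnet and eth1 in the
# private subnet (placeholder IDs throughout).
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # placeholder: Firebox Cloud (Hourly) AMI
    InstanceType="t2.micro",
    KeyName="my-keypair",                   # placeholder key pair name
    MinCount=1,
    MaxCount=1,
    NetworkInterfaces=[
        {"DeviceIndex": 0,
         "SubnetId": "subnet-0aaa1111bbbb2222c",       # placeholder public subnet
         "Groups": ["sg-0123456789abcdef0"]},          # placeholder security group
        {"DeviceIndex": 1,
         "SubnetId": "subnet-0ddd3333eeee4444f"},      # placeholder private subnet
    ],
)
print(response["Instances"][0]["InstanceId"])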

Phase 3 – Finish configuring the instance of the Firebox

In this phase we will finish configuring the EC2 instance of our Firebox.

Once the firewall is deployed, click on Instances in the EC2 Dashboard; the new instance should appear as shown here:

Disable Source/Destination Checks

By default, each EC2 instance completes source/destination checks. For the networks on your VPC to successfully use your instance of Firebox Cloud for NAT, you must disable the source/destination check for the network interfaces assigned to the Firebox Cloud instance.

Disabling the source/destination checks is quite simple (a scripted equivalent follows these steps):

  • From the EC2 Management Console, select Instances > Instances.
  • Select the instance of Firebox Cloud.
  • Select Actions > Networking > Change Source/Dest. Check. The confirmation message includes the public interface for this instance.
  • Click Yes, Disable.
    The source and destination checks are disabled for the public & private interface.
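Here is a minimal boto3 sketch that disables the check on both network interfaces; the region and the ENI IDs are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")    # assumed region

# Disable the source/destination check on the public and private interfaces
# of the Firebox instance (placeholder ENI IDs).
for eni_id in ["eni-0123456789abcdef0", "eni-0fedcba9876543210"]:
    ec2.modify_network_interface_attribute(
        NetworkInterfaceId=eni_id,
        SourceDestCheck={"Value": False},
    )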

Assign an Elastic IP Address to the External Interface

You must assign an Elastic IP (EIP) address to the eth0 interface for the instance of Firebox Cloud. You can use any available EIP address. To make sure you assign it to the correct interface, find and copy the eth0 interface ID of your instance of Firebox Cloud.

To find the eth0 interface ID for your instance of Firebox Cloud:

  1. From the EC2 Management Console, select Instances.
  2. Select the instance of Firebox Cloud.
    The instance details appear.
  3. Click the eth0 network interface.
    More information about the network interface appears.
  4. Copy the Interface ID value.

To associate the Elastic IP address with the eth0 interface:

  1. From the EC2 Management Console, select Network & Security > Elastic IPs.
  2. Select an available Elastic IP address.
  3. Select Actions > Associate Elastic IP Address.
    The Associate Elastic IP Address page opens.

If you have created two sub-interfaces, you can associate two different public IPs with the interface:
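If you want to script it, here is a minimal boto3 sketch that finds the eth0 interface of the instance and associates a new EIP with it; the region and the instance ID are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")    # assumed region

# Find the eth0 network interface of the Firebox instance (placeholder ID).
instance = ec2.describe_instances(
    InstanceIds=["i-0123456789abcdef0"]
)["Reservations"][0]["Instances"][0]
eth0 = next(eni for eni in instance["NetworkInterfaces"]
            if eni["Attachment"]["DeviceIndex"] == 0)

# Allocate a new Elastic IP and associate it with eth0.
eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    AllocationId=eip["AllocationId"],
    NetworkInterfaceId=eth0["NetworkInterfaceId"],
)
print(eip["PublicIp"])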

Run the Firebox Cloud Setup Wizard

After you deploy the Firebox Cloud instance, you can connect to Fireware Web UI through the public IP address to run the Firebox Cloud Setup Wizard. You use the wizard to set the administrative passphrases for Firebox Cloud.

  1. Connect to Fireware Web UI for your Firebox Cloud with the public IP address:
    https://<eth0_public_IP>:8080
  2. Log in with the default Administrator account user name and passphrase:
    • User name — admin
    • Passphrase — The Firebox Cloud Instance ID

The Firebox Cloud Setup Wizard welcome page opens.

  • Click Next.
    The setup wizard starts.
  • Review and accept the End-User License Agreement, then click Next.
  • Specify new passphrases for the built-in status and admin user accounts.
  • Click Next.
    The configuration is saved to Firebox Cloud and the wizard is complete.

This is the end of Part 2. In Part 3, we are going to configure the IPsec route-based VPN between the Firebox instance and both a native VPC and a VMC on AWS SDDC.

Configure VPN from VMC to WatchGuard™ Firebox Cloud – Part 1

When I look back, I realise I have been working at VMware for about 9 months and have spent a tremendous amount of time dealing with a large number of requests, questions, and issues from my customers.

One that particularly stands out is integrating VMC on AWS with a firewall hosted in a transit VPC for security purposes.

One of my customers recently asked me if it was possible to create a VPN from VMC to a WatchGuard™ Firebox Cloud firewall, so I decided to give it a try.

In this guide, I will first show you how to set up a route-based VPN from the WatchGuard™ firewall to an AWS VGW in a native VPC.

In the last part, I will show how to configure an IPsec route-based VPN from VMC on AWS to the same WatchGuard™ firewall instance hosted in the transit VPC.

Network Architecture diagram

Transit VPC with VPN attachments to VMC and a native VPC

AWS Deployment phase

Phase 1 – Configure an AWS transit VPC

Let me first give a definition: a virtual private cloud (VPC) is a virtual network dedicated to your AWS account. It is logically isolated from other virtual networks in the AWS Cloud. You can launch your AWS resources, such as Amazon EC2 instances, into your VPC.

First, I need to configure an AWS VPC with at least two subnets. It’s possible to use the VPC Wizard to create a VPC with public and private subnets or create it manually.

If you choose the wizard, you will have to terminate the NAT instance that was automatically created for the VPC by the VPC Wizard because the instance of Firebox Cloud will provide NAT functions for subnets in this VPC.

I will be using the manual method:

Create a new VPC

When I create a VPC, I must specify a range of IPv4 addresses for the VPC in the form of a Classless Inter-Domain Routing (CIDR) block. I decided to choose a CIDR block for my VPC of 172.30.0.0/16.

Now I have to create a public subnet with a CIDR block that is a subset of the VPC CIDR range:

Choose a CIDR block for your public subnet like 172.30.11.0/24.

Create a Private Subnet

The next step is to create a private subnet from the VPC CIDR range in the same Availability Zone as the public subnet (the CIDR block of the private subnet cannot overlap with the public subnet):

Choose a CIDR block for your private subnet like 172.30.20.0/24.
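As an alternative to clicking through the console, here is a minimal boto3 sketch that creates the transit VPC and both subnets with the CIDR blocks used in this post; the region and Availability Zone are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")    # assumed region

# Transit VPC with the CIDR block chosen above.
vpc = ec2.create_vpc(CidrBlock="172.30.0.0/16")["Vpc"]

# Public and private subnets in the same Availability Zone (placeholder AZ).
public_subnet = ec2.create_subnet(
    VpcId=vpc["VpcId"], CidrBlock="172.30.11.0/24", AvailabilityZone="us-west-2a"
)["Subnet"]
private_subnet = ec2.create_subnet(
    VpcId=vpc["VpcId"], CidrBlock="172.30.20.0/24", AvailabilityZone="us-west-2a"
)["Subnet"]

print(vpc["VpcId"], public_subnet["SubnetId"], private_subnet["SubnetId"])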

Create an Internet Gateway

We will now deploy an AWS Internet Gateway (IGW). From the VPC Dashboard, click the Internet Gateways menu on the left:

Attach the new IGW to the transit VPC: click the Attach to VPC button (or use the Actions drop-down menu), select the transit VPC, and click Attach.

The IGW is now shown as attached to the VPC that was created:
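The same step can be scripted; here is a minimal boto3 sketch, with the region and VPC ID as placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")    # assumed region

# Create the Internet Gateway and attach it to the transit VPC (placeholder ID).
igw = ec2.create_internet_gateway()["InternetGateway"]
ec2.attach_internet_gateway(
    InternetGatewayId=igw["InternetGatewayId"],
    VpcId="vpc-0123456789abcdef0",
)
print(igw["InternetGatewayId"])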

Create a Route Table

Next, we will create a route table for the transit VPC: from the VPC Dashboard, select the Route Tables menu and click Create Route Table as shown:

The route table must be associated with the transit VPC as highlighted above. Once you have provided a name for the route table and selected the transit VPC from the drop-down menu, click Create.

The next step is to create a default route in the new transit VPC route table. Select the Routes tab and click Edit.

Add a 0.0.0.0/0 destination that points to the IGW created previously.

Next, from the same window, select the Subnet Associations tab, click the Edit button, and select the public subnet created earlier. Once done, click Save.
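Again, for reference, here is a minimal boto3 sketch of the same routing setup; the region and the VPC, IGW, and subnet IDs are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")    # assumed region

# Route table for the transit VPC, default route to the IGW, and association
# with the public subnet (placeholder IDs throughout).
rt = ec2.create_route_table(VpcId="vpc-0123456789abcdef0")["RouteTable"]
ec2.create_route(
    RouteTableId=rt["RouteTableId"],
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId="igw-0123456789abcdef0",
)
ec2.associate_route_table(
    RouteTableId=rt["RouteTableId"],
    SubnetId="subnet-0aaa1111bbbb2222c",
)
print(rt["RouteTableId"])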

Next, you are going to create a native "spoke" VPC (this is a VPC attached to the Firebox through a VPN, where we will run some EC2 instances to test access to the SDDC):

This is the end of this Part 1.

In Part 2, we are going to deploy the WatchGuard VM in the transit VPC.