Hosted Direct Connect (DX) shared across multiple VPCs
Intelematics leverages the flexibility and modern services of Amazon Web Services (AWS) to share a single hosted AWS Direct Connect (DX) deployment across multiple AWS VPCs.
Intelematics is 100 percent owned by RACV, and has pioneered the introduction of telematics and connected vehicle services by supporting vehicle manufacturer programs. Programs include car monitoring systems that, even 15 years ago, could detect accidents. Being at the forefront of technology, Intelematics has implemented scalable and reliable solutions over many years and has earned the trust of some of the world’s most respected automotive brands.
Today, they help millions of drivers all over the world by leveraging cloud-based architecture to enable secure connected vehicle and traffic data solutions, with a focus on the needs of their customers. Their flexibility, innovation, support infrastructure and hardware-agnostic approach provide an effective solution for transport and city planners, automotive manufacturers, fleets, automotive clubs, government and industry bodies.
A component of Intelematics' SUNA solution consists of many remote encoders/decoders that provide broadcast and monitoring information for radio stations all across Australia. Some of these devices are located in very remote locations and use legacy xDSL access technologies for connectivity back to Intelematics data centres in Melbourne and Sydney.
There was a requirement for an initial six AWS VPCs to have access to the remote encoder devices via the MPLS network, both during the cloud migration phase and in the end-state solution. Commercial and timing constraints meant that Intelematics needed to use the incumbent MPLS Service Provider's (SP) product offerings.
The following solutions were considered and evaluated:
Option 1: VPN attachment to AWS TGW.
Comment: Ruled out as the SP did not offer a managed firewall (FW) service, meaning Intelematics would need to co-locate its own FW, which was against the data centre exit strategy.
Option 2: Dedicated DX with transit VIF attachment.
Comment: Ruled out due to cost. DX VIF terminated on SP managed router.
Option 3: Hosted DX with private VIF, DXG and VGW in each VPC.
Comment: Not the preferred option as it does not scale past 10 VPCs (AWS limits the number of VGWs associated with a DXG). DX VIF terminated on SP managed router.
Option 4: Hosted DX with private VIF, DXG and single VGW in “Network VPC”.
Comment: Chosen option. A VGW in a single VPC is employed to share the hosted VIF with all VPCs using an AWS TGW and NAT Gateway. DX VIF terminated on SP managed router.
An AWS “Network” Account was created to provide connectivity services to all other Intelematics application VPCs across various accounts. This account consisted of the following AWS network related services:
Virtual Private Gateway
Direct Connect Gateway
Private VIF (hosted)
The hosted VIF provided by the SP DX partner was created with a Direct Connect Gateway and associated with the Virtual Private Gateway attached to the Network Account VPC using dynamic routing (BGP). In this way the SP on-premises router only receives a single prefix (i.e. the CIDR of the Network VPC). A Transit Gateway (TGW) was employed to provide the “hub” function of a hub-and-spoke network topology, with application VPCs forming the “spokes” (refer to the appended architecture diagram). The TGW provides the required connectivity to application VPCs and consists of two route tables:
The Spoke route table provides connectivity to all spoke VPCs. This route table is used by the return path for IP packets from external non AWS networks (i.e. SP MPLS). Hence only the Network VPC attachment is associated with this route table. AWS Propagations (shown in green in the diagram below) from the Application VPC attachments are configured to populate the route table with application VPC CIDRs.
The External route table provides connectivity to non-AWS ranges. Static routes to the Network VPC are configured for external network CIDRs. All Application VPC attachments are associated with this table. RFC 1918 private address ranges are also “black-holed” to ensure “spoke to spoke” VPC traffic is dropped as per security policy. Without these blackhole routes, “hair-pinning” at the Network VPC NAT Gateways could be used to route packets “spoke to spoke”.
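The two TGW route tables amount to longest-prefix-match lookups with blackhole entries. The following is a minimal Python sketch of that behaviour (all CIDRs and attachment names are hypothetical, for illustration only, not Intelematics' actual addressing plan):

```python
import ipaddress

# Hypothetical CIDRs for illustration only.
SPOKES = {"app-a": "10.1.0.0/16", "app-b": "10.2.0.0/16"}
MPLS_RANGE = ipaddress.ip_network("172.16.5.0/24")  # hypothetical encoder subnet

# External route table: associated with every spoke attachment.
# A static route sends MPLS-bound traffic to the Network VPC; the RFC 1918
# aggregates are blackholed so spoke-to-spoke traffic is dropped.
external_rt = [
    (MPLS_RANGE, "network-vpc-attachment"),
    (ipaddress.ip_network("10.0.0.0/8"), "blackhole"),
    (ipaddress.ip_network("172.16.0.0/12"), "blackhole"),
    (ipaddress.ip_network("192.168.0.0/16"), "blackhole"),
]

# Spoke route table: associated only with the Network VPC attachment and
# populated by propagations from the application VPC attachments.
spoke_rt = [(ipaddress.ip_network(cidr), f"{name}-attachment")
            for name, cidr in SPOKES.items()]

def lookup(route_table, dst_ip):
    """Longest-prefix-match route lookup, as performed by the TGW."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [(net, hop) for net, hop in route_table if dst in net]
    if not matches:
        return "drop"
    return max(matches, key=lambda m: m[0].prefixlen)[1]

# Spoke instance to an MPLS encoder: forwarded towards the Network VPC.
assert lookup(external_rt, "172.16.5.9") == "network-vpc-attachment"
# Spoke to spoke: blackholed per security policy.
assert lookup(external_rt, "10.2.3.4") == "blackhole"
# Return traffic entering via the Network VPC reaches the correct spoke.
assert lookup(spoke_rt, "10.1.0.10") == "app-a-attachment"
```

Note how the static MPLS route is more specific than the RFC 1918 blackhole aggregate that covers it, so legitimate MPLS-bound traffic wins the longest-prefix match while everything else in the private ranges is dropped.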
The Network VPC private subnets (associated with the TGW attachment) contain a default static route to the NAT-GW in the same Availability Zone. This ensures connectivity to the SP MPLS with a source NAT to the private IP of the NAT-GW interface.
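The per-AZ default route and source NAT can be sketched as follows (NAT-GW addresses and the sample packet are hypothetical, chosen only to illustrate the rewrite):

```python
# Hypothetical NAT-GW private IPs, one per AZ in the Network VPC.
NAT_GW_BY_AZ = {
    "ap-southeast-2a": "10.0.0.10",
    "ap-southeast-2b": "10.0.1.10",
    "ap-southeast-2c": "10.0.2.10",
}

def route_via_nat(packet, az):
    """The private subnet's default route points at the NAT-GW in the same
    AZ, which source-NATs the packet to its own private IP before it is
    forwarded to the VGW and out over the DX."""
    return {**packet, "src": NAT_GW_BY_AZ[az]}

# Traffic from a spoke instance, arriving via the TGW attachment.
pkt = {"src": "10.1.0.25", "dst": "172.16.5.9", "dport": 443}
out = route_via_nat(pkt, "ap-southeast-2a")
assert out["src"] == "10.0.0.10"   # the MPLS side only ever sees NAT-GW IPs
assert out["dst"] == "172.16.5.9"  # destination is unchanged
```

Keeping the default route AZ-local avoids cross-AZ data charges and means the failure of one AZ's NAT-GW does not affect flows in the other AZs.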
This topology was chosen due to the type of DX procured by Intelematics from the SP. The hosted DX does not support a Transit VIF and thus cannot be connected directly to the TGW. The NAT solution employed in the Network VPC allows the DX to be used by all Application spoke VPCs as it circumvents the non-transitive nature of a VPC. The alternative was to use a Dedicated DX with a Transit VIF; however, this was deemed not cost effective.
High availability is realised using AWS best-practice methodology such as multi-AZ deployment for applications and services. The Network VPC has NAT-GW instances configured in all three AZs in the ap-southeast-2 region. Single points of failure still exist for the Direct Connect connectivity, as only a single DX was procured from Summit Internet due to cost considerations.
High Level Architecture diagram
The diagram below shows the overall Intelematics AWS network, with focus on the Network VPC that hosts the TGW providing the hub function for the hub-and-spoke topology. The hosted VIF is associated with the VGW attached to the Network VPC. Appropriate static routes and propagations are employed on the TGW to provide spoke VPC connectivity to the MPLS, but not to each other.
Client Application to MPLS Destination
This diagram shows the traffic flow for a spoke application VPC instance initiating a connection to an SP MPLS destination via the TGW, the Network VPC and Direct Connect. The NAT-GW performs a source NAT to its private IP, ensuring a return path is available.
This NAT-GW solution allowed only the Network VPC CIDR to be advertised to the SP MPLS network, regardless of the CIDRs of the spoke VPCs. This means that as the number of spoke VPCs increases, the SP's managed router does not require a configuration update for BGP prefix filtering.
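This scaling property can be stated as a simple invariant: because every spoke flow is source-NATed behind the Network VPC, the advertisement set the SP router must accept never changes. A trivial sketch (the Network VPC CIDR is hypothetical):

```python
import ipaddress

NETWORK_VPC = ipaddress.ip_network("10.0.0.0/22")  # hypothetical CIDR

def advertised_to_sp(spoke_cidrs):
    """Only the Network VPC is attached to the VGW, so BGP advertises a
    single prefix over the hosted VIF however many spokes exist."""
    return {NETWORK_VPC}

few = advertised_to_sp(["10.1.0.0/16"])
many = advertised_to_sp(["10.%d.0.0/16" % i for i in range(1, 50)])
assert few == many == {NETWORK_VPC}  # spoke growth never touches the SP router
```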
The use of source NAT for AWS TCP client connections to the MPLS network also increases security, as it limits the MPLS view of Intelematics' AWS topology.
This network solution was also feasible because Intelematics' application connections (from a TCP point of view) are initiated from AWS spoke VPC instances, so there are no inbound TCP connections to AWS from the MPLS network. Inbound connections to spoke VPCs are possible over the hosted VIF; however, they would require a reverse proxy function to be employed in the Network VPC.