How to create a secure virtualized Data Center on Aruba Cloud Pro

A comprehensive journey through Aruba Cloud Pro, the IaaS Cloud Computing solution provided by Aruba that lets you create a virtual infrastructure with a pay-per-use pricing model


This article is the first in a series of posts in which I will share my experience with Aruba Cloud Pro, the IaaS solution made available by Aruba to create virtual machine data centers with a pay-per-use pricing model and an interesting set of features.

It’s important to clarify that we’re not talking about a service that can compete with the likes of Google Cloud Platform, Amazon AWS or MS Azure, even if the overall approach is not too different: a comprehensive management platform, entirely accessible via the web, that can be used to create, configure and deploy a series of cloud-based services that can be integrated with each other, as well as scaled up and/or down as needed.

More specifically, the product range includes:

  • Cloud Servers, i.e. the Virtual Machines created on VMware or Hyper-V hypervisors with redundancy.
  • Virtual Switches, switch-like devices that can be used to create private networks between the various servers.
  • Public IPs, to grant external (WAN) access.
  • Unified Storages, virtual storage units to have a shared space between the various Cloud Servers.
  • Balancers, which can be used to distribute the workload between two or more Cloud Servers (with identical configuration settings and stored data).

There are also a series of accessory services that can be used to support the products listed above, including: bare-metal backup, predefined templates, FTP access to upload customized virtual disks and/or export those of the created virtual machines, and so on.

DISCLAIMER: This website is not affiliated with Aruba; this article represents the free opinion of the author and has not been commissioned or sponsored in any way.

Infrastructure

These services allowed me to build a Data Center based on a typical edge-origin architecture featuring the following elements:

  • 1 pfSense Firewall and VPN Server (one of the two Aruba Cloud standard firewall VM templates, the other one being Endian)
  • 1 NGINX Reverse Proxy on a CentOS 7.x virtual machine
  • 1 IIS Web Server on a Windows Server 2016 virtual machine
  • 1 SQL Server Database on a second Windows Server 2016 virtual machine
  • 1 NGINX Web Server on another CentOS 7.x virtual machine
  • 1 MariaDB Database on a third CentOS 7.x virtual machine

As we can see, we’re talking about a typical Windows + Linux hybrid environment, which I often like to adopt (and suggest to my customers) because it allows me to have the best of both worlds: .NET 2.x & 4.x apps and services on a full Windows architecture, LAMP/LEMP/MEAN stacks on Linux, and .NET Core (micro)services on Windows and/or Linux depending on the availability of the external packages.

It’s worth noting that I often choose CentOS and NGINX over other Linux-based alternatives such as Ubuntu and Apache: if you want to know why, I strongly suggest taking a look at my article series related to CentOS and NGINX, where I do my best to put together some useful performance and security considerations and comparisons between those products and some alternative choices, in which the former often come out ahead.

Networking

When you deploy a virtual server, Aruba Cloud automatically assigns it a public IP address and a network card directly connected to the internet, so that each server has direct access to the internet (and can also be accessed from it).

For obvious security reasons I’ve chosen to ditch that approach and adopt a WAN + LAN networking configuration orchestrated by the firewall: more specifically, I’ve assigned all the public IPs to the firewall, in order to make it the only VM accepting incoming connections from the WAN, and moved all the other servers to a secured LAN so that they could communicate with each other without being exposed to the internet.

Implementing an internal LAN on Aruba Cloud was actually pretty straightforward thanks to a Virtual Switch, which serves this exact purpose; I just had to configure the LAN network on pfSense and activate its DHCP server so that it could act as a gateway and handle the whole subnet, as can be seen in the screenshot below:

[Screenshot: pfSense LAN interface and DHCP server configuration]
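
For reference, here’s a minimal Python sketch of the LAN addressing plan used throughout the rest of this post: the 10.0.1.0/24 subnet, the gateway address and the per-server static mappings are illustrative assumptions (only the 10.0.1.11 address shows up again later, in the Inbound NAT example).

    # Illustrative LAN addressing plan (assumed values, not the actual production setup)
    import ipaddress

    lan = ipaddress.ip_network("10.0.1.0/24")    # subnet handled by the pfSense DHCP server
    gateway = ipaddress.ip_address("10.0.1.1")   # pfSense LAN interface (assumed)

    # Hypothetical static DHCP mappings for the Cloud Servers
    servers = {
        "nginx-proxy": ipaddress.ip_address("10.0.1.10"),
        "iis-web":     ipaddress.ip_address("10.0.1.11"),  # referenced later in the Inbound NAT example
        "mssql-db":    ipaddress.ip_address("10.0.1.12"),
        "nginx-web":   ipaddress.ip_address("10.0.1.13"),
        "mariadb-db":  ipaddress.ip_address("10.0.1.14"),
    }

    # Sanity check: every assigned address must belong to the LAN subnet
    for name, ip in servers.items():
        assert ip in lan, f"{name} ({ip}) is outside {lan}"
        print(f"{name:12} -> {ip}")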

Once done, I could add a secondary Network Interface Card to each of my Cloud Servers so that they could connect to the LAN Virtual Switch:

[Screenshot: adding a secondary Network Interface Card connected to the LAN Virtual Switch]

I’ve also set up the required inbound and outbound NAT rules so that each server could still be reached and still access the internet, while being protected from unsolicited incoming connections at the same time (see the Inbound NAT and Outbound NAT paragraphs below); after doing all that I could finally disable (or remove) the default Network Interface Card pointing to the WAN network on all of the Cloud Servers to ensure that they couldn’t be directly accessed from the public internet anymore.
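
To make sure that each LAN-only server still had outbound internet access through the firewall after the WAN NIC was removed, a quick check like the following can be run on the server itself (a minimal sketch; the target hosts are just examples, and the reverse test – verifying that the server is no longer reachable from the outside – has to be performed from an external machine against its former public IP):

    # Quick outbound connectivity check to run on a LAN-only server:
    # it should succeed thanks to the Outbound NAT rules configured on pfSense.
    import socket

    def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print("HTTPS out:", can_reach("www.example.com", 443))
    print("DNS out:  ", can_reach("1.1.1.1", 53))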

For additional info about this specific section, take a look at the following in-depth post:

  • pfSense – WAN, LAN and NAT configuration

External connections via VPN

Since pfSense natively integrates a VPN server feature, I took the chance to configure it to allow system administrators (including myself) to securely connect to that LAN via a VPN client and access those servers using Remote Desktop.

To implement the VPN I’ve chosen the OpenVPN protocol, probably the most secure to date among those made available by pfSense, with a configuration designed for client-server mode. Configuring the clients is extremely simple, since pfSense allows you to download, for each user, a configuration script that automatically sets up the client, as well as – for the less experienced – a customized installer including the OpenVPN Client and all the settings needed to connect. Obviously, neither the script nor the installer autogenerated by pfSense contains the actual user credentials, which are required to establish the connection.

For additional info about how to implement and configure an OpenVPN Server on pfSense, take a look at the following in-depth post:

  • pfSense – Setup and Configure a OpenVPN Server
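
Once connected with the OpenVPN client, a quick way to confirm that the tunnel is actually routing traffic towards the LAN is to ping the pfSense LAN gateway and one of the internal servers. The sketch below uses the same hypothetical addresses introduced earlier and assumes that the pfSense firewall rules allow ICMP from the OpenVPN interface.

    # Minimal post-connection check for the VPN tunnel (addresses are assumptions)
    import platform
    import subprocess

    targets = ["10.0.1.1", "10.0.1.11"]   # pfSense LAN gateway and an internal server
    count_flag = "-n" if platform.system() == "Windows" else "-c"

    for host in targets:
        result = subprocess.run(["ping", count_flag, "2", host],
                                capture_output=True, text=True)
        status = "reachable" if result.returncode == 0 else "UNREACHABLE"
        print(f"{host}: {status}")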

Common issues and fixes

Although it was a rather easy configuration, there were some issues, mostly related to specific quirks of the various systems involved; in this section I’ve tried to summarize the most troublesome of them, hoping that the solutions I’ve found could also help other system administrators.

Windows File & Folder Sharing

Among the various things I had to configure in my Windows-based virtual servers, I wanted to activate file system sharing through the internal LAN, so that servers and VPN users (the system administrators) could access the shared folders using Windows File Explorer. In order to do that, I had to perform a number of tasks, such as:

  • Share some folders between the various Windows servers, enabling the File and Printer Sharing feature on their NIC interface(s).
  • Add a couple Firewall rules on pfSense to allow traffic from both the LAN and OpenVPN interfaces to any LAN destination.
  • Open the Windows Firewall ports for file sharing (135-139 and 445 TCP/UDP), which can be easily done by allowing the File and Printer Sharing and File and Printer Sharing over SMBDirect apps to communicate through Windows Firewall.

That was all it took… or so I thought before being greeted by the following errors when I tried to access a shared folder:

An error occurred while connecting to address \\<LAN-IP>\<SHARED-FOLDER>\.

The operation being requested was not performed because the user has not been authenticated.

You can’t access this shared folder because your organization’s security policies block unauthenticated guest access. These policies help protect your PC from unsafe or malicious devices on the network.

The above messages kept alternating on the two Windows machines, without me ever being able to connect to the shared folders.

These errors were caused by a non-trivial configuration issue that took some valuable time to fix; the second message greatly helped me to understand where the problem actually was, since it led me to check the local group policies. In very short terms, there was an issue with the default Windows Guest account, which was enabled and, at the same time, blocked from accessing the network.

For more details on how I managed to fix this issue, take a look at the following post:

  • Windows – Allow UNC File Sharing through a LAN or VPN connection
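
As a quick sanity check before digging into group policies, it can also be useful to confirm that the SMB ports are actually reachable from the LAN or VPN side, so that network-level issues can be ruled out; here’s a minimal sketch (the target address is the hypothetical one used earlier in this post):

    # Check that the Windows file sharing ports are open on a target server
    import socket

    TARGET = "10.0.1.11"       # hypothetical LAN address of a Windows server
    PORTS = [135, 139, 445]    # RPC endpoint mapper, NetBIOS session, SMB over TCP

    for port in PORTS:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(2.0)
        status = "open" if sock.connect_ex((TARGET, port)) == 0 else "closed/filtered"
        sock.close()
        print(f"{TARGET}:{port} -> {status}")

If the ports are open but the errors above still show up, the problem is almost certainly authentication-related (as it was in my case) rather than a firewall or NAT issue.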

Multiple IP Addresses

Aruba Cloud Pro provides 1 free public IP address for each server that we add to our servers pool, which also gets configured within that server’s Network Interface Card. However, since we do not want to have our servers directly connected to the internet, those public IP addresses can be assigned to the firewall instead and registered on pfSense as Virtual IPs on its WAN interface.

This basically means that we can handle all of them from a single place: the pfSense firewall.

Once defined, those Virtual IPs can be used in a number of useful ways, such as:

  • Redirecting WAN traffic to a specific server connected to our LAN using one or more Inbound NAT rules (see the Inbound NAT paragraph below).
  • Translating the web requests coming from our internal servers to one (or more) WAN IP address(es), so that those servers can access the internet using our WAN IP addresses (see the Outbound NAT paragraph below).

Inbound NAT

NAT is an acronym for “Network Address Translation”, a technique for remapping one IP address space into another by modifying the network address information in the IP header of packets while they are in transit across a traffic routing device. In our given scenario, since we have two different networks (WAN and LAN), we need NAT whenever we want traffic originating from the internet to reach our servers (inbound NAT) and vice-versa (outbound NAT).

Based on that premise, whenever we want to “open” one (or more) TCP and/or UDP ports of our servers and make them accessible from the web, we need to define an Inbound NAT rule that translates the requests addressed to a given WAN IP address/port to an internal LAN IP address/port.

Here’s an example of a common inbound NAT rule configured on pfSense to “route” all the requests targeting port 3389 (Remote Desktop Protocol) on the WAN IP address to our internal server’s LAN IP address (10.0.1.11):

[Screenshot: pfSense inbound NAT rule forwarding WAN port 3389 to the internal server 10.0.1.11]

It’s worth noting that Inbound NAT uses static NAT on a per-port basis: this basically means that inbound requests from the internet are matched both at the IP level and at the port level.
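
Conceptually, a set of inbound (port forward) rules behaves like a static lookup table keyed on the destination WAN address and port; here’s a tiny Python illustration of that matching logic (the public addresses below are placeholders from the documentation range, not the real ones):

    # Toy model of static, per-port inbound NAT (port forwarding):
    # each (WAN IP, port) pair maps to exactly one (LAN IP, port) pair.
    PORT_FORWARDS = {
        ("203.0.113.10", 3389): ("10.0.1.11", 3389),   # RDP to an internal Windows server
        ("203.0.113.10", 443):  ("10.0.1.10", 443),    # HTTPS to the NGINX reverse proxy
    }

    def translate_inbound(wan_ip: str, port: int):
        """Return the LAN destination for an incoming packet, or None if no rule matches."""
        return PORT_FORWARDS.get((wan_ip, port))

    print(translate_inbound("203.0.113.10", 3389))   # ('10.0.1.11', 3389)
    print(translate_inbound("203.0.113.10", 8080))   # None: no rule, the packet is not forwarded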

Outbound NAT

Since the Cloud Servers are now only connected to the internal LAN and no longer have a WAN connection, in order to make them able to access the internet I also had to configure an Outbound NAT rule for each of them. As explained in the previous paragraph, we need to define outbound NAT rules whenever we want to translate the traffic originating from our servers (i.e. from the LAN) towards the internet (i.e. to the WAN) so that it will be detected as coming from a given WAN IP.

Unlike Inbound NAT, which uses static NAT on a per-port basis, Outbound NAT is defined through global NAT, also known as NAT overload or Port Address Translation (PAT): the “global” part of the rule is configured as the firewall’s outside interface, meaning that all traffic originating from the servers behind the firewall (LAN) and going out to the internet (WAN) will appear to come from that “global” address instead.

It’s worth noting that Outbound NAT does not control which interface traffic will leave, only how traffic is handled as it exits: if you need to control which interface traffic will exit, you need to use policy routing or Static Routes.
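
To make the difference with the static, per-port inbound rules more tangible, here’s a toy Python model of PAT: many internal source addresses and ports are rewritten to the single WAN address, each with its own translated source port so that return traffic can be mapped back to the right flow (all addresses are placeholders):

    # Toy model of outbound NAT overload (PAT): many LAN flows share one WAN IP
    from itertools import count

    WAN_IP = "203.0.113.10"    # the firewall's "global" outside address (placeholder)
    next_port = count(49152)   # translated source ports handed out by the firewall
    translations = {}          # (lan_ip, lan_port) -> (WAN_IP, translated_port)

    def translate_outbound(lan_ip: str, lan_port: int) -> tuple:
        """Rewrite the source of an outgoing packet, reusing the mapping for existing flows."""
        key = (lan_ip, lan_port)
        if key not in translations:
            translations[key] = (WAN_IP, next(next_port))
        return translations[key]

    print(translate_outbound("10.0.1.11", 51000))   # ('203.0.113.10', 49152)
    print(translate_outbound("10.0.1.13", 51000))   # ('203.0.113.10', 49153)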

In pfSense there are basically four methods to configure outbound NAT:

  • Automatic Outbound NAT: the default scenario, where all traffic that enters from a LAN (or LAN-type) interface will have NAT applied, meaning that it will be translated to the firewall’s WAN IP address before it leaves. Although not always ideal, such a method is good enough for most scenarios where we do want to grant internet access to *all* our internal servers and have their requests detected as coming from our WAN IP address(es).
  • Hybrid Outbound NAT rule generation: this method works just like the previous one, but it also allows the administrator to define additional rules to override the default behaviour: this is an excellent choice if we want to stick to the default logic with a few exceptions.
  • Manual Outbound NAT rule generation: this method allows the administrators to manually define all the outbound NAT rules, including editing (or deleting) the default ones. For the sake of convenience, as soon as we select this method pfSense will populate the list of rules with the equivalent of the automatic rules, thus allowing us to keep, edit or delete them as we please.
  • Disable Outbound NAT rule generation: no outbound NAT rules are applied at all; this only makes sense when the hosts behind the firewall use public, routable IP addresses.

Conclusion

That’s it, at least for now: I hope that this post will help other System Administrators who are looking for a way to securely host their virtual infrastructure on the Aruba Cloud Pro environment.

