Decided to poke my head into the world of IPv6, so I’ve set up a small virtual network to demonstrate IPv6 and UAG DirectAccess. It all runs on a single vSphere ESXi node for the moment, using Windows 2008 R2 and Windows 7, plus a virtual vyatta router. There are three virtual switches: two representing “internal” LAN segments and one on our Campus Network with real IPv4 addresses. Roughly it looks like this:
The setup started out as a pure IPv4 setup, with TMG1 providing the outward NAT path from the private address space – 10.0.42.* (Private Network) and 10.0.43.* (Private Network PCs). The servers all have static IPs, while the PC and Laptop have DHCP addresses. DHCP is served by DC1, with the vyatta router having DHCP Relay enabled to allow DHCP traffic to get from DC1 to LabPC1 etc. At this point I have DC1 and DC2 (AD and DNS on both) with DC1 doing DHCP. SQL1 is the database server for vSphere vCenter (VCS1), alongside PCs MgmtPC1 and LabPC1.
Routing on the IPv4 side is straightforward: all servers and PCs use the vyatta router as their default gateway, and the vyatta router has a default route set to the TMG1 server. TMG1 has the usual default route set on its external interface – to the campus router – with no default gateway on the internal interface; however, it has an additional route added to push traffic for 10.0.43.* to the vyatta router.
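As a rough sketch, the two extra routes might look like this – the addresses 10.0.42.254 (TMG1’s internal interface) and 10.0.42.1 (the vyatta router) are my assumptions here, substitute your own:

```shell
# On the vyatta router: default route towards TMG1's internal interface
# (10.0.42.254 is an assumed address)
configure
set protocols static route 0.0.0.0/0 next-hop 10.0.42.254
commit

# On TMG1 (Windows): persistent route pushing the PC subnet back
# to the vyatta router (10.0.42.1 is an assumed address)
route -p add 10.0.43.0 mask 255.255.255.0 10.0.42.1
```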
So that’s the easy bit. Next I added the IPv6 setup (not worrying about UAG1 at this point). First a subnet prefix was needed for each subnet. After a bit of poking about, I’ve hopefully got some legitimate “private” IPv6 subnets. I’ve used fc01:: as the prefix – fc00:: being the ULA (Unique Local Address) prefix, with the ‘1’ indicating I’ve allocated it myself – a ‘0’ would seem to indicate it was assigned by a central agency? The rest of the address I’ve adapted from the IPv4 address without taxing my Hex Convertor, so:
10.0.42.252 becomes fc01:0:10:42::252
10.0.43.100 becomes fc01:0:10:43::100
So fc01:0:10 is my /48 address space, :42: and :43: are my (16-bit) subnets, and the ::100 is the 64-bit device address. Hopefully it’s clear this scheme makes it easy to work out the static IP for a device and to remember the IPv6 addresses.
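The mapping above can be sketched as a one-liner – note it simply reuses the decimal digits of the last two octets as hex digits (no base conversion), which is what keeps the addresses memorable. The function name is mine:

```python
def lab_v6(ipv4: str) -> str:
    """Map 10.0.S.H to fc01:0:10:S::H by reusing the decimal digits
    of the subnet and host octets as hex digits (no base conversion)."""
    octets = ipv4.split(".")
    assert octets[:2] == ["10", "0"], "scheme only covers 10.0.*.*"
    return f"fc01:0:10:{octets[2]}::{octets[3]}"

print(lab_v6("10.0.42.252"))  # fc01:0:10:42::252
print(lab_v6("10.0.43.100"))  # fc01:0:10:43::100
```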
After the static devices were given IPv6 addresses, so was the vyatta router. The next issue was getting DNS and DHCPv6 to the party. Setting up reverse DNS zones for IPv6 via the GUI on DC1 was pretty easy, so the servers with static addresses were showing both A (IPv4) and AAAA (IPv6) records in the forward zone and registering OK in the reverse zones. Creating the IPv6 DHCP scopes was also simple, but slightly different from the IPv4 version – you can only create exclusions from the address space to be allocated, so the PCs were given pretty random IPs – ho hum. The tricky bit was actually getting the PCs to collect their settings correctly – specifically their default gateway. It turned out that I needed to set up the router to provide this, as it’s no longer DHCP’s job. So I ran this for each interface on my vyatta box:
set interfaces eth0 ipv6 router-advert managed-flag true
And once I’d also enabled the DHCPv6 relay service, both my virtual PCs picked up their details OK. Running ping -6 on the various VMs showed that IPv6 seemed to be running OK everywhere. The only problem was that IPv6 traffic was isolated inside my network.
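Putting it together, the vyatta side of this might look roughly as follows – the interface names, the DC1 address, and the exact dhcpv6-relay syntax are assumptions from my setup, so check against your own box:

```shell
configure
# Advertise the router as gateway and set the managed flag, telling
# clients to use DHCPv6 for their addresses (repeat per interface)
set interfaces eth0 ipv6 router-advert managed-flag true
set interfaces eth1 ipv6 router-advert managed-flag true

# Relay DHCPv6 requests from the PC segment up to DC1
# (fc01:0:10:42::250 is an assumed address for DC1)
set service dhcpv6-relay listen-interface eth1
set service dhcpv6-relay upstream-interface eth0 address fc01:0:10:42::250
commit
```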
So next was to add the Unified Access Gateway (UAG). This is essentially a TMG server with a few publishing services on top; in my case I wanted DirectAccess to enable my laptop to connect back home using IPv6 over an IPv4 tunnel. Essentially I followed the guides here – the Base Configuration Guide and the DirectAccess Guide (getting client certificates in place seemed to be the most important step!).
I adapted this to my setup, so nls1 was introduced as the network location server, and DC2 was used rather than DC1 as the Certificate server. The big difference was that my network runs IPv6 natively, so there’s no need for the ISATAP service. This becomes apparent when configuring UAG using the wizard…
After that, there’s just some routing to sort out. UAG is now effectively the gateway for IPv6 traffic in my network, though this would change if I added a tunnel broker connection. So a default route was added to the vyatta router pointing at the UAG server, and a route was added to the UAG server to push fc01:0:10::/48 traffic back to the vyatta router. During the UAG setup I also used two further subnets: fc01:0:10:40::/64 for IP-HTTPS traffic and fc01:0:10:41::/96 for NAT64 and DNS64. The latter provides the mechanism to get from UAG-connected PCs to IPv4 resources on the network, so for that to work, UAG also needs a route set for 10.0.0.0/16 to the vyatta router.
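A sketch of those routes – the UAG and vyatta addresses and the Windows interface name are assumptions, not taken from my actual config:

```shell
# On the vyatta router: default IPv6 route towards UAG's internal
# interface (fc01:0:10:42::251 is an assumed address for UAG1)
configure
set protocols static route6 ::/0 next-hop fc01:0:10:42::251
commit

# On UAG1 (Windows): push the lab /48 back to the vyatta router, and
# the 10.0.0.0/16 range too so NAT64 traffic can reach IPv4 hosts
# ("Internal", fc01:0:10:42::1 and 10.0.42.1 are assumptions)
netsh interface ipv6 add route fc01:0:10::/48 "Internal" fc01:0:10:42::1
route -p add 10.0.0.0 mask 255.255.0.0 10.0.42.1
```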
So, acid test time. Take the LabPC1 VM and change its network switch assignment to the public network, and lo, as if by magic, it carries on working – though it’s now shipping traffic to the internal network via the UAG server using 6to4 encapsulation! It can also access some IPv4 systems (e.g. ubuntu1, which only has IPv4 configured), but this only works via DNS, as it uses DNS64/NAT64 to create an IPv6 address for the IPv4 device. i.e. on labpc1 this happens:
ping 10.0.43.40 – fails. This actually tries to use the local IPv4 network to get somewhere!
ping ubuntu1 – an address is created – fc01:0:10:41::a00:2b28 – the last 32 bits being hex for 10.0.43.40! So ping works.
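That synthesized address is just the IPv4 address embedded in the low 32 bits of the /96 NAT64 prefix – a quick sketch of the arithmetic (the function name is mine):

```python
import ipaddress

# The NAT64/DNS64 prefix chosen during the UAG setup
NAT64_PREFIX = ipaddress.IPv6Network("fc01:0:10:41::/96")

def synthesize(ipv4: str) -> str:
    """Embed an IPv4 address in the low 32 bits of the /96 NAT64 prefix,
    as DNS64 does when it invents an AAAA record for an IPv4-only host."""
    v4 = int(ipaddress.IPv4Address(ipv4))
    return str(ipaddress.IPv6Address(int(NAT64_PREFIX.network_address) | v4))

print(synthesize("10.0.43.40"))  # fc01:0:10:41::a00:2b28
```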
So clearly any network application on LabPC1 needs to be IPv6-aware if it is to communicate via the UAG gateway.
Job done – PHEW!