VMware – Killing paradigms and Re-inventing the mainframe

So I listened to Paul Maritz’s keynote from VMworld 2011 yesterday.  He was making some bold predictions, which of course will come back to bite him!

First, of course, the death of the PC – this has always been touted, and I can easily see that in the home the “PC” would disappear as everyone uses a tablet for browsing, the TV gets an internet connection for streaming TV and films, browsing and, err, playing games.  Add in a games console and there’s little need for a real PC, unless someone in the house needs a proper app for video editing, spreadsheets and the like.

BUT, in the business world, a PC with a decent screen and keyboard is pretty much a fixture.  Of course Maritz predicts that all this will be driven by a thin-computing setup – a prediction that has been done to death over decades and has never delivered – but then VMware is pushing its “View” product…

Anyway, the other death knell Maritz was sounding was that of the “client/server” computing paradigm, mostly on the back of some clever software which uses HTML5 to deliver a Windows desktop on a mobile device.  So hang on, we still have a client – the tablet with an app – and the server?  Furthermore, the remote or virtual desktop is still a client to a server?  And most web services have a web front end – the client – and a back-end database – the server.

He also dismissed the mainframe, but what are we building with a virtualization solution?  A big multi-processor system with loads of RAM, storage, etc.  Except we’re wasting piles of resources running multiple copies of an OS.  So wouldn’t it be better to have one OS across the hardware and run the apps on top of that – i.e. a mainframe?

This, I think, is where Microsoft should be going.  If they could take the lessons from desktop application virtualization, vMotion, etc. and apply them to server apps, they’d be able to lose a huge chunk of duplicated OS.  Install a Windows cluster, apply a wrapper around the server app to virtualise just that and isolate it from other applications, then treat that like a VM, so vMotion-style migration could balance loads, provide fault tolerance etc.  – the Windows mainframe – or you could do it with Linux and call it, say, “containers”?!!!

And has anyone noticed that with VMware’s Spring/Postgres, Project Octopus and Zimbra stuff they are trying to build a challenge to Microsoft/Google etc. delivering Software as a Service? Clearly they see the end of VMware’s market dominance in virtualization coming!

UPDATE:  I missed that System Center Virtual Machine Manager 2012 includes “Server App-V”.  This is the ability to virtualise a server application (an IIS app, Exchange, SQL Server, etc.) in order to separate it from the OS and allow it to be moved to another server, so that OS updates can occur independently of the app – awesome – so if these virtualised apps can be stacked up on a single OS, we’re as close to a Windows mainframe as makes no difference!!

Posted in Linux, Virtualisation, VMware | Leave a comment

UAG DirectAccess and IPv6 on Windows

Decided to poke my head into the world of IPv6.  So I’ve set up a small virtual network to demonstrate IPv6 and UAG DirectAccess.  It’s all run on a single vSphere ESXi node for the moment, using Windows 2008 R2 and Windows 7, and it also uses a virtual vyatta router.  There are three virtual switches, two representing “internal” LAN segments and one on our Campus Network with real IPv4 addresses.  Roughly it looks like this:

[Network diagram: three virtual switches – the campus network (real IPv4 addresses) and the two internal segments, 10.0.42.* and 10.0.43.* – with TMG1 bridging out to the campus network and the vyatta router joining the two internal segments]

The setup started out as a pure IPv4 affair, with TMG1 providing the outward NAT path from the private address space – 10.0.42.* (Private Network) and 10.0.43.* (Private Network PCs).  The servers all have static IPs, with the PC and laptop having DHCP addresses.  DHCP is served by DC1, with the vyatta router having DHCP Relay enabled to allow DHCP traffic to get from DC1 to LabPC1 etc.  At this point I have DC1 and DC2 (AD and DNS on both) with DC1 doing DHCP.  SQL1 is the database server for vSphere vCenter (VCS1), and then there are the PCs, MgmtPC1 and LabPC1.
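For reference, the DHCP Relay piece on the vyatta box only needs a couple of config lines.  A rough sketch, assuming eth0 faces the 10.0.42.* segment, eth1 faces 10.0.43.*, and DC1 sits at 10.0.42.10 – the interface names and the DC1 address are placeholders for illustration, not the lab’s real values:

    set service dhcp-relay interface eth0
    set service dhcp-relay interface eth1
    set service dhcp-relay server 10.0.42.10
    commit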

Routing on the IPv4 side is straightforward: all servers and PCs use the vyatta router as their default gateway, and the vyatta router has a default route pointing at the TMG1 server.  TMG1 has the usual default route set on its external interface – to the campus router – with no default gateway on the internal interface; however, it has an additional route added to push traffic for 10.0.43.* to the vyatta router.
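In command terms, that extra route and the vyatta default route look roughly like this – the 10.0.42.254 (vyatta) and 10.0.42.250 (TMG1 internal) addresses are just placeholders picked for the example.  On the TMG1 box, at the Windows command prompt:

    route -p add 10.0.43.0 mask 255.255.255.0 10.0.42.254

and on the vyatta:

    set protocols static route 0.0.0.0/0 next-hop 10.0.42.250
    commit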

So that’s the easy bit.  Next I added the IPv6 setup (not worrying about UAG1 at this point). First a subnet address was needed for each subnet.  After a bit of poking about, I’ve hopefully got some legitimate “private” IPv6 subnets.  I’ve used fc01:: as the prefix – fc00::/7 being the ULA (Unique Local Address) range, with the ‘1’ indicating I’ve allocated it myself – a ‘0’ would seem to indicate it was allocated by a central agency?  The rest of the address I’ve adapted from the IPv4 address without taxing my hex convertor, so:

10.0.42.252 becomes fc01:0:10:42::252

10.0.43.100 becomes fc01:0:10:43::100

So fc01:0:10 is my /48 address space, :42: and :43: are my (16-bit) subnets, and the ::100 is the 64-bit device address.  Hopefully it’s clear this scheme makes it easy to work out the static IP for a device and also to remember the IPv6 addresses.
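Just to show the mapping is purely mechanical, here’s a throwaway Python sketch of the scheme (my own illustration – nothing in the lab actually runs this):

    LAB_PREFIX = "fc01:0:10"   # the fixed /48, echoing the 10.* IPv4 space

    def ipv4_to_lab_ipv6(ipv4):
        first, second, subnet, host = ipv4.split(".")
        if (first, second) != ("10", "0"):
            raise ValueError("scheme only covers the 10.0.*.* lab space")
        # third octet -> 16-bit subnet field, fourth octet -> host part;
        # the digits are kept as-is so the address stays human-readable
        return "{}:{}::{}".format(LAB_PREFIX, subnet, host)

    print(ipv4_to_lab_ipv6("10.0.42.252"))   # fc01:0:10:42::252
    print(ipv4_to_lab_ipv6("10.0.43.100"))   # fc01:0:10:43::100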


Posted in IPv6 | 5 Comments

LIS/ACU/SHS – Merger issues…

When the LIS/ACU Unified Desktop project was proposed, ACU and LIS were separate departments, so the idea of taking some time to produce a new unified desktop was probably OK.

Now, however, a few things have changed.  LIS and ACU have merged (well, as of the end of September) and the merger has already hit issues – the new Director has an ACU desktop, so he couldn’t access his predecessor’s files (on the LIS systems) and can’t share calendars with the other members of his management team.  Also, Health Science has been merged into a bigger School with other departments which have LIS-managed desktops – so again there are some immediate co-working issues.

So how do we fix these issues? Firstly, things can be simplified by migrating all the users to a single Exchange system.  Whether this is the existing LIS 2003 setup or a new 2010 installation probably doesn’t matter, though a solution which reduces the number of changes end users see has got to help.

Secondly, there’s the shared storage issue – clearly we could try to expose each set of shared files to each of the other sets of users, or we could try to migrate the data which needs to be shared to a common platform.  Options available would include:

  1. Adding users and permissions to the Windows/Novell file systems to allow access and then publishing them – IDM could clearly help with this, but either the Novell client would need to be rolled out or the NetWare volumes would need to be published as CIFS/Samba shares.
  2. Using SharePoint – good but not suitable for all types of files – e.g. media and databases – if these aren’t required, then this is the natural way forward.
  3. Picking one new shared drive solution – and migrating the shared files to that location.

The most manageable solutions are 2 and 3 – it’s just a matter of picking the correct platform and location.  I would say that pushing solution 2 is the best start, then rolling out a Novell client for those needing anything else – we could turn on CIFS shares on the Novell servers to remove the need for the client, but this is technology we’ve no previous experience of, and Novell file services is something we want to retire in the future.

A further step could be a help if done with some speed, and that’s to migrate all users to the LIS desktop.   LIS have done this job before, when the Exchange 5.5 to 2003 migration occurred.  All users would then be using the same systems, which would help I.T. Support no end and free up support staff time for other projects.  It may also be possible to merge the Staff and Student Novell NetWare trees to further consolidate and simplify the environment.  These steps would then leave the Unified Desktop Project a simpler set-up from which to migrate.

Then there’s the radical solution – ignore the complex legacy environment, create a brand new domain with its own file/print/email solution and migrate all the LIS/ACU/Humanities users to this domain with Windows 7, Office 2010 etc.  Hmm – time for another blog entry!?

Posted in Unified Desktop Project | Leave a comment

Microsoft Virtualizing Windows – Why??

So Microsoft are playing catch-up with VMware with their Hyper-V platform.  But why?  Are they missing a trick here?  The vast majority of the virtual machines in production are based on various versions of Windows Server, so with Hyper-V as your platform you’re running Windows Server on Windows Server.  The Hyper-V (or vSphere) system then spends effort trying to manage resources such as RAM, CPU, network etc. – for example, grabbing back RAM from the virtual machines by collecting blocks of “unused” RAM or finding common blocks to share amongst multiple VMs, or trying to timeshare CPU cycles across multiple VMs, each containing its own CPU cycle management.

Unlike the other virtualization vendors, Microsoft should have an advantage when looking for a better solution.  They own the APIs that the applications use, so they could virtualize at this level: the Hyper-V layer would be one instance of Windows, with one resource manager etc., and the virtualized applications would then sit on top, still gaining the benefits of fault tolerance, high availability etc. that the virtualization world already offers complete OSes.  Obviously one of the applications would need to be a platform to support other OSes (Linux, old Windows platforms, and VMs for test or development).

Microsoft have in essence already done part of this – with clusters.  In practice these tend to be single-application clusters, in much the same way as there’s a tendency to only install one server application on a single Windows server.   Applications for this environment also need to be specifically designed to cluster.  Piling loads of different applications onto a single cluster just doesn’t happen.

So shouldn’t Microsoft push to make clustering a more general tool and have it support virtualised applications? Of course this isn’t actually a new idea – push it to its natural conclusion and you get ye olde stylee mainframe, composed of a number of closely coupled physical servers (blades etc.) with some shared storage.  All the funnier because Windows NT came from the designer of DEC’s minicomputer operating systems, and HP EVA storage is a development of Compaq StorageWorks, which again came from DEC!

Linux should be able to do the same sort of thing, but of course it would be heading back towards its roots too – Linux came from Unix and Unix came from Multics!  It all has a similar ring to the idea of electronically joining cars together on a motorway, being driven by a single driver in the front vehicle – which is also known as a TRAIN!

Posted in Microsoft, Virtualisation | 1 Comment

Thinking out loud – is Twitter an email list?

How close is Twitter to an old-school mailing list?  Take a list server like Majordomo and create yourself a list.  Allow people to subscribe to it – that’s “follow” sorted.  Allow people to see who else is on the list – that’s seeing your followers.  Emailing it yourself is updating your status; anyone else on the list posting is an @you.  Direct Messages would need to be dealt with by mailing the list owner (i.e. you).  You can delete users from the list – block?

If anyone is allowed to create a mailing list on the server, with an admin being allowed to delete lists, then you have a basic Twitter clone.  Then you need a slim client for your desktop/phone which only pulls down email messages which have come to you from the list server – so that’s your public feed.  A decent search system should deal with #tags.

Oh and restrict messages to 140 characters!
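Just to show how thin the mapping is, here’s a toy Python sketch of it (purely illustrative – all the class and method names are my own invention):

    class ListAccount:
        """A Majordomo-style list standing in for one Twitter account."""

        def __init__(self, owner):
            self.owner = owner           # the account holder
            self.subscribers = set()     # your followers
            self.archive = []            # the public feed

        def subscribe(self, address):    # follow
            self.subscribers.add(address)

        def unsubscribe(self, address):  # block (or unfollow)
            self.subscribers.discard(address)

        def post(self, sender, text):
            # owner posting = status update, anyone else posting = @you
            if len(text) > 140:
                raise ValueError("keep it to 140 characters!")
            self.archive.append((sender, text))
            return [(address, text) for address in self.subscribers]  # fan-out

    me = ListAccount("me@example.org")
    me.subscribe("friend@example.org")
    me.post("me@example.org", "Thinking out loud - is Twitter an email list?")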

Job done!

Facebook is much the same, but instant messaging needs something a bit cleverer to ensure real-time interaction.  So enhance an email with an additional “IM” tag and enhance email servers to contain an IM service which picks up these messages straight away…

Throw in an SMS gateway, integrate your voicemail, and you have all your messages in one inbox, just handled by different or more flexible clients.

That just leaves SPAM as the issue – Twitter has this sussed, as it’s just a server-based whitelist.  So provided your contacts are held on your mail server, this works for email too.  The really big issue with e-mail is the fact that e-mail servers are just that – you really need the groupware approach à la Exchange/Gmail etc. so that contacts can be used to filter email at the server, and the clients sync email, contacts (and calendars) with the server.
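The whitelist part really is that simple – something like this sketch, assuming the server can see your contacts (addresses invented for the example):

    def accept_message(sender, contacts):
        """Server-side whitelist: mail from known contacts gets delivered,
        everything else is binned (or parked in a 'requests' folder)."""
        return sender.lower() in {address.lower() for address in contacts}

    contacts = {"alice@example.org", "bob@example.org"}
    print(accept_message("alice@example.org", contacts))      # True  - delivered
    print(accept_message("spammer@junk.example", contacts))   # False - binned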

Hmm – I really should knock up a prototype, shouldn’t I!

Posted in Web 2.0 | 3 Comments