iDRAC & Dell MD1000


For the past week I've been looking at our new MD1000e blade enclosure. With the push towards virtualization and redundancy, blade architecture is obviously a major leap forward.... 2x blade enclosures, clustered blade servers & SAN = 100% uptime..... in theory.

So far I've just been configuring the MD1000e chassis. For those who don't know, the chassis is the rack-mountable unit which holds all the blade servers. It contains the PSUs, the fans and the network ports, which means your blades don't contain any of the above and can be smaller in size. The MD1000e takes up 10U in a Dell cabinet, and can hold 16 servers. With standard PowerEdge servers we would get around 2 or 3 servers in the same 10U space.

The MD1000e has a myriad of features accessible from the web interface. My tip of the day is that when you first switch the MD1000e on and it asks "do you want to complete the configuration wizard now?", DON'T answer "no". If you answer "no", the only way you can give the management port (CMC) an IP address is by hooking the unit up to a laptop with a null modem lead.

Once configured, you can view the health of all your blades, start them up, shut them down and so on. One beautiful feature is that you can assign IP addresses to the management port of any blades which might be plugged in. So you give the MD1000e 16 addresses to play with, and each time you plug a blade in, an IP will automatically be given to the blade's iDRAC port. This is extremely important if you don't have a KVM built into your MD1000e, or you don't want to sit in front of 32 fans roughly equalling the power of a military class jet engine with a mouse and keyboard on your lap.
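The slot-to-IP idea is easy to picture; here's a minimal Python sketch of it, assuming the CMC simply hands out sequential addresses by slot number (the base address and the one-address-per-slot rule are illustrative assumptions, not how the CMC actually works internally):

```python
# Sketch: map 16 blade slots to iDRAC IPs from a pool.
# Base address and sequential assignment are assumptions for
# illustration only.
import ipaddress

def idrac_pool(base: str, slots: int = 16) -> dict:
    """Assign one address per slot, starting at `base`."""
    start = ipaddress.ip_address(base)
    return {slot: str(start + slot - 1) for slot in range(1, slots + 1)}

pool = idrac_pool("10.0.0.101")
print(pool[1])   # address the first blade slot would receive
print(pool[16])  # address the last blade slot would receive
```

Plug a blade into slot 5 and it would pick up the fifth address in the pool, with no need to sit in front of the chassis.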

Using the iDRAC of each blade, you can connect to the server and install an OS as if you were sat in front of it.... actually, come to think of it, you wouldn't even be able to do that sat in front of it. Using this you can load up an ISO or share your local drive to install Server 2008 or whatever OS you desire.

Getting the unit up and running initially isn't a 2 minute job, but it's worth the effort, and it's an extremely impressive bit of kit!

Our blades will initially be getting Server 2008 Datacenter edition installed, with the Hyper-V role in clustered mode.

Microsoft VDI & MED-V

Whilst Microsoft seem to have published a lot of corporate vision statements about VDI (Virtual Desktop Infrastructure), they don't actually seem to provide much in the way of technical roadmaps. As I am currently looking to deploy a full VDI, this is something I have been looking into closely, and here are some of my conclusions.



What we are looking to achieve is a full desktop experience from a remote location. Ideally, this should be via a web browser, on any remote PC. The application for this is to allow staff to work from home or whilst on the move.
Currently a member of staff sits at their desk, logs into an XP machine, and is presented with the pre-defined desktop, along with their redirected 'My Documents', their home area, and any crucial settings which we might choose to deploy. Of course they also have the core set of software such as Office and MIS applications. What we want is an identical experience (or as near as possible) from a remote PC.

Microsoft's "vision" for this identifies several of their key technologies: Virtual Machine Manager, the System Centre suite and Server 2008 Terminal Services. What they don't yet seem to have is a way to put all this together to produce a working solution.

So what are the possible ways forward?

• Create a virtual PC for each member of staff who requires working from home, set the PC up identically to any other staff PC, and simply allow staff to connect to it using Remote Desktop. PROS: simple to configure, and pretty much foolproof for the user. CONS: requires a potentially vast number of VMs, plus the systems management that goes with them.

• Allow staff to Remote Desktop into their own physical PC. PROS: even easier to set up. CONS: requires workstations to be left powered on, and a 110% chance of it not being workable!

• Use scripted Hyper-V, via the Virtual Machine Manager web portal and PowerShell, to create VMs on the fly and start them up and shut them down according to usage. PROS: sleek and sexy. CONS: very complex to set up, with the potential for a huge number of VMs.

• Use Virtual Machine Manager / Hyper-V with Citrix. PROS: seems to have a reasonable amount of backing from MS partners. CONS: expensive, expensive & expensive!

We may have to revisit some of the above, but currently the leader is none of the above.... it's a product called MED-V, which is currently in beta. It's another product "acquired" by M$ in a similar way to SoftGrid, so I can only assume it will become part of the System Centre suite very soon.

The way it works (in principle) is that you create a virtual PC (using Virtual PC 2007), and configure it as required. You then upload this to the MED-V server, which in turn clones it for clients over the net when they log onto the MED-V server. This gives the user what appears to be a virtual PC running on their desktop.... when running in full screen mode you are essentially sat in the office.
Obviously the setup is a little more complex, as IP addressing, DNS and Active Directory all come into play to prevent conflicts, but the MED-V product does work... flawlessly so far, once set up.

There are cons however. It requires a small client on the user's machine, and the first time the user connects it takes an index of the user's local PC. This is to reduce the amount of traffic which MED-V uses during connection: i.e. if you have a file called wibble.dll on your local PC, it won't bother dragging it over your 500k connection every time a MED-V session requires it. This initial index takes a good 30 minutes on average. After that, the user can expect to connect in the same time it takes to boot up a virtual PC.
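In principle the index works like any content-addressed cache: hash what's already local, then only transfer what's missing. A hedged Python sketch of that idea (the function names and the SHA-1 choice are my assumptions, not the real MED-V client internals):

```python
# Sketch of the indexing idea: hash local files once, then skip
# any file whose hash the index already holds. Illustrative only,
# not the actual MED-V client implementation.
import hashlib
from pathlib import Path

def build_index(root: Path) -> dict:
    """Map relative path -> SHA-1 of content for every file under root."""
    index = {}
    for f in root.rglob("*"):
        if f.is_file():
            index[str(f.relative_to(root))] = hashlib.sha1(f.read_bytes()).hexdigest()
    return index

def files_to_transfer(needed: dict, local: dict) -> list:
    """Only fetch files the local index doesn't already hold."""
    return [path for path, digest in needed.items()
            if local.get(path) != digest]
```

Hashing every file once up front is slow (hence the 30-minute initial scan), but every later session only moves the delta.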

Support for MED-V is currently non-existent, and you can expect to do a lot of self-research if you are going to evaluate it. The setup process is complex and requires 2 servers and at least 2 workstations, plus a virtual PC. I'm looking forward to an official release date, but with a bit of luck we may deploy the beta to test users within a couple of months.

W32.Downadup.B Arghhhhhhhh


It's been 6 years since we were last hit by a major virus, but we have just been hit again (along with a few million others) by W32.Downadup.B.

What did it do?: It removed users' ability to log in to the domain. The following entry appeared in droves on the domain controller; "The SAM database was unable to lockout the account of Administrator due to a resource error, such as a hard disk write failure (the specific error code is in the error data). Accounts are locked after a certain number of bad passwords are provided so please consider resetting the password of the account mentioned above." with an event ID of 12294. At the peak we were probably experiencing around 100 per second.
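If you want to gauge how hard you're being hit, a quick script over an exported log helps. A Python sketch, assuming a simple "timestamp,event_id" export format (the format is illustrative; adapt it to however you dump your event log):

```python
# Sketch: count occurrences of event 12294 per second from an
# exported event log. The "timestamp,event_id" line format is an
# assumption for illustration.
from collections import Counter

def lockout_rate(lines):
    """Return {timestamp: count} for event ID 12294 only."""
    rate = Counter()
    for line in lines:
        timestamp, event_id = line.strip().split(",")
        if event_id == "12294":
            rate[timestamp] += 1
    return rate

log = [
    "2009-01-20 09:15:01,12294",
    "2009-01-20 09:15:01,12294",
    "2009-01-20 09:15:02,528",
]
print(lockout_rate(log))
```

A sustained spike in that per-second count is a good early warning that something is brute-forcing accounts.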

The end result of the above was that, due to the sheer number of requests, Active Directory was unable to authenticate genuine users.... who were getting their accounts locked instead of getting logged in. Ironically this wasn't the main intention of the virus, just a side effect. In actual fact the virus looks to upload any passwords it finds to a remote server, and the writers made it upload to a few thousand random domain names to avoid detection by the authorities.

Some more info;
http://www.microsoft.com/security/portal/Entry.aspx?Name=Worm%3aWin32%2fConficker.B

How did we get rid of it?: Firstly we had to disable every single account on the domain with local or domain/enterprise security rights. You can imagine the hassle this is going to cause in terms of service run-as issues. We also had to unplug any administrative workstations from the network and run all fixes directly from the servers.

Microsoft have provided a tool for prevention, also available in the form of a Windows update (should you happen to do those on a weekly basis);

http://support.microsoft.com/kb/891716

They do however suggest that deployment via Group Policy is not suitable for servers, and kindly provide a handy 36(!) step manual process. You can however reduce this to around 5 steps by using Symantec Antivirus, a bit of quick registry editing, and the removal tools.....probably 15 minutes per server. I'd also suggest regularly changing the password of any admin account you are using, just to ensure that it isn't grabbed and used by an infection.

The only method we have found thus far to remove the threat from an infected PC is to perform a full system scan using Symantec AV. Not great for the users given the load it puts on the workstations, but at least it's a fix!

I'm sure this won't be the last such threat, and it's been our first since moving to Active Directory.... but I'll be looking into how we can deploy Windows updates using SCCM 2007 on a regular basis!

Server 2008 Firewall Woes

We have just been working on a niggly problem. We were trying to connect a spare NIC in a Server 2008 server to a development network. The development network is currently just a switch with nothing else connected.

As soon as the second NIC was plugged in, we lost all connectivity to the server via the main NIC, and even lost the ability to ping the server.

After a bit of head scratching we realised that although Windows Firewall was turned off, as soon as the second NIC was plugged in the firewall turned itself back on.... thus blocking pings and RDP.

The simple solution was obviously to plug in the connection, and then disable the firewall again.
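If you hit the same thing, the firewall state can be checked and forced off for every profile from an elevated prompt on Server 2008; only sensible when, as here, the second network is isolated:

```shell
:: Show the firewall state per profile, then turn it off for all
:: of them (domain, private, public). Run from an elevated prompt.
netsh advfirewall show allprofiles state
netsh advfirewall set allprofiles state off
```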

VMM2008 SSP - Shared ISOs

We have noticed that when Self Service Portal users are trying to alter the properties of one of their VMs, they have an option to mount an ISO. Down at the bottom of the options is a tick box which says "SHARE" rather than copy.

Normally if you mount an ISO, the ISO file will be copied into a subfolder of the VM before it is mounted. For a 4GB ISO file (such as SUSE Linux) this can take several minutes. So the obvious advantage of using the SHARE option is that it just uses the original ISO rather than creating a copy.
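The time saving is easy to ballpark. A quick Python arithmetic sketch, with throughput figures that are assumptions rather than measurements from our kit:

```python
# Rough arithmetic on why SHARE beats copy: time to duplicate a
# 4 GB ISO at a few assumed disk/network throughputs (MB/s).
def copy_seconds(size_gb, mb_per_s):
    return size_gb * 1024 / mb_per_s

for speed in (20, 60, 120):  # assumed throughputs, not measured
    print(f"{speed} MB/s -> {copy_seconds(4, speed):.0f} s")
```

Even at a generous 120 MB/s you're waiting half a minute per mount; the SHARE option skips that entirely.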

However....there is a catch. The option isn't available if your host is a Hyper-V server. I have logged this with MS via TechNet; http://forums.microsoft.com/TechNet/showpost.aspx?postid=4167342&siteid=17

It seems to be a known issue, but as yet they haven't given a resolution date.

Dev Con & Dell

We have been having a serious issue with our latest Dell 755s. These PCs come with a built-in 4-slot card reader. The A: drive is reserved for a floppy, C: is the local HDD and D: is taken by the DVD drive, leaving E, F, G & H for the card reader. I imagine 90% of network admins have set the user's home area to default to H:, which obviously causes a major problem when the user tries to log in to a machine with a card reader. As a short-term fix we have used an MS tool called devcon, which can disable devices from the command line. I'll post an update when we find a more long-term solution.
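The collision is easy to demonstrate. A Python sketch of the next-free-letter behaviour (the reserved letters mirror the 755 layout above; the assignment rule is a simplification of what Windows actually does):

```python
# Sketch of the drive-letter collision: the card reader grabs the
# next free letters, which swallow H: before the home-area mapping
# can use it. Simplified model, not Windows' actual mount manager.
def assign_letters(reserved, device_count, start="E"):
    """Give a device the next `device_count` free letters from `start`."""
    letters = []
    code = ord(start)
    while len(letters) < device_count:
        letter = chr(code)
        if letter not in reserved:
            letters.append(letter)
        code += 1
    return letters

reader = assign_letters({"A", "C", "D"}, 4)
print(reader)            # letters the 4-slot card reader claims
print("H" in reader)     # clashes with the H: home-area mapping
```

With A:, C: and D: taken, the four reader slots land on E through H, and the H: home drive mapping silently fails.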

VMM2008 Slow Creating VMs


Following on from a previous post on this topic, I have finally (after around 6 weeks of trying) been able to find a resolution to the issue I previously posted about here.

It turns out that this was not actually an issue with VMM being slow installing VM components; VMM was actually being slow at reading the drives on the host server. Thankfully someone from Microsoft jumped onboard (thanks Hector Linares), and was able to confirm that VMM 2008 does have this issue if any of the following conditions are met;

VIRTUAL MEDIA: If the host server has any virtual media, such as a Dell DRAC or virtual floppy.

GPT DISK: If any drives in the host server are using a GPT partition table rather than MBR.

UNINITIALIZED DISK: If any of the disks are not initialized.

OTHER MOUNTED VHDs: If a VHD has been mounted incorrectly or not dismounted correctly.

In our situation the problem was actually related to GPT. The server has 3 drives, 2 set up as MBR and one set up as GPT. Converting the one GPT disk to MBR instantly fixed the problem, and VMM now creates VMs in under 10 seconds.
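For reference, the conversion itself is a short diskpart session, but be warned that `clean` WIPES the disk, so move any data off it first (the disk number below is illustrative; check it with `list disk`):

```shell
:: Elevated prompt. "clean" destroys everything on the selected
:: disk -- back up first. Disk 2 is an example, verify your own.
diskpart
list disk
select disk 2
clean
convert mbr
```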

As a side note, GPT support was introduced by MS in Server 2003 SP1, but it is really being pushed with Server 2008. See this article by MS.

Hopefully MS will have a fix soon, but as of Nov 2008 there is no way of converting an MBR disk to GPT without losing the data.