Work on the server room is coming along nicely: we've now removed the last Cisco switch from our infrastructure, and an HP 5400R series switch has been deployed in place of the 2530 that was there. Over time we'll be bringing more fibre from our edge switches into this room, hence the number of SFP+ ports on the 5400R. The entire front of the cabinet is now populated with either hardware or blanking panels (available from Comms Express) to keep things looking tidy. I wish there were a little more I could do with the cables coming into the 5400R, but with a very narrow rack there's not much that can be done.
Some interesting things have come out of both Ruckus and Palo Alto recently: they now offer Hyper-V compatible VMs for their services, which could free up a further 3U of space and remove another 4–6 cables from the picture.
Having recently purchased a Dell T430 tower server, which we'll be using for backup and as a Hyper-V replica host, I thought I'd share some photos of what the castors (an option in place of either the rack-mount kit or the floor-stand feet) look like!
The castor assembly comes in a separate box from the server and takes only a minute or two to install. I was perhaps expecting slightly larger wheels, but they do a good job all the same on hard floors.
With the front-to-back patch panel installed and the T430s rack-mounted, the server room is coming along quite nicely. Next up will be a tidy-up of the cables at the back of the rack, bringing all of the power cables to one side and the network cables to the other. Stay tuned for more in the coming weeks!
As part of an ongoing project to improve the room, today we've been installing a set of rails for a pair of Dell PowerEdge T430 servers. You may have noticed the 'T' in T430, indicating a tower server, but Dell provides a 5U rack-conversion kit which is pretty easy to install.
One small question came up while putting the rails in: 'Where do I mount the rails in relation to the 5U of space in the rack?' The answer is that the bottom of the rails goes at the bottom of the 5U of space. Hopefully the image to the right illustrates this!
Well, it's taken a little while, but the light at the end of the RDS farm tunnel is in sight! After heavily modifying two X-Case RM 206 HS cases, we now have our first batch of cookie-sheet-style servers ready for use. The video above gives a bit of a tour of the case, and the photos below cover some particulars in more detail.
While waiting for the cases and power supplies to arrive for our RDS server farm, I thought we might as well fire one up and do a little stress testing; the first results, which look at application load times, can be seen below.
Here you can see pretty much the entire Office 2007 and Adobe CS6 suites (with a few other programs thrown in for fun) load in almost no time at all.
In the background we had six students playing videos on YouTube (RemoteFX doing its thing); however, this really is just a test of the OCZ Vertex 4 SSD.
Following on from my first post, I'm going to look at what will make up my RemoteFX RDS farm, including the software and hardware architecture.
First, I've started out as you would with any small RDS farm: in this case with four session hosts and a single connection broker (which will also act as the licence server). The 30 endpoints are pointed at the connection broker, which then decides which session host they should log into.
In my case there are only two hops between the servers and the endpoints: fibre optic to a local network switch, then copper 10/100Mb down to the client. For the time being the endpoints are just repurposed PCs, but we hope to replace them with dedicated thin clients (mainly for power-saving reasons) in the next few months.
The connection broker will be hosted as a virtual machine on one of our Hyper-V servers; however, to make use of RemoteFX technology (I'll go into this in more depth in a later post), the session hosts will all run directly on physical hardware.
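The broker's role described above, picking a session host for each incoming endpoint, can be sketched in a few lines. This is only an illustrative model, not the actual RD Connection Broker logic, and the host names and session counts are hypothetical:

```python
# Illustrative sketch of what a connection broker does: route each
# incoming endpoint to the session host with the fewest active sessions.
# Host names and counts are hypothetical, not a real farm config.

session_hosts = {"SH1": 0, "SH2": 0, "SH3": 0, "SH4": 0}

def pick_session_host(hosts):
    """Return the least-loaded session host and record the new session."""
    host = min(hosts, key=hosts.get)
    hosts[host] += 1
    return host

# Simulate 30 endpoints connecting through the broker.
assignments = [pick_session_host(session_hosts) for _ in range(30)]
```

With four hosts and 30 endpoints, a least-loaded policy like this leaves each host carrying 7 or 8 sessions; the real broker also handles things this sketch ignores, such as reconnecting users to their existing sessions and per-server weighting.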
One alternative has always been to convert the PCs in the room to 'fat' thin clients with a small OS (say, Windows Thin PC) and hook them up to a Terminal Services/Remote Desktop Services farm. The biggest showstopper has been the lack of graphics acceleration, which makes graphics-intensive tasks difficult if not impossible.
Luckily, Server 2008 R2 SP1 has come along with RemoteFX technology, which allows you to harness the power of graphics processing in a server. Another issue crops up, though: few if any servers (from leading OEMs like HP and Dell) support graphics cards, and those that do are just as expensive as normal PCs.
My solution is to build custom servers around AMD Fusion APUs (which combine a capable CPU and GPU on one chip) in true 'cookie sheet' style.
This series of posts looks at the hardware, software and endpoints (fat thin clients) that I’m going to be using in this project.
For the past few months we've had a donated storage server sitting in our storage room. With 8×1TB HDDs, it was the perfect chance for us to supplement our daily tape backups with the speed of hard drives and move tape to a monthly schedule.
The only issue was that the server came with a rather pants Intel Core 2 Duo processor that didn't even support 64-bit! As such, we couldn't load our OS and backup software of choice (Server 2008 R2 and System Center DPM 2012).
After a few months of waiting for budgets, we've now been able to spend the £280 it took to get some proper components into this server. Photos of it are below, and the full spec list is on the next page.
In this series I'm going to look at how PCI-E SSDs can be used with VDI. I'll cover the hardware in use, the user experience, and why I believe PCI-E SSDs are the best option for getting your virtual desktops running as fast as possible.