RemoteFX
It's looking like we got the sizing for our custom RDS servers right, and we may well have answered (at least for our own internal use) ‘how many users can you get on an RDS server?’.
The video shows our RDS farm under normal load with 24 clients remotely logged in (excluding the admin session I was using), with CPU usage staying low and occasionally idle.
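For anyone curious how we arrive at numbers like that, here's the kind of back-of-envelope sizing sum involved (a quick Python sketch; the RAM figures below are illustrative assumptions, not measurements from our farm):

```python
# Back-of-envelope RDS session-host sizing.
# All figures are illustrative assumptions, not measurements.
def max_sessions(total_ram_gb, os_reserve_gb, ram_per_session_gb):
    """Estimate how many concurrent sessions fit in RAM."""
    usable = total_ram_gb - os_reserve_gb
    return int(usable // ram_per_session_gb)

# e.g. a host with 16 GB of RAM, 2 GB reserved for the OS,
# and roughly 0.5 GB per light Office/web session:
print(max_sessions(16, 2, 0.5))  # → 28
```

In practice CPU, disk and per-session peaks matter as much as average RAM, which is why we still load-tested rather than trusting the arithmetic alone.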
Well, it's taken a little while but the light at the end of the RDS Farm tunnel is in sight! After heavily modifying two X-Case RM 206 HS cases we now have our first batch of cookie sheet style servers ready for use. The video above gives a bit of a tour of the case and the photos below go into some particulars in more detail.
While waiting for the cases and power supplies to arrive for our RDS Server Farm, I thought we might as well fire one up and do a little stress testing; the first results, which look at application load times, can be seen below.
Here you see pretty much the entire Office 2007 and Adobe CS6 suite (with a few other programs thrown in for fun) load in almost no time at all.
In the background we had 6 students playing videos on YouTube (RemoteFX doing its thing), however this really is just a test of the OCZ Vertex 4 SSD.
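If you want to reproduce a crude version of this test yourself, something along these lines works (a hypothetical helper for illustration, not the script we used); reading a program's files end-to-end is a rough stand-in for the disk side of application load time:

```python
import time

def time_read(path, chunk=1 << 20):
    """Time a sequential end-to-end read of a file, as a crude
    stand-in for the disk portion of 'application load time'."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(chunk):  # read in 1 MB chunks until EOF
            pass
    return time.perf_counter() - start
```

Bear in mind a warm OS file cache will hide the SSD entirely, so repeat runs only measure the disk if you drop caches (or reboot) between them.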
Following on from my first post, I am going to look at what will make up my RemoteFX RDS farm, including the software and hardware architecture.
First, I’ve started out as you would with any small RDS farm; in this case with 4 session hosts and a single connection broker (which will also act as the licence server). The 30 endpoints are pointed at the connection broker, which then decides which session host they should log in to.
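To give a feel for what the connection broker is doing, here's a toy sketch of the core idea in Python (the real RD Connection Broker also honours relative server weights and reconnects users to their existing disconnected sessions; this is just an illustration, and the host names are made up):

```python
# Toy illustration of connection-broker load balancing:
# send the new connection to the host with the fewest
# active sessions.
def pick_host(session_counts):
    """session_counts: dict mapping host name -> active sessions."""
    return min(session_counts, key=session_counts.get)

hosts = {"RDSH1": 6, "RDSH2": 4, "RDSH3": 7, "RDSH4": 4}
print(pick_host(hosts))  # → RDSH2 (first host with the fewest sessions)
```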
In my case the servers have only 2 hops between themselves and the endpoints: over fibre optic to a local network switch and then down 10/100Mb copper to the client. For the time being the endpoints are just repurposed PCs, however we hope to replace them with dedicated thin clients (mainly for power saving reasons) in the next few months.
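As a sanity check on that network layout, here's the quick uplink maths (the per-session bandwidth figure and the 1 Gb/s fibre speed are assumptions for illustration, not measurements):

```python
# Rough check that the fibre uplink can carry all the remoted
# sessions at once. Figures are illustrative assumptions.
def uplink_headroom(clients, mbps_per_session, uplink_mbps=1000):
    """Return (total demand in Mb/s, fraction of uplink used)."""
    demand = clients * mbps_per_session
    return demand, demand / uplink_mbps

# 24 clients at an assumed ~6 Mb/s each for a RemoteFX video session:
demand, used = uplink_headroom(24, 6)
print(demand, f"{used:.0%}")  # → 144 14%
```

Even with every client playing video, the shared fibre leg has plenty of headroom; the 100 Mb copper run to each individual client is likewise far above a single session's needs.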
The connection broker will be hosted as a virtual machine on one of our Hyper-V servers; however, to make use of RemoteFX technology (I'll go into this in a little more depth in a later post) the session hosts will all be running directly off physical hardware.
Suffice to say, budgets are still tight for schools and nothing chews through the budget more than replacing a computer room full of PCs.
One alternative has always been to convert the PCs in the room to ‘fat thin clients’ with a small OS (say Windows Thin PC) and hook them up to a Terminal Server/Remote Desktop Services farm. The biggest show stopper in this has been the lack of graphics acceleration, which makes performing graphics-intensive tasks difficult, if not impossible.
Luckily Server 2008 R2 SP1 has come around with RemoteFX technology – this allows you to harness the power of graphics processing in a server. Another issue crops up though – few if any servers (from leading OEMs like HP and Dell) support graphics cards and those that do are just as expensive as normal PCs.
My solution is to build custom servers out of AMD Fusion APUs (which combine a powerful CPU and GPU on one chip) in true ‘cookie sheet style’.
This series of posts looks at the hardware, software and endpoints (fat thin clients) that I’m going to be using in this project.