CCIEv5 CSR1000v Virtual Lab Build

In studying for the CCIEv5, the older routers that GNS3 supports won't cut it for some of the newer topics being tested. At the moment, the best and cheapest way to lab up different scenarios is to use Cisco's CSR1000v virtual routers. However, the requirements for running several of them simultaneously can be pretty taxing, and the average desktop computer just isn't up to the job! This post is about my experience deciding whether to buy an older server or build one myself for labbing.

I started out with an Ivy Bridge system: a Z77 motherboard and a Core i3-3220 CPU (dual core, hyper-threaded), maxed out at 32 GB of RAM (4x8 GB DDR3). When I would fire up 10 router instances (using the Small / 2.5 GB RAM setting), it would typically take 15 minutes for all 10 to be "usable" (whether I started them individually or all at once), and the CPU would sit at 90-100% even with a blank config on every router.

The ESXi memory optimization trick helps tremendously (Advanced Settings > Mem > set Mem.AllocGuestLargePage to 0, which stops ESXi from backing guest RAM with large pages so that transparent page sharing can dedupe the nearly identical router VMs). But with the CPU pegged at 90+% with 10 routers, there was no way I was going to be able to spin up 20 routers on the Core i3-3220.
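For reference, here's the same change made from the ESXi shell instead of the vSphere client; this is a minimal sketch assuming you have SSH or local shell access enabled on the host:

    # Stop backing guest memory with 2 MB large pages so transparent
    # page sharing can dedupe identical 4 KB pages across the nearly
    # identical CSR1000v VMs (same as Mem.AllocGuestLargePage = 0).
    esxcli system settings advanced set -o /Mem/AllocGuestLargePage -i 0

    # Confirm the new value took effect
    esxcli system settings advanced list -o /Mem/AllocGuestLargePage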

I was considering spending up to $700 on an older server from eBay with 64-128 GB of RAM, and I found a real beauty: an IBM x3850 M2 with 128 GB of RAM and four 6-core Xeon X7460 CPUs. And while having four X7460s (whose original retail price was $2800 each!!) would have been cool, the fact is that platform was Core 2 (Penryn) based, and that technology is getting to be 7-8 years old now. That server was completely maxed out, both in CPU and memory, and could never be upgraded beyond what it already was. Plus I knew it was going to generate a lot of heat, use a lot of electricity (dual 1400 W power supplies), and probably be pretty loud.

If I am going to invest in a system, I want one that's good enough for what I need right now and can grow as I need it to (and, of course, doesn't break the bank). Lower power draw, less noise, and less heat are a bonus.

So I researched the generation beyond the Core 2 based Xeons and found that the Nehalem/Westmere Xeons offer more features and better performance, especially for virtualization (Nehalem introduced EPT for hardware-assisted memory virtualization, for example), and depending on the particular CPU you get, the price is pretty decent.

I ended up getting a SuperMicro X8DTI-F motherboard. It's a dual-socket LGA 1366 board that supports the 5500- and 5600-series Xeon CPUs, and it has 12 DDR3 slots. The board can be had on eBay for between $125 and $175 right now. The motherboard specs say it supports up to 4 GB sticks of regular non-ECC DDR3 (desktop memory), and up to 16 GB sticks of ECC registered DDR3 (server memory). So 12x4 GB tops out at 48 GB of desktop memory, and 12x16 GB tops out at 192 GB of server memory.

When I bought this board, though, the only DDR3 I had was 8 GB sticks of desktop memory, which the specs don't list as supported. I put my four 8 GB sticks on the board anyway, and it worked! This means I can use my existing investment in DDR3 and keep adding to it, up to 96 GB. I'd also be willing to bet that it will take 32 GB sticks of server memory for a total of up to 384 GB (but that would be an expensive bet 🙂).

For the CPUs, I chose two Xeon E5520s. These are of the Nehalem generation and are quad-core with hyper-threading, so the pair together gives the board 8 cores / 16 threads. Here's the real kicker: they were only $9 each on eBay! They originally retailed for nearly $400 apiece.

I was also eyeing the Xeon X5650s at $75 each, which are of the Westmere generation and are 6-core / 12-thread. However, I ended up going with the E5520s because I wasn't sure what the performance was going to be, and I figured at $9 each I wouldn't be out any real money if they sucked.

But it turns out they are really awesome for a CSR1000v lab! You don't have to break the bank when putting together a system, and you don't have to settle for an old, loud server, either!

Right now my ESXi host has the X8DTI-F motherboard (which has two gigabit NICs plus a separate IPMI NIC), the two Xeon E5520 CPUs, and 32 GB of RAM. The system also runs very stably on only a 530 W power supply.

With this setup, I have the VMware vCenter Server VM running (with 4 GB of RAM allocated), two Windows 7 VMs (2 GB each), a Kali Linux VM for Wireshark (2 GB), and ten CSR1000v routers (2.5 GB each).

I have all 14 of these VMs running simultaneously. On each of the 10 router instances, I configured 200 loopback interfaces and a simple single-area OSPF instance (the short script after this paragraph shows one way to generate that config).
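In case it's useful, here's a rough sketch of generating that per-router config with a shell loop; the 10.R.0.x/32 addressing scheme is my own illustration, not anything prescribed. Paste the output into each router's console:

    #!/bin/sh
    # Emit IOS config for 200 loopbacks plus a single-area OSPF
    # instance on one router. R is the router number (1-10); the
    # addressing is hypothetical, so adjust to taste.
    R=1
    for i in $(seq 1 200); do
      echo "interface Loopback$i"
      echo " ip address 10.$R.0.$i 255.255.255.255"
    done
    echo "router ospf 1"
    echo " network 10.$R.0.0 0.0.0.255 area 0"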

With all of that running and configured, the ESXi host is using just over 16 GB of RAM (even though roughly 35 GB is allocated across the VMs, thanks in large part to the page-sharing tweak above) and about 15% CPU time (compared to the 90-100% before, with the i3 and blank configs on the routers!). When I boot all the routers simultaneously, it takes less than 5 minutes for all of them to be in a usable state. If I placed them all on an SSD, I'm sure the time would drop even more.

The two E5520s will definitely handle a full load of 20 routers, possibly even with just 32 GB of memory! Though when I get to the point of doing full-scale labs, I will probably bring the system up to 48 GB. But that is the whole reason I decided to go with this system: it is cool and quiet, it uses my existing investment in RAM, and it can inexpensively grow as I need it to, including by way of much more powerful CPUs.

I very highly recommend this setup to anyone who doesn't have a lot of money to spend on a lab. The most expensive part (if you don't already have a decent amount of it) will be the DDR3 memory. 8 GB sticks of DDR3 server memory go for about $50 each on eBay right now, so if you have no DDR3, that will add $150-$300 to the cost.

It could be argued that, since the 10 routers used less than 32 GB of RAM, I might have been better off just upgrading the i3 to an i7 for around the same money. But part of my problem was that socket 1155 (and even Haswell's socket 1150) maxes out at 32 GB, and I wanted to be able to add more than that eventually.

It may sound funny, but I like the fact that I can still get so excited by hardware that isn’t even the latest and greatest (this stuff is about 3-4 years old right now). I have a background in building computers, and even though I am reaching the point where I could just buy things already pre-configured, I still like the idea of building it myself.