Building an ESXi whitebox

Started by SimonV, March 22, 2015, 07:59:29 AM


SimonV

Hi guys

I'm selling my old desktop this week, which served me well for eight years, and I'm now looking at replacing it with an ESXi server.
The goal would be to run one or two Linux virtual machines for downloads, network services and monitoring, and a lot of lab machines in separate VLANs which will connect to physical routers, firewalls or virtual appliances. Because I don't want to deal with noisy enterprise servers and high power costs, I'll be taking the whitebox approach.

Has anyone here built their own ESXi server with consumer hardware? I have been doing some reading this weekend but still have a lot of questions before I can start ordering hardware.


  • Intel or AMD: AMD looks like the cheapest solution. I've been looking at this build but with an AMD FX-8350 CPU (8-core, 4 GHz).
  • RAM: it's the most expensive part, so would 16GB suffice, or is it better to just bite the bullet and go with 32GB at once?
  • Storage: what's the recommended approach for storage? I would like to have some redundancy, so I was thinking of two disks in a RAID1 configuration.
    Another option would be to run iSCSI to my FreeNAS (two 2TB disks in a mirrored ZFS volume), although I have zero experience with that, and the NAS only has one NIC.
  • Networking: I believe most people use quad-port Intel NICs on their ESXi hosts. Can anyone recommend an affordable card that is compatible with the latest ESXi build? Will I be able to do LACP and dot1q with the free version of ESXi? (rough sketch of what I think the dot1q side looks like below)
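
From what I've read, dot1q on the standard vSwitch should just be a VLAN ID on a port group, something like this (untested, the port group and vSwitch names are made up):

    # create a port group on the default vSwitch and tag it with VLAN 10
    esxcli network vswitch standard portgroup add --portgroup-name=LAB-VLAN10 --vswitch-name=vSwitch0
    esxcli network vswitch standard portgroup set --portgroup-name=LAB-VLAN10 --vlan-id=10
    # VLAN ID 4095 supposedly passes all tags straight through to the VM (handy for virtual routers)

LACP apparently needs the distributed switch, and therefore vCenter, so on the free licence I'd be limited to static EtherChannel with IP-hash load balancing. Can anyone confirm?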
All advice very much appreciated!  :matrix:

that1guy15

I didn't do a whitebox, but here is a good write-up that you might find valuable.
http://ethancbanks.com/2014/03/15/my-home-lab-esxi-5-5-server-build-and-the-logic-behind-it-all/

Myself, I did a Dell T7500 with 64GB of RAM and dual hex-core Xeons from eBay. Far cheaper, and this guy runs quieter than my laptop! Less than $900 put into the server to date.
That1guy15
@that1guy_15
blog.movingonesandzeros.net

SimonV

Thanks, I just checked eBay and prices for that tower are between €500 and €1000 with 12GB of RAM, which is not too bad, but you do have to factor in that there's only a 30-day warranty with most hardware brokers, compared to two years on new consumer hardware.

What does your storage look like? Are you using local disks or running over the network?

that1guy15

It's all local for now. I want to get a Synology badly and migrate everything over, but that will be a while.

Currently I have a 2TB RAID 5 array internal on the built-in PERC 6/i. ESXi itself is installed on a 32GB SSD.

The RAID came from my old T110, which I purchased a while back for my CCIEv4 labbing.
That1guy15
@that1guy_15
blog.movingonesandzeros.net

SimonV

Suppose I go with one disk alone to save on HDD cost, is there any way (with the free version) to take snapshots and dump them on my NAS?
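
Something like this is what I had in mind from the ESXi shell, if the free licence allows it (the VM ID, names and paths are all made up):

    # find the VM's ID, then snapshot it (no memory dump, no quiesce)
    vim-cmd vmsvc/getallvms
    vim-cmd vmsvc/snapshot.create 42 "pre-lab" "clean state before labbing" 0 0
    # allow outbound SSH from the host, then copy the VM's folder to the NAS
    esxcli network firewall ruleset set --ruleset-id=sshClient --enabled=true
    scp -r /vmfs/volumes/datastore1/lab-vm1 root@freenas:/mnt/tank/backups/

No idea whether copying the folder of a running VM like that is actually safe, though.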


SimonV

I've been reading up on RAID options with local storage; the following article sums it up quite clearly.

http://www.packetmischief.ca/2011/03/20/choosing-a-raid-card-for-esxi/

Looking at the price tags for true RAID cards, I think I'll just roll with a single disk for the moment or try the FreeNAS option :mrgreen:

that1guy15

Yeah, I would have skipped the RAID option if the card hadn't been built in or if I hadn't been given one by my old job.

Snapshots should stay with the VM and its files, as a snapshot file alone is useless, if I remember correctly. I'm not sure if you can attach a NAS as storage within ESXi; if so, then just store the whole VM there. But yeah, simple would be local storage.
That1guy15
@that1guy_15
blog.movingonesandzeros.net

routerdork

My NAS is attached as storage to my ESXi box, although it is a Synology. I'm not sure what FreeNAS offers, but I assume much the same. Just connect them up with iSCSI and you're good.
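
From the ESXi shell it's roughly this, going from memory (the adapter name vmhba33 and the target IP are just examples from my box):

    # enable the software iSCSI initiator
    esxcli iscsi software set --enabled=true
    # point it at the NAS target, then rescan to pick up the LUN
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.50.10:3260
    esxcli storage core adapter rescan --adapter=vmhba33

After the rescan the LUN shows up as a new device and you can format it as a VMFS datastore.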

One thing I have noticed is that CUCM, CUC, etc. error out on install when the target is on my NAS. I've found that when I run into this issue, it's easier to install locally and then move the VM over afterwards.

My T110 is maxed out on RAM and I end up having to shut things down to do other labs. I too am looking into options but haven't yet decided what I'm going to do. Keep us posted on what you decide to go with.
"The thing about quotes on the internet is that you cannot confirm their validity." -Abraham Lincoln

SimonV

Ordered the hardware on Thursday. Everything was in stock except for the case :doh:
Oh well, gives me time to do some paperwork over the weekend...

routerdork

I ordered a new motherboard, CPUs, and some RAM this morning. I still need to get a new power supply; I should have everything else to put mine together. I think I'm more excited to use my T110 as a desktop than I am about the new server. My laptop doesn't have much life left in it and I'm sitting at a desk most of the time anyway.  :banana:
"The thing about quotes on the internet is that you cannot confirm their validity." -Abraham Lincoln

wintermute000

DO NOT TOUCH AMD; there are various issues with various ESXi features.

Always go Intel NICs, no question.

iSCSI is fine for labbing, but production-wise (assuming a 1Gb network) you'll notice the speed difference quite dramatically; I use iSCSI only for test hosts, not for 'real' machines. I recommend two fat SSDs in RAID1 for your main storage, but feel free to run lab VMs (or even non-disk-IO stuff like routers) on iSCSI. Whatever you do, run it on a separate network (even a $50 D-Link, which is what I use, is fine) on separate NICs, etc. There is a lot of tuning (on the target side), plus 10Gb NICs, required for you to get performance from iSCSI comparable to native or FC/FCoE...
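
To give you an idea, the 'separate NICs' part is just a dedicated vmkernel port bound to the software iSCSI adapter; from the ESXi shell it's along these lines (the vmnic/vmk/adapter names and addressing are examples):

    # dedicated vSwitch and vmkernel port on the storage network
    esxcli network vswitch standard add --vswitch-name=vSwitch-iscsi
    esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch-iscsi
    esxcli network vswitch standard portgroup add --portgroup-name=iscsi-1 --vswitch-name=vSwitch-iscsi
    esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iscsi-1
    esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.50.11 --netmask=255.255.255.0 --type=static
    # bind the vmkernel port to the software iSCSI adapter
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1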

You have 3 good options in my mind.

1.) Intel Xeon E3 build. Pros: cheapest. Cons: maxes out at 32GB, thanks a lot Intel. I recommend you stick with server-grade mobos and ECC RAM, though if you want to cheap out you can stick Xeon E3s into a lot of consumer chipsets and it does work. With the right mobo you will get enterprise features too, like IPMI/iLO/iDRAC, dual Intel NICs, etc.

2.) Sandy/Ivy/Haswell-E build (i.e. the X-series CPUs). Pros: lets you go up to 64GB. Cons: expensive.

3.) A second-hand server, as that1guy says. Pros: cheap for what you get. Cons: warranty, old gear, maybe loud and heavy depending on what exactly you get (e.g. a 1RU server will be hella loud), and it may kill your power bill.

You also want to have a think about whether you want the one megabox-of-doom approach or multiple smaller hosts. The former is better for ease/general use, the latter for labbing anything VMware- or virtualisation-specific (as then you have multiple real hosts).

routerdork

Quote from: wintermute000 on March 28, 2015, 06:48:26 PM
DO NOT TOUCH AMD; there are various issues with various ESXi features.
This is good info to remember too. I looked at AMD in my research for a new system but decided on Intel due to compatibility issues between AMD and Cisco VoIP. Who knows what else would have had issues if I'd gone that route. For the money spent, I'd rather be safe than sorry.
"The thing about quotes on the internet is that you cannot confirm their validity." -Abraham Lincoln

SimonV

The hardware I ordered was AMD-based, but I based it on a couple of builds from VCP blogs and forums, so I guess it should be okay. I was first looking at the Xeon build from Mellowd's site, but the price difference was just too much for a lab box. I also ordered 2x Intel PRO/1000 PT dual-port NICs.
If it turns out to be a huge PITA I can always use KVM, turn it into a more-than-decent desktop, or even sell it as a new desktop.

wintermute000

Protip: if going whitebox, make sure you get a box with Intel vPro. It basically turns your built-in mobo NIC into a ghetto BMC/iLO/iDRAC (i.e. a way of remotely turning the machine off and on). I don't actually use the mobo NICs on my servers (long story, let's just say be careful with community-hacked slipstreamed drivers), but I do have them patched in for this very reason alone; vSphere not recognising them doesn't matter when vPro works at the BIOS level.

It's nice to be able to RDP into your jump host and remotely power your extra lab servers off and on.
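
If you have a Linux box handy, the amtterm package can do the power control over the network; roughly like this (the IP and password are placeholders, and AMT has to be provisioned in the BIOS first):

    # amttool reads the password from the environment
    export AMT_PASSWORD='yourAMTpassword'
    amttool 192.168.1.60 info        # query current power state
    amttool 192.168.1.60 powerup     # power the host on remotely
    amttool 192.168.1.60 powerdown   # hard power-off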


Also, a cheap facsimile of a VPN gateway, if you don't fancy ponying up for an SRX110 or C8xx or running a pfSense instance on your ESXi, is a simple Linux VM with SSH NATted through from the outside. Use port tunnelling to tunnel RDP for a secure way to access your jump host without anything fancier than putty.exe. You can also secure SSH on Linux pretty well with things like fail2ban. In my case I have cable and don't want to deal with the hassles of bridging (and then explaining to the wife why there are 'two modems').
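
The forward itself is a one-liner; with OpenSSH it's something like this (addresses made up, and PuTTY's Tunnels page does exactly the same):

    # forward local port 3389 through the SSH gateway to the jump host's RDP
    # 203.0.113.10 = your public IP, 192.168.1.20 = jump host on the LAN
    ssh -N -L 3389:192.168.1.20:3389 user@203.0.113.10
    # then point your RDP client at localhost:3389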

SimonV

Just spent the last three hours installing ESXi to my USB stick. For some reason everything kept crashing during install or boot. It turns out it doesn't like working on a USB3 port; switching to a USB2 port fixed it :doh:

I was already blaming AMD lol