Building an ESXi whitebox

Started by SimonV, March 22, 2015, 07:59:29 AM

Previous topic - Next topic

hizzo3

There have been lots of issues surrounding USB 3.0, from Wi-Fi interference to hardware not working and drives corrupting. I know I have issues with it recognizing my cellphone. Sometimes it will pick it up and run data; other times my phone sees it as an AC charger and shuts down all data.

hizzo3

#16
The new Supermicro X10SDV-TLN4F is on my wishlist, BTW. Have to buy a house first.... But the next big buy will be this or a sister board. It will be a killer VM box: 128GB RAM, 2x 10GbE, 2x 1GbE, 1x IPMI, 8 cores/16 threads at 2GHz. All on a <90W board (under load!).
:drool:

routerdork

That's a lot for such a small board. I like it. I imagine maxing out the RAM on that guy will get expensive with 4x32GB sticks, but the power and noise savings would be awesome!

I ended up buying a Supermicro X8DTH-iF. I've got 48GB (3x16GB) of RAM for it so far; I've been buying here and there as I find deals. I've got a matched pair of dual six-core Xeons. I've been trying to find a good deal on a tower to fit this thing, plus an efficient power supply. I'll be running ESXi from a USB stick and storing everything on my Synology. My T110 has been good to me, but this has the compute I need to do bigger labs and still run other VMs.
"The thing about quotes on the internet is that you cannot confirm their validity." -Abraham Lincoln

hizzo3



Quote from: routerdork on April 16, 2015, 02:15:22 PM
That's a lot for such a small board. I like it. I imagine maxing out the RAM on that guy will get expensive with 4x32GB sticks, but the power and noise savings would be awesome!

Yep. Some people are claiming $800-900 USD is a bit much. Apparently they have never priced these components separately; 10GbE alone is a few hundred. Good buy in my book.

wintermute000

#19
If you've got one box, what's the point of using a slow Synology (iSCSI, I presume) as opposed to a pair of fast local SSDs?


That Supermicro is using those new Xeon-Ds. Curious as to whether Intel's 'low power, not low performance' marketing is legit; let us know! Virtual routers love CPU, especially VIRL: nested virtualisation of IOSv over KVM over ESXi makes even my Sandy Bridge i5s feel slow.

routerdork

Quote from: wintermute000 on April 21, 2015, 06:55:48 AM
If you've got one box, what's the point of using a slow Synology (iSCSI, I presume) as opposed to a pair of fast local SSDs?
I'm doing mine this way because I have so many TBs of storage available. My Synology is maxed out with 4x 3TB drives and I'm currently only using 1.5TB, so there's no need to buy more disks when I have everything I need. My voice servers alone would require several SSDs, and after what I'm paying for RAM I just don't see the need in a lab. I originally was going to use local disks and bought a RAID controller for my box, but I'm holding off on it for now. If later on down the road I decide I need local disks, I can always add them.
"The thing about quotes on the internet is that you cannot confirm their validity." -Abraham Lincoln

wintermute000

I highly doubt your voice servers require several SSDs. Thin provision everything onto a 512GB SSD and I bet you'll fit an entire CUCM cluster + Unity + Presence.
Or have you done the thin provisioning maths and decided it's still not enough?

I thin over-provision at 3:1 or worse and I'm not even close to running out of room.
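The thin-provisioning maths above can be sketched in a few lines. This is a minimal illustration, not anyone's actual lab figures: the VM sizes and the 512GB SSD below are invented for the example.

```python
def thin_provision_check(allocated_gb, actual_used_gb, datastore_gb):
    """Compare thin-provisioned capacity against a datastore's real size.

    allocated_gb:   each VM's provisioned disk size in GB
    actual_used_gb: each VM's actual on-disk usage in GB
    datastore_gb:   physical capacity of the datastore in GB
    """
    provisioned = sum(allocated_gb)
    used = sum(actual_used_gb)
    overcommit_ratio = provisioned / datastore_gb  # e.g. 3.0 means 3:1
    headroom = datastore_gb - used                 # real free space left
    return overcommit_ratio, headroom

# Hypothetical voice lab: three 110GB CUCM-style VMs plus assorted servers,
# each only partly full, thin-provisioned onto a single 512GB SSD.
ratio, free = thin_provision_check(
    allocated_gb=[110, 110, 110, 160, 200, 80],
    actual_used_gb=[35, 33, 30, 50, 60, 25],
    datastore_gb=512,
)
# About 1.5:1 overcommit with ~279GB of real space still free.
```

The point of the sketch: what matters for running out of room is the actual usage line, not the provisioned total, as long as you watch the datastore's free space.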

routerdork

Quote from: wintermute000 on April 22, 2015, 03:40:22 AM
I highly doubt your voice servers require several SSDs. Thin provision everything onto a 512GB SSD and I bet you'll fit an entire CUCM cluster + Unity + Presence.
Or have you done the thin provisioning maths and decided it's still not enough?

I thin over-provision at 3:1 or worse and I'm not even close to running out of room.
I have multiple clusters set up, from 8.0 through 10.5 :) I also have several Apache servers, IOU, Cacti, BIND, Console, F5s, xRV, Titanium, and the INE topology. So I decided iSCSI would be best so that all the VMs are in one place. I probably could fit them all, but it would be tight; I chose to use what I already had in this case, since the NAS is always on anyway.
"The thing about quotes on the internet is that you cannot confirm their validity." -Abraham Lincoln

wintermute000

#23
Fair enough. I only have around two dozen vSRX/CSRs, VIRL, AD, Linux, a jumphost, and vCenter. You poor voice guys, LOL.


OTOH, I do use iSCSI, so I'm a hypocrite :) but only for hosts I want to vMotion around / test HA features with, etc.


In another year or so when I get ministry of finance approval for a full rebuild I'll probably go all SSD, 10G and vSAN :awesome:  in my dreams....

hizzo3

I have one of the new i5s in my work laptop (an ultrabook), Broadwell if I remember right. Seven hours of battery, and it has more than enough processing power for coding and such. I haven't tried anything really intensive yet.

wintermute000

Ultrabook = dual core (the U parts), even on an i7. Intel's branding is highly misleading.

FWIW I have an i7-5500U in my work rig, which is itself a Broadwell-U chip. It runs IOU and my Python Ubuntu instance / Cisco onePK VM fine, but absolutely chokes on VIRL. I would expect it to choke on vSRX and CSRs too, but I wouldn't run those on Workstation anyway; you want a bare-metal hypervisor for that.

Mind you, the more labbing I do, the more I use IOU / Web-IOU, so I dunno if VIRL really matters.

wintermute000

#26
WOOHOO, scored an HP DL380 G6 on the cheap (800 AUD): 2x hex-core X5650 CPUs (24 threads, mmmmm), 32GB RAM, 8x 72GB SAS.
Plenty of room to kick it up to 144GB, or whatever ridiculous figure the maximum is for a dual-socket Westmere box :)

Now I can finally run vCenter AND NSX AND AD/Linux/jumphost on the same box, and/or have a box that VIRL won't bring to its knees...


Time to start planning a dodgy 5.5 --> 6.0 vSphere/vCenter migration. I think I might just chuck the old vDS config; all my 'important' hosts are on my real VLAN, and only test VMs live on the vDS. Otherwise, apparently the deal is to move everything off the vDS onto standard vSwitches, migrate vCenters, then attach them to a new vDS. Given my ultimate aim is to run up NSX, it's probably not worth the effort. The new environment will be:


HP DL380 G6 (2x Xeon X5650, 32GB RAM, dual NIC, local storage): vCenter, AD, Linux, jumphost, VIRL, NSX, puppetmaster
2x Dell Optiplex 990 (i5-2400, 16GB RAM, quad Intel NIC, attached to iSCSI): test cluster
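Before committing VMs to the 32GB box above, it's worth a quick sanity check that the planned allocations fit. A rough sketch, where every VM size is a hypothetical guess, not a figure from the actual build:

```python
def ram_fits(host_ram_gb, vm_ram_gb, overcommit=1.0):
    """Return whether planned VM RAM allocations fit on a host,
    allowing an optional memory overcommit factor."""
    return sum(vm_ram_gb) <= host_ram_gb * overcommit

# Hypothetical allocations for the 32GB DL380: vCenter 8, VIRL 12,
# NSX Manager 8, then AD, a Linux jumphost, and puppetmaster at 2GB each.
plan = [8, 12, 8, 2, 2, 2]
fits_flat = ram_fits(32, plan)                  # 34GB planned vs 32GB: no
fits_1_5x = ram_fits(32, plan, overcommit=1.5)  # fits if you overcommit
```

With those guesses the box is already slightly over physical RAM, which is exactly the kind of thing memory overcommit (and VIRL's appetite) makes easy to miss.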

SimonV

I scooped up a dirt-cheap HP DL380 G7 from a company this week: two Xeon X5650s and 148GB of RAM. Going bare-metal EVE-NG this weekend!

SimonV

Damn, he said it was 148GB and I only got 44. I want my money back.

:rage:

wintermute000

That is a massive difference; you're well within your rights to return it.