I have realized over the years that in order to not just be proficient with technology but to truly excel, it's imperative to have a test bed: a plaything where I can spin up the new and break the old. I have done this to varying degrees in the past. I once turned an old desktop PC into an ESXi 5.0 host with 16 GB of RAM and four 500 GB hard drives in RAID 10. That served its purpose, but for this new adventure I needed something more. Having been out of the infrastructure world for a few years, I wanted to refresh my knowledge, so I began the journey of building a new home lab.
Lab Goals and Purposes:
- Test bed for VMware infrastructure and EUC products
- Allow me to renew my expired VCP5-DCV
- Test cool new features without any red tape
- Break anything and everything I can get my hands on
- Network must be robust and support VPN for external access
- Network must be secure and allow use of VLANs for segregation of traffic
- Base servers must not break the bank
- Base servers must support ESXi and specifically allow for VSAN setup
- Base servers must be Power friendly
For networking I went with Meraki gear: an MR32 access point and an MX64 security appliance. I did think about taking an old router, flashing OpenWrt or something similar, and going that route. However, one requirement not listed above is to not spend time on anything that isn't required to meet the goals. The Meraki gear costs money in licensing rather than hardware; I priced it at around $210 over 5 years. That sounds like a lot, but a good router with the newest Wi-Fi bands that I could flash with something more capable runs over $100. With Meraki I get an external AP I can place somewhere better than the closet holding the patch panel, plus a dedicated router with cloud-based management, an app for monitoring and alerting, and much more. I also purchased a managed TP-Link 8-port switch for all the physical ports I needed: the MX64 has only 4 LAN ports and 1 WAN port, which wouldn't suffice for my regular home network plus 3 servers with multiple NICs each. The switch supports VLANs and not much else; I do wish it had a console so I could do a bit more with it. Still, it does its job, sits there passing packets, and for roughly $20, what more can I ask for?
This is where the existing community really helped out. I checked out the top 10 blogs listed on thevpad.com and found a wealth of information. I decided on 3 ESXi hosts. Since I was going to configure VSAN for my shared storage, I went with the absolute minimum: 2 nodes, with the direct cross-connect ROBO option as the most cost-effective approach. A 2-node VSAN does require a witness, which I host on my 'utility ESXi' server. Below is the parts list:
o 32 GB of non-ECC RAM, with room to take each host to 64 GB. Had I opted for ECC RAM I could have taken it to 128 GB, but non-ECC saves some money and I didn't see the need for more than 32 GB. I cover the ESXi RAM-saving tips I use in another blog post.
o 120 GB MyDigitalSSD BPX M.2 SSD for the cache layer of an all-flash VSAN setup. I chose this particular SSD for its cost/performance ratio after digging into its specific read/write IOPS numbers; I couldn't justify the added expense of anything above it.
o 500 GB Samsung 850 EVO SSD, again picked after some cost/performance analysis. I realize this SSD is not enterprise worthy, but for home lab duty I think it was a good pick.
o 480 watt PSU. The cheapest PSU I could find knowing I wouldn’t ever need more than 150 watts.
Total price around $3000.
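To sanity-check what that parts list actually yields as shared storage, here is a back-of-the-envelope sketch. It assumes the default VSAN storage policy (FTT=1, RAID-1 mirroring), which keeps a full copy of every object on each node, and that only the 500 GB EVOs contribute capacity (the 120 GB M.2 drives are cache-tier only); the 30% slack figure is a common rule of thumb for rebuild headroom, not a measurement from my lab.

```shell
# Rough usable-capacity math for the 2-node all-flash VSAN above.
capacity_gb_per_node=500     # one Samsung 850 EVO per host (capacity tier)
nodes=2
raw_gb=$(( capacity_gb_per_node * nodes ))
usable_gb=$(( raw_gb / 2 ))                  # FTT=1 RAID-1 halves raw capacity
after_slack_gb=$(( usable_gb * 70 / 100 ))   # keep ~30% slack for rebuilds
echo "raw: ${raw_gb} GB, mirrored usable: ${usable_gb} GB, with slack: ${after_slack_gb} GB"
```

So roughly 350 GB of comfortably usable space — plenty for a lab, and a reminder that mirroring eats half the raw flash you buy.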
o The Intel NUC is a neat, quiet, fanless 40-watt box. With its 4-core, 8-thread CPU it can handle the load of vCenter Server plus a couple of appliances and Windows servers.
o The Supermicro servers have plenty of CPU compute built in with their 6-core, 12-thread CPUs.
o The Supermicro board has IPMI for remote management. This is amazing: I can remotely power the servers on when I need them and power them down from vCenter. IPMI also provides full temperature monitoring.
o Four network ports to work with. I use the built-in 10 Gb ports for direct-connect VSAN and HA/FT traffic, which was key since I did not want to purchase a 10 Gb switch. The two 1 Gb ports handle management and VM traffic.
o 2-node VSAN took a few tweaks to set up, but it runs great for shared storage. I have learned that under lab loads the 10 Gb NICs are complete overkill; I have yet to see VSAN use more than 1 Gb, possibly because the underlying storage can't pull data quickly enough.
o I do wish the NUC I purchased had more than 1 NIC; however, everything is working just fine.
o The Supermicro 10 Gb NICs run HOT. There is only a little heatsink on the chip, which seems undersized; it was clearly a cost decision, and a better design would dissipate heat much faster. Even with direct airflow the NICs run about 10 degrees hotter than the CPU! The riser for the ports is also very hot to the touch.
o Even though I can run all 3 servers (at just above idle) on about 150 watts as measured at the power strip, they put out more heat than I expected. My original intent was to run this in the closet with the network gear and patch panel. I am doing so currently, but it gets toasty when I turn on the 2 Supermicros, so I need to rethink the cooling.
o I should have bought a 500 GB hard drive for the Intel NUC. vCenter by itself requires 120 GB of disk, although with thin provisioning it consumes much less.
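The remote power control I praised above is scriptable. This is a sketch using `ipmitool` from any machine on the management network; the BMC address and credentials are placeholders for illustration, not my actual values, and your Supermicro BMC login will differ.

```shell
# Hypothetical BMC address and credentials -- substitute your own.
BMC=192.168.1.61

# Check and change chassis power state over the network (lanplus = IPMI v2.0).
ipmitool -I lanplus -H "$BMC" -U ADMIN -P 'secret' chassis power status
ipmitool -I lanplus -H "$BMC" -U ADMIN -P 'secret' chassis power on

# Dump the temperature sensors the board exposes.
ipmitool -I lanplus -H "$BMC" -U ADMIN -P 'secret' sdr type Temperature
```

A soft shutdown is still best done from vCenter so the guests quiesce; IPMI is what gets the hosts powered back up without a trip to the closet.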
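For anyone curious about the "few tweaks" the 2-node setup needed: the main one is telling each node which vmkernel port carries VSAN data and which carries witness traffic, since the witness lives on the utility host rather than on the direct-connect link. A rough sketch, run over SSH on each node; the vmk names here are assumptions (check yours with `esxcli network ip interface list`), and the witness traffic type requires a reasonably recent vSAN release.

```shell
# Tag the direct-connect 10 Gb vmkernel port for VSAN data traffic.
esxcli vsan network ip add -i vmk1

# Route witness traffic over the 1 Gb management vmkernel instead.
esxcli vsan network ip add -i vmk0 -T=witness

# Verify which interfaces carry which traffic type.
esxcli vsan network list
```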
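The heat problem is easier to reason about with numbers. Taking the ~150 W power-strip reading from above as a constant draw, here is a quick sketch of the yearly energy bill and the heat load the closet has to shed; the $0.12/kWh rate is an assumed figure, not my actual tariff.

```shell
# Assumes a constant 150 W draw (measured above) and $0.12/kWh (assumption).
watts=150
hours_per_year=8760                               # 24 * 365
kwh_per_year=$(( watts * hours_per_year / 1000 )) # 1314 kWh/year
cost_cents=$(( kwh_per_year * 12 ))               # at 12 cents per kWh
btu_per_hour=$(( watts * 3412 / 1000 ))           # 1 W is about 3.412 BTU/hr
echo "energy: ${kwh_per_year} kWh/yr, cost: \$$(( cost_cents / 100 ))/yr, heat: ~${btu_per_hour} BTU/hr"
```

Around 500 BTU/hr is modest for an open room but adds up fast in a closed closet with no airflow, which matches what I'm seeing.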
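If you end up disk-constrained like my NUC, an existing thick disk can be cloned to thin with `vmkfstools` on the host. The datastore and file names below are hypothetical placeholders; the VM must be powered off, and you repoint it at the new VMDK afterwards.

```shell
# Clone a thick-provisioned VMDK to a thin copy (hypothetical paths).
vmkfstools -i /vmfs/volumes/datastore1/vc/vc.vmdk \
           -d thin /vmfs/volumes/datastore1/vc/vc-thin.vmdk

# The -flat file shows how many blocks are actually allocated on disk.
du -h /vmfs/volumes/datastore1/vc/vc-thin-flat.vmdk
```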