LAB

Building the Next Gen Datacenter – a Portable Datacenter in a Pelican Case

The cloud is great, but sometimes you really need a portable solution, and here it is. I call it the Pelicase Datacenter (someone used that name in a Twitter feed, and I think it's kind of cool).

Before you dig into the list, a couple of questions that pop up all the time:

Question: What about heat issues?

Answer: The only hot item is the LED lamp; the router gets warm, but the rest of the equipment stays cool. I have been using the Pelicase for book writing the entire summer, and the longest “uptime” is more than 40 hours. Just to give you some numbers:

After 8 hours of operation in a room at 25 degrees Celsius / 77 degrees Fahrenheit:

  • The NUCs are 30 degrees Celsius / 86 degrees Fahrenheit
  • The power supply is 41 degrees Celsius / 105.8 degrees Fahrenheit
  • The switch is 41 degrees Celsius / 105.8 degrees Fahrenheit
  • The router is 42 degrees Celsius / 107.6 degrees Fahrenheit

Question: What’s the price for all of it?

Answer: I have no idea, and the reason is very simple: I have friends working at some of these companies.

The Pictures

[Photo: WP_20140927_002]
The complete setup, with my laptop on the right side and the Pelicase Datacenter on the left. They are connected using a 1 Gbit/s network cable.

[Photo: WP_20140927_001]
A closer look at the Pelicase Datacenter.

[Photo: WP_20140927_003]
At the front you can see the USB LED light used to light up the keyboard sitting in front of the case.

[Photo: WP_20140927_004]
The tiny router, configured for routing over cable or 3G/4G, with wireless access to the datacenter.

[Photo: WP_20140927_005]
The six PSUs needed for the six Intel NUCs.

[Photo: WP_20140927_006]
The gigabit switch.

[Photo: WP_20140927_007]
The six Intel NUCs; five of them run Hyper-V and the last one runs Windows 8.1.

The Shopping list

The Case:

http://www.pelicancases.com/1500-p/1500.htm

The Router/Wireless/Firewall:

http://www.dovado.com/en/products

The Screen:

http://www.gechic.com/product_help_en.asp?s=3

The Switch (current):

http://www.linksys.com/en-apac/products/switches/SE2800

The Switch (previous):

http://www.netgear.com/business/products/switches/unmanaged-plus/gigabit-plus-switch.aspx#tab-models

The USB LED Lamp:

http://www.ikea.com/se/sv/catalog/products/80243801/

The Intel NUC’s:

http://www.intel.com/content/www/us/en/nuc/nuc-kit-d54250wyk.html

Supported Memory:

See Intel's memory compatibility list for the D54250WYK.

Memory I use:

http://www.kingston.com/en/memory/search/Default.aspx?DeviceType=2&Mfr=INT&Line=D54250WYK&Model=85387&Description=Kingston_ValueRam_Memory_HyperX_Memory_for_Intel_D54250WYK_Next_Unit_of_Computing_(NUC)

Disk drives:

http://ark.intel.com/products/75331/Intel-SSD-530-Series-240GB-2_5in-SATA-6Gbs-20nm-MLC

The keyboard and mouse:

http://plexgear.com/

Software:

All NUCs run Windows Server 2012 R2 as Hyper-V hosts, but there is a whole lot more around the software and configuration, so this last part will be updated later this week(end). As a taste, the host setup is sketched below.
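
For reference, turning a freshly installed NUC into a Hyper-V host is only a couple of lines of PowerShell. A minimal sketch (the switch and adapter names here are placeholders, not my actual config):

    # Add the Hyper-V role plus management tools, then reboot
    Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

    # Create an external virtual switch on the NUC's single NIC,
    # shared with the management OS
    New-VMSwitch -Name "LabSwitch" -NetAdapterName "Ethernet" -AllowManagementOS $true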

Setup & Configuration:

TBA

/mike

Replies

  1. Mikael, I love it, and thanks for sharing!

    Do you think you’d have room to fit in a Synology DS414slim by chance? With a small NAS filled with SSDs you’d truly have a mini datacenter in a box. Great stuff!

    • Almost; it could work if you re-arrange some of it or get a bigger Pelican case. But since I’m using Hyper-V, I use shared-nothing migration with diff disks, so moving the VMs around is easy (see the sketch below). But yes, it might fit. Maybe that is my next project :-)
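
      A minimal sketch of what that move looks like in PowerShell (the host, VM, and path names are placeholders):

          # Allow live migrations on the hosts (CredSSP works when you run
          # the move from the source host; Kerberos needs constrained delegation)
          Enable-VMMigration
          Set-VMHost -VirtualMachineMigrationAuthenticationType CredSSP -UseAnyNetworkForMigration $true

          # Shared-nothing live migration: the VM and its diff disks move over the wire
          Move-VM -Name "DC01" -DestinationHost "NUC02" -IncludeStorage -DestinationStoragePath "C:\VMs\DC01"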

  2. This is awesome! Can’t wait to read about the configuration. How much mSATA storage do you have in each NUC?

  3. Alternative with more power and higher performance:

    5x Mac Minis using VMware Fusion for virtualization, plus Thunderbolt NAS storage (that’s 20 Gbit/s).
    The form factor should be a similar size. Of course, the price is a bit higher.

    The Thunderbolt NAS can also be used as backup (Time Machine) for the Macs.
    To go one further, add a Mac Mini with OS X Server.

    • That would not fit me at all. I build datacenters based on Microsoft infrastructure (Hyper-V, System Center, Storage Spaces, SOFS), and in that case it does not make sense to run Mac Minis with VMware; Thunderbolt does not work at all there, and I cannot build it HA anyway. But it might work for a non-Microsoft lab environment.

  4. I’m curious as to what the actual reason/use is for this? Aren’t those Intel NUCs pretty low power? Also, with single drives in each one, aren’t they a non-redundant point of failure? The switch also seems like a single point of failure. I’m not knocking this at all. It seems very interesting. I just don’t quite get the real benefit unless it’s just a fun experiment?

    • Yes, they are also small, noiseless and fast. I need multiple servers to be able to demo/showcase/write books and, on the fly, be able to rebuild the entire setup in less than 2-3 hours, and they do the job. I’m sure there are others, but I like them.
      The purpose is not to have a production-ready datacenter. The purpose is to be able to demo/test/lab all kinds of configurations in an environment that is portable, flexible and fast enough to do demos on, and easy to rebuild. I also have a normal datacenter at work and a smaller one at home, but traveling with a 19-inch rack as a personal item on an SAS flight is not something I would like to try. The redundancy is created using Hyper-V Replica (sketched below), and in some scenarios I connect them to iSCSI targets, but still, it is just a very small demo/test/lab environment that is extremely flexible.

      A real datacenter could be built on commodity hardware, but we still need to add the extra layer of redundancy.
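
      The replica part is a couple of cmdlets per VM. A minimal sketch (host and VM names are placeholders; Kerberos over port 80 keeps a lab simple):

          # On the replica host: accept incoming replication
          Set-VMReplicationServer -ReplicationEnabled $true -AllowedAuthenticationType Kerberos -ReplicationAllowedFromAnyServer $true -DefaultStorageLocation "C:\Replica"

          # On the primary host: enable and kick off replication for one VM
          Enable-VMReplication -VMName "DC01" -ReplicaServerName "NUC03" -ReplicaServerPort 80 -AuthenticationType Kerberos
          Start-VMInitialReplication -VMName "DC01"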

  5. Did you use hydration to build the datacentre? How did you solve the network card not being supported by the server OS, as Intel doesn’t provide drivers for 2012 on the NUCs?

    • I start with a USB key to install the OS on the Hyper-V hosts. Then I build the fabric layer, and SCVMM will do the rest. The “glue” in between is PowerShell. Correct, Intel does not support the server OS for some stupid reason, so you need to tweak the .inf file; then MDT will happily inject the unsigned driver (see the sketch below). I will update the post with all the details during this week.
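
      Roughly, that workaround looks like this (a sketch; the exact .inf file name and section headers vary with the Intel driver version):

          # 1. In the Windows 8.1 x64 NIC driver, copy the NUC's device entries
          #    from the client-only [Intel.NTamd64.6.3.1] section into the
          #    [Intel.NTamd64.6.3] section so Server 2012 R2 will match them.
          # 2. The edit invalidates the signature, so inject the driver as
          #    unsigned into the image (MDT can do the same at deploy time):
          Dism /Image:C:\Mount /Add-Driver /Driver:C:\Drivers\NUC\e1d64x64.inf /ForceUnsigned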

  6. Nice, fun stuff to have at home or as a lab. Can you plug it into an energy meter and say what the wattage is when idle and stressed?

  7. Great stuff. And congrats on the design. I built CriKit a few years ago for the same purposes – testing, writing, tinkering – ( crikit.info ) because nobody made anything inexpensive that let me play with all the cloud platforms. The NUCs are kind of weak compared to Xeons, but they get the job done for a small number of VMs and where you don’t really need speed. One NIC is kind of a limit too when there is a lot of activity on the network, and a single live migration pretty much kills the throughput. Also, even though the nodes are low wattage, they do generate heat, and they create a heat bundle that kind of feeds off itself when the units are in contact with each other. Just sayin’. You may want some measure of air gap between them just to be safe.

    These types of solutions will be commonplace in the near future. With public cloud on the back end, the only question that remains is how much compute and storage will remain on premises for whatever reason. Even this case could do all the computing for a very large number of small businesses if they use the cloud on the back end.
