Supermicro FatTwin
Sexy, dense and can handle the heat
The design concepts underpinning the Supermicro FatTwin are something I would hold up as “game changing.” The reasons why are explored in a review I wrote for The Register a while back, but the product is so good it’s worth expanding that review a little. Free from any real editorial constraints, I think it’s time to take a bit of a wander through what this equipment can actually mean to a systems administrator.
So what’s a FatTwin anyways?
A FatTwin is a brand name used by Supermicro to describe its latest dense-but-powerful server offering: two dual-processor servers per U, four U per chassis, for a total of eight 2P servers in a single 4U enclosure.
There is no shared backplane for networking or other system components; the only thing the chassis shares between nodes is power. This drives the cost of the chassis down to the point where, if you are paranoid and want a spare on the shelf, you can probably afford one without breaking the bank.
Isn’t a shared power plane a single point of failure?
Yep; a shared power plane is a single point of failure. If you somehow manage to tank the shared power plane in the chassis, half of the nodes will not light up: one failure and half of the systems go down. The shared power plane is an intermediate step between single-everything, as found in entirely discrete systems, and shared-nearly-everything, as found in blade servers.
A typical blade backplane has hundreds of individual traces: a myriad of single points of failure. Compared to blades, the choice of “shared power only” offers simplicity; the backplane has the bare minimum of traces and carries nothing but power. The result is that the power backplane on these FatTwins is pretty hard to kill.
I had a field day pulling PSUs out and plugging them back in, yarding out power plugs, and disconnecting the hot-swap fans at random to see if I could blow the thing up. No matter how much I tortured the backplane, it wouldn’t die.
So what makes it special?
Since I’ve started showing the FatTwin around, I’ve had a lot of systems administrators ask me quite bluntly “why should we care about this?” For some, cloud computing means they are not looking to refresh their datacenters. For others, power, space and cooling are paid for by another department, so why not simply buy great big 4U servers for everything?
The hard truth is that not everyone will care about something like a FatTwin. If all you need is one or two servers, then this is swatting a fly with a nuke. In that case, I’d argue you should be checking out Supermicro’s other server offerings, which are better scaled to your needs. The engineering on everything Supermicro I’ve touched in the past few months is beyond excellent, so it’s worth your time.
If, however, your plans for the future include four or more physical servers, then the FatTwin starts to win out on a number of fronts. We’ve discussed compute density and reliability, which are the main selling features of these systems. Let’s take a look at the “little things” that catch my sysadmin’s eye.
IPMI: The FatTwin systems ship with a robust and capable baseboard management controller. This allows you to remotely reboot a node, update its BIOS, or install an operating system. The implementation includes an IP KVM that can be accessed through the fully featured, node-local web-based administration page or through a Java app that Supermicro provides for managing multiple systems at once.
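By way of illustration, here is a minimal sketch of driving a shelf of these BMCs from a management box. It assumes ipmitool is installed and IPMI-over-LAN is enabled on each node; the addresses and credentials are placeholders, not values from my lab.

```python
#!/usr/bin/env python3
"""Minimal sketch: query and power-cycle FatTwin nodes via their BMCs.

Assumes ipmitool is installed and IPMI-over-LAN is enabled on each node;
the addresses and credentials below are placeholders.
"""
import subprocess

# Hypothetical BMC addresses for the eight nodes in one chassis.
NODES = [f"10.0.20.{i}" for i in range(11, 19)]
USER, PASSWORD = "ADMIN", "changeme"  # placeholder credentials

def ipmi(host: str, *args: str) -> str:
    """Run an ipmitool command against one node's BMC over the LAN."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", host,
           "-U", USER, "-P", PASSWORD, *args]
    return subprocess.run(cmd, capture_output=True, text=True,
                          check=True).stdout.strip()

if __name__ == "__main__":
    # Report the power state of every node, e.g. "Chassis Power is on".
    for node in NODES:
        print(node, ipmi(node, "chassis", "power", "status"))

    # Remotely power-cycle a single misbehaving node.
    ipmi(NODES[0], "chassis", "power", "cycle")
```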
On-motherboard USB: There’s a little USB port on the motherboard that allows you to add a nano USB key to your system. Rather than a USB header, as is found on nearly every motherboard, this is a proper USB port. This means that if you are using the node as an ESXi server or an HPC node, you can load your OS onto a USB drive, leaving all your SAS and SATA ports open for data drives. It’s a nice touch.
PCI-E x16 slot: a lot of high-density server solutions don’t come with a proper PCI-E slot. FatTwins do. In the case of my specific unit, I added 10GbE cards to the mix. You could add a GPU, RAID controller or what-have-you. I appreciate the flexibility, especially as my servers tend to evolve over time.
They come in many flavours: There are currently over 20 FatTwin models with more on the way: 4- or 8-node, front or rear I/O, 8 or 16 DIMMs, 2.5” or 3.5” HDDs, as well as GPU- and Hadoop-optimised models. You can get nodes with multiple 1GbE ports onboard or with 10GbE LOMs. Pretty much, if you want it, you can get it; Supermicro obviously doesn’t believe in “one size fits all.”
Power efficiency: FatTwins come with 80 Plus Platinum power supplies. (Apparently, Supermicro manufactures their own PSUs. Who knew?) This is combined with a high-temperature-tolerant design, which drives down cooling costs. Considering that electricity prices where I live just keep going up, this makes me very happy.
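To put very rough numbers on that, here’s a back-of-the-envelope sketch; the per-node wattages and the electricity rate are illustrative assumptions, not measurements from my unit.

```python
# Back-of-the-envelope annual power cost; every figure here is an
# illustrative assumption, not a measurement from the review unit.
HOURS_PER_YEAR = 24 * 365
RATE = 0.12            # assumed electricity price, $/kWh
NODES = 8

def annual_cost(watts_per_node: float) -> float:
    """Yearly electricity cost for eight nodes at a constant average draw."""
    return watts_per_node * NODES * HOURS_PER_YEAR / 1000 * RATE

# Assumed average draw: shared platinum PSUs vs. discrete 1U server PSUs.
print(f"FatTwin chassis (8 nodes): ${annual_cost(280):,.0f}/year")
print(f"Eight discrete 1U boxes:   ${annual_cost(330):,.0f}/year")
```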
Microcomponent quality: The motherboard is covered in quality voltage regulators and solid capacitors. There will be no bulging capacitors as this system ages; it will keep on ticking until well past the time when CPU advances mean it is no longer worth the electricity required to keep the system running.
What can you do with it?
When I look at emerging technologies, I start to realise that the FatTwin design has a promising future ahead of it. I believe the datacenters of the future are going to exist in two forms: the first a Nutanix-like vblock setup and the second a PernixData-like node-cache setup. Nutanix actually use Supermicro as the OEM for many of their systems because they find the multi-node/single-power-plane design that underpins the FatTwin to be the best fit.
Similarly, I’ve been beta testing PernixData’s FVP on my FatTwin and I’m blown away by the possibilities. With PernixData you can use local PCI-E flash as well as local SAS or SATA-attached SSDs as cache to accelerate a slow SAN. With Supermicro’s FatTwin you can cram two absolutely top-end Xeons, 512GB of RAM, a PCI-E flash card as well as 6 SSDs into each node.
That makes for a mind-numbing 16 CPUs (128 cores!), 4TB of RAM, 8 PCI-E flash cards and 48 SSDs, all crammed into 4U. Those local flash resources can then be used to accelerate slow (or distant) SANs, enabling mind-boggling numbers of VMs in a very small space.
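Those totals are just the per-node loadout multiplied across the chassis; here’s a quick sanity check, assuming eight-core E5-2680s and the per-node configuration described above.

```python
# Sanity check on the 4U totals: eight nodes, each with the per-node
# loadout described above (eight-core Xeon E5-2680s assumed).
NODES = 8
per_node = {
    "CPUs": 2,
    "cores": 2 * 8,          # two eight-core Xeons per node
    "RAM (GB)": 512,
    "PCI-E flash cards": 1,
    "SSDs": 6,
}

for resource, count in per_node.items():
    print(f"{resource}: {count * NODES}")
# -> CPUs: 16, cores: 128, RAM (GB): 4096, PCI-E flash cards: 8, SSDs: 48
```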
Is it worth it?
This is really what it boils down to: is it worth it? The truth is that, having played with the technology for several months now, I cannot imagine buying systems not based on this technology. I’m starting to feel the space pressure in my datacenter, and I certainly am carefully balancing the power and cooling thresholds the building is capable of.
The FatTwin has been a lifeline; it has more compute power in 4U than the other four racks of equipment that run alongside it. It is more reliable than anything else I’ve worked with, and the dual Xeon E5-2680s combined with the two Intel 520 480GB SSDs make for incomprehensibly fast servers.
Being Supermicro, the raw cost of the servers is about as low as it gets. The bigger issues arise around Supermicro’s support offerings: for some, relying on VARs or keeping spare parts around just doesn’t cut it. Rumour has it that Supermicro has heard the cries of those seeking more traditional enterprise-class support options; these are supposedly just around the corner.
Personally, I find that the benefits the FatTwin offers outweigh the support headaches of going through VARs. For everyone else, it seems it will only be a matter of time before Supermicro can bring a FatTwin to your server room too.