Hyperconverged infrastructure is so hot right now it needs liquid cooling


Lenovo brings its Neptune cold plates to servers packing sixth-gen Xeons to run VMware, Nutanix, and AzureStack

Hyperconverged infrastructure most often involves a collection of modest 2U servers powered by mid-range processors that aren’t particularly challenging to operate. But Lenovo’s new models packing Xeon 6 processors may need liquid cooling. The Chinese hardware giant yesterday launched “ThinkAgile HX Series GPT-in-a-Box solutions”, hyperconverged infrastructure (HCI) products that have the option to use its Neptune Core Module, a direct liquid cooling device that pipes cold water to a cold plate that sits atop a CPU.

The hardware comes in three flavors, each tuned to run HCI stacks from Nutanix, VMware, or Microsoft’s AzureStack HCI. All can run a variety of 6th-generation Intel Xeon CPUs (aka Granite Rapids), including 350-watt parts like the 86-core, 172-thread model 6787P. Interestingly, Lenovo’s fact sheet for the ThinkAgile MX650 V4 Hyperconverged System it built for AzureStack lists several single-socket models that it will only build to order.



The servers can also run GPUs – up to ten of them, although the sheer size of parts like Nvidia’s H100 means only a pair can be installed. HCI is often touted as a fine candidate for installation in branch offices or at the network edge, as its inclusion of software-defined storage in homogeneous appliances and central management features means it’s less complex to deploy than some other hardware options. News that Lenovo feels the need to equip HCI boxes with liquid cooling for their CPUs could be seen to dilute the HCI value proposition by adding complexity.

However, Lenovo’s bundling of these devices into GPT-in-a-Box configs suggests most will be racked and stacked in formal datacenter settings, probably at orgs that already run plenty of HCI and want to keep doing so as they implement on-prem AI workloads, rather than creating new hardware silos. Such orgs will still need to invest in the extra hardware required by liquid cooling. But at least they’ll be using a familiar software stack and operating environment as they do so.

And maybe a few of these boxes will make it out to the edge, too. A half-height liquid cooled rack is easier to deploy than a small immersion tank. ®.