HPE stumps for composable infrastructure, launches Synergy system

1 Dec 2015

HPE Enters Composable Infrastructure Space with Synergy.

Hewlett Packard Enterprise has developed a new type of “composable” hardware that it claims will cut data center costs and slash the time it takes to spin up new applications.

Called HPE Synergy, it combines storage, compute and network equipment in one chassis, along with management software that can automatically configure the hardware to provide just the resources needed to run an application, HPE said. “HPE Synergy’s unique built-in software intelligence, auto discovery capabilities and fluid resource pools enable customers to instantly boot up infrastructure ready to run physical, virtual and containerized applications,” the company said. Synergy, which HPE’s well-oiled marketing engine assures us is an industry first, consists of big “frames” that let the IT department insert rack-scale blobs of compute, storage and fabric into an infrastructure, all managed by software. The vision is “composable infrastructure.” Devised under the codename “Project Synergy,” it is infrastructure that quickly and easily reconfigures itself based on each application’s needs, designed for a new world in which companies even in the most traditional of industries, from banking to agriculture, constantly churn out software products and look to software as a way to stand out among peers.

The Synergy platform, which HPE has been working on for three years, brings together all of these resources along with the software-defined intelligence and a unified API needed to self-discover and self-assemble the required infrastructure and to ensure complete infrastructure programmability, according to company officials. The company reckons the hybrid platform bridges the gap between the old and new worlds, as enterprises look to improve management of the existing server barn while testing, developing and deploying new apps in the cloud. HP, the company HPE was part of until the beginning of last month, and the other big hardware vendors that have dominated the data center market for decades, such as Dell, IBM and Cisco, have all struggled to maintain growth in a world where not only developers but also big traditional enterprise customers are deploying more and more applications in the public cloud, using services from the likes of Amazon and Microsoft. At the heart of the Synergy infrastructure, which will be available starting in the second quarter of next year, is a set of open APIs that bring software intelligence to deploying workloads based on the business demands of the application.
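
To make the “self-discover” claim concrete, here is a minimal sketch of how a client might enumerate pooled resources through such a unified REST API. This is a hypothetical illustration: the appliance address, endpoint paths (/rest/login-sessions, /rest/server-hardware and so on) and field names follow OneView-style conventions but are assumptions, not documented calls.

```python
# Hypothetical sketch: discovering pooled resources through a unified
# REST API of the kind HPE describes. Endpoint paths, field names and
# the appliance address are illustrative assumptions, not a documented API.
import requests

APPLIANCE = "https://composer.example.com"  # hypothetical management appliance

# Authenticate once against the management plane.
session = requests.post(
    f"{APPLIANCE}/rest/login-sessions",
    json={"userName": "admin", "password": "secret"},
    verify=False,
).json()
headers = {"Auth": session["sessionID"]}

# A single API surface enumerates every class of pooled resource.
for resource in ("server-hardware", "storage-pools", "interconnects"):
    members = requests.get(
        f"{APPLIANCE}/rest/{resource}", headers=headers, verify=False
    ).json().get("members", [])
    print(f"{resource}: {len(members)} discovered")
```

The point of the sketch is the shape of the interface: one authentication step, one consistent resource model, rather than a separate tool per element type.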

But as we pointed out when this idea started being bandied about earlier this year, to provide truly composable infrastructure, the tight coupling between the CPU and its main memory (whether it is DRAM, MCDRAM, or 3D XPoint in the case of Intel) must be shattered and then allowed to be configured on the fly. You cannot do this today.

Traditional and cloud-native IT are two very different ways of running a tech estate, but although organisations face this bimodal challenge, they don’t have the budgets for two operating models or two different types of infrastructure. “We are bringing an infrastructure to our customers that lets them deal with this dual challenge,” said Neil MacDonald, veep and GM of BladeSystem for the converged data centre infrastructure unit. “On the one hand [there are] traditional IT needs, packaged apps, ERP, virtualisation farms, and on the other hand, applications developed for a hybrid cloud world or cloud native and containerised apps, and you can do this on a single infrastructure,” he added. IT departments need to enable this not only by driving greater efficiencies and reducing costs in the current systems and enterprise software that run their traditional business, but also by addressing the demands for speed and agility that come with the rise of trends such as big data, mobility, security and cloud computing.

Driving factors for many of these businesses go beyond cost reduction and focus more on flexibility, speed and time to value, Antonio Neri, executive vice president and general manager of HPE’s Enterprise Group, told eWEEK. Solution providers, for their part, said the new infrastructure is sure to put pressure on hyperconverged stalwarts like Nutanix and on infrastructure providers like Dell-EMC and Cisco, which have yet to clearly lay out next-generation hyperconverged infrastructure road maps. Traditional, statically provisioned environments waste money and don’t offer the kind of flexibility that composable infrastructure brings. “This is about bringing the private cloud up to a level that you cannot get with the current infrastructure,” HPE’s Miller said during a press briefing in the days leading up to the Discover show. “You’re talking about two worlds: one static, and one very dynamic.” The move to composable infrastructure is at its earliest stages, and HPE is joining other tech vendors, such as Cisco Systems, Intel and Dell, to differing degrees, as they look to begin offering pools of infrastructure resources that can be composed and then decomposed as needed, according to Jed Scaramella, research director at IDC.

Intel is moving in that direction with its Rack Scale Architecture (RSA), introduced in 2013, which essentially offers a core common design that vendors can adopt and differentiate from, Scaramella said. Meanwhile, customers have gone from spending thousands a month to hundreds of thousands a month in the public cloud, and they are locked into proprietary environments that they can’t easily get out of. In Synergy, switch fabrics can present Ethernet ports for server-to-server connectivity or Fibre Channel ports for server-to-storage connectivity, as needed by applications. When a template is launched, it configures the hardware programmatically, without human intervention, according to HPE, reducing the chance for errors and speeding up the process.
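
As a rough illustration of what “launching a template” might look like in practice, the sketch below creates a server profile from a stored template, after which the management plane, not a human, programs the hardware. The endpoint paths, filter syntax and payload fields are assumptions for illustration, not HPE’s documented interface.

```python
# Hypothetical sketch of "launching a template": ask the management
# software to stamp out a server profile from a stored template, which
# then configures BIOS, storage and network connections without manual
# steps. Endpoints and payload fields are illustrative assumptions.
import requests

APPLIANCE = "https://composer.example.com"
headers = {"Auth": "<session-token>"}  # token from an earlier login

# Look up the stored template by name (assumed filter syntax).
template = requests.get(
    f"{APPLIANCE}/rest/server-profile-templates",
    params={"filter": "name='SQL-BareMetal-Flash'"},
    headers=headers, verify=False,
).json()["members"][0]

# Creating a profile from the template is the "launch": the appliance
# programs the selected compute module to match it, no humans involved.
requests.post(
    f"{APPLIANCE}/rest/server-profiles",
    json={
        "name": "sql-node-01",
        "serverProfileTemplateUri": template["uri"],
        "serverHardwareUri": "/rest/server-hardware/<bay-id>",  # placeholder id
    },
    headers=headers, verify=False,
)
```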

The idea isn’t new, even if it is radically different from the traditional data center environment, where resources are often overprovisioned just in case demand rises, or where some resources, such as compute, are overprovisioned while others aren’t. Nevertheless, he said, HPE has a significant leg up on competitors, particularly Dell and EMC, which are facing questions about their product road map in the wake of Dell’s proposed $67 billion acquisition of EMC. “I see HP having a two- to three-year advantage over Dell-EMC, which faces a year or two of worrying about how to consolidate,” he said. It may not be immediately clear why HPE needed a new hardware stack to provide this composable infrastructure, since the automation and orchestration runs on management nodes in the enclosures, just as was the case with blade servers like its BladeSystem designs. If a template for a configuration doesn’t exist, the “unified API” HPE developed handles functions like BIOS configuration, storage provisioning and other tasks needed to set up the hardware. Similar goals drove Facebook’s disaggregated rack designs: one purpose was to provision the right amount of resources for every application; another was to let data center managers upgrade individual server components, such as CPUs, hard drives, or memory cards, without having to replace entire pizza-box servers.

Cisco is offering composable infrastructure capabilities through its Unified Computing System (UCS) M-Series modular servers and its UCS C3260 rack servers, which, like HPE’s Synergy offering, can compose and recompose various disaggregated resources into infrastructures optimized for particular workloads. But channel partners we’ve spoken to reckon Synergy will have little impact outside of large enterprises. “I think many of our smaller customers would not need to change enough stuff to make it that relevant – and would not need any bare metal servers, so virtualisation would suffice,” one HP channel loyalist said. A key is Cisco’s System Link technology, which enables the disaggregation of compute resources from the underlying hardware while bringing the control plane into the hardware. “Composable infrastructure is truly software infrastructure,” Todd Brannon, director of UCS marketing at Cisco, told eWEEK. The Thunderbird machines – Thunderbird being the development codename for the Synergy iron – are not fully composable, and they cannot be until CPU chip makers break the memory controllers and main memory free from the CPU complex in some way.

These must be upgraded in lockstep, and we probably won’t see such composable processing complexes from Intel until the “Skylake” Xeon E5 v5 generation in 2017, which we told you about here back in May. The idea of a composable infrastructure isn’t new, said Gartner analyst Paul Delory, noting that Cisco uses the term to describe its UCS M-Series servers. “What we want to do is wrap our servers in code,” Brannon said. The M-Series, launched last year, and the newer C3260 systems are “driving on the same axis that we’ve been driving on” since Cisco first rolled out the UCS converged infrastructure in 2009, Brannon said. What’s more, Vencel pointed out, Synergy is integrated with cutting-edge application development tools and environments, including Puppet, Chef and Docker.

But HPE appears closest to delivering on the idea’s potential, he said, with the caveat that Synergy is still months from release. “I think what they’ve done is innovative,” he said. The new Ultra Path Interconnect (UPI) point-to-point link for Skylake processors (an upgrade to the QuickPath Interconnect, or QPI, links used with Xeons since 2009) and their memory may not provide the generic links and memory controllers that would be necessary to break the CPU from the memory. HPE officials first laid out the Synergy composable infrastructure strategy this summer, when they rolled out the open API, which lets a single line of code abstract, discover, search, provision and update all elements of the infrastructure so that developers can test and run code against it.

Vencel’s company, an HP Enterprise Sales Growth Partner by Arrow Electronics, is using the Synergy announcement to hammer home to customers the innovation advantage that HPE has in the infrastructure market, he said. In one example, Paul Durzan, a vice president of product management at HPE, listed nine APIs that DevOps teams usually have to code against to automate the way applications use infrastructure.

They included, among others, APIs to update firmware and drivers, select BIOS settings, set unique identifiers, install operating systems, configure storage arrays, and configure network connectivity. Synergy also uses HPE’s OneView management software to provide a single interface for composing physical and virtual resources into whatever configuration an application needs. Synergy opens up new opportunities for solution providers to deliver programmable, intelligent infrastructure with a robust double-digit margin model, said Miller. “Solution providers get to control the compute, fabric and storage, enabling them to have a richer sale,” he said.
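
To see what that consolidation buys, the sketch below folds those nine separate concerns into sections of one declarative profile submitted in a single call. The field names loosely mirror OneView-style profiles and are invented for illustration, not a documented schema.

```python
# Hypothetical sketch: the separate automation tasks Durzan lists,
# expressed as sections of one declarative profile submitted through a
# single API call. Field names are illustrative assumptions only.
profile = {
    "name": "web-tier-01",
    "firmware": {"baselineUri": "/rest/firmware-drivers/spp-2015.10"},  # firmware/drivers
    "bios": {"overriddenSettings": [{"id": "PowerProfile", "value": "MaxPerf"}]},
    "identifiers": {"serialNumberType": "Virtual"},   # unique IDs (MACs, WWNs, serials)
    "osDeployment": {"planName": "RHEL7-minimal"},    # OS install
    "sanStorage": {"volumeAttachments": [{"volumeUri": "/rest/storage-volumes/v1"}]},
    "connections": [{"network": "prod-net", "requestedMbps": 2500}],
}

# One POST of this document replaces nine tool-specific integrations
# (shown as a comment; endpoint and headers are assumed, as above):
# requests.post(f"{APPLIANCE}/rest/server-profiles", json=profile, headers=headers)
```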

But converged systems are limited by the physical hardware in the box, says Paul Durzan, HPE vice president for Infrastructure Management and Orchestration Software. “When you buy your resources, you’re buying a physical boundary.” HPE has developed templates that solution providers will be able to use to customize applications for customers, whether they are on-premises or off-premises, he said. In a sense, the HPE Synergy system, as it is currently being delivered, is a multi-chassis blade server with a single management API stack based largely on OneView, and this layer abstracts every aspect of the underlying infrastructure.

Synergy solves the problem of stranded resources, Durzan says, because unlike converged systems there are no fixed ratios of storage to compute; with Synergy, all capacity can be used, even if that means tapping storage modules two racks over. The systems can still be virtualized, and HPE says it’s working with VMware, Microsoft, Puppet, Ansible and Chef to provide access to the Synergy API through their virtualization and automation tools. Synergy could enable companies to streamline purchasing, Forrester analyst Richard Fichera said, because they’ll no longer need to order new hardware against specific application or capacity requirements; Synergy gives them the flexibility to configure systems after they’re installed. IDC’s Scaramella said that while the composable infrastructure market is just coming together, there are elements that enterprises can start embracing now.
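
A toy calculation, with invented numbers, illustrates the stranded-resource arithmetic:

```python
# Toy arithmetic (invented numbers) contrasting fixed-ratio converged
# nodes with a fluid resource pool. With fixed ratios, whichever
# resource runs out first strands the rest; a pool can allocate exactly.
import math

CPU_PER_NODE, TB_PER_NODE = 32, 10   # fixed converged ratio: 32 cores to 10 TB per node
need_cpu, need_tb = 100, 15          # a workload's actual demand

# Converged: buy whole nodes until the scarcer dimension is satisfied.
nodes_bought = max(math.ceil(need_cpu / CPU_PER_NODE),
                   math.ceil(need_tb / TB_PER_NODE))
stranded_tb = nodes_bought * TB_PER_NODE - need_tb
print(f"converged: {nodes_bought} nodes bought, {stranded_tb} TB of storage stranded")

# Composable: draw exactly what is needed from shared pools; nothing stranded.
print(f"composable: {need_cpu} cores and {need_tb} TB allocated from the pools")
```

With these numbers, meeting the CPU demand forces the purchase of four converged nodes, leaving 25 TB of storage idle; a pooled design allocates only the 15 TB actually required.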

One template, for example, could be for a SQL database running on bare-metal servers using flash storage; another could be a cluster of servers virtualized using hypervisors with flash storage; there could also be a unified communications template for Microsoft’s Skype for Business. It’s a stepping stone on the way to a more fully composable system, which might allow individual processors and memory chips to be programmatically assembled.
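
Expressed as data, that template catalog might look something like the following sketch; every name and field here is invented for illustration.

```python
# Sketch of a template catalog covering the three examples above.
# All names and fields are invented for illustration.
TEMPLATES = {
    "sql-baremetal": {
        "os": "bare-metal", "storage": "flash",
        "workload": "SQL database",
    },
    "virtualized-cluster": {
        "os": "hypervisor", "storage": "flash",
        "workload": "general VM farm", "cluster_size": 8,
    },
    "uc-skype": {
        "os": "Windows Server", "storage": "mixed",
        "workload": "Skype for Business",
    },
}

def compose(template_name: str) -> None:
    """Pretend to launch a template against the shared resource pools."""
    spec = TEMPLATES[template_name]
    print(f"composing {spec['workload']} on {spec['os']} with {spec['storage']} storage")

compose("sql-baremetal")
```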

While HPE’s composable-infrastructure ideas aren’t new, the company’s scale, its existing customer relationships, and the breadth of its services organization are substantial advantages. Full composability is limited today by Intel’s Xeon server processors, but when high-speed silicon photonic interconnects become a reality, servers may eventually be disaggregated down to the individual chip level, said IDC analyst Jed Scaramella. What is not clear is how (or why) these composability features will be kept out of HPE’s Apollo HPC and enterprise machines or its Cloudline minimalist servers. As the superstar Silicon Valley venture capitalist Vinod Khosla recently pointed out at the Structure conference in San Francisco, IBM, Dell, HP and Cisco are all “trying the right things,” even though they haven’t come up with new, truly innovative ideas in decades.

Google, Facebook, and Microsoft do not use what are in essence blade servers – they tend to go with rack machines with some shared infrastructure at the rack level for power and cooling. There are half-height and full-height compute nodes that offer two or four Xeon E5 processors, respectively, as well as a full-height, single-wide node with two Xeon E7s and a double-wide, full-height node with four Xeon E7 sockets. (There is not an eight-socket Xeon E7 Thunderbird node, but there could be one if HPE wanted to make one.

It would probably eat up four of the six bays in the 10U Synergy 12000 enclosure.) A Thunderbird storage node takes up one of the six bays – which means it is half-height but double-wide – and holds 40 2.5-inch disks. The Synergy 12000 chassis also has room for redundant management appliance bays running the embedded OneView, which is called HPE Composer, plus another tool called Image Streamer, which, as the name suggests, is a templating system for throwing software images out onto composed infrastructure stacks within the racks of Thunderbird iron. Image Streamer provisions boot and run storage volumes on the storage nodes, deploys operating systems on compute nodes, and sets up iSCSI targets for the boot and run volumes on those nodes. Customers can run Fibre Channel over Ethernet on the 40 Gb/sec switch if they want to, linking out to storage and doing away with dedicated Fibre Channel links.
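
Pulling the Image Streamer flow together, the sketch below models the sequence just described: pick a golden-image deployment plan, carve out a boot/run volume on a storage node, and hand the compute node an iSCSI target to boot from. All names and fields are invented; this is not the Image Streamer API.

```python
# Hypothetical sketch of the Image Streamer flow described above:
# choose a golden-image deployment plan, provision a boot/run volume on
# a storage node, and expose it to the compute node as an iSCSI target.
# All names, paths and fields are illustrative assumptions.
def deploy_os(node: str, plan: str = "rhel7-golden") -> dict:
    boot_volume = f"/volumes/{node}-boot"     # provisioned on a storage node
    return {
        "node": node,
        "deploymentPlan": plan,               # templated OS image to stream out
        "bootVolume": boot_volume,
        "iscsiTarget": f"iqn.2015-12.example:{node}-boot",  # boot-from-iSCSI target
    }

print(deploy_os("thunderbird-bay-3"))
```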

We were wondering why HPE is not trotting out support for 25 Gb/sec, 50 Gb/sec, and 100 Gb/sec switching in the new iron, which would seem logical, and all Thome could tell us is that HPE is in talks with multiple ASIC suppliers to broaden the interconnect options. HPE sees two classes of workloads for Synergy. The first is what it calls traditional enterprise applications, which are relatively stable, are updated maybe once or twice a year, and have their systems optimized to run them; these can be run on bare metal but are often highly virtualized, with many applications running side by side on each node in a cluster. At the other end of the spectrum are new applications, typically data analytics and mobile front ends for existing applications, or whole new mobile apps, that get changed every couple of months.
