New NVIDIA GPU Drives Launch of Facebook’s ‘Big Sur’ Deep Learning Platform

11 Dec 2015

Facebook Open Sources ‘Big Sur’ AI Hardware Design.

FACEBOOK HAS UNVEILED its next-generation GPU-based systems for training neural networks: Open Rack-compatible hardware code-named “Big Sur” that it plans to open source. The company is releasing for free the designs of a new computer server, roughly twice as fast as those Facebook used before and built to put more power behind artificial-intelligence software, MIT Technology Review reported. Over the last few years, a technique called deep learning has proven so adept at identifying images, recognizing spoken words, and translating from one language to another that the titans of Silicon Valley are eager to push the state of the art even further, and to push it quickly. Facebook Inc.’s use of artificial intelligence, which ranges from image recognition tools to the filtering of its social network’s news feeds, demands special computing infrastructure.

The social media giant’s latest machine learning system has been designed for artificial intelligence (AI) computing at large scale, and has for the most part been built around Nvidia hardware. The company recently began building custom servers for its AI workloads and on Thursday announced it would release the designs for that powerful hardware to the world, for free.

These days, machine learning and artificial intelligence are, hand in hand, becoming the lifeblood of broad new applications throughout the business and research communities. Facebook worked closely with Nvidia, a leading manufacturer of GPUs, on its new server designs, which have been stripped down to cram in more of the chips. The social network on Thursday unveiled the new AI hardware its researchers developed to train neural networks, and open-sourced the design, offering other organizations a blueprint for setting up their own AI-specific infrastructure. At Google, this technology helps the company recognize the commands you bark into your Android phone and instantly translate foreign street signs when you turn your phone their way.

The company said the plan to open-source the blueprints of the servers, called “Big Sur,” would help other companies and researchers benefit from the incessant tweaking of Facebook’s developers. But even as that progress has been driven largely by computers that are more powerful and more efficient, the industry is reaching the limits of what those machines can do. Increasingly, Facebook is building elements of its business around artificial intelligence, and the social networking giant’s ability to build and train advanced AI models is tied to the power of the hardware it uses.

At Facebook, it helps identify faces in photos, choose content for your News Feed, and even deliver flowers ordered through M, the company’s experimental personal assistant. Among its recent AI projects have been efforts to make Facebook easier to use for the blind and to weave artificial intelligence into everyday users’ tasks. Open-sourcing Big Sur is a bid to make it easier for AI researchers to share techniques and technologies. “As with all hardware systems that are released into the open, it’s our hope that others will be able to work with us to improve it,” Facebook said, adding that it believes open collaboration will help foster innovation for future designs, and put us all that much closer to the complex AI systems that will presumably take over the world and kill us all. All the while, these titans hope to refine deep learning so that it can carry on real conversations, and perhaps even exhibit something close to common sense.

Nvidia released its end-to-end hyperscale data center platform last month, claiming it would let web-services companies accelerate their machine learning workloads and power advanced artificial intelligence applications. GPUs are widely used in artificial intelligence because the chips have far more individual processing cores on them than traditional processors produced by Intel Corp., making them adept at the dumb-but-numerous calculations required by AI software.
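To make that parallelism concrete, here is a minimal CUDA sketch (illustrative only, not Facebook or Nvidia production code) of the kind of dumb-but-numerous arithmetic a GPU excels at: a SAXPY operation in which every array element gets its own thread, so one tiny multiply-add runs across thousands of cores at once.

```c
// saxpy.cu -- illustrative sketch, not code from Big Sur.
// Each GPU thread performs one trivial multiply-add; the GPU's
// thousands of cores run these simple operations in parallel,
// where a CPU would loop through them a few at a time.
#include <cstdio>
#include <cuda_runtime.h>

// One thread per element: y[i] = a * x[i] + y[i].
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;  // one million elements
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch ~1M threads in blocks of 256.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f (expected 4.0)\n", y[0]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Compiled with nvcc, this launches roughly a million threads; the per-thread work is deliberately trivial, which is exactly the workload shape that favors a GPU's many simple cores over a CPU's few complex ones.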

Consisting of two accelerators, Nvidia’s latest hyperscale line aims to let researchers design new deep neural networks more quickly for the increasing number of applications they want to power with AI. Many high-performance computing systems require special cooling to operate, but Facebook has designed its new servers for thermal and power efficiency, allowing the company to operate them in its own free-air cooled, Open Compute standard data centers.

The hardware designs will be made available as part of the Open Compute Project, an initiative Facebook Chief Executive Officer Mark Zuckerberg started in 2011 to share the secrets of the Menlo Park, California-based company’s data centers. The extra horsepower matters because, as Serkan Piantino, engineering director of Facebook’s AI group, told reporters, “our capabilities keep growing, and with each new capability, whether it’s computer vision, or speech, our models get more expensive to run, incrementally, each time.” He added that as the FAIR group has moved from pure research toward delivering capabilities, product groups from across Facebook have reached out about collaborations.

For Facebook, releasing its designs has potent benefits: the openness can be a major incentive for top talent to join the company; firms that use the equipment may contribute their improvements back to the community, letting Facebook outsource some of its research and development costs; and if enough people buy the equipment, economies of scale will ultimately lower the price Facebook pays for its computer hardware, Piantino said in a briefing with reporters. “Often the things we open-source become standards in the community and it makes it easier and cheaper for us to acquire the things later because we put them out there,” Piantino said.

The Internet’s largest services typically run on open source software. “Open source is the currency of developers now,” says Sean Stephens, the CEO of a software company called Perfect. “It’s how they share their thoughts and ideas. In the closed source world, developers don’t have a lot of room to move.” And as these services shift to a new breed of streamlined hardware better suited to running enormous operations, many companies are sharing their hardware designs as well.

Although GPUs were originally designed to render images for computer games and other highly graphical applications, they’ve proven remarkably adept at deep learning. Traditional processors still help drive these machines, but big companies like Facebook, Google, and Baidu have found that their neural networks are far more efficient if they shift much of the computation onto GPUs. In short, Facebook can achieve a greater level of AI at a quicker pace. “The bigger you make the neural nets, the better they will work,” says Yann LeCun, who directs Facebook’s AI research group. “The more data you give them, the better they will work.” And since deep neural nets serve such a wide variety of applications, from face recognition to natural language understanding, this single system design can significantly advance the progress of Facebook as a whole. In a larger sense, if more companies use the designs to do more AI work, it helps accelerate the evolution of deep learning as a whole, software as well as hardware.
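As a hedged illustration of what “shifting the computation onto GPUs” means in practice, the sketch below uses Nvidia’s cuBLAS library to run the matrix multiply at the heart of a fully connected neural-network layer. The layer sizes and variable names are invented for the example; Big Sur-era training actually ran frameworks such as Torch on top of libraries like cuDNN and cuBLAS rather than hand-written calls like these.

```c
// layer_gemm.cu -- illustrative sketch; dimensions and names are
// hypothetical, not taken from any Facebook system.
// A dense layer's forward pass is dominated by one big matrix
// multiply, y = W * x, which cuBLAS tiles across the whole GPU.
#include <cstdio>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main() {
    const int batch = 128, in_dim = 1024, out_dim = 512;

    float *W, *x, *y;  // weights, input activations, outputs
    cudaMallocManaged(&W, out_dim * in_dim * sizeof(float));
    cudaMallocManaged(&x, in_dim * batch * sizeof(float));
    cudaMallocManaged(&y, out_dim * batch * sizeof(float));
    for (int i = 0; i < out_dim * in_dim; ++i) W[i] = 0.01f;
    for (int i = 0; i < in_dim * batch;  ++i) x[i] = 1.0f;

    cublasHandle_t handle;
    cublasCreate(&handle);

    // y = 1.0 * W * x + 0.0 * y (column-major): the core arithmetic
    // of a fully connected layer, run entirely on the GPU.
    const float alpha = 1.0f, beta = 0.0f;
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                out_dim, batch, in_dim,
                &alpha, W, out_dim, x, in_dim, &beta, y, out_dim);
    cudaDeviceSynchronize();

    printf("y[0] = %f (expected %f)\n", y[0], 0.01f * in_dim);
    cublasDestroy(handle);
    cudaFree(W); cudaFree(x); cudaFree(y);
    return 0;
}
```

Because almost all of the training time in a deep network is spent in multiplications like this one, moving just this step onto the GPU captures most of the speedup, which is why LeCun’s point about bigger nets and more data translates so directly into a demand for more GPUs per server.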
