Facebook open sources its artificial intelligence server

10 Dec 2015

Facebook Open Sources Its AI Hardware as It Races Google.

Facebook Inc.'s use of artificial intelligence, which ranges from image recognition tools to filtering the News Feed on its social network, demands special computing infrastructure. NVIDIA (NASDAQ: NVDA) today announced that Facebook will power its next-generation computing system with the NVIDIA Tesla Accelerated Computing Platform, enabling it to drive a broad range of machine learning applications. Over the last few years, a technology called deep learning has proven so adept at identifying images, recognizing spoken words, and translating from one language to another that the titans of Silicon Valley are eager to push the state of the art even further, and to push it quickly. Facebook said it has developed a new computing system aimed at artificial intelligence research that is twice as fast and twice as efficient as anything available before.

Facebook wants to speed up artificial intelligence research for everyone by making the plans for a massively powerful machine available to any company that wants to accelerate its efforts to build better facial or voice recognition. Taking powerful artificial intelligence software and making it open source, so anyone in the world can use it, seems like something out of a sci-fi movie, but both Google and Microsoft have done exactly that in recent months. The social network is one of Silicon Valley's heaviest investors in artificial intelligence technology built to help its products think and act like humans.

The company recently began building custom servers for its artificial intelligence workloads, and on Thursday it announced it would release the designs for that powerful hardware to the world, for free. While training complex deep neural networks for machine learning can take days or weeks on even the fastest computers, the Tesla platform can cut this time by 10-20x.

These days, machine learning and artificial intelligence are together becoming the lifeblood of broad new applications throughout the business and research communities. It's a competitive endeavor: Google, IBM, Uber, and Baidu are just a few of the companies racing Facebook to scoop up deep learning experts, the rare minds capable of building this type of software.

The company said the plan to open-source the blueprints of the servers, called "Big Sur," would help other companies and researchers benefit from the incessant tweaking of Facebook's developers. At Google, this tech helps the company recognize the commands you bark into your Android phone and instantly translate foreign street signs when you turn your phone their way. But even as that progress has been driven by computers that are more powerful and more efficient, the industry is reaching the limits of what those computers can do.

The social network's giveaway is the latest in a recent flurry of announcements by tech giants that are open-sourcing artificial-intelligence technology, which is becoming vital to consumer and business-computing services. It's a big move, because while software platforms can certainly make AI research easier, more replicable, and more shareable, the whole process is nearly impossible without powerful computers. The company announced Thursday that it built new AI-specific servers, the physical hardware that runs the AI software its employees are creating, to do things like automate text conversations and understand what's visible in a photograph.

Increasingly, Facebook is building elements of its business around artificial intelligence, and the social networking giant's ability to build and train advanced AI models has been tied to the power of the hardware it uses. The net result of this speed boost is that the social network could take its uncanny facial recognition abilities to your videos in addition to your photos. The technique involves training artificial neural networks on lots of data (pictures, for instance) and then getting them to make inferences about new data. Opening up the technology is seen as a way to accelerate progress in the broader field, while also helping tech companies boost their reputations and make key hires.
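The train-then-infer loop described above can be sketched in a few lines. Below is a minimal, illustrative example, not Facebook's code: a toy model is fit to labeled examples with gradient descent, then asked to make predictions about data it has never seen. The dataset and every name in it are invented for demonstration.

```python
# Minimal train-then-infer sketch: fit a model on labeled examples,
# then make inferences about unseen data. The "images" here are random
# 8-dimensional feature vectors, purely for illustration; real systems
# train on millions of actual photos.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: the label depends on the sum of the features.
X_train = rng.normal(size=(1000, 8))
y_train = (X_train.sum(axis=1) > 0).astype(float)

w = np.zeros(8)   # model weights, learned from data
b = 0.0           # bias term
lr = 0.1          # learning rate

for _ in range(500):  # gradient-descent training loop
    p = 1.0 / (1.0 + np.exp(-(X_train @ w + b)))  # predicted probabilities
    grad_w = X_train.T @ (p - y_train) / len(y_train)
    grad_b = (p - y_train).mean()
    w -= lr * grad_w
    b -= lr * grad_b

# Inference: apply the trained model to data it has never seen.
X_new = rng.normal(size=(5, 8))
print(1.0 / (1.0 + np.exp(-(X_new @ w + b))) > 0.5)
```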

At Facebook, it helps identify faces in photos, choose content for your News Feed, and even deliver flowers ordered through M, the company's experimental personal assistant. Among its recent AI projects have been efforts to make Facebook easier to use for the blind and to incorporate artificial intelligence into everyday users' tasks. And because it is contributing these designs to the Open Compute Project, much as it has done for its server designs and networking gear, other companies can take them and build their own AI hardware, or even tweak the Big Sur designs to make them better. Facebook is investing more and more in this field, so it makes sense for the company to design custom hardware, just as it has for general-purpose servers, storage, and networking equipment. In November, Google opened up software called TensorFlow, used to power the company's speech recognition and image search (see "Here's What Developers Are Doing with Google's AI Brain").

The new design, called Big Sur, calls for eight high-powered graphics processing units, or GPUs, alongside the traditional parts of a computer such as the central processing unit (CPU), hard drive, and motherboard. All the while, titans like Facebook and Google hope to refine deep learning so that it can carry on real conversations, and perhaps even exhibit something close to common sense. GPUs are widely used in artificial intelligence because the chips have far more individual processing cores than traditional processors produced by Intel Corp., making them adept at the dumb-but-numerous calculations required by AI software. Not long after, IBM announced the fruition of an earlier promise to open-source SystemML, a system originally developed to use machine learning to find useful patterns in corporate databanks. Serkan Piantino, director of engineering at Facebook AI Research, joked on a conference call explaining the news that the graphics chips draw so much heat and power that the first team to melt an enclosure would have earned a steak dinner.
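To see why "dumb-but-numerous" calculations reward hardware with many cores, consider the sketch below. It is only a CPU-side analogy: a vectorized NumPy operation, one instruction applied across a whole array at once, stands in for what a GPU does across thousands of cores; the array size and arithmetic are invented for illustration.

```python
# The "dumb-but-numerous" workload GPUs excel at: the same trivial
# arithmetic applied independently to millions of values. The vectorized
# form below processes the whole array in bulk, a CPU-side stand-in for
# a GPU fanning the work out across thousands of cores.
import time
import numpy as np

x = np.random.rand(10_000_000).astype(np.float32)

t0 = time.perf_counter()
y_loop = [v * 2.0 + 1.0 for v in x]   # one value at a time, like a single core
t1 = time.perf_counter()
y_vec = x * 2.0 + 1.0                 # the whole array at once
t2 = time.perf_counter()

print(f"scalar loop: {t1 - t0:.2f}s  vectorized: {t2 - t1:.4f}s")
```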

Facebook's new server design, dubbed Big Sur, was created to power deep-learning software, which processes data using roughly simulated neurons (see "Teaching Computers to Understand Us"). That's because, Piantino told reporters, "our capabilities keep growing, and with each new capability, whether it's computer vision, or speech, our models get more expensive to run, incrementally, each time." Also, he said, as the FAIR group has moved from research to capability, it has seen product groups from across Facebook reach out about collaborations. And the Tesla platform's growing global adoption facilitates open collaboration with researchers around the world, fueling new waves of discovery and innovation in machine learning. Yann LeCun, the head of AI research at Facebook, said that negotiating with data center administrators for the power the machines needed was a significant issue.

Facebook worked closely with Nvidia, a leading manufacturer of GPUs, on its new server designs, which have been stripped down to cram in more of the chips. NVIDIA worked with Facebook engineers on the design of Big Sur, optimizing it to deliver maximum performance for machine learning workloads, including the training of large neural networks across multiple Tesla GPUs. For Facebook, releasing its designs has potent benefits: the openness can be a major incentive for top talent to join the company; firms that use the equipment may contribute their improvements back to the community, letting Facebook outsource some of its research and development costs; and if enough people buy the equipment, economies of scale will ultimately lower the price Facebook pays for its computer hardware, Serkan Piantino, the engineering director of Facebook's AI group, said in a briefing with reporters. "Often the things we open-source become standards in the community and it makes it easier and cheaper for us to acquire the things later because we put them out there," Piantino said. Moreover, Facebook has managed to make the power and heat of the boxes work in its own data centers, which means they should work well in other modern data center facilities.

Deep learning, a domain in which LeCun is highly regarded, can be used for speech recognition, image recognition, and even natural language processing. To have a computer learn what a cat is, you need to show it potentially millions of pictures of cats (although Facebook's methods have dramatically reduced that number). The neural networks are virtual clusters of mathematical units that can individually process small pieces of information, like pixels, and when brought together and layered can tackle far more complex tasks.
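That layered structure can be made concrete with a short sketch: plain NumPy, random (untrained) weights, and a flattened image, purely to show how pixels flow through stacked banks of simple units. The layer sizes are arbitrary choices for illustration, not Big Sur's actual workload.

```python
# Each layer is a bank of simple units: a weighted sum of its inputs
# followed by a nonlinearity. Stacking layers lets the network compute
# progressively more complex functions of the raw pixels. Weights are
# random here just to show the data flow; training would tune them.
import numpy as np

rng = np.random.default_rng(1)

def layer(x, n_out):
    """One layer: weighted sums of the inputs, then a ReLU nonlinearity."""
    W = rng.normal(scale=0.1, size=(x.shape[-1], n_out))
    return np.maximum(0.0, x @ W)

pixels = rng.random(32 * 32)            # a flattened 32x32 grayscale "image"
h1 = layer(pixels, 256)                 # first layer: simple local patterns
h2 = layer(h1, 64)                      # second layer: combinations of patterns
scores = h2 @ rng.normal(size=(64, 10)) # output layer: one score per class
print(scores.argmax())                  # index of the predicted class
```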

Derek Schoettle, general manager of IBM's Cloud Data Services unit, which offers tools to help companies analyze data, says that machine-learning technology has to be opened up for it to become widespread. Open-source projects have played a major role in establishing large-scale databases and data analysis as the bedrock of modern computing companies large and small, he says. Real value tends to lie in what companies can do with the tools, not the tools themselves. "What's going to be interesting and valuable is the data that's moving in that system and the ways people can find value in that data," he says.

The Internet's largest services typically run on open source software. "Open source is the currency of developers now," says Sean Stephens, the CEO of a software company called Perfect. "It's how they share their thoughts and ideas. In the closed source world, developers don't have a lot of room to move." And as these services shift to a new breed of streamlined hardware better suited to running enormous operations, many companies are sharing their hardware designs as well.

Late last month, IBM transferred its SystemML machine-learning software, designed around techniques other than deep learning, to the Apache Software Foundation, which supports several major open-source projects. Getting back to artificial intelligence, the multitude of cores in a GPU allows more computations to be run at the same time, speeding up the whole endeavor.

In tests of CPU versus GPU performance for image training, dual 10-core Ivy Bridge CPUs (read: very fast) processed 256 images in 2 minutes 17 seconds. Although GPUs were originally designed to render images for computer games and other highly graphical applications, they have proven remarkably adept at deep learning. Many Nvidia devices also support the Compute Unified Device Architecture (CUDA) platform, which lets developers write native code such as C or C++ that runs directly on the GPU, orchestrating the cores in parallel with greater precision. In short, Facebook can achieve a greater level of AI at a quicker pace. "The bigger you make the neural nets, the better they will work," LeCun says. "The more data you give them, the better they will work." And since deep neural nets serve such a wide variety of applications, from face recognition to natural language understanding, this single system design can significantly advance the progress of Facebook as a whole. This year IBM clustered 48 TrueNorth chips to build a 48-million-neuron network, and MIT Technology Review reports that FACETS hopes to achieve a billion neurons with ten trillion synapses.
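For a flavor of what writing code "directly to the GPU" looks like, here is a hedged sketch using Numba's CUDA bindings rather than raw C or C++ (an assumption of convenience; running it requires an NVIDIA GPU and the numba package). The kernel is written once and launched across roughly a million parallel threads, one array element per thread.

```python
# Illustrative CUDA-style kernel via Numba, not production code: each GPU
# thread computes its own global index and handles exactly one element,
# which is how thousands of cores are orchestrated in parallel.
import numpy as np
from numba import cuda

@cuda.jit
def scale_and_shift(x, out):
    i = cuda.grid(1)                  # this thread's global index
    if i < x.size:                    # guard threads past the end of the array
        out[i] = x[i] * 2.0 + 1.0     # one tiny calculation per core

x = np.random.rand(1_000_000).astype(np.float32)
out = np.zeros_like(x)

threads_per_block = 256
blocks = (x.size + threads_per_block - 1) // threads_per_block
scale_and_shift[blocks, threads_per_block](x, out)  # launch ~1M threads
print(out[:4])
```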

Even with that number, we're still far from recreating the human brain, which consists of 86 billion neurons and could contain 100 trillion synapses. (IBM has hit this 100 trillion number in previous TrueNorth trials, but the chip ran 1,542 times slower than real time and required a 96-rack supercomputer.) Alex Nugent, the founder of Knowm and a DARPA SyNAPSE alum, is trying to bring about the future of computing with a special breed of memristor, which he says would replace the CPU, GPU, and RAM that run on transistors. The memristor has been a unicorn of the tech industry since 1971, when electrical engineer Leon Chua first proposed the theory as "The Missing Circuit Element." Theoretically, a memristor serves as a replacement for the traditional transistor, the building block of the modern computer. Instead of a transistor's two states, a memristor can theoretically have four or six, multiplying the complexity of the information an array of memristors could hold. Nugent worked with hardware developer Kris Campbell of Boise State University to create a chip that works with what he calls AHaH (Anti-Hebbian and Hebbian) learning. The ability of memristors to change their resistance based on applied voltage, in bidirectional steps, is very similar to the way neurons transmit their own minuscule electric charge, says Nugent.
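Nugent's description, resistance that steps up or down with the polarity of the applied voltage, can be caricatured in a few lines. The toy model below is purely illustrative: the resistance range and step size are invented, not measurements of Campbell's chip.

```python
# Toy memristor: a device whose resistance moves in bidirectional steps
# depending on the sign of the applied voltage, so its state records a
# history of past signals, loosely like a synapse strengthening or
# weakening. All parameters are invented for illustration.
R_MIN, R_MAX, STEP = 100.0, 1000.0, 50.0  # ohms

def apply_voltage(resistance, volts):
    """Positive pulses lower resistance; negative pulses raise it."""
    if volts > 0:
        return max(R_MIN, resistance - STEP)
    if volts < 0:
        return min(R_MAX, resistance + STEP)
    return resistance

r = 500.0
for pulse in (+1, +1, +1, -1, +1):    # a short train of voltage pulses
    r = apply_voltage(r, pulse)
    print(f"pulse {pulse:+d} -> resistance {r:.0f} ohms")
```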
