NVIDIA Launches New GPUs For Deep Learning Applications, Partners With Mesosphere

10 Nov 2015

Cirrascale to Offer NVIDIA Tesla M40 GPU Accelerators Throughout Rackmount and Blade Server Product Lines.

NVIDIA has announced an end-to-end hyperscale data centre platform that it says will let web services companies accelerate their machine learning workloads and power advanced artificial intelligence (AI) applications.

Chipmaker Nvidia today announced two new flavors of Tesla graphics processing units (GPUs) targeted at artificial intelligence and other complex types of computing. Nvidia is paving the way for a technology, driven by deep learning techniques, that could take the search capabilities of today's systems to a whole new level. Over the span of just a few years, machine and deep learning went from mere murmurs to major focal points at some of the world's biggest companies.

SAN DIEGO, Calif., Nov. 10 — Cirrascale Corporation, a premier developer of GPU-driven blade and rackmount cloud infrastructure for mobile and Internet applications, today announced it will offer the new NVIDIA Tesla M40 GPU accelerators throughout its high-performance GPU-enabled rackmount and blade server product lines. "The artificial intelligence race is on," said Jen-Hsun Huang, co-founder and CEO of NVIDIA. "Machine learning is unquestionably one of the most important developments in computing today, on the scale of the PC, the internet and cloud computing." A couple of the companies focusing hard on the software side of things include Amazon, Google, and Microsoft, while on the hardware side, NVIDIA has been instrumental in designing hardware that can dramatically accelerate the processing of important data. Of the latest lineup, the new M4 is designed for scale-out architectures within data centers, whereas the larger M40 is geared for peak performance.

The brawnier M40, by contrast, comes with 3,072 CUDA cores, 12GB of GDDR5 memory, 288 GB/s of memory bandwidth, a 250-watt power envelope, and a peak of 7 teraflops. At the previous couple of GPU Technology Conferences, NVIDIA CEO Jen-Hsun Huang's opening keynotes have included revelations about machine learning.
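Those headline numbers are internally consistent, as a quick back-of-the-envelope check shows. Note that the boost clock and GDDR5 data rate below are assumptions (they are not quoted in the article; the clock is inferred from the 7-teraflop figure, and the 6 Gbps / 384-bit memory configuration is typical of this class of card):

```python
# Back-of-the-envelope check of the Tesla M40's quoted peak numbers.
# Assumed values (not from the article): ~1.14 GHz boost clock,
# 6 Gbps effective GDDR5 data rate on a 384-bit memory bus.

cuda_cores = 3072
boost_clock_hz = 1.14e9          # assumed boost clock
flops_per_core_per_cycle = 2     # one fused multiply-add counts as 2 FLOPs

peak_sp_tflops = cuda_cores * boost_clock_hz * flops_per_core_per_cycle / 1e12
print(f"peak single precision: {peak_sp_tflops:.1f} TFLOPS")  # ~7.0

effective_rate_hz = 6e9          # assumed GDDR5 effective data rate
bus_width_bits = 384
bandwidth_gb_s = effective_rate_hz * bus_width_bits / 8 / 1e9
print(f"memory bandwidth: {bandwidth_gb_s:.0f} GB/s")         # 288
```

Both figures land on the quoted 7 teraflops and 288 GB/s, which suggests the peak numbers are simply cores × clock × 2 and data rate × bus width.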

The company is launching two new hardware accelerators today, as well as a suite of tools to help developers and data center managers run deep learning software, along with image and video processing jobs, on them. These new GPU accelerators, based on Nvidia's Maxwell architecture, are the successors to Nvidia's Kepler-based Tesla K40 and K80. The GPU has become a recognized standard for a type of AI called deep learning. At last spring's event, Huang invited Google's Jeff Dean and Baidu's Andrew Ng on stage to explain how they use GPUs to speed up their work. The new Tesla M40 features NVIDIA GPU Boost technology, which converts power headroom into user-controlled performance boosts, enabling the Tesla M40 to deliver 7 teraflops of single-precision peak performance. The graphics chipmaker, which dominates the PC graphics card market, is also exploring how deep learning can increase the capabilities of today's supercomputers.
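The "power headroom" idea behind GPU Boost can be illustrated with a toy model: when the card draws less than its power limit, the unused budget can be spent on higher clocks. The model below is purely illustrative (it is not NVIDIA's actual boost algorithm, and the base and boost clock values are assumptions, not figures from the article):

```python
# Toy linear model of the GPU Boost idea: unused power budget is
# traded for higher clocks. Not NVIDIA's actual algorithm; the clock
# values below are assumed for illustration.

base_clock_mhz = 948      # assumed base clock
max_boost_mhz = 1114      # assumed maximum boost clock
power_limit_w = 250       # the card's quoted power envelope

def boosted_clock(current_draw_w):
    """Scale the clock linearly with the fraction of unused power budget."""
    headroom = max(0.0, power_limit_w - current_draw_w) / power_limit_w
    clock = base_clock_mhz + (max_boost_mhz - base_clock_mhz) * headroom
    return min(clock, max_boost_mhz)

print(round(boosted_clock(250)))  # 948: no headroom, base clock
print(round(boosted_clock(200)))  # 981: 20% headroom buys ~33 MHz
```

The "user-controlled" part of the real feature means an operator can cap the power limit themselves, trading peak clocks for predictable power draw across a rack.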

During the keynote, we didn't just witness something expected, like a search engine that becomes smarter; we even saw examples of computers teaching themselves to play, and get better at, games, one being Breakout. The company has dubbed the latest breed of cards its "hyperscale accelerators", as Nvidia believes the processors deliver a level of specialization that can help it capture a greater share of the machine learning market.

Additionally, the M40 provides 12GB of ultra-fast GDDR5 memory, which enables a single Cirrascale GB5600 blade server to house up to 96GB of GPU memory. Together, NVIDIA and Mesosphere want to "make it easier for web-services companies to build and deploy accelerated data centers for their next-generation applications." Because of the work Mesosphere did with NVIDIA, developers using Apache Mesos (the open-source backbone of Mesosphere's data center operating system) will be able to use GPU resources in a data center just as they use CPUs and memory: GPU resources will be clustered into a single pool, and the software will automatically distribute jobs across all the machines that offer compatible GPUs. For years, Nvidia had marketed its Tesla line of GPUs under the term "accelerated computing", but in its annual report for investors this year, the company changed its tune and began emphasizing Tesla's deep learning capability. There should be no doubt at this point that machine learning is going to be a major part of our future, even if that's not obvious to the casual observer.
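The pooling idea, treating GPUs as just another countable resource that a scheduler hands out alongside CPUs and memory, can be sketched in a few lines. This is a hypothetical first-fit scheduler for illustration only; the names and data structures are invented and do not reflect Mesos's actual API:

```python
# Sketch of GPU pooling: agents advertise gpus alongside cpus and
# memory, and a scheduler places each job on any agent with enough
# of every resource. Illustrative only; not Mesos's actual API.

agents = {
    "node-a": {"cpus": 16, "mem_gb": 128, "gpus": 4},
    "node-b": {"cpus": 16, "mem_gb": 128, "gpus": 0},  # no GPUs: skipped for GPU jobs
    "node-c": {"cpus": 32, "mem_gb": 256, "gpus": 8},
}

def place(job, agents):
    """First-fit placement: gpus are handled exactly like cpus and memory."""
    for name, free in agents.items():
        if all(free.get(res, 0) >= need for res, need in job.items()):
            for res, need in job.items():
                free[res] -= need        # claim the resources on this agent
            return name
    return None                          # no agent can host the job

training_job = {"cpus": 4, "mem_gb": 32, "gpus": 2}
print(place(training_job, agents))  # node-a
print(place(training_job, agents))  # node-a again (2 GPUs left there)
print(place(training_job, agents))  # node-c (node-a exhausted, node-b has no GPUs)
```

The point of the integration is exactly this uniformity: a GPU-less agent is simply never matched for a GPU job, and the framework author writes no GPU-specific placement logic.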

In addition to the new GPUs, and accompanying performance benchmarks for the Caffe deep learning framework, Nvidia is also introducing its new Hyperscale Suite of software, including the cuDNN library for building applications with deep learning, a GPU-friendly version of the FFmpeg video processing framework, and an Image Compute Engine tool for resizing images. Machine learning capabilities are not restricted to particular pieces of hardware, but certain pieces of hardware can be far better at achieving the overall goal. By allowing the accelerators to share a single memory address space, Cirrascale's design creates a "micro-cluster" and eliminates the need for host CPU intervention. At that aforementioned GTC, Jen-Hsun explained just how powerful the desktop-targeted GeForce GTX TITAN X is at deep learning, able to churn through an AlexNet image recognition workload much faster than a CPU alone.

The M40 is said to reduce training time by a factor of eight compared with CPUs, has been designed and tested for high reliability in data centre environments, and offers scale-out performance with support for the firm's GPUDirect software, allowing fast multi-node neural network training. Web companies can enhance and contextualize search by using software models, algorithms and analytics to help their systems organize, tag, and resize images and videos. A number of public cloud vendors, including AWS and Microsoft, now either offer GPU-centric virtual machines or will soon, and for the most part, these data center operators are betting on NVIDIA.

Nvidia has aggressively positioned itself as a key enabler of deep learning, marketing its Tesla GPUs as a springboard for deep learning capabilities and emphasizing the same to investors. Cirrascale leverages its patented Vertical Cooling Technology and proprietary PCIe switch riser technology to provide the industry's densest rackmount and blade-based peered multi-GPU platforms.

Together, these accelerators are said to enable developers to use Nvidia's Tesla Accelerated Computing Platform to drive machine learning in hyperscale data centres and thus create unprecedented AI-based applications. Industries ranging from consumer cloud services to automotive and health care are being revolutionised as we speak. "Machine learning is the grand computational challenge of our generation," Huang said. Pricing for the two new Tesla accelerators has not been announced, but the M40 and the Hyperscale Suite will become available "later this year".
