
Nvidia announces new Tesla M40 and M4 GPUs

11 Nov 2015

Extreme-Density, Energy- and Cooling-Optimized 1U 4x GPU, 4U 8x GPU and 7U 20x GPU Blade Systems Offer up to 28 Teraflops of Single-Precision Performance per U, Dramatically Reducing Time to Train Deep Neural Networks.

SAN JOSE, Calif., Nov. 10, 2015 /PRNewswire/ — Nvidia has announced an end-to-end hyperscale data centre platform that it says will let web services companies accelerate their machine-learning workloads and power advanced artificial intelligence (AI) applications. The chipmaker today unveiled two new flavors of Tesla graphics processing unit (GPU), the Tesla M40 and the Tesla M4, targeted at artificial intelligence and other complex types of computing, and the processors may represent a new step in the evolution of machine-learning systems.

Super Micro Computer, Inc. (SMCI), a global leader in high-performance, high-efficiency server and storage technology and green computing, delivers the industry’s widest range of GPU-enabled SuperServers ready to support the new addition to the NVIDIA® Tesla® Accelerated Computing Platform, the NVIDIA Tesla M40 GPU Accelerator. Available immediately, the Supermicro 1U 4x GPU (SYS-1028GQ-TR/-TRT), 2U 6x GPU (SYS-2028GR-TR/-TRH/-TRHT), 4U 8x GPU (SYS-4028GR-TR/-TRT), 4U/Tower 4x GPU (SYS-7048GR-TR) and 7U 20x GPU SuperBlade® (SBI-7128RG-X/-F/-F2) systems offer unrivalled configuration flexibility and industry-leading GPU density.

The brawnier M40 comes with 3,072 CUDA cores, 12GB of GDDR5 memory, 288GB/s of memory bandwidth, a 250-watt power rating and a peak of 7 teraflops of single-precision performance. The platform includes several new additions to the Tesla line: the Tesla M40 GPU, which Nvidia claims is “the most powerful accelerator designed for training deep neural networks”; the Tesla M4 GPU, a low-power, small-form-factor accelerator for machine-learning inference; and the Hyperscale Suite software, which is designed for machine learning and video processing. These new GPU accelerators, based on Nvidia’s Maxwell architecture, are the successors to Nvidia’s Kepler-based Tesla K40 and K80. The GPU has become a recognized standard for a type of AI called deep learning, and the company has dubbed its latest breed of cards the “hyperscale accelerator”, believing the processors deliver a level of specialization that can help it win a greater share of the machine-learning market.

Supermicro’s supporting configurations include ECC memory; 4x PCI-E 3.0 x16 slots (4 GPU cards optional), 2x PCI-E 3.0 x8 (1 in x16) and 1x PCI-E 2.0 x4 (in x8) slots; 2x GbE; 1x video; 2x COM/serial; 5x USB 3.0 and 4x USB 2.0 ports; a built-in server management tool (IPMI 2.0, KVM/media over LAN) with a dedicated LAN port; and redundant 2000W Titanium Level (96%+) power supplies. The 7U SuperBlade® (SBI-7128RG-X/-F/-F2) packs 20x NVIDIA Tesla M40 GPUs into 10 blade servers, each supporting dual Intel® Xeon® E5-2600 v3 family processors, up to 512GB of DDR4-2133 ECC RDIMM in 8 DIMM slots, 1x 2.5″ SSD and 1x SATADOM, along with FDR 56Gb/s InfiniBand switches, 1Gb/s and 10Gb/s Ethernet switches, a redundant chassis management module (CMM), and Titanium Level (96%+) 3200W and Platinum Level (95%) 3000W/2500W (N+N or N+1 redundant) power supplies with cooling fans.
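As a rough way to sanity-check headline figures like these on whatever Tesla card a server actually exposes, a short Python sketch using PyCUDA (an assumption here; the article itself does not mention PyCUDA) can query the device’s reported memory size, multiprocessor count and memory interface, and estimate peak bandwidth from the reported memory clock and bus width:

# Minimal sketch, assuming PyCUDA is installed and a CUDA-capable Tesla card is present.
# It queries device properties and estimates peak memory bandwidth (GDDR5 is double data
# rate, and the driver reports the base memory clock, hence the factor of 2).
import pycuda.driver as cuda

cuda.init()
dev = cuda.Device(0)                      # first GPU in the system
attrs = dev.get_attributes()

sm_count    = attrs[cuda.device_attribute.MULTIPROCESSOR_COUNT]
mem_clk_khz = attrs[cuda.device_attribute.MEMORY_CLOCK_RATE]       # in kHz
bus_width   = attrs[cuda.device_attribute.GLOBAL_MEMORY_BUS_WIDTH] # in bits

bandwidth_gb_s = mem_clk_khz * 1e3 * 2 * (bus_width / 8) / 1e9

print("Device:             ", dev.name())
print("Total memory (GB):  ", dev.total_memory() / 1024**3)
print("Multiprocessors:    ", sm_count)
print("Est. bandwidth GB/s:", round(bandwidth_gb_s, 1))

On an M40 this kind of estimate should land close to the quoted 288GB/s, though the exact clock the driver reports can vary by board and driver version.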

Together, NVIDIA and Mesosphere want to “make it easier for web-services companies to build and deploy accelerated data centers for their next-generation applications.” Thanks to the work Mesosphere did with NVIDIA, developers using Apache Mesos (the open-source backbone of Mesosphere’s data center operating system) will be able to use GPU resources in a data center just as they use CPUs and memory: GPUs are clustered into a single pool, and the software automatically distributes jobs across all the machines that offer compatible GPUs. For years, Nvidia had marketed its Tesla line of GPUs under the banner of “accelerated computing”, but in its annual report for investors this year the company changed its tune and began emphasizing Tesla’s deep-learning capability. Tagging online videos with information about colors, surroundings and even emotions in order to enhance search could soon become easier with Nvidia’s new machine-learning graphics processors.
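To make the idea of a shared GPU pool concrete, the following illustrative Python sketch (not the real Apache Mesos API; the machine names, resource fields and first-fit logic are assumptions for illustration only) treats GPUs as just another countable resource alongside CPUs and memory and assigns each job to the first machine with room for it:

# Toy first-fit scheduler that treats "gpus" as a countable resource next to "cpus"
# and "mem_gb", in the spirit of the Mesos/Mesosphere integration described above.
# None of this is the real Mesos API.
from dataclasses import dataclass

@dataclass
class Machine:
    name: str
    cpus: int
    mem_gb: int
    gpus: int

@dataclass
class Job:
    name: str
    cpus: int
    mem_gb: int
    gpus: int

def schedule(jobs, machines):
    """Assign each job to the first machine with enough free CPU, memory and GPUs."""
    placements = {}
    for job in jobs:
        for m in machines:
            if m.cpus >= job.cpus and m.mem_gb >= job.mem_gb and m.gpus >= job.gpus:
                m.cpus -= job.cpus
                m.mem_gb -= job.mem_gb
                m.gpus -= job.gpus
                placements[job.name] = m.name
                break
        else:
            placements[job.name] = None   # no machine in the pool can run this job
    return placements

pool = [Machine("node-1", cpus=32, mem_gb=256, gpus=4),   # e.g. a 1U 4x GPU box
        Machine("node-2", cpus=32, mem_gb=256, gpus=8)]   # e.g. a 4U 8x GPU box
work = [Job("train-dnn", cpus=8, mem_gb=64, gpus=4),
        Job("video-tagging", cpus=4, mem_gb=16, gpus=1)]

print(schedule(work, pool))   # {'train-dnn': 'node-1', 'video-tagging': 'node-2'}

A real cluster manager adds offers, failure handling and fair sharing on top of this, but the basic accounting, subtracting GPUs from a pooled inventory as jobs land, is the same idea.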

In addition to the new GPUs, and accompanying performance benchmarks for the Caffe deep-learning framework, Nvidia is introducing its new Hyperscale Suite of software: the cuDNN library for building applications with deep learning, a GPU-accelerated version of the FFmpeg video-processing framework, and an Image Compute Engine tool for resizing images. According to Ian Buck, Nvidia’s vice president of accelerated computing, both new GPUs are server parts, designed to live in the data center, where they are also well suited to video workloads. Looking further ahead, Nvidia’s forthcoming Pascal GPU series promises to speed up deep-learning applications tenfold compared with the company’s current Maxwell processors. “Virtual reality, deep learning, cloud computing and autonomous driving are developing with incredible speed, and we are playing an important role in all of them,” Nvidia chief executive Jen-Hsun Huang wrote in prepared remarks. Supermicro, for its part, is committed to protecting the environment through its “We Keep IT Green®” initiative and says it provides customers with the most energy-efficient, environmentally friendly solutions available on the market.
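Since the benchmarks Nvidia cites use the Caffe framework, a minimal sketch of GPU-backed training through Caffe’s Python interface might look like the following (the solver file name is a placeholder; this assumes a Caffe build with CUDA/cuDNN support):

# Minimal sketch, assuming Caffe was built with GPU (CUDA/cuDNN) support and that
# 'solver.prototxt' (a placeholder name) defines the network and training schedule.
import caffe

caffe.set_device(0)      # select the first GPU, e.g. a Tesla M40
caffe.set_mode_gpu()     # run all layers on the GPU rather than the CPU

solver = caffe.SGDSolver('solver.prototxt')
solver.solve()           # train until the solver's max_iter is reached

The only change from a CPU run is the set_device/set_mode_gpu pair; the speedups Nvidia quotes come from the GPU and cuDNN doing the convolution work underneath.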

To view the original version on PR Newswire, visit: http://www.prnewswire.com/news-releases/supermicro-1u-4u-gpu-superservers-and-7u-superblade-maximize-compute-density-and-performance-per-watt-with-support-for-new-nvidia-tesla-m40-gpu-accelerator-300176160.html

A number of public cloud vendors, including AWS and Microsoft, now either offer GPU-centric virtual machines or will offer them soon, and for the most part these data center operators are betting on NVIDIA. With a system that can hold a growing database, machines can store new information and adapt to changes, which can be useful in a wide variety of fields, from intuitive operating systems to driving services. Nvidia’s machine-learning technology (in this case the Tegra X1) can be used to boost a car’s autopilot features, which rely on artificial intelligence that responds to the vehicle’s trajectory through different traffic patterns, with the processors recording new data for future reference. Concentrating on the more traditional functions of GPUs, Buck explained that “machine learning to help add information attached to videos could improve the accuracy of search results.”
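As a sketch of the inference side, roughly the workload the low-power M4 targets, one could run a trained classification network over individual video frames and keep the top-scoring labels as tags. The model and frame file names below are placeholders, not anything shipped by Nvidia:

# Minimal inference sketch, assuming a trained image-classification model exported as
# 'deploy.prototxt' / 'model.caffemodel' (placeholder names) and frames already extracted
# from a video as JPEG files. Top-scoring classes are kept as tags for search.
import numpy as np
import caffe

caffe.set_device(0)
caffe.set_mode_gpu()

net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)

# Convert an HxWx3 image into the NCHW float blob the network expects.
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))
transformer.set_raw_scale('data', 255)           # load_image returns [0, 1]; model expects [0, 255]
transformer.set_channel_swap('data', (2, 1, 0))  # RGB -> BGR, as Caffe models usually expect

def tag_frame(path, top_k=3):
    image = caffe.io.load_image(path)                        # placeholder frame path
    net.blobs['data'].data[...] = transformer.preprocess('data', image)
    probs = net.forward()['prob'][0]                         # 'prob' is the usual output name
    return np.argsort(probs)[::-1][:top_k]                   # indices of the top-k labels

print(tag_frame('frame_0001.jpg'))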

Industries ranging from consumer cloud services to automotive and health care are being revolutionised as we speak. “Machine learning is the grand computational challenge of our generation.”
