Google Open Sources ‘TensorFlow’ Machine-Learning Tech

9 Nov 2015

Google Just Open Sourced TensorFlow, Its Artificial Intelligence Engine.

“Just a couple of years ago, you couldn’t talk to the Google app through the noise of a city sidewalk, or read a sign in Russian using Google Translate, or instantly find pictures of your Labradoodle in Google Photos,” CEO Sundar Pichai wrote in a blog entry. “Our apps just weren’t smart enough.” That’s no longer the case, though, thanks in part to TensorFlow, a new machine-learning system that is “faster, smarter, and more flexible than our old system,” Pichai said. “We use TensorFlow for everything from speech recognition in the Google app, to Smart Reply in Inbox, to search in Google Photos.” Google has been working on machine learning for years deep inside its R&D labs, and some of the advances it has made have found their way into products such as Google Photos. The Web company, seeking to influence how people design, test, and run artificial-intelligence systems, is making its internal AI development software available for free.

Tim O’Reilly was standing a few feet from Google CEO and co-founder Larry Page this past May, at a small cocktail reception for the press at the annual Google I/O conference—the centerpiece of the company’s year. “It allows us to build and train neural nets up to five times faster than our first-generation system, so we can use it to improve our products much more quickly,” Pichai said of TensorFlow. As the season of giving is nearly upon us, Google is open-sourcing TensorFlow so the entire machine-learning community can play with it.

Google had unveiled its personal photos app earlier in the day, and O’Reilly marveled that if he typed something like “gravestone” into the search box, the app could find a photo of his uncle’s grave, taken so long ago. Students, researchers, hobbyists, hackers, engineers, developers, innovators, and inventors are encouraged to build upon the existing program framework, provide feedback, and contribute to the source code. On Monday Google announced it is open-sourcing its machine learning software, meaning it’s making the software freely available to outside developers. But DistBelief, Google’s earlier system, was limited: it was “narrowly targeted to neural networks, it was difficult to configure, and it was tightly coupled to Google’s internal infrastructure — making it nearly impossible to share research code externally,” according to a blog post by Jeff Dean, senior Google fellow, and Rajat Monga, technical lead. TensorFlow was originally developed by researchers and engineers on the Google Brain Team, within Google’s Machine Intelligence research organization, to conduct machine learning and deep neural network research.

It’s based on the same internal system Google has spent several years developing to support its AI software and other mathematically complex programs. Machine learning has been around for a long time, and it has been a crucial technology in the success of Internet giants like Google, Amazon and Facebook — used in the development of search, ad targeting and product recommendations.

In more technical terms, the deep learning framework is both a production-grade C++ backend, which can run on CPUs, Nvidia GPUs, Android, iOS and OS X, and a Python frontend that interfaces with NumPy, IPython notebooks, and other Python-based tooling, writes Vincent Vanhoucke, tech lead and manager for the Brain Team, on his Google+ profile. For example, back in October Google revealed it would use AI to improve YouTube’s video thumbnails, effectively creating the best thumbnail when users upload videos. It also makes entirely new product categories possible, ranging from self-driving cars from Tesla and Google, to new forms of entertainment in virtual reality applications being developed by Facebook for its virtual-reality system Oculus. But its accuracy is enormously impressive—so impressive that O’Reilly couldn’t understand why Google didn’t sell access to its AI engine via the Internet, cloud-computing style, letting others drive their apps with the same machine learning. Last Friday, Toyota announced it would spend $1 billion for research and development on artificial intelligence in the United States over the next five years.
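The split Vanhoucke describes (a Python frontend that assembles a computation, a backend that executes it) can be sketched with a toy dataflow graph in plain Python. The `Node`, `const`, and `run` names below are invented for illustration and are not TensorFlow's API:

```python
# Toy sketch of the dataflow-graph idea behind TensorFlow: a Python
# "frontend" builds a graph of operations, and a separate evaluation step
# runs it. In TensorFlow the evaluation is handled by the C++ backend;
# here everything is plain Python, purely for illustration.

class Node:
    def __init__(self, op, inputs=(), value=None):
        self.op = op          # 'const', 'add', or 'mul'
        self.inputs = inputs  # upstream nodes whose outputs flow in
        self.value = value    # only set for constants

def const(v):
    return Node('const', value=v)

def add(a, b):
    return Node('add', (a, b))

def mul(a, b):
    return Node('mul', (a, b))

def run(node):
    """Recursively evaluate the graph, analogous to running a session."""
    if node.op == 'const':
        return node.value
    vals = [run(n) for n in node.inputs]
    if node.op == 'add':
        return vals[0] + vals[1]
    return vals[0] * vals[1]  # 'mul'

# Build the graph for (2 + 3) * 4, then execute it.
graph = mul(add(const(2), const(3)), const(4))
print(run(graph))  # 20
```

The useful property of this separation is that the same graph description can be handed to different backends, which is how one frontend program can target CPUs, GPUs, or a phone.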

Deep learning involves training systems called “artificial neural networks” with lots of data derived from various inputs, and introducing new information to the mix — there are many startups currently working on developing deep learning techniques. Google announced on Monday a bold step to establish its leadership in the field of machine learning, accelerate the pace of innovation in the field and potentially strengthen its business.

More broadly, the software could also be used in other contexts, such as helping researchers untangle complex data in fields such as biology and astronomy, Pichai said. They’re betting that by being open they can entice talented academics to work for them, while encouraging the wider community to work on new AI technologies. After all, Google also uses this AI engine to recognize spoken words, translate from one language to another, improve Internet search results, and more.

Featuring a Python interface, TensorFlow is now available under an Apache 2.0 license as a standalone library along with associated tools, examples and tutorials. In addition, the company believes that TensorFlow can be useful in research to make sense of complex data, like protein folding or crunching astronomy data. “But with TensorFlow we’ve got a good start, and we can all be in it together.” Instead of separate tools for each group, TensorFlow lets researchers test new ideas, and when they work, move them into products without having to re-write code.

Its previous system, DistBelief, developed in 2011, was tailored for building neural networks, the building blocks of deep learning, and for use on Google’s own network of data centers. This can speed up product improvements, and of course, by giving the larger machine learning community the tools to do the same, Google will also benefit from the accelerated pace of innovation that comes of the open-sourced tech.

The initial release of TensorFlow will be a version that runs on a single machine; a version that runs across many computers will follow in the months ahead, Google said. This latter feature is what powers “Smart Reply,” a way for Google’s email app Inbox to create automatic responses to your emails for you – an easy-to-understand example of the potential for machine learning to enhance the products we use daily, like email. By releasing TensorFlow, Google aims to make the software it built to develop and run its own AI systems a part of the standard toolset used by researchers, said Jason Freidenfelds, a spokesman for Mountain View, California-based Google. Through open source, outsiders can help improve on Google’s technology and, yes, return these improvements back to Google. “What we’re hoping is that the community adopts this as a good way of expressing machine learning algorithms of lots of different types, and also contributes to building and improving [TensorFlow] in lots of different and interesting ways,” says Jeff Dean, one of Google’s most important engineers and a key player in the rise of its deep learning tech.

But Christopher Manning, a computer scientist at Stanford University who has tried TensorFlow, is impressed by the software. “It’s a better, faster set of tools for deep learning,” Mr. Manning said. Several other open source deep learning tools are already available. These include Torch—a system originally built by researchers at New York University, many of whom are now at Facebook—as well as systems like Caffe and Theano.

And while Google does not make money directly on Android, the company profits handsomely from its search-advertising services on Android phones. “The software itself is open source, but if this is successful, it will feed Google’s money-making machine,” said Michael A. DistBelief also trained the Inception model that won ImageNet’s Large Scale Visual Recognition Challenge in 2014, and was used in experiments in automated image captioning and in DeepDream. That’s because Google’s AI engine is regarded by some as the world’s most advanced—and because, well, it’s Google. “This is really interesting,” says Chris Nicholson, who runs a deep learning startup called Skymind. “Google is five to seven years ahead of the rest of the world.”

“We added all this while improving upon DistBelief’s speed, scalability, and production readiness — in fact, on some benchmarks, TensorFlow is twice as fast as DistBelief,” the announcement states. Google’s move, said Oren Etzioni, executive director of the Allen Institute for Artificial Intelligence, is “part of a platform play” to attract developers and new hires to its machine-learning technology. “But Google is taking a much less restrictive approach,” Mr. Etzioni said.

And it’s not sharing access to the remarkably advanced hardware infrastructure that drives this engine (that would certainly come with a price tag). Google became the Internet’s most dominant force in large part because of the uniquely powerful software and hardware it built inside its computer data centers—software and hardware that could help run all its online services, that could juggle traffic and data from an unprecedented number of people across the globe.

Typically, Google trains these neural nets using a vast array of machines equipped with GPU chips—computer processors that were originally built to render graphics for games and other highly visual applications, but have also proven quite adept at deep learning. It can run entirely on a phone—without connecting to a data center across the ‘net—letting you translate foreign text into your native language even when you don’t have a good wireless signal. It’s a set of software libraries—a bunch of code—that you can slip into any application so that it too can learn tasks like image recognition, speech recognition, and language translation. The hope, however, is that outsiders will expand the tool to other languages, including Google Go, Java, and perhaps even Javascript, so that coders have more ways of building apps.
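For a rough sense of what “slipping the libraries into an application” looks like in practice, an app can ship a small set of trained parameters and evaluate the model locally, with no round trip to a data center. The word weights below are invented for the example and are not from any real Google model:

```python
# Illustration of on-device inference: the application ships trained
# parameters and evaluates the model locally, with no network round trip.
# The tiny word weights below are made up for this example.

import math

# Pretend these weights came from training on a server beforehand.
WEIGHTS = {"great": 2.0, "good": 1.2, "bad": -1.5, "awful": -2.5}
BIAS = 0.1

def sentiment_score(text):
    """Logistic model: sum word weights, squash to a 0..1 probability."""
    z = BIAS + sum(WEIGHTS.get(w, 0.0) for w in text.lower().split())
    return 1.0 / (1.0 + math.exp(-z))

print(sentiment_score("great good"))  # near 1 -> positive
print(sentiment_score("awful bad"))   # near 0 -> negative
```

The point of the example is the deployment shape, not the model: once the learned parameters are bundled with the app, prediction is just local arithmetic, which is what makes features like offline translation possible.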

According to Dean, TensorFlow is well suited not only to deep learning, but to other forms of AI, including reinforcement learning and logistic regression. Why this apparent change in Google philosophy—this decision to open source TensorFlow after spending so many years keeping important code to itself?
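For a sense of what the logistic-regression end of that spectrum involves, here is a minimal trainer in plain Python. This is not TensorFlow code; TensorFlow would express the same computation as a dataflow graph and compute the gradients automatically:

```python
# Minimal logistic regression fit by stochastic gradient descent, as an
# example of a non-deep-learning workload of the kind Dean mentions.
# Plain Python throughout; illustrative only.

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(xs, ys, lr=0.5, steps=2000):
    """Fit weight w and bias b by gradient descent on the log loss."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)
            grad = p - y            # derivative of log loss w.r.t. the logit
            w -= lr * grad * x
            b -= lr * grad
    return w, b

# One feature, linearly separable labels: y = 1 when x > 2.
xs = [0.0, 1.0, 3.0, 4.0]
ys = [0, 0, 1, 1]
w, b = train(xs, ys)
preds = [1 if sigmoid(w * x + b) > 0.5 else 0 for x in xs]
print(preds)  # [0, 0, 1, 1]
```

The gradient update is only a few lines here because the model is a single weight and bias; the value of a framework shows up when the same loop has to run over millions of parameters across many machines.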

The open source movement—where Internet companies share so many of their tools in order to accelerate the rate of development—has picked up considerable speed over the past decade. “They had a lot of tendrils into existing systems at Google and it would have been hard to sever those tendrils,” Dean says of the company’s earlier systems. “With TensorFlow, when we started to develop it, we kind of looked at ourselves and said: ‘Hey, maybe we should open source this.’” That said, TensorFlow is still tied, in some ways, to the internal Google infrastructure, according to Google engineer Rajat Monga.

But it has shared the code under what’s called an Apache 2 license, meaning anyone is free to use the code as they please. “Our licensing terms should convince the community that this really is an open product,” Dean says. Like Torch and Theano, he says, it’s good for quickly spinning up research projects, and like Caffe, it’s good for pushing those research projects into the real world. And that’s a good thing. “A fair bit of the advancement in deep learning in the past three or four years has been helped by these kinds of libraries, which help researchers focus on their models.”
