Google.ai launched in push towards an AI-first world

As it continues to further its core mission of “organizing the world’s information”, Google is moving from a mobile-first to an artificial intelligence (AI)-first world, said CEO Sundar Pichai at the Google I/O 2017 Developers Conference at the Shoreline Amphitheatre in Mountain View, California.

“In an AI-first world, we are rethinking all our products,” said Mr Pichai, adding that the company is using machine learning (ML), deep learning (DL) and computer vision in all its products, be it search, data centres, medical imaging, cloud, Google Assistant, the newly launched Google Lens for Google Assistant, Google Home, or hands-free calling on Google Home.

All these innovations are now being clubbed under an umbrella unit called Google.ai, which comprises Research, Tools and Applied AI.

“Mobile brought multi-touch,” said Mr Pichai. “Now we have voice and vision.” He pointed out that computers are getting much better at understanding speech. “Similar is the case for vision, (with) great improvements in computer vision. Clearly at an inflection point with vision. So today we are announcing Google Lens, which will be first included in Google Assistant.”


As part of Google’s AI-first strategy, Mr Pichai also unveiled its second-generation Tensor Processing Unit (TPU), a cloud-computing hardware and software system that is part of Google’s AI-first data centre strategy. TPUs, first revealed last year, are chips designed specifically for machine learning. Mr Pichai pointed out that TPUs were used by DeepMind’s AlphaGo AI system, which created a stir when it beat Go champion Lee Sedol.

TPUs are being used by ML models to improve the company’s products such as Google Translate and Google Photos. Google said its Cloud TPUs are now being deployed on Google Compute Engine, a platform that companies and researchers can tap for computing resources, similar to Amazon Web Services Inc. and Microsoft Corp.’s Azure.

Google also announced that the Assistant is coming to iOS devices. Users will be able to open up the Google App, press the voice button, and speak to the Assistant.

Google wants to improve intelligence in cars too. Even as cars rapidly transform into connected, intelligent machines and provide a new opportunity for enabling a rich app ecosystem, they still present a challenging environment: driver distraction, varying screen sizes and shapes, different input mechanisms and local regulations, to name a few.

Google on Wednesday said that two billion devices are now actively running Android. Google is using Android Auto to enable developers to deliver “seamless experiences” to drivers through the growing number of Android Auto-compatible cars and the new standalone phone app. The company is now beginning to integrate Android, the ecosystem, and the Google Assistant more deeply into cars.

Further, it was only on May 12 that Google announced Project Treble, insisting that it was “re-architecting Android to make it easier, faster and less costly for manufacturers to update devices to a new version of Android”. With Project Treble, Google wants to restructure the Android ecosystem so that the latest updates roll out to end users faster, regardless of the device’s make.

Android was unveiled in 2007 as a free, open-source mobile operating system. Project Treble will be coming to all new devices launched with Android O and beyond, according to the blog.

In 2007, Google held its first annual developer conference, which it called Google Developer Day. In 2008, this evolved into a two-day developer gathering at the Moscone Centre in San Francisco and gave way to the Google I/O conference we know today. The “I” and the “O” stand for “input/output”, and Google’s statement of its commitment to “Innovation in the Open”. The goal of the event is to empower developers with the resources they need to create experiences on its platforms, including Android, Chrome, and Cloud.

Technology companies such as Google, Facebook Inc., Amazon.com Inc. and Nvidia Corp. want to claim the AI mindspace. For instance, just as Google has its TensorFlow framework for Deep Learning, Facebook has Caffe2. Amazon’s Alexa, Microsoft’s Cortana and Apple’s Siri compete with Google Assistant.
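Frameworks such as TensorFlow and Caffe2 all package the same core deep-learning operations. As a rough illustration (not drawn from the article), here is a minimal NumPy sketch of a single dense-layer forward pass, the basic building block these frameworks chain together, differentiate automatically, and run on hardware such as TPUs:

```python
import numpy as np

# A single fully connected ("dense") layer: y = relu(W.x + b).
# Deep-learning frameworks compose many such layers and compute
# their gradients automatically; this sketch shows only the
# forward pass for one layer with made-up random weights.
rng = np.random.default_rng(0)

x = rng.standard_normal(4)        # a 4-feature input vector
W = rng.standard_normal((3, 4))   # weights for a 3-unit layer
b = np.zeros(3)                   # biases, initialised to zero

y = np.maximum(W @ x + b, 0.0)    # ReLU activation

print(y.shape)  # (3,) -- one output per unit
```

The matrix multiply at the heart of this layer is exactly the operation TPUs are designed to accelerate at scale.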

Moreover, other than newer entrants like the Bridge Explorer Edition from Occipital, there are existing products like Amazon Echo and Microsoft’s Project Evo that compete with Google Home. Echo connects to the Alexa Voice Service to play music, provide information, news, sports scores, weather and more. Google Home, on its part, is a voice-activated speaker powered by the Google Assistant.

The question, though, is whether Google has what it takes to deliver the goods, especially in the enterprise market, where it does not have a significant presence.

“I think Google has everything it takes to deliver consumer services that are enhanced and improved by AI,” said Patrick Moorhead, President and Principal Analyst, Moor Insights and Strategy. “Google derives 95 per cent of their profit from consumers, so I’m a bit sceptical if they can convert that to enterprises. Their business model of mining personal information could also clash with what enterprises really want, which are ways to make more and save more money. There’s no doubt Google has the experience. I question whether they have the enterprise mindset.”

By Leslie D-Monte from Live Mint

