From Search to Shopping: 11 new Artificial Intelligence apps in Google


Not only MUM: at Google I/O the stage was dominated by the near-future applications made possible by the development of artificial intelligence systems, which has become an integral part of the work the Mountain View company is leading. Fundamental advances in computing are helping to address some of the biggest challenges of the century, such as climate change, while AI finds its way into product updates across the Big G ecosystem, including Search, Maps and Photos, proving how machine learning can improve users’ lives in ways large and small.

Eleven fields of AI use within Google products

In particular, 11 new AI-based updates were announced at last week’s event, which we will soon be able to see and use in our daily lives. An overview of these innovative applications comes from a summary post by Christine Robson, Director of Product at Google Research: from MUM, which could change the way the search engine understands queries, to Maps and Shopping, through futuristic solutions that will allow early health assessments through your smartphone.

LaMDA, a breakthrough in natural language understanding for dialogue

The first system presented is LaMDA, short for “Language Model for Dialogue Applications”: a machine learning model designed for dialogue and built on Transformer, the neural network architecture that Google invented and open-sourced, already used for Google BERT.

LaMDA is a conversational model: it can be trained to read text, understand the relationships between words in a sentence and predict which word might come next. Compared to previous systems, it is trained to produce sensible and specific answers rather than merely generic ones. During CEO Sundar Pichai’s presentation, a demo showed greater fluency than the current Google Assistant, with answers full of subtle nuance and, in some cases, even wit.
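The “predict which word might come next” objective can be illustrated with a deliberately tiny toy: the sketch below counts word pairs in a short corpus and always proposes the most frequent continuation. This is purely my own illustration of the training objective; LaMDA itself is a large Transformer network, not a count-based model.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count bigrams in a tiny corpus and predict
# the most frequent continuation. Real models like LaMDA learn these
# statistics with neural networks over vastly more data.
corpus = "the cat sat on the mat the cat ate the fish".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the word most often observed after `word`, or None."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # "cat": it follows "the" most often here
```

The gap between this and a dialogue model is exactly what the article describes: a counting model can only be generic, while LaMDA is trained to make its continuations both sensible and specific to the conversation.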

According to Google, “this early-stage research could unlock more natural ways of interacting with technology and entirely new categories of useful applications”, approaching the complexity of human conversations, which are “rooted in concepts we have learned over our lifetime, composed of responses that are both sensible and specific”.

MUM will help make Google Search smarter

The article recalls that back in 2019 the company launched BERT, a Transformer-based AI model that better understands the intent behind search queries; the new Multitask Unified Model (MUM) is 1,000 times more powerful than BERT.

For now, “we are still in the early days of exploring MUM” and there is no launch date for Search yet; but, in Danny Sullivan’s words, there will be announcements and specific information in due time. The goal is that one day we will be able to “type a long, information-dense, natural-sounding query and find the relevant information we need more quickly”.

Project Starline reconnects people

We enter the field of science-fiction applications with Project Starline, a technology project that combines advances in hardware and software to allow friends, family and colleagues to feel together, even when they are in different cities (or countries).

“Imagine looking through a kind of magic window; through that window, you see another person, life-size and in three dimensions: you can speak naturally, gesticulate and make eye contact,” Robson writes.

To create this experience, Google is applying research in computer vision, machine learning, spatial audio and real-time compression; in addition, it has developed a light field display system that creates a sense of volume and depth without the need for extra glasses or headsets. The final effect is the feeling that “someone is sitting right in front of you, as if they were right there”.

The Quantum AI Campus to (also) build a super computer

Within the next ten years “we will build the world’s first error-corrected quantum computer,” Google promises, and the new Quantum AI (Quantum Artificial Intelligence) campus is the place where this will become possible.

Google’s Quantum AI campus

Facing many of the world’s biggest challenges, from climate change to the next pandemic, will require a new kind of computing: a useful, error-corrected quantum computer could mirror the complexity of nature, helping develop new materials, better batteries, more effective medicines and more. For this reason, Google has opened a Quantum AI campus in Santa Barbara, California, which will host research offices, a fabrication facility and the company’s first quantum data center.

Maps will help reduce moments of sudden braking while driving

Soon, Google Maps will use machine learning to reduce the chances of running into hard-braking moments, i.e. “situations where you slam on the brakes, caused by things like sudden traffic jams or confusion about which highway exit to take”.

When we ask for directions in Maps, the system calculates the route based on many factors, such as the number of lanes on a road or how direct the route is. With this update, it will also consider the likelihood of hard braking: Maps will identify the two fastest route options available at the time and automatically recommend the one with fewer hard-braking moments (provided the ETA, the estimated time of arrival, is roughly the same).
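The decision rule described above can be sketched in a few lines. This is my own minimal illustration of the idea, not Google’s routing algorithm: the route structure, the `eta_tolerance_min` threshold and the `hard_braking` counts are all hypothetical.

```python
# Minimal sketch of the rule: among the two fastest routes, prefer the
# one with fewer predicted hard-braking events, as long as the ETAs
# are roughly the same.
def choose_route(routes, eta_tolerance_min=2):
    """routes: list of dicts with 'eta_min' and 'hard_braking' keys."""
    a, b = sorted(routes, key=lambda r: r["eta_min"])[:2]
    if abs(a["eta_min"] - b["eta_min"]) <= eta_tolerance_min:
        return min((a, b), key=lambda r: r["hard_braking"])
    return a  # ETAs differ too much: keep the fastest route

routes = [
    {"name": "A", "eta_min": 30, "hard_braking": 12},
    {"name": "B", "eta_min": 31, "hard_braking": 3},
    {"name": "C", "eta_min": 45, "hard_braking": 0},
]
print(choose_route(routes)["name"])  # "B": similar ETA, far fewer braking events
```

Route C never wins despite having no hard braking at all, because the trade-off only applies between routes whose arrival times are comparable.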

AI in Google Maps

According to the company’s estimates, these changes have the potential to eliminate over 100 million hard-braking events per year on routes driven with Google Maps.

Even more customized memories in Google Photos

Already today, the Memories feature lets us look back at important photos from past years or highlights of the last week. Using machine learning, Google Photos will soon be able to identify less obvious patterns in photos: starting in late summer, “when we find a set of three or more photos with similarities in shape or color, we will highlight these little patterns for you in your Memories,” the company announces.

Pattern identification in Google Photos

For instance, Photos could identify a pattern of family photos taken on the same sofa over the years: something we would never have thought to search for, but that tells a meaningful story of our daily lives.
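To make the “three or more similar photos” idea concrete, here is a deliberately crude sketch of my own (not Google’s method): each photo is reduced to an average RGB color, photos with nearby colors are greedily grouped, and only groups of at least three survive as a candidate pattern.

```python
# Toy pattern finder: group photos by average-color similarity and
# keep groups of three or more, mimicking the "little patterns" idea.
# The threshold and the color representation are illustrative choices.
def color_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def find_patterns(photos, threshold=40, min_size=3):
    """photos: list of (name, (r, g, b)) tuples. Greedy grouping."""
    groups = []
    for name, color in photos:
        for group in groups:
            if color_distance(color, group[0][1]) <= threshold:
                group.append((name, color))
                break
        else:
            groups.append([(name, color)])
    return [g for g in groups if len(g) >= min_size]

photos = [
    ("sofa_2019.jpg", (120, 90, 60)),
    ("sofa_2020.jpg", (125, 95, 65)),
    ("sofa_2021.jpg", (118, 88, 58)),
    ("beach.jpg", (40, 160, 220)),
]
print([name for name, _ in find_patterns(photos)[0]])
```

The three sofa photos cluster together while the beach shot stays alone; the real feature presumably uses learned visual embeddings rather than raw average colors, but the grouping logic is the same in spirit.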

Cinematic moments in photos

Google Photos will soon debut another feature, which closely resembles the moving images seen in the films of the Harry Potter saga!

“When you’re trying to get the perfect photo, you usually take the same picture two or three (or 20) times; using neural networks, we can take two nearly identical photos and fill in the gaps by creating new intermediate frames,” giving life to vivid moving images called cinematic moments.

Cinematic moments

Producing this effect from scratch “would take professional animators hours, but with machine learning we can automatically generate these moments and bring them to your recent highlights”. The feature does not require a phone with specific technical capabilities and will be available to everyone on Android and iOS.
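To show what “filling in the gaps” between two near-identical shots means, the crude stand-in below just linearly blends two frames (here, flat lists of pixel intensities). This is my own simplification: the actual feature synthesizes intermediate frames with neural networks, which handle motion far better than a plain cross-fade.

```python
# Naive frame interpolation: blend two "frames" (lists of pixel
# intensities) at several in-between blend factors. A real frame
# interpolator estimates motion instead of averaging pixels.
def interpolate(frame_a, frame_b, steps=3):
    """Yield `steps` intermediate frames between frame_a and frame_b."""
    for i in range(1, steps + 1):
        t = i / (steps + 1)  # blend factor, strictly between 0 and 1
        yield [round((1 - t) * a + t * b) for a, b in zip(frame_a, frame_b)]

frame_a = [0, 10, 20, 30]
frame_b = [40, 50, 60, 70]
for frame in interpolate(frame_a, frame_b):
    print(frame)
```

Playing the original pair with the generated in-betweens at speed is what turns two stills into a short moving clip.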

Google Workspace, new features for a more inclusive collaboration

In Google Workspace, the suite of cloud-based productivity and collaboration tools, assisted writing will suggest more inclusive language where applicable. For example, it might recommend using “chairperson” instead of “chairman”, or “mail carrier” instead of “mailman”. It can also offer other stylistic tips, such as avoiding the passive voice and offensive language, which can speed up editing and help make writing stronger.
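At its simplest, the suggestion mechanism can be imagined as a lookup of flagged terms against a table of neutral alternatives. The sketch below is my own simplification with a hypothetical word list; the actual Workspace feature is far more context-aware than a literal table.

```python
# Toy inclusive-language suggester: flag gendered terms found in a
# small replacement table. Word list and matching are illustrative.
INCLUSIVE_ALTERNATIVES = {
    "chairman": "chairperson",
    "mailman": "mail carrier",
    "policeman": "police officer",
}

def suggest_inclusive(text):
    """Return (word, suggestion) pairs for flagged terms in `text`."""
    words = text.lower().split()
    return [(w, INCLUSIVE_ALTERNATIVES[w]) for w in words
            if w in INCLUSIVE_ALTERNATIVES]

print(suggest_inclusive("The chairman asked the mailman to wait"))
```

A production system would also need context (e.g. “Chairman” as part of a proper title should not be flagged), which is exactly why this is presented as a suggestion rather than an automatic rewrite.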

AI applications in Google Shopping

During the event, Google also announced a number of features related to Google Shopping and commerce, such as the launch of Google Shopping Graph, a new partnership with Shopify, updates to Google Lens and more.

Google’s Shopping Graph

The Shopping Graph shows the best products for a user’s particular needs, helping buyers find what they are looking for thanks to in-depth knowledge of all available products, based on information from images, videos, online reviews and even inventory in local stores.

The Shopping Graph is an AI-powered model that tracks products, sellers, brands, reviews, product information and inventory data, as well as how all these attributes relate to each other. With people “shopping on Google more than a billion times a day, the Shopping Graph makes these sessions more useful, connecting people with over 24 billion listings from millions of merchants across the Web”.
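The idea of tracking entities and how they relate can be pictured as a labeled graph. The adjacency structure below is my own toy illustration of the concept (node names and relation labels are invented), not Google’s implementation, which operates at the scale of billions of listings.

```python
# Toy "shopping graph": nodes are products, sellers, brands and
# reviews; labeled edges record how they relate to each other.
graph = {}

def add_edge(graph, src, relation, dst):
    graph.setdefault(src, []).append((relation, dst))

add_edge(graph, "running_shoe_x", "sold_by", "store_1")
add_edge(graph, "running_shoe_x", "made_by", "brand_a")
add_edge(graph, "running_shoe_x", "reviewed_in", "review_42")

def related(graph, node, relation):
    """Nodes connected to `node` via edges labeled `relation`."""
    return [dst for rel, dst in graph.get(node, []) if rel == relation]

print(related(graph, "running_shoe_x", "sold_by"))  # ["store_1"]
```

Queries like “who sells this product?” or “which reviews mention it?” become simple edge traversals, which is the practical payoff of modeling shopping data as a graph.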

Interesting (although not linked to AI) is the partnership with Shopify, which lets merchants on the platform surface their products on Google in just a few clicks, making them discoverable to high-intent consumers on Google Search, Shopping, YouTube, Google Images and other surfaces.

Google Lens, new AI developments

Google has also brought Google Lens features into Google Photos: the app will now suggest searching your photos with Lens, showing search results that can help “find that pair of shoes or wallpaper pattern that caught your attention”.

The progress of Google Lens and AR for learning

It is not the only Google Lens update, as a post on the company’s blog also explains: the application, which lets you search what you see (from the camera, from photos or even from the search bar), counts “more than 3 billion searches every month, and its use is increasingly tied to learning”.

Google Lens and machine learning

For this reason, Google has improved some tools, such as the Translate filter in Lens, which lets you “copy, listen to or search the translated text, helping students access educational content from the Web in over 100 languages”.

Augmented Reality in Google

Another powerful tool for visual learning is augmented reality (AR): one practical application is the “new AR athletes in Search”, which lets you “see the signature moves of some of your favorite athletes in augmented reality, such as Simone Biles’ famous balance beam routine”.

A tool for immediate dermatology support

The last two practical applications of artificial intelligence are decidedly extraordinary, and both concern healthcare.

Every year “we see billions of Google searches related to skin, nail and hair issues, but it can be difficult to describe what you see on your skin with words alone,” Robson says. Support comes from a “CE-marked, AI-powered dermatology assist tool, a web-based application that we aim to make available for early testing in the EU by the end of the year”, which will make it easier to understand potential skin conditions.

AI-powered dermatological assistance

The tool is very simple to use: you just take three pictures of your skin, hair or nails from different angles with your phone camera. The system then asks questions about your skin type, how long you have had the issue and any other symptoms, which help the AI narrow down the possibilities.

The AI model analyzes all this information and draws on its knowledge of 288 medical conditions to provide a list of possible matching conditions, to explore further with targeted searches. Naturally, Google specifies that the application is not designed to replace a medical diagnosis, but rather to offer a good starting point.
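The narrowing-down step can be pictured as a simple scoring exercise. Everything in the sketch below is hypothetical (the condition names, symptom sets and scoring are mine; only the fact that the real tool ranks among 288 conditions comes from Google): each condition lists the symptoms it is consistent with, and conditions explaining more of the user’s answers rank higher.

```python
# Toy condition ranker: score each (invented) condition by how many
# of the reported symptoms it is consistent with, and drop those
# that explain none. Google's actual model analyzes photos too.
CONDITIONS = {
    "condition_a": {"itching", "redness"},
    "condition_b": {"redness", "scaling"},
    "condition_c": {"swelling"},
}

def rank_conditions(reported_symptoms):
    """Rank conditions by how many reported symptoms they explain."""
    scores = {name: len(symptoms & reported_symptoms)
              for name, symptoms in CONDITIONS.items()}
    return sorted((n for n, s in scores.items() if s > 0),
                  key=lambda n: -scores[n])

print(rank_conditions({"redness", "scaling"}))
```

The output is a ranked list of candidates rather than a single answer, mirroring the article’s point that the tool offers a starting point for further research, not a diagnosis.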

A screening support for tuberculosis

The other medical application of AI is equally specific: it will support the battle against tuberculosis, which remains “one of the leading causes of death worldwide, infecting 10 million people a year and having a disproportionate impact on people in low- and middle-income countries”.

One of the problems is that “it is very difficult to diagnose early, because its symptoms are similar to those of other respiratory diseases”; chest X-rays help with diagnosis, but experts are not always available to read the results. That is why the World Health Organization (WHO) recently recommended using technology to help with tuberculosis screening and triage: Google researchers are exploring “how to use artificial intelligence to identify potential tuberculosis patients for follow-up testing, hoping to catch the disease early and work toward eradicating it”.


Featured image from The Keyword, Google
