Google reveals all-seeing AI camera tool that REMEMBERS where you left your glasses and other belongings

LOSING your glasses or car keys could soon be a problem of the past after Google revealed a new all-seeing and hearing AI tool.

The tech giant has announced a raft of major upgrades to its AI efforts as it fights to keep up with ChatGPT creator OpenAI.

Google is in a race to keep up with OpenAI (Getty)
Gemini will identify parts of an object just by looking at it (Google)
It can even remember where you’ve left objects like your glasses or keys (Google)

OpenAI jumped ahead of Google’s annual developer event to announce its own big boosts on Monday, including an enhanced chatbot that can see photos and even speak to you like a super-charged Siri.

Now Google is hoping to win over users with a new Project Astra concept that sees everything around you using the camera on your smartphone.

It allows people to walk around and ask anything about their surroundings in real-time.

For example, the tool can identify the names of parts on a piece of equipment, such as a speaker you may need to repair.

But even more shocking is the tech’s ability to remember what it’s seen as you walk around a room.

In a demo video, Google showed how a person asked “Where did I leave my glasses?” and the tool instantly recalled seeing them.

“To be truly useful, an agent needs to understand and respond to the complex and dynamic world just like people do — and take in and remember what it sees and hears to understand context and take action,” explained Demis Hassabis, boss of Google’s AI division DeepMind.

“It also needs to be proactive, teachable and personal, so users can talk to it naturally and without lag or delay.”

The tech giant says it has also improved how voice assistants sound so they’re more natural.

Parts of Project Astra will appear in Google products like the Gemini app later this year.

Hassabis added: “These agents were built on our Gemini model and other task-specific models, and were designed to process information faster by continuously encoding video frames, combining the video and speech input into a timeline of events, and caching this information for efficient recall.”

The announcement comes a day after OpenAI surprised the world again with its new GPT-4o tech that can also see things and react.

Bosses showed how it could solve maths problems just by holding the camera in front of a piece of paper, as well as figuring out a person’s mood simply by looking at their face.

THE FIGHT FOR AI

Analysis by Jamie Harris, Senior Technology and Science Reporter

The AI battleground is seriously heating up as all the big guns in tech fight it out for a slice of the action.

Google’s annual I/O event had been in the calendar for months, and OpenAI brutally announced a surprise “spring update” for the day before.

One expert described GPT-4o as making other voice assistants look “utterly primitive”.

And let’s not forget Apple and Amazon.

Apple is reportedly “nearing” a deal with OpenAI to improve Siri on iPhone.

And Amazon is racing to upgrade Alexa with some AI too.

All this is just in the voice assistant arena.

AI is becoming unavoidable.

Everything from internet searches – as Google announced today – to the photos you take on your phone has some form of AI influence these days.