Dave Lee (BBC) has a nice round-up of the major announcements that came out of Google's annual developer conference, held this week.
If you follow tech news often, you’ll be more than aware of the promise offered by artificial intelligence (AI) and machine learning. Often, though, it feels like a far-away goal. It will get there, but right now it’s primitive.
At Google’s annual developer conference, held this week near its Mountain View headquarters, the company showed off some of the best practical applications of AI and machine learning I’ve seen yet. They may not make your jaw drop - or, thankfully, put you out of a job - but they are incremental changes that show how Google is putting its immense computing power to work.
We weren’t expecting any major new hardware launches this year; instead it was time for Google to build on what we saw here last year with regards to personal assistants, AI, and cloud computing.
- Google Lens - the app uses image recognition to identify objects appearing in your camera lens in real time. It means you can point a smartphone at a flower and be told exactly what it is.
- A Standalone Daydream (Virtual Reality) Headset
- Very clever photo tools
Using facial recognition, Google Photos will now spot, say, your mate Bob and automatically suggest you send the picture, or a group of them, straight to Bob. The idea is to remove a little of the friction with photo-sharing.
Shared Libraries takes this a step further, allowing you to share, for example, any picture of your kids automatically with your partner. The software will recognise the faces and create the album for you. If that sends some privacy-related shivers down your spine, Google assured everyone there would be no unexpected sharing of pictures you want to keep secret. We’ll see.
Using machine learning and AI (noticing a pattern here?) the app will also remove unwanted objects from pictures, for when something ugly spoils a good shot.
- VPS - visual positioning system
Most of us are familiar with GPS - global positioning system - but that technology can only get you so far. Though terrific for travelling around large areas outside, GPS has real limitations when you need something more accurate.
Google thinks VPS - visual positioning system - is how to fill that gap. Using Tango, a 3D visualisation technology, VPS looks for recognisable objects around you to work out where you are, with an accuracy of a few centimetres.
Google’s head of virtual reality, Clay Bavor, said one application would be using VPS to find the exact location of a product in a large shop.
"GPS can get you to the door,” said Mr Bavor on stage, "and then VPS can get you to the exact item that you’re looking for”.
- A better Google Home (and Assistant on the iPhone)
As Dave points out, these are incremental changes rather than leaps forward, and there wasn't all that much here that was jaw-dropping. However, some of these developments could have significant implications for wildlife conservation applications - if accurate, Google Lens stands out in particular for obvious reasons. What are your thoughts - could you see any of these developments having meaningful application in your work?
26 May 2017 1:06am
You can get a sense of Google Lens today with the Thing Translator Google AI Experiment. I recommend checking out the other experiments too. Fun and sometimes bordering on magical.