Google I/O came and went, and for the first time, the event was less about upcoming hardware releases and more about the company's long-term software goals. Google has some lofty ambitions for its AI services, and that was evident in how they were discussed throughout the entire I/O keynote. Sundar Pichai, CEO of Google and Alphabet, led the event by recounting the company's early work with AI and how the emerging tech was used to simplify and improve Gmail. It all started with "Smart Reply," which offered pre-generated responses based on an analysis of an email's content. From there, Google integrated "Smart Compose," which would aid users in writing email replies or constructing them from scratch. Smart Compose used generative AI to anticipate the next word, or in some cases full sentences, based on the flow of conversation.
Yesterday during I/O, Google announced the next phase in its AI roadmap, called "Help me write." Help Me Write was shown to be incredibly useful in assisting users with crafting the right response in scenarios that could be a bit challenging. On stage, Pichai demoed the AI assistant rewriting an email to sound sterner while asking for a full refund instead of a voucher for a canceled flight. The software went through previous emails, gathered flight info, analyzed prior conversations, and populated a response. From there, users can choose to send the generated response, or use a submenu to refine the text even further. It's really quite amazing, all things considered; as someone who tends to be more passive, I could see tremendous benefit in that type of AI.
In addition to improvements in Gmail, Google will also roll out AI services across virtually all its products and services. Bard, the recently revealed AI assistant, has officially left its closed testing phase and is available for anyone to sign up for and use. Maps will get a new mode called Immersive View, which gives users a virtual tour of their route before they embark, while also providing details like air quality, traffic, and weather to allow for better trip planning. Immersive View will begin rolling out to devices this summer, and will expand to over 15 cities, including New York, San Francisco, London, and Tokyo, by the end of the year. Google Photos is also getting something called Magic Editor, which will allow users to reposition people or objects that were cut off at the edge of the frame, touch up the appearance of the sky, and automatically adjust the lighting in a photo according to any edits made.
These are all worthy additions to Google's vast suite of services, but they all come with caveats. For starters, AI, although powerful and convenient, can still be wildly inaccurate in some cases. There are also growing concerns over the ethics of AI usage and whether companies fully understand its capabilities. A growing contingent of tech and AI experts wants to limit the technology's development until companies can implement better security and safeguards. There's also growing concern about how AI could impact the job market, seeing as automation has the potential to eliminate thousands of jobs and job titles as it gets more sophisticated.
As Google transitions to an "AI-first" company, there will be plenty of room for both improvement and transparency. One way it is looking to directly address this is by implementing tools to identify content that has been generated by AI. Photos will be the easiest to identify, while things like AI-generated voice and video will be significantly more difficult. Either way, Google has committed to doing the work, so we'll just hope it can keep up. In the meantime, look for many of these new AI services from Google to be available starting today, while others will roll out over the course of the year.