Starting last year, as I mentioned at writeup publication time, EDN asked me to provide yearly coverage of Google’s (or is that Alphabet’s? whatevah) I/O developer conference, as I’d already long been doing for Apple’s WWDC developer-tailored equivalent, on top of my ongoing throughout-the-year coverage of notable Google product announcements:
- Smartphones and smartwatches and their Android OS foundations
- Various form factor computing platforms and their Chrome OS origins
- New and evolved processor architectures for all of these plus “cloud” servers
- etc…
And, as I also covered extensively a year ago, AI ended up being the predominant focus of Google I/O’s 2023 edition. Here’s part of the upfront summary of my coverage of last year’s premier event (which in part explains the rationale for the yearly coverage going forward):
Deep learning and other AI operations…unsurprisingly were a regularly repeated topic at Wednesday morning’s keynote and, more generally, throughout the multi-day event. Google has long internally developed various AI technologies and products based on them—the company invented the transformer (the “T” in “GPT”) deep learning model technique now commonly used in natural language processing, for example—but productizing those research projects gained further “code red” urgency when Microsoft, in investment partnership with OpenAI, added AI-based enhancements to its Bing search service, which competes with Google’s core business. AI promises, as I’ve written before, to revolutionize how applications and the functions they’re based on are developed, implemented and updated. So, Google’s ongoing work in this area should be of interest even if your company isn’t one of Google’s partners or customers.
And unsurprisingly, given Google’s oft-stated (at the time) substantial and longstanding planned investment in various AI technologies, along with products and services based on them, AI was again the predominant focus at this year’s event, which took place earlier today as I write these words, on Tuesday, May 14:
[Embedded video: the Google I/O 2024 keynote]
But I’m getting ahead of myself…
The Pixel 8a
Look back at Google’s Pixel smartphone family history and you’ll see a fairly consistent cadence:
- One or several new premium model(s) launched in the fall of a given year, followed by (beginning with the Pixel 3 generation, to be precise)
- one (or, with the Pixel 4, two) mainstream “a” variant(s) a few calendar quarters later
The “a” variants are generally quite similar to their high-end precursors, albeit with feature-set subtractions and other tweaks reflective of their lower price points (and of Google’s ongoing desire to still turn a profit, therefore the need for lower associated bill-of-materials costs). And for the last several years, they’ve been unveiled at Google I/O, beginning with the Pixel 6a, the mainstream variant of the initial Pixel 6 generation based on Google-developed SoCs, which launched at the 2022 event edition. The company had canceled Google I/O in 2020 due to the looming pandemic, and the 2021 edition was 100% virtual and also (bad-pun-intended) plagued by ongoing supply chain issues, so mebbe they’d originally planned this unveiling cadence earlier? Dunno.
The new Pixel 8a continues this trend, at least from feature-set foundation and optimization standpoints (thicker display bezels, less fancy-pants rear camera subsystem, etc.). And by the way, keep in proper perspective reviewers who say things like “why would I buy a Pixel 8a when I can get a Pixel 8 for around the same price?” They’re not only comparing apples to oranges; they’re also comparing old versus new fruit (this is not an allusion to Apple; that’s in the next paragraph). The Pixel 8 and 8 Pro launched seven months ago, and details on the Pixel 9 family successors are already beginning to leak. What you’re seeing are retailers promo-pricing Pixel 8s to clear out inventory, making room for the Pixel 9 successors soon to come, and what these reviewers are doing is comparing those discounted units against brand-new, list-price Pixel 8as. In a few months, order will once again be restored to the universe. That all said, to be clear, if you need a new phone now, a promo-priced Pixel 8 is a compelling option.
But here’s the thing…this year, the Pixel 8a was unveiled a week prior to Google I/O, and even more notably, right on top of Apple’s most recent “Let Loose” product launch party. Why? I haven’t yet seen a straight answer from Google, so here are some guesses:
- It was an in-general attempt by Google to draw attention away from (or at least mute the enthusiasm for) Apple and its comparatively expensive (albeit non-phone) widgets
- Specifically, someone at Google had gotten a (mistaken) tip that Apple might roll out one (or a few) iPhone(s) at the event and decided to proactively queue up a counterpunch
- Google had so much else to announce at I/O this year that they, not wanting the Pixel 8a to get lost in all the noise, decided to unveil it ahead of time instead.
- They saw all the Pixel 8a leaks and figured “oh, what the heck, let’s just let ‘er rip”.
The Pixel Tablet (redux)
But that wasn’t the only thing that Google announced last week on top of Apple’s news. And in this particular case the operative term is relaunched, and the presumed reasoning is, if anything, even more baffling. Go back to my year-ago coverage, and you’ll see that Google launched the Tensor G2-based Pixel Tablet at $499 (128GB; 256GB for $100 more), complete with a stand that transforms it into an Amazon Echo Show-competing (and Nest Hub-succeeding) smart display:
Well, here’s the thing…Google relaunched the very same thing last week at a lower price point ($399), but absent the stand in this particular variant (the stand-inclusive product option is still available at $499). It also doesn’t seem that you can subsequently buy the stand, more accurately described as a dock (since it also acts as a charger and embeds speakers that reportedly notably boost sound quality), separately. That all said, the stand-inclusive Pixel Tablet is coincidentally (or not) on sale at Woot! for $379.99 as I type these words, so…
And what explains this relaunch? Well:
- Apple also unveiled tablets that same day last week, at much higher prices, so there’s the (more direct in this case, versus the Pixel 8a) competitive one-upmanship angle, and
- Maybe Google hopes there’s lasting truth to the reports that Android tablet shipments (goosed by lucrative trade-in discounts) are increasing to iPads’ detriment?
Please share your thoughts on Google’s last-week pre- and re-announcements in the comments.
OpenAI
Turnabout is fair play, it seems. Last Friday, rumors began circulating that OpenAI, the developer of the best-known GPT (generative pre-trained transformer) LLM (large language model), among others, was going to announce something on Monday, one day ahead of Google I/O. And given the supposed announcement’s chronological proximity to Google I/O, those rumors further hypothesized that perhaps OpenAI was specifically going to announce its own GPT-powered search engine as an alternative to Google’s famous (and lucrative) offering. OpenAI denied the latter rumor twist in advance, at least for the moment, but what did get announced was still (preemptively, it turned out) Google-competitive, and with an interesting twist of its own.
To explain, I’ll reiterate another excerpt from my year-ago Google I/O 2023 coverage (a toy code sketch of this four-step split follows the list):
The way I look at AI is by splitting up the entire process into four main steps:
- Input
- Analysis and identification
- Appropriate-response discernment, and
- Output
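To make that split concrete, here’s a toy Python sketch of the four steps; every function in it is a deliberately trivial placeholder, not a real model or library call.

```python
# A toy, hypothetical mapping of the four steps onto code; each function
# below is a trivial placeholder rather than a real model or library call.

def ingest(raw: str) -> str:
    # Step 1, input: capture and normalize the incoming data
    return raw.strip().lower()

def analyze(text: str) -> dict:
    # Step 2, analysis and identification: figure out what was said
    return {"intent": "greeting" if "hello" in text else "unknown"}

def discern_response(meaning: dict) -> str:
    # Step 3, appropriate-response discernment: decide what to say back
    return "Hi there!" if meaning["intent"] == "greeting" else "Sorry, come again?"

def render_output(reply: str) -> str:
    # Step 4, output: synthesize the response (here, just a text pass-through)
    return reply

print(render_output(discern_response(analyze(ingest("Hello, assistant")))))
```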
Now a quote from the LLM-focused section of my 2023 year-end retrospective writeup:
LLMs’ speedy widespread acceptance, both as a generative AI input (and sometimes also output) mechanism and more generally as an AI-and-other interface scheme, isn’t a surprise…their popularity was a matter of when, not if. Natural language interaction is at the longstanding core of how we communicate with each other, after all, and would therefore inherently be a preferable way to interact with computers and other systems (which Star Trek futuristically showcased more than a half-century ago). To wit, nearly a decade ago I was already pointing out that I was finding myself increasingly (and predominantly, in fact) talking to computers, phones, tablets, watches and other “smart” widgets in lieu of traditional tapping on screens and keyboards, and the like. That the intelligence that interprets and responds to my verbally uttered questions and comments is now deep learning-trained and subsequently inferred, versus traditionally algorithmic in nature, is, simplistically speaking, just an (extremely effective in its end result, mind you) implementation nuance.
Here’s the thing: OpenAI’s GPT is inherently a text-trained, therefore text-inferring, deep learning model (steps 2 and 3 in my earlier quote), as reflected in the name of the “ChatGPT” AI agent service based on it (later OpenAI GPT versions also support still image data). To speak to an LLM (step 1) as I described in the previous paragraph, for example, you need to front-end it with another OpenAI model and associated service called Whisper. And for generative AI-based video from text (step 4), there’s another OpenAI model and service, back-end this time, called Sora.
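Here’s a minimal sketch of that front-end division of labor, assuming the openai Python package’s v1.x API and an OPENAI_API_KEY environment variable; the audio filename is a hypothetical placeholder, and Sora (which has no public API as I write this) is omitted.

```python
# Minimal sketch: Whisper front-ends a text-only GPT model for speech input.
# Assumes the openai Python package (v1.x) plus an OPENAI_API_KEY environment
# variable; "question.mp3" is a hypothetical placeholder file.
from openai import OpenAI

client = OpenAI()

# Step 1 (input): Whisper transcribes spoken audio to text.
with open("question.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# Steps 2 and 3 (analysis and response discernment): the text-trained LLM.
completion = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{"role": "user", "content": transcript.text}],
)

# Step 4 (output): plain text here; speech or video output would require
# additional back-end models.
print(completion.choices[0].message.content)
```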
Now for that “interesting twist” from OpenAI that I mentioned at the beginning of this section. In late April, a mysterious and powerful chatbot named “gpt2-chatbot” appeared on an LLM comparative evaluation forum, only to disappear shortly thereafter…and reappear again a week after that. Its name led some to deduce that it was a research project from OpenAI (a theory further fueled by a cryptic social media post from CEO Sam Altman), perhaps a potential successor to the latest-generation GPT-4 Turbo, which had intentionally-or-not leaked into the public domain.
Turns out, we learned on Monday, it was a test-drive preview of the now-public GPT-4o (“o” for “omni”). And not only does GPT-4o outperform its OpenAI precursors as well as competitors, based on Chatbot Arena leaderboard results, it’s also increasingly multimodal, meaning that it’s been trained on (and therefore comprehends) additional input data types, as well as generating additional output data types: in this case, not only text and still images but also audio and vision (specifically, video). The results are very intriguing. For completeness, I should note that OpenAI also announced chatbot agent application variants for both macOS and Windows on Monday, following up on the already-available Android and iOS/iPadOS versions.
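For a sense of what “multimodal” means at the API level, here’s a minimal sketch of a mixed text-and-image GPT-4o request via the same openai Python package; the image URL is a hypothetical placeholder, and the audio and video modes were rolling out separately.

```python
# Minimal sketch of a mixed text-plus-image GPT-4o request, assuming the
# openai Python package (v1.x); the image URL is a hypothetical placeholder.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What's notable in this photo?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```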
Google Gemini
All of which leads us (finally) to today’s news, complete with the aforementioned 121 claimed utterances of “AI” (no, I don’t know how many times they said “Gemini”):
[Embedded TikTok video from The Verge: “Pretty sure Google is focusing on AI at this year’s I/O.”]
Gemini is Google’s latest LLM, previewed a year ago, formally unveiled in late 2023 and notably enhanced this time around. Like OpenAI with GPT, Google’s deep learning efforts started out text-only, with models such as LaMDA and PaLM; the more recent Gemini has conversely been multimodal from the get-go. And pretty much everything Google talked about during today’s keynote (and will cover more comprehensively all week) is Gemini in origin, whether as-is or:
- Memory footprint and computational “muscle” fine-tuned for resource-constrained embedded systems, smartphones and such (Gemini Nano, for example), and/or
- Training dataset-tailored for application-specific use cases, including the Gemma open model variants (a minimal API-call sketch follows this list)
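For the developer-facing angle, here’s a minimal sketch of calling a Gemini model through Google’s google-generativeai Python package; the model name, prompt, and API key are illustrative placeholders.

```python
# Minimal sketch, assuming the google-generativeai Python package and an
# AI Studio API key; model name and prompt are illustrative only.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

# generate_content() accepts multimodal input (text, image, audio and video
# parts) in a single request, reflecting Gemini's multimodal-from-the-start
# design; a plain text prompt is the simplest case.
response = model.generate_content("Summarize today's I/O keynote in one sentence.")
print(response.text)
```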
In the interest of wordcount (pushing 2,000 as I type this), I’m not going to go through each of the Gemini-based services and other technologies and products announced today (and teased ahead of time, in Project Astra’s case) in detail; those sufficiently motivated can watch the earlier-embedded video (upfront warning: 2 hours) and/or peruse archived liveblogs and summaries (linked to more detailed pieces) for all the details. As usual, the demos were compelling, although it wasn’t entirely clear in some cases whether they were live or (as Google caught grief for a few months ago) prerecorded and edited. More generally, the degree of success in translating scripted and otherwise controlled-environment demo results into real-life robustness (absent hallucinations, please) is yet to be determined. Here are a few other tech tidbits:
- Google predictably (they do this every year) unveiled its sixth-generation TPU (Tensor Processing Unit) architecture, code-named Trillium, with a claimed 4.7x boost in compute performance per chip versus today’s fifth-generation precursor. Design enhancements to achieve this result include expanded (count? function? both? not clear) matrix multiply units, faster clock speeds, doubled memory bandwidth and the third-generation SparseCore, a “specialized accelerator for processing ultra-large embeddings common in advanced ranking and recommendation workloads,” with claimed benefits in both training throughput and subsequent inference latency.
- The company snuck a glimpse of some AR glasses (lab experiment? future-product prototype? not clear) into a demo. Google Glass 2, Revenge of the Glassholes, anyone?
- And I couldn’t help but notice that the company ran two full-page (and identical-content, to boot) ads for YouTube in today’s Wall Street Journal, even though the service was barely mentioned in the keynote itself. Printing error? Google I/O-unrelated, versus-TikTok competitive advertising? Again, not clear.
And with that, my Google I/O coverage is fini for another year. Over to all of you for your thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- The 2023 Google I/O: It’s all about AI, don’t cha know
- Apple’s Spring 2024: In-person announcements no more?
- Playin’ with Google’s Pixel 7
- Fall tech events twain: Apple’s (potential) loss is Amazon and Google’s gain
- Google Pixelbook: Reviewing the Android-on-Chrome OS experience
- Amazon, Microsoft and Google: Services, software and system launch events for the fall (but not for the frugal)