
Happy Apple Worldwide Developers Conference (WWDC) week!
Unlike many WWDCs that preceded it (but not all, with 2021 an example), this one was absent mention of any new hardware or underlying silicon, in contrast to, say, last year’s preview of the Vision Pro headset. Then again, the 2024 WWDC was also thankfully absent the wholesale culling of legacy but seemingly still perfectly capable hardware that tainted the 2022 WWDC (at least for me, with multiple computers in my personal stable getting axed in one fell swoop). But the new operating system versions unveiled at Monday’s keynote weren’t completely absent any associated axe-swinging, resulting both in system demise and maiming…
Hardware triage
From a computer standpoint, upcoming MacOS 15 “Sequoia” only fully lopped off two computers from the “supported” list (and then two years from now, not immediately, assuming Apple continues its longstanding practice of fully supporting not only the latest but also the prior two major yearly-cadence MacOS releases). But the rationale for doing so, along with that for one model that remains on the supported list, was inconsistent at best. Ahead of time, the rumor was that anything absent a T2 security chip was doomed. But the two computers dropped from MacOS “Sequoia”, the 2018 and closely related 2019 versions of the MacBook Air, are T2-inclusive systems. Conversely, the 2019 models of the iMac (both 21.5” and 27”) remain on the supported list, although neither is T2 security chip-enhanced. <shrugs shoulders>
That said, no remaining Intel-based Macs support any of the AI enhancements that Apple announced this week (hold that thought). But that said, I was pleasantly surprised that any legacy x86 Macs remained relevant. Recall, for comparison’s sake, that Apple announced its move from PowerPC to Intel at the early-June 2005 WWDC, with “universal” x86 support (alongside ongoing PowerPC support) first appearing in Mac OS X 10.4.4 at the beginning of 2006. Mac OS X 10.6, released in August 2009, culled PowerPC processor support from the code, and “Rosetta” emulation support for legacy PowerPC applications ended with July 2011’s Mac OS X 10.7 (perhaps obviously, Apple temporarily elongated its traditional yearly new-O/S-version stride during the transition). Measuring end of support by when hardware was dropped, July 2011 was also when Mac OS X 10.5 fell off the support list, six years (and a month) after the public transition began. How will (or won’t) that history inform the current transition, which started in June 2022? Who knows. I’m just glad my 2018 Mac mini, 2019 16” MacBook Pro and 2020 13” MacBook Pro will live on (officially, at least) for another year.
iPhones come through even more unscathed, at least at first glance; anything currently capable of running iOS 17 will also be supported by upcoming iOS 18 to some degree. But the fully supported list is conversely far more restrictive than with MacOS 15, which was simpatico to anything Apple Silicon-based. With smartphones, only “Pro” variants of the latest-generation iPhone 15 get the AI-enhanced nod. Accurately reflective of software requirements reality, or Apple’s excuse to prod existing customers to upgrade to this year’s models? I’ll let you decide.
With iPads, the same generalities apply, with three exceptions; anything capable of running iPadOS 17 can also handle enroute iPadOS 18 to at least a baseline level, except for the now-axed 2018 sixth-generation iPad and 2017 second-generation 10.5” and 12.9” iPad Pro. This situation, as with the earlier-mentioned 2019 iMacs and 2018-2019 MacBook Airs, makes no sense at initial glance; after all, the unsupported sixth-generation and still-supported seventh-generation iPads are based on the exact same A10 Fusion SoC, and the axed iPad Pro models are based on the more advanced A10X! That said, the two base iPad generations are different in at least one building-block respect; the former has only 2 GBytes of RAM, with the latter bumping it up 50% to 3 GBytes. But that said, second-generation iPad Pros had 4 GBytes, so… <shrugs shoulders again>. And, by the way, only Apple Silicon-based iPads get the enhanced-AI goods.
The obsolescence situation’s not quite as sanguine with Apple’s smartwatch product lines, alas. Upcoming watchOS 11 drops the Apple Watch Series 4, Series 5 and first-generation SE from the supported device list. On the other hand, tvOS 18-to-come is compatible with all Apple TV HD and 4K models stretching all the way back to 2015, so it’s got that going for it, which is nice (no fancy AI features here, though, nor with the Apple Watch or Vision Pro headset, so…).
Ongoing Sherlocking
Before diving into this section’s details, an introductory definition: “Sherlocking” refers to Sherlock, a file and web search tool originally introduced by Apple in 1998 with Mac OS 8.5. However, as Wikipedia explains:
Advocates of Watson made by Karelia Software, LLC claim that Apple copied their product without permission, compensation, or attribution in producing Sherlock 3.
Moreover:
The phenomenon of Apple releasing a feature that supplants or obviates third-party software is so well known that being Sherlocked has become an accepted term used within the Mac and iOS developer community.
Apple unfortunately has a longstanding tradition of integrating capabilities that previously were supplied by third-party “partner” developers. Here’s 2023’s WWDC list compiled by TechCrunch, for example. And as for this year, here are at least a couple of the 2024 WWDC’s Sherlocked utilities I’ve noted in particular so far:
- Password management: notably spanning not only iOS, iPadOS and MacOS but also Windows, this one seemingly has 1Password and its third-party peers in Apple’s sights.
- An iPad-native Calculator: believe it or not, it took 14 years for Apple to add this, even though one’s been in iOS for both the iPhone and iPod touch since the beginning. It admittedly has some slick Pencil-enabled features beyond those of its iOS sibling, although RPN (reverse Polish notation) support seemingly remains absent. To wit, this HP15C-trained-in-college engineer will be stubbornly sticking with RPN-supportive PCalc.
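For readers who’ve never used an HP-style calculator, the appeal of RPN is easy to show in code. Here’s a minimal, purely illustrative stack-based RPN evaluator (my own sketch, not Apple or PCalc code): operands are pushed onto a stack, and each operator pops its two arguments and pushes the result, which is why RPN never needs parentheses.

```python
# Minimal illustrative RPN (reverse Polish notation) evaluator.
# Not Apple or PCalc code -- just a sketch of how RPN entry works.
def eval_rpn(tokens):
    stack = []
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    for tok in tokens:
        if tok in ops:
            b = stack.pop()  # second operand entered
            a = stack.pop()  # first operand entered
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

# "(3 + 4) * 2" in algebraic notation becomes "3 4 + 2 *" in RPN
print(eval_rpn("3 4 + 2 *".split()))  # 14.0
```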
That said, at least some of the rest-of-industry-precedent features that Apple belatedly adopts are welcomed. They include, this year, support for RCS messaging between Apple and other platforms, versus longstanding down-throttling to defeatured legacy SMS…although Apple kept plenty of other features iMessage-only. Speaking of text messages, they’re now capable of being sent via a satellite intermediary, building on last year’s Emergency SOS capabilities. And also added, this time building on earlier Apple-sourced Continuity innovations, is iPhone Mirroring support.
Earbud advancements
Touchless gesture interfaces for electronics devices have had mixed-at-best to-date success. Facial recognition-based “unlock” schemes using a front camera (conventional 2D or, better yet, a depth-sensing setup not fooled by photographs) have achieved widespread adoption. On the other hand, the countless to-date attempts to control a computer, smartphone, or tablet by a wave of the…err…hand…just haven’t taken off much, a reflection of at least four factors, I ‘spect:
- The limited applicability of the capability beyond for, say, flipping eBook pages
- The sizeable suite of gesture variants that must be retained to expand that applicability
- The imperfect decoding of, and response to, any of those gesture variants, and
- The battery-draining and privacy-concerning need to keep the camera “on” all the time
That said, the head-movement decoding support that Apple’s adding to its AirPods Pro earbuds is admittedly intriguing. This is the case no matter that I’m disconcerted that were I to use it in, for example, rejecting an incoming phone call by shaking my head side-to-side or, alternatively, accepting the call by nodding up and down, others might interpret me as afflicted by Tourette’s and/or reacting to the voices in my head. That said, even more intriguing to me is the earbuds’ incoming Voice Isolation feature. As my recent piece pointed out, whereas external-oriented microphones can do a credible job of suppressing the ambient sounds that you hear, they’ve historically done little to suppress the noise that gets transmitted to someone else you’re speaking with. Apple claims to be tackling this latter issue with its upcoming update, and I’m cautiously optimistic that reality will at least approximate the hype (even though, as mine are first-generation models, I won’t be able to personally benefit from the new firmware).
Spatial computing evolution
Apple previewed the Vision Pro headset at WWDC a year ago and, in spite of reportedly flagging sales and existing-owner enthusiasm, the company is plugging on with notable (albeit, IMHO, not game-changing, particularly given the still-high price point) updates from it and partners. For one thing, sales availability is expanding to eight more countries this month (a reflection of higher production volume, accumulating unsold inventory, or both?). AI-rendered “spatial” versions of existing “2D” images in a user’s Photos library are also planned. As are, for example, an expanded repertoire of gesture interface options (to my earlier comments) and an ultrawide Mac virtual display mode equivalent in resolution to two side-by-side 4K monitors.
Other additions and enhancements are reflective of the Vision Pro’s dominant current use case as a video playback device. Spatial videos captured either by the headset itself or from an iPhone 15 Pro (there’s that exclusivity play again) will be shareable with others via the Vimeo service, for example. And for content creation by prosumers and professionals, both new hardware and software are enroute. An update to Final Cut Pro, for example, will “enable creators to edit spatial videos on their Mac and add immersive titles and effects to their projects”, to quote from the press release. Canon has unveiled a spatial still and video image capture lens, due out later this year, for its EOS R7 digital camera. And BlackMagic Design not only plans on fully supporting spatial video within its DaVinci Resolve application workflow but is developing a ground-up spatial video camera, the URSA Cine Immersive, containing dual 8K image sensors.
AI, privacy and ChatGPT
At this point, roughly 70 minutes into the keynote and with only about a half hour left, CEO Tim Cook returned to the stage to kick off the bigger-picture AI segment of the presentation (which, I’m admittedly a bit sad to say, was not introduced with the words “one more thing”). That said, if you’ve been paying close attention to my writeup so far, you’ve seen evidence of plenty of AI-enhanced breadcrumbs already being scattered about; the Apple Pencil-enabled and Notes-integrated ability to sketch out an equation or other arithmetic operation and have the iPad’s Calculator app discern and deliver what you’re asking for, for example, or spatial-converted conventional photographs to view in the Vision Pro and share with others. And at least four other aspects of this keynote segment are likely already unsurprising to many of you:
- That much of what was announced had already been leaked in advance, albeit admittedly to a shockingly extensive and accurate degree this time around
- That Apple chose WWDC to “go big” with AI and, specifically, generative AI, given all the attention (not to mention investor and shareholder love) that partners (more on this in a bit)/competitors such as Google, Meta, Microsoft and OpenAI have garnered of late
- That Apple’s generative AI implementation is multimodal, that is, capable of being trained on, accepting inputs from, and inferring outputs to various (and often multiple simultaneous) data types: text, voice and other audio, still images, video clips, etc., and
- That Apple had the moxie to brand an industry-standard term, AI, as “Apple Intelligence”
I’m not going to go through every showcased example of what Apple sneak-peeked (custom emoji…sigh…for example); you can peruse the liveblogs and videos for that, if you wish. Instead, I’ll share some big-picture thoughts. First off, much of what we saw at the keynote were under-development concepts, not anything that’ll be ready for prime time soon (if ever). To wit, although the AI-enhanced version of long-suffering Siri is eagerly awaited (in-advance warning: the full video is NSFW, or more generally not for those with delicate constitutions!), the just-released initial beta of iOS 18 doesn’t include it.
Secondly, and also unsurprisingly, preserving user privacy was a repeatedly uttered mantra throughout this segment (and the keynote more generally). To what degree Apple will be able to deliver on its promises here is yet to be determined, although longstanding and ongoing software and services shortcomings admittedly have me somewhat skeptical (albeit more optimistic than with less walled-garden alternatives, but ironically counterbalanced by worldwide regulatory pressure to tear down those very same Apple walls).
Speaking of privacy, I’ll differentiate by two types of AI-enhanced services that I deduced from what I heard this week:
- Personal and contextual, such as using AI to find an open time in your and your colleagues’ or friends’ calendars for a meeting or other event, versus
- More broad, general information retrieval, such as the answer to the question “What is the airspeed velocity of an unladen swallow?”, along with synthetic data generation
In the former case, from what I’ve gathered, “Apple Intelligence” will rely exclusively on Apple-developed deep learning models, such as a “smaller” roughly 3-billion-parameter language model intended for on-device use and a much larger one resident on Apple-owned and -managed servers (which are reportedly based on M2 Ultra SoCs). Responses to inference queries will ideally run fully on-device but, if needed due to complexity or other factors, will automatically be handed off to Apple’s Private Cloud Compute server farm for processing. All of this, of course, raises the question of why “Apple Intelligence” isn’t accessible to a broader range of Apple hardware, more cloud-reliant with less-capable legacy devices. But we already know the answer to that question, don’t we?
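The on-device-first handoff Apple describes can be sketched in a few lines. To be clear, every name, threshold, and function below is hypothetical, my own invention for illustration; Apple hasn’t published how its capability check actually works. The point is simply the architecture: try the small local model first, and escalate to Private Cloud Compute only when the request exceeds it.

```python
# Hypothetical sketch of on-device-first inference routing.
# All names and the threshold are illustrative inventions, not Apple APIs.
from dataclasses import dataclass

@dataclass
class Request:
    prompt: str
    est_complexity: int  # stand-in for whatever capability check Apple uses

ON_DEVICE_LIMIT = 5  # arbitrary illustrative threshold

def run_on_device(req: Request) -> str:
    # The ~3-billion-parameter local model would handle this path
    return f"[on-device model] {req.prompt}"

def run_private_cloud(req: Request) -> str:
    # Heavier queries get handed off to Apple-managed servers
    return f"[Private Cloud Compute] {req.prompt}"

def route(req: Request) -> str:
    if req.est_complexity <= ON_DEVICE_LIMIT:
        return run_on_device(req)  # ideal path: data never leaves the device
    return run_private_cloud(req)  # automatic handoff for complex requests

print(route(Request("summarize this note", 2)))
print(route(Request("plan a multi-step itinerary", 9)))
```

The design choice worth noting is that the handoff is automatic and invisible to the user, which is exactly why the privacy guarantees of the server side matter so much.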
For general information retrieval and synthetic data cases, Apple is likely to hand off the request (after receiving the user’s permission to do so) to a model running on one of its partners’ servers and services. The initial “anointed one” is OpenAI, who will offer free ChatGPT access to Apple’s customers (which it’s seemingly already doing more broadly) along with a ChatGPT app (ditto, albeit currently only for Plus subscribers with the desktop version). What I’m guessing here, in the absence of definitive information, is that Apple’s not paying OpenAI much if any money; the latter company’s doing this pro bono as a means of burnishing its brand (to counteract its subsumption within fellow partner Microsoft’s products, for example). To wit, however, Apple was quite candid about the fact that although OpenAI was first, the relationship wasn’t at all exclusive. In a post-keynote press briefing, in fact, Apple executives specifically expressed openness to working with Google and its various Gemini models in the future.
I’ll close with the following day-after-keynote commentary from Tim Cook within an interview with Washington Post columnist Josh Tyrangiel:
Tyrangiel: What’s your confidence that Apple Intelligence will not hallucinate?
Cook: It’s not 100 percent. But I think we have done everything that we know to do, including thinking very deeply about the readiness of the technology in the areas that we’re using it in. So, I am confident it will be very high quality. But I’d say in all honesty that’s short of 100 percent. I would never claim that it’s 100 percent.
I’m not sure whether to be reassured, freaked out or a bit of both in reaction to Cook’s candor. And you? Let me know your thoughts in the comments on this or anything else I’ve discussed.
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- The Apple 2023 WWDC: One more thing? Let’s wait and see
- Apple’s 2022 WWDC: Generational evolution and dramatic obsolescence
- Apple’s WWDC 2021: Software and services updates dominate
- Apple on Intel; Forecasted Transition Complete By End of 2007
- Noise-suppressing headsets and earbuds: Differences, but all telephony duds
- The 2024 Google I/O: It’s (pretty much) all about AI progress, if you didn’t already guess
The post The 2024 WWDC: AI stands for Apple Intelligence, you see… appeared first on EDN.