
Where was all the AI at WWDC?

It's in Apple's newest products, just not on the lips of its executives Tuesday.


With its seal broken by the release of ChatGPT last November, generative AI has erupted into mainstream society with a ferocity not seen since Pandora’s famous misadventure with the misery box. The technology is suddenly everywhere, with startups and industry leaders alike scrambling to tack the smart feature du jour onto their existing code stacks and shoehorn the transformational promise of machine-generated content into their every app. At this point in the hype cycle you’d be a fool not to shout your genAI accomplishments from the rooftops; it’s quickly become the only way to be heard above the din from all the customizable chatbot and self-producing PowerPoint slide sellers flooding the market.

If Google’s latest I/O conference or Meta’s new dedicated development team were any indication, the tech industry’s biggest players are also getting ready to go all in on genAI. Google’s event focused on the company’s AI ambitions around Bard and PaLM 2, perhaps even to the detriment of the hardware it announced, including the Pixel Fold, the Pixel 7a and the Pixel Tablet. From Gmail’s Smart Compose features to Camera’s Real Tone and Magic Editor, from Project Tailwind to the 7a’s generative wallpapers, AI was first on the lips of every Alphabet executive to take the Shoreline stage.

If you’d been drinking two fingers every time Apple mentioned the technology during its WWDC 2023 keynote, however, you’d be stone sober.

Zero — that’s the number of times an on-stage presenter uttered the phrase "artificial intelligence" at WWDC 2023. The nearest we got to AI was "Air," and the term "machine learning" was said exactly seven times.

That’s not to say Apple isn’t investing heavily in AI research and development. The products on display during Tuesday’s keynote were chock full of the tech. The “ducking autocorrect” features are powered by on-device machine learning, as are the Lock Screen’s live video effects (which use it to synthesize interstitial frames) and the new Journal app’s inspirational, personalized writing prompts. The PDF autofill features rely on machine vision systems to understand which fields go where — the Health app’s new myopia test does too, just with your kid’s screen distance — while AirPods now tailor your playback settings based on your preferences and prevailing environmental conditions, all thanks to machine learning systems.

It's just that Apple didn’t talk about it. At least, not directly.

Even when discussing the cutting-edge features in the new Vision Pro headset — whether it’s the natural language processing behind its voice inputs, the audio ray tracing, or the machine-vision black magic that real-time hand gesture tracking and Optic ID entail — the discussion remained centered on what those features can do for users. Not on what the headset could do for the state of the art or the race for market superiority.

The closest Apple got during the event to openly describing the digital nuts and bolts that constitute its machine learning systems was its description of the Vision Pro’s Persona feature. With the device’s applications skewing hard toward gaming, entertainment and communication, there was never a chance that we’d get through this without having to make FaceTime calls with these strapped to our heads. Since a FaceTime call where everybody is hidden behind a headset would defeat the purpose of having a video call, Apple is instead leveraging a complex machine learning system to digitally recreate the Vision Pro wearer’s head, torso, arms and hands — otherwise known as their “Persona.”

“After a quick enrollment process using the front sensors on Vision Pro, the system uses an advanced encoder-decoder neural network to create your digital persona,” Mike Rockwell, VP of Apple’s Technology Development Group, said during the event. “This network was trained on a diverse group of thousands of individuals. It delivers a natural representation which dynamically matches your facial and hand movement.”
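Apple hasn’t published how Persona is actually built, but Rockwell’s description maps onto a familiar pattern: an encoder compresses a one-time enrollment scan into a compact identity representation, and a decoder combines that representation with live tracking data to drive the avatar. The Python sketch below is purely illustrative, with hypothetical names, sizes and randomly initialized weights; it shows the data flow of a generic encoder-decoder, not Apple’s implementation.

import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    # Random, untrained weights for a small multi-layer perceptron.
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    # Pass x through each layer with a tanh nonlinearity.
    for W, b in layers:
        x = np.tanh(x @ W + b)
    return x

# Encoder: squeeze a one-time enrollment scan (hypothetical 4,096 sensor
# features) into a compact 64-number "identity" code.
encoder = mlp([4096, 512, 64])

# Decoder: combine the identity code with a live expression/hand-pose signal
# (hypothetical 32 values per frame) to produce avatar parameters.
decoder = mlp([64 + 32, 256, 1024])

enrollment_scan = rng.standard_normal(4096)   # captured once, up front
identity_code = forward(encoder, enrollment_scan)

live_pose = rng.standard_normal(32)           # refreshed every frame during a call
avatar_params = forward(decoder, np.concatenate([identity_code, live_pose]))
print(avatar_params.shape)                    # (1024,) values driving the rendered avatar

The real system would presumably be trained end to end on captures from the “diverse group of thousands of individuals” Rockwell mentioned; this skeleton only shows why a single enrollment pass is enough to personalize the avatar while the decoder keeps running for every frame of a call.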

AI was largely treated as an afterthought throughout the event rather than a selling point, much to Apple’s benefit. In breaking from the carnival-like atmosphere currently surrounding generative AI development, Apple not only maintains its aloof, elite branding, but also distances itself from Google’s aggressive promotion of the technology and eases skittish would-be buyers into the joys of face-mounted hardware.

Steve Jobs often used the phrase “it just works” to describe the company’s products — implying that they were meant to solve problems, not create additional hassle for users — and it would appear that Apple has rekindled that design philosophy at the dawn of the spatial computing era. In our increasingly dysfunctional, volatile and erratic society, the promise of simplicity and reliability, of something, anything, working as advertised, could be just what Apple needs to get buyers to swallow the Vision Pro’s $3,500 asking price.