Pardon me, everyone. I must apologize because last week, after a WONDERFUL long weekend in the woods doing important work for the Mankind Project, I came back home and dove right into the following week’s workload. In the process, I completely forgot to publish my weekly newsletter! So this one will have some extra videos for you to check out!
This was a big week in technology. Basically, any week in which Apple showcases its next major product lineup is guaranteed to be a big week in tech. That’s exactly what happened as the company showed off the new iPhone 16 line of devices, the Apple Watch Series 10, and AirPods 4. Oh, and a lot of time was spent on a little initiative the company is working on: Apple Intelligence.
For the first time ever, I did live coverage of the announcement. I wish I had something great to point you to, but I made a stupid error and rebroadcast Apple’s live stream of the event to my channel while providing my own running commentary. I know better. Within ten minutes, the livestream was taken down and my YouTube channel was issued a warning. Thankfully, not a full strike. Yes, I believe what I was doing would fall under the category of fair use, but my appeal did not come back in my favor, unfortunately. (Here’s the commentary if you want to watch, though be warned: you’ll be missing the actual Apple stream.)
What I did gain from watching the event, though, was some fresh perspective on how Apple is weaving its artificial intelligence strategy into the new lineup. Apple Intelligence finally gets its major release on the upcoming iPhone 16 (with a small delay into October, when iOS 18.1 is pushed out to those devices). In watching the event closely, I started to clue into something that has me very curious about the current state of AI, the tech interests that want us all to use THEIR AI, and whether their approaches will be effective in the long run.
What Apple showed off with Apple Intelligence is an AI layer that is baked into the operating system itself. It’s an integrated approach that places the AI smarts at the foundation of the code driving the new smartphones. Apple has always been big on ecosystem and tight integration, so it makes perfect sense that it would take the same approach with its AI. This isn’t a feature set bolted on top of things willy-nilly; this is an OS that brings AI into the core of the experience.
Siri has long been a punching bag in the world of voice assistants, which has always struck me as somewhat odd. Not that I think Siri is great; it has its shortcomings, and we’ve all complained about them at one time or another. But I’ve always been struck by the fact that Apple has seemingly wanted Siri to be a revolution, and even in the face of that desire, with all of its resources, it still couldn’t quite convince the majority of people that things had drastically changed for the better. Sure, there have been improvements over time that move the needle in little ways, but Siri could never quite emerge from the shadow it found itself in early on, especially when compared with the seemingly far more capable and dependable Google Assistant.
With AI baked into the core of the OS, Apple finally has the opportunity to give Siri a major facelift and rewrite the story around the assistant. Perhaps now, Siri can deliver on its promises and become the conduit for many newcomers to “modern AI.” There are surely many iPhone users who have heard all the noise about AI to this point but have yet to give it a solid chance. A supercharged, revitalized Siri that’s powered by Apple Intelligence, and supplemented by ChatGPT (coming later this year), could be the winning ticket.
Google, on the other hand, has taken a more open approach by bringing its Gemini AI platform to Android in the form of an app. Either that app is already installed on your device, or you must seek it out to get started. Once it resides on your device, you can choose to make it the default voice assistant, overriding the gestures and commands that would have summoned Google Assistant before its arrival.
This app-based approach is incredibly flexible in the sense that it allows almost anyone to upgrade their phone with Google’s AI via a simple install. But it brings AI to the OS in a more patchwork fashion. Sure, Google can supplement the app with extended capabilities that reach further into the OS via special APIs that expand its access and make it seem more integrated. But at the end of the day, this is an app with special functions and capabilities, not an integration at the foundation of the OS in the way Apple approaches its on-device AI.
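For the technically curious, Android’s own developer APIs make this point nicely: an “assistant” on Android is ultimately just an app that implements the platform’s voice-interaction services and then gets chosen by the user as the default in Settings. Below is a minimal, hypothetical sketch of that plumbing. The class names are my own invention, this is not Gemini’s actual implementation, and the services would also need to be declared in the app’s AndroidManifest.xml with the BIND_VOICE_INTERACTION permission:

```kotlin
import android.content.Context
import android.os.Bundle
import android.service.voice.VoiceInteractionService
import android.service.voice.VoiceInteractionSession
import android.service.voice.VoiceInteractionSessionService

// The long-lived service the OS binds to once the user picks this app
// as their default assistant in Settings.
class MyAssistantService : VoiceInteractionService()

// Creates a fresh session each time the assistant is invoked.
class MyAssistantSessionService : VoiceInteractionSessionService() {
    override fun onNewSession(args: Bundle?): VoiceInteractionSession =
        MyAssistantSession(this)
}

// Represents one invocation of the assistant (for example, the user
// long-pressing home or performing the assistant gesture).
class MyAssistantSession(context: Context) : VoiceInteractionSession(context) {
    // The OS hands the session context about the foreground app here,
    // which is how an assistant "sees" what is on screen (API 29+).
    override fun onHandleAssist(state: VoiceInteractionSession.AssistState) {
        // A real assistant would inspect state.assistStructure and
        // present its own UI in response.
    }
}
```

Notice that nothing in that sketch touches the OS itself. It’s ordinary app code plugging into hooks the system exposes, which is exactly the flexibility, and the patchwork quality, of the approach I’m describing.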
Now, I give Google a lot of credit. They’ve embedded Gemini into a whole slew of apps and made that little sparkle icon appear in places that keep the eyes curious about how AI can help in the context of each destination. So perhaps it’s less of a difference than I’m admitting here. But given the historical importance of Apple’s tight ecosystem (“It Just Works”) compared to Google’s historical reputation for openness (even when it makes things more confusing), I’m really curious to see which platform engenders a desire in everyday users to finally try this AI stuff for themselves. Time will certainly tell!