eXtensions - Sunday 16 June 2024
By Graham K. Rogers
Apple appears to have moved the goalposts at WWDC, particularly with regard to the AI user experience. We are used to Craig Federighi giving excellent presentations, but this was surely one of his best. Rather than a summary, I will borrow the words of Stephen Warwick (iMore): "Apple has taken a different approach. The company trotted out AI in its customary cool and collected manner, not as a standalone entity, a big scary LLM, or even as its own product, but rather a series of subtle, unique tools that will underpin every facet of iPhone, iPad, and Mac for years to come."
To be fair, there had already been considerable speculation about the obvious inclusion of AI, particularly as Wall Street was fretting that Apple was falling behind. With specific chips already able to work with the new features, this is clearly not true: Cupertino has been working on this for at least a couple of years. Apple never rushes into new and untried technology. AI output can be unreliable: both Microsoft and Google have seen some serious errors, and there are countless reports of suspect output, including in academic articles, so why would Apple take the risk? It is better to take a slower approach to such features and make sure they work well first. OS X, Intel Macs and Apple silicon were all innovative moves that Apple had working from the get-go. Other companies have rushed the process and made mistakes with untested devices or software. The rise in the share price following the Event suggests that the announcements were well received. After reading several early reports, I went home, made some tea and took notes as the video of the event played.
Most Apple events begin with some type of video, and this was no different, with a zany parachute jump sequence: Craig Federighi in the team, with Phil Schiller ("I'm getting too old for this.") in the pilot's seat. I do not suppose Federighi actually made the jump: he is too valuable. I am not sure about the relevance of the clip, but it segued neatly into Tim Cook's opening delivery. He outlined what was coming in the barest of details, being careful not to mention AI specifically. He began with Apple TV+, which is 5 years old this year, mentioning several of the good movies and series that are now available. He then moved to Apple Vision Pro and we were introduced to Mike Rockwell.
One of the criticisms of Vision Pro was that there were too few apps, but does anyone remember the iPhone and the iPad when they were first released? Rockwell mentioned that there are some 2,000 apps now, with more being developed. I notice that, as with the iPad, new uses are being found, for example in the field of medical technology. He added that visionOS is being updated to version 2 and that, as part of this update, spatial photos can be created from 2D images. A beta version of the update has been made available (D. Griffin Jones, Cult of Mac). Tim Cook added later that Vision Pro (the device) is to be released in 8 other countries (Apple's usual favored nations): China, Japan, Singapore, Australia, Canada, France, Germany and the UK. Ivan Mehta (TechCrunch) explains that the first release will be in China, Japan and Singapore on June 28, then Australia, Canada, France, Germany and the UK on July 12. Asia first, then the Old World. He also notes that visionOS 2 was released on Monday.
Rockwell continued with more developer-related news, such as the new frameworks and APIs that are to be made available. He also reported that Canon is to offer a new lens (RF-S7.8mm F4 STM DUAL) for the EOS R7 camera so that it can record spatial video. This is typical of the behind-the-scenes discussions between Apple and other technology companies that take place months or years before a device appears. A new lens is not something that can be pulled out of a hat. Likewise, Blackmagic has announced that a new URSA Cine Immersive camera is in development, designed to capture content for Apple Vision Pro at 8160 x 7200 resolution per eye. Both Canon and Blackmagic have also developed software for their spatial video solutions. The investment and commitment by these companies imply a good level of faith in the potential of what Apple is doing, and also suggest some long-term negotiations among the parties.
Craig Federighi appeared and gave a series of significant presentations: first on the operating systems, then later on Apple Intelligence. The next version of macOS is to be called Sequoia and this has an exciting new feature: iPhone mirroring (Kyle Wiggers, TechCrunch). There are plenty of other new features in the macOS update and some of these are outlined in the article. The pace of Federighi's presentation was so rapid that I only grasped a few of the more significant parts of the delivery, but more was to come later. I also noted that Keychain is to have a significant update and will become a Passwords app, which will be cross-platform (macOS, iOS, iPadOS) and will even extend to Windows. Stephen Warwick (iMore), as a Windows user (as well as News Editor on the Apple-oriented site), takes a look at what is new in Sequoia, with some (justifiable) reservations. Note, however, that his comments are based on the first beta release and some features, like iPhone mirroring, are not yet available.
Moving to iOS 18, Federighi announced a host of new features, some of which will not be available outside the USA, which will integrate with the AI (see below) and add to the usability of the devices. Of particular value are the ways in which the Home Screen can be changed by the user. It will now be possible to hide and lock apps. This will be of particular value to those who work on sensitive data, such as in spreadsheets: access is only granted to those who can authenticate themselves. Satellite messaging will be available (USA only initially) with end-to-end encryption, although I can confidently predict this will not come here. Likewise, improvements to Maps are US-centric, and Apple Pay is unavailable here.
Photos is to be updated and considerable emphasis was put on the ways images can be organized. There was nothing, however, on the editing features of this app, which are poor on the iPhone and in real need of improvement on the iPad. A nice addition is that an update will allow more input via AirPods, with a nod or shake of the head (yes or no) and integration with Siri.
After a new raft of features for the Apple Watch, Craig Federighi was back with iPadOS. With a new iPad Pro, this was of much interest to me: I am using the device far below its potential. As well as the same updates as iOS, there are several other device-specific improvements, including new Apple Pencil features and a floating tab bar. A long-awaited change is the arrival of a calculator app, but this is not simply a supercharged iPhone app, as it works with the Apple Pencil and new additions allow the user to write calculations out on the screen. These are completed when the = sign is written. If the figures in a calculation are changed, the answer updates on the fly. This is rather interesting, and can be used in other apps, like Notes.
A new feature called Smart Script improves the user's handwriting (something I will be grateful for) while keeping it recognizable as the user's own. It will also be possible to paste typed content into a handwritten note and have it match the handwriting. As much as Federighi feigns innocence (and does it very well), these new features are not magic. The background to many of these improvements and new features became clearer as the presentation continued to drop seeds of ideas that would germinate later.
As the beta releases were examined, more information appeared regarding features that had not been covered in the WWDC keynote presentation. There may well be more, and as iOS 18 is developed after release, a few more features for users are certain to appear. Daryl Baxter (iMore) outlined 18 new features that had not been mentioned at WWDC. He refers to the release as iOS 18, Kraken. I am of course aware that macOS has version names, but had not seen this applied to iOS before. The whole idea of the update, including AI (see below), is to help users have a better experience, and this extends to the way the home screen can be set up, how charging times can be changed, and the way iCloud is displayed. Baxter has a good explanation of each of the features he found.
The last 30 minutes or so became really interesting, preceded by a short introduction from Tim Cook in which he dropped some heavy clues about the use of AI and large language models (LLMs). Cook set out some valuable parameters that the knee-jerk critics should examine: it should be powerful, intuitive and integrated (something Apple is already good at); it should be founded in a user's personal ways of using the devices; and above all there should be a high regard for privacy.
"Apple has taken a different approach. The company trotted out AI in its customary cool and collected manner, not as a standalone entity, a big scary LLM, or even as its own product, but rather a series of subtle, unique tools that will underpin every facet of iPhone, iPad, and Mac for years to come" (Stephen Warwick, iMore).
Federighi gave a nod to the existing AI models, but pointed out that these tools know little about You, the user. I have been aware for a few years that Craig is a good presenter, knowing his stuff and able to recover and ad-lib when necessary. The delivery on Apple Intelligence (you see what they did with AI there? Not only possession, but taming the idea) was perhaps his best yet, with several references to high-level technology, phrased to make it accessible to the user trying to absorb the concepts and words.
I know that this was pre-recorded and he probably had assistance like teleprompters, but this was a smooth delivery of information at the right speed and pitch. I would love to have a transcript of that presentation, although an Apple Security Research blog entry contained many of the core ideas mentioned by Federighi, particularly those regarding the use of LLMs, privacy, and the way Private Cloud Compute will work. He mentioned that ChatGPT from OpenAI would be integrated into the operating systems and several apps, but was clear: the user is in control and will be asked before any information is shared (see notes on Musk, below). While Apple and OpenAI have a cash-free relationship here (both benefit from the mutual use), other models, such as those from Google et al, will be coming later.
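To make that point about control a little more concrete, here is a minimal sketch in Swift of the decision flow as I understand it from the presentation and the Security Research blog: a request is handled on-device where possible, escalates to Private Cloud Compute when it needs more power, and only goes to an external model like ChatGPT after the user explicitly agrees. The types and function names here are my own hypothetical illustrations, not Apple's actual API.

```swift
// Hypothetical sketch of the routing described at WWDC; these names are
// illustrative only and not Apple's real Apple Intelligence interfaces.
enum IntelligenceRoute {
    case onDevice       // handled entirely by the local model on the chip
    case privateCloud   // escalated to Apple's Private Cloud Compute
    case externalModel  // e.g. ChatGPT, only after explicit user consent
}

struct AssistantRequest {
    let prompt: String
    let fitsOnDevice: Bool         // small enough for the on-device model
    let needsWorldKnowledge: Bool  // requires knowledge beyond personal context
}

func route(_ request: AssistantRequest,
           userAgreesToChatGPT: () -> Bool) -> IntelligenceRoute {
    if request.fitsOnDevice {
        return .onDevice
    }
    if !request.needsWorldKnowledge {
        // Heavier personal-context tasks stay within Apple's own cloud.
        return .privateCloud
    }
    // Nothing is handed to an external model unless the user says yes first.
    return userAgreesToChatGPT() ? .externalModel : .privateCloud
}
```

The point of the sketch is simply that the external model sits behind an explicit question to the user, which is what Federighi emphasized.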
There is a limitation, however. Only certain chips, and therefore certain devices, will work with this: the iPhone 15 Pro and devices with Apple silicon (M1 to M4 and beyond). This strongly suggests that these abilities were baked into Apple silicon a couple of years ago and that AI had been worked on for some considerable time, despite the comments of many pundits and the vocalized fears that Apple was falling behind. Remember, Apple never rushes into new technology.
Perhaps the most comprehensive overview of the Apple Intelligence features set to arrive with the new OS releases came from Devon Dundee on MacStories. This is carefully split into three categories: language, images, and Siri, making the way AI works easier to absorb. I am not sure I would want to avail myself of all the features, but there is quite some potential here. Dundee's concluding remarks include the sentence, "The company pulled off what they needed to with this announcement, stepping boldly into the generative AI game while putting their own spin on it." As ChatGPT was featured, it was no surprise to see Sam Altman at WWDC, here pictured with someone who looks like Eddy Cue (yellow C-level badge). One of my former students, attending as a developer (of the Thai ride-hailing app, Bonku), snapped that shot, not long before taking a selfie with Phil Schiller.
While far less has been said about Sequoia, it is a large part of the Apple Intelligence strategy. As a lot of people, including myself, use the Mac for writing, AI will become available to us there too. I like it for the final touches before putting something online, or for working on larger papers in Pages; and I far prefer Keynote on the Mac when preparing presentations. Also having an early look at Sequoia was the experienced Howard Oakley (Eclectic Light Company), who looks at the evolution of writing assistance from the humble spell check through to the potential of ChatGPT on the Mac. Like me, Oakley is a bit of an AI skeptic, but he also sees that some of the features expected in Sequoia can be of great help. A good example is summarizing.
There is already a summarize feature on the Mac (in Shortcuts), although I have not had much success with it, but there is significant potential in the new summarizing feature. Oakley looks at other features like grammar, typing suggestions and extracting text from images, which I find useful. He then looks at the features to come, like the writing tools, Private Cloud Compute, and generative AI (plus ChatGPT). Like me, he is wary of this, but I sense he is generally favourable towards what Apple has done here.
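For anyone who wants to experiment with the existing Shortcuts route before the new tools arrive, a summarizing shortcut can also be driven from code. This is a rough sketch that shells out to the macOS `shortcuts` command-line tool; it assumes you have already built a shortcut named "Summarize Text" in the Shortcuts app (using its Summarize Text action), and the file paths are my own examples.

```swift
// A sketch only: assumes a user-created shortcut called "Summarize Text"
// that takes a file as input and returns a summary.
import Foundation

func runSummarizeShortcut(inputPath: String, outputPath: String) throws {
    let process = Process()
    // The `shortcuts` command-line tool ships with recent versions of macOS.
    process.executableURL = URL(fileURLWithPath: "/usr/bin/shortcuts")
    process.arguments = ["run", "Summarize Text",
                         "--input-path", inputPath,
                         "--output-path", outputPath]
    try process.run()
    process.waitUntilExit()
}

// Example: summarize a draft article and write the result next to it.
try runSummarizeShortcut(inputPath: "/tmp/draft.txt",
                         outputPath: "/tmp/draft-summary.txt")
```

Whether the results are any better than running the action by hand is another matter; in my experience the current summaries are hit and miss, which is why the new AI-backed version is worth watching.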
As a side comment, I also looked at a piece from Mike Masnick on Techdirt last week in which he reviews the wisdom of Judge Kevin Newsom of the 11th Circuit, who had a "what if" moment regarding AI and wrote a sound concurring opinion analysing the potential from a legal standpoint. I often look at the ethical problems with regard to academic writing, but this discussion opens up some ideas worth sifting through.
As further proof that a person inheriting a father's fortune does not necessarily inherit the intelligence (see also, Trump), we were treated to a knee-jerk reaction from Elon Musk, who was apoplectic about the risks (he claimed) from any Apple device that included this data-sucking feature: no employee of Musk would be allowed to bring such a device to work, while visitors would be relieved of their iPhones, iPads, et al, which would be placed in Faraday cages while the visitor was on the premises (Hartley Charlton, MacRumors). So what does he feel about those Android devices already running such software?
In the hours that followed, a number of articles looked at Musk's reaction and there was a certain amount of head-scratching. The main idea is that he does not understand what is involved and/or this is a reaction against ChatGPT. Will his reaction be the same when Apple (as the company has suggested) incorporates other AI services, such as those from Google? Of course it will. Karl Bode (Techdirt) puts the Musk thing in some context and, among other comments, notes: "Keep in mind that Musk's companies have a pretty well established track record of playing extremely fast and loose with consumer privacy themselves." So it is a bit of a Musk hissy fit because Apple "decided to partner with OpenAI instead of his half-cooked and more racist Grok pseudo-intelligence system."
And now Nick Robins-Early (The Guardian) reports that Musk withdrew his lawsuit against Altman on Tuesday.
Graham K. Rogers teaches at the Faculty of Engineering, Mahidol University in Thailand. He wrote on IT subjects in the Bangkok Post's Database supplement. For the last seven years of Database he wrote a column on Apple and Macs. After 3 years writing a column in the Life supplement, he is now no longer associated with the Bangkok Post. He can be followed on X (@extensions_th). The RSS feed for the articles is http://www.extensions.in.th/ext_link.xml - copy and paste into your feed reader.