June 14, 2024

iOS 17.4: Apple continues work on AI-powered Siri and Messages features, with a little help from ChatGPT

Apple is widely expected to unveil major new artificial intelligence features with iOS 18 in June. Code found by 9to5Mac in the first beta of iOS 17.4 shows that Apple is continuing to work on a new version of Siri powered by large language model technology, with a little help from other sources.

In fact, Apple appears to be using OpenAI’s ChatGPT API for internal testing to help develop its own AI models.

According to this code, iOS 17.4 includes a new SiriSummarization private framework that makes calls to OpenAI’s ChatGPT API. This appears to be something Apple is using for internal testing of its new AI features.

There are multiple examples of system prompts for the SiriSummarization framework in iOS 17.4 as well. This includes things like “please summarize,” “please answer this questions,” and “please summarize the given text.”

The system prompts also specify what to do when the input arrives in the form of an iMessage or SMS. This aligns with previous reporting from Bloomberg, which said Apple is working on AI integration in the Messages app that can “field questions and auto-complete sentences.”

Given user received SMS(s) containing sender, content, send_time fields, suggest an appropriate action for a voice assistant to take, including the action type, action value, action value type, and confidence score in JSON format. The possible action types are MessageReply, GetDirection, Call, SaveContact, Remind, MessageContact, and None. The possible ActionValueType are message, address, phoneNumber, contact, reminder. The possible score value ranges from 0 to 1 which represents the confidence score of the suggested action.
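The prompt above describes a structured JSON response: an action type, a value, a value type, and a confidence score. As a rough illustration of what such a response might look like, here is a toy sketch in Python. The field names (`action_type`, `action_value`, `action_value_type`, `score`) and the heuristic itself are assumptions based on the prompt's wording; Apple's actual keys and model behavior are not public.

```python
import json

# Hypothetical incoming SMS with the fields named in the system prompt.
incoming_sms = {
    "sender": "+1-555-0100",
    "content": "Dinner at 7 at Luigi's on Main St?",
    "send_time": "2024-01-26T18:05:00Z",
}

def suggest_action(sms: dict) -> dict:
    """Toy heuristic standing in for the model: suggest an action for an SMS.

    Action types follow the prompt: MessageReply, GetDirection, Call,
    SaveContact, Remind, MessageContact, or None.
    """
    text = sms["content"].lower()
    if "?" in text:
        # A question in the message suggests replying to it.
        return {
            "action_type": "MessageReply",
            "action_value": "Sounds good, see you at 7!",
            "action_value_type": "message",
            "score": 0.85,
        }
    # Fall back to no suggested action with low confidence.
    return {
        "action_type": "None",
        "action_value": "",
        "action_value_type": "message",
        "score": 0.5,
    }

print(json.dumps(suggest_action(incoming_sms), indent=2))
```

In a real system the heuristic would be replaced by the language model's output, with the JSON parsed and validated before the assistant acts on it.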

Apple is unlikely to use OpenAI models to power any of its artificial intelligence features in iOS 18. Instead, what it’s doing here is testing its own AI models against ChatGPT.

For example, the SiriSummarization framework can do summarization using on-device models. Apple appears to be using its own AI models to power this framework, then internally comparing its results against the results of ChatGPT.
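The comparison workflow described above can be sketched as a simple side-by-side evaluation: run the same input through two summarizers and score how closely their outputs agree. Both "models" below are placeholder functions, since Apple's internal APIs are not public; the similarity metric (`difflib.SequenceMatcher`) is just one plausible stand-in for whatever internal scoring Apple uses.

```python
from difflib import SequenceMatcher

def on_device_summarize(text: str) -> str:
    # Placeholder for an on-device model: naively take the first sentence.
    return text.split(". ")[0].rstrip(".") + "."

def reference_summarize(text: str) -> str:
    # Placeholder for a call to an external reference model such as ChatGPT.
    return text.split(". ")[0].rstrip(".") + "."

def agreement(text: str) -> float:
    """Score how similar the two models' summaries are, from 0.0 to 1.0."""
    a = on_device_summarize(text)
    b = reference_summarize(text)
    return SequenceMatcher(None, a, b).ratio()

sample = "Apple is testing new AI models. The tests compare several systems."
print(f"agreement: {agreement(sample):.2f}")
```

An evaluation harness like this lets engineers track where an in-house model diverges from a stronger reference model across a test corpus, which matches the testing pattern the iOS 17.4 code suggests.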

In total, iOS 17.4 code suggests Apple is testing four different AI models. These include Apple’s internal model called “Ajax,” which Bloomberg has previously reported on. iOS 17.4 references two versions of AjaxGPT: one that is processed on-device and one that is not.

Other models referenced by iOS 17.4 include the aforementioned ChatGPT as well as FLAN-T5, an open-source model from Google.

The biggest takeaway from these findings is that Apple is ramping up its efforts to integrate large language models into iOS. It’s also notable to see Apple simultaneously developing its own system and comparing the results of that system to things like ChatGPT and FLAN-T5.

In October, Bloomberg’s Mark Gurman gave an in-depth rundown on some of Apple’s goals for AI in iOS 18. The report outlined that “there’s an edict” within Apple and Craig Federighi’s software team to fill iOS 18 “with features running on the company’s large language model.”
