If you use an iPhone like the iPhone SE (2022) or the iPhone 14 Pro Max, you know that generative AI has still not arrived on these smartphones. While the iPhone 15 models are slowly getting new AI-powered features, full integration is still a long way off. However, researchers have found a new way to make it possible.
First of all, why has Apple not introduced generative AI and LLMs (large language models) on its smartphones so far? Researchers point to the memory limitations of smartphones as the biggest obstacle. Modern models like GPT-4 use billions of parameters, far more than a smartphone's hardware memory (RAM) can hold, which is why running an LLM directly on the device has been impractical.
However, researchers at Apple may have found a solution. In a paper posted on arXiv (the research repository hosted by Cornell University), they describe two new techniques that might make running LLMs on iPhones a reality rather than just a far-fetched pipe dream.
The first technique is known as windowing. Instead of loading fresh model data from storage for every new token, the LLM reuses data it has recently processed, fetching only what is newly needed. This sliding-window approach lets it generate responses to your prompts without putting too much load on your iPhone's memory (RAM).
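To make the idea concrete, here is a minimal sketch of a windowing-style cache. This is an illustration of the general technique, not Apple's actual implementation: the class name, the `load_from_flash` stand-in, and the window size are all assumptions for the example.

```python
class WindowedCache:
    """Toy sketch of windowing: keep weights for neurons used in the
    last `window` tokens in RAM, and read from flash only what's new."""

    def __init__(self, window=2):
        self.window = window
        self.history = []      # active-neuron sets for recent tokens
        self.cache = {}        # neuron_id -> weights currently in RAM
        self.flash_reads = 0   # counts expensive storage reads

    def load_from_flash(self, neuron_id):
        # Stand-in for a slow flash read of one neuron's weights.
        self.flash_reads += 1
        return f"weights[{neuron_id}]"

    def fetch(self, active_neurons):
        """Return weights for this token's active neurons, loading only
        the ones not already cached from previous tokens."""
        for n in active_neurons:
            if n not in self.cache:
                self.cache[n] = self.load_from_flash(n)
        # Slide the window: evict neurons unused for `window` tokens.
        self.history.append(set(active_neurons))
        if len(self.history) > self.window:
            self.history.pop(0)
        live = set().union(*self.history)
        for n in list(self.cache):
            if n not in live:
                del self.cache[n]
        return {n: self.cache[n] for n in active_neurons}
```

Because consecutive tokens tend to activate overlapping sets of neurons, most of each token's weights are already in the cache, and only a small amount of new data has to be read from storage.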
The second technique is known as row-column bundling. In this method, pieces of model data that are always used together are stored next to each other, so the LLM can read them from flash storage in larger contiguous chunks rather than many small scattered reads.
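A simplified sketch of the bundling idea follows. It assumes (as the paper's examples do) that the i-th row of one projection matrix and the i-th column of another are always needed together; the function names and layout here are illustrative, not Apple's actual storage format.

```python
import numpy as np

def bundle(up_proj, down_proj):
    """Store row i of up_proj and column i of down_proj side by side,
    so each neuron's weights form one contiguous chunk.
    up_proj: (n_neurons, d_model); down_proj: (d_model, n_neurons)."""
    return np.concatenate([up_proj, down_proj.T], axis=1)

def read_neuron(bundled, i, d_model):
    """One contiguous read returns both halves for neuron i,
    instead of two scattered reads."""
    chunk = bundled[i]          # single sequential read from storage
    return chunk[:d_model], chunk[d_model:]
```

The benefit is that flash storage is much faster at a few large sequential reads than at many small random ones, so bundling cuts the time spent fetching weights.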
The researchers claim that together these methods roughly halve the hardware memory required, which could make running LLM-based AI models on iPhones possible. However, they have only run limited tests so far, and they have invited developers to examine the techniques and verify whether they work in practice.
Successfully deploying LLMs on iPhones could take Siri to a whole new level. We hope for the best and look forward to seeing what AI integration on iPhones will look like in the future.