
OpenAI, Google, and Meta are all doing one thing: giving AI memory

OpenAI, Google, Meta, and other tech giants have recently upgraded their chatbots' memory functions, allowing them to store more user information and deliver personalized responses. The move is seen as a way for AI giants to compete for user engagement through the differentiated feature of "memory," and it may also become a new avenue for AI monetization.
As tech giants race to equip AI with "memory systems," is memory becoming the new battleground in AI competition?
On May 16, it was reported that OpenAI, Google, Meta, and Microsoft have upgraded their chatbots' memory functions in recent months, allowing for the storage of more user information to provide personalized responses.
Pattie Maes, a professor at MIT Media Lab and an expert in AI-human interaction, pointed out:
"If you have an AI assistant that truly understands you because it retains memories of conversations, the entire service becomes stickier; once you start using a particular product, you won't switch to others."
This initiative is seen as a way for AI giants to compete for user stickiness through the differentiated feature of "memory," which may also become a new avenue for AI monetization. However, this technological upgrade raises concerns about user privacy and commercial manipulation risks, potentially leading to stricter regulations in the future.
A New Battleground for AI Giants
Google's Gemini and OpenAI's ChatGPT have made significant progress in memory functions. Reports indicate that these improvements include expanding the context window (which determines how much conversation content a chatbot can take in at once) and using retrieval-augmented generation (RAG) techniques to pull relevant context from external data.
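To make the retrieval mechanism concrete, here is a minimal, self-contained sketch of retrieval-augmented memory. It is an illustration only, not any vendor's actual implementation: the `MemoryStore` class, the toy hashing `embed` function, and the sample memories are hypothetical stand-ins, and a production system would use a real embedding model and a vector database.

```python
import hashlib
import math

def embed(text: str, dim: int = 64) -> list[float]:
    """Toy hashing embedder: a stand-in for a real embedding model."""
    vec = [0.0] * dim
    for token in text.lower().split():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

class MemoryStore:
    """Stores conversation snippets; retrieves the ones most relevant to a query."""

    def __init__(self) -> None:
        self.memories: list[tuple[str, list[float]]] = []

    def add(self, snippet: str) -> None:
        # Persist the snippet alongside its embedding vector.
        self.memories.append((snippet, embed(snippet)))

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        # Rank stored memories by cosine similarity to the query
        # (vectors are unit-normalized, so a dot product suffices).
        q = embed(query)
        ranked = sorted(
            self.memories,
            key=lambda m: sum(x * y for x, y in zip(q, m[1])),
            reverse=True,
        )
        return [text for text, _ in ranked[:k]]

store = MemoryStore()
store.add("User prefers vegetarian food.")
store.add("User is planning a trip to Kyoto in October.")
store.add("User's favorite programming language is Rust.")

query = "Recommend restaurants for my Kyoto trip"
relevant = store.retrieve(query, k=2)
# Only the retrieved memories are prepended to the prompt, keeping the
# context window small while still "remembering" older conversations.
prompt = "Known about the user:\n" + "\n".join(relevant) + f"\n\nUser: {query}"
print(prompt)
```

The design point this sketch illustrates is that only the top-k most relevant memories are injected into the prompt, which is how a chatbot can "remember" far more than its context window can hold at any one time.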
In March of this year, Google expanded Gemini's memory to cover users' search history (with user permission), rather than limiting it to conversations with the chatbot, and plans to extend this to other Google applications in the future. Michael Siliski, Senior Director of Product Management at Google DeepMind, stated:
"Just like human assistants... the more they understand your goals, who you are, and your needs, the better help they can provide."
OpenAI's ChatGPT and Meta's chatbots in WhatsApp and Messenger can reference past conversations, not just the current session. Users can delete specific memories from settings and receive notifications on-screen when the model creates memories.
OpenAI stated, "Users always have control and can ask ChatGPT what it remembers about them, change saved memories and past conversations, or turn off the memory feature at any time."
Microsoft, for its part, draws on data from emails, calendars, and intranet files for the memory function in its commercial services.
It was also reported that last month Microsoft began rolling out a preview version of Recall on some devices; the feature records user activity by capturing screenshots of the screen. Users can opt out or pause the captures.
AI giants generally believe that chatbots' improved memory functions will play a significant role in driving monetization through affiliate marketing and advertising.
Meta CEO Mark Zuckerberg stated in last month's earnings call, "Displaying product recommendations or ads on chatbots will be a huge opportunity."
Similarly, OpenAI improved ChatGPT's shopping features last month to better showcase products and reviews.
Privacy Concerns and Risks of Manipulation for Commercial Interests
Experts warn that the upgraded memory function of chatbots may also be used to manipulate users for commercial interests, raising privacy concerns.
Moreover, the enhanced memory function may lead the model to overly cater to user preferences, reinforcing biases or inaccuracies.
Last month, OpenAI apologized after its GPT-4o model was found to excessively flatter and please users, and rolled the model back to a previous version.
More generally, AI models may hallucinate, producing fabricated or nonsensical responses, and may suffer from "memory drift," in which stored memories become outdated or contradictory and undermine accuracy. MIT Professor Maes warned:
"The more the system knows about you, the more likely it is to be used for negative purposes, whether to get you to buy something or to persuade you to accept specific beliefs. So you have to start thinking about the incentive mechanisms behind the companies providing these services."