The partnership between Apple and OpenAI is off to a rocky start after ChatGPT users on macOS learned the app had been storing their conversations on disk in plain-text files.
Apple has positioned itself as a company that prioritizes privacy in a market where many of its competitors reap the lion’s share of their profits by selling or repurposing user data. But, as demonstrated by data and electronics engineer Pedro José Pereira Vieito in a post on Meta’s Threads, somebody dropped the ball with OpenAI’s third-party ChatGPT integration on macOS.
Privacy threat
ChatGPT was released on macOS in May for subscribers, with general access for non-subscriber accounts following on June 25. Until Friday, July 5, however, the app stored all chat logs in unencrypted plain-text files on users’ hard drives.
This meant that anyone with access to the computer, whether physically or through a remote attack such as malware or phishing, could read every conversation any user of that machine had with ChatGPT.
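To illustrate how low the bar was: reading an unprotected file in another app’s folder under ~/Library/Application Support takes nothing more than ordinary file APIs and the same user account. A minimal sketch in Swift, using a made-up path rather than the app’s real one:

```swift
import Foundation

// Hypothetical path for illustration; the actual location the ChatGPT app
// used is not reproduced here. Any process running as the same user can
// read unprotected files under ~/Library/Application Support directly.
let home = FileManager.default.homeDirectoryForCurrentUser
let logURL = home.appendingPathComponent(
    "Library/Application Support/SomeChatApp/conversation.json")

// A plain read is all it takes to expose an unencrypted chat log.
if let transcript = try? String(contentsOf: logURL, encoding: .utf8) {
    print(transcript)
}
```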
Sandboxing
Apple’s macOS includes a privacy protection called “sandboxing” that restricts an application’s access to software and data at the kernel level. Apps distributed through the Mac App Store must be sandboxed, which confines their files to a per-app container that other applications cannot read.
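One rough way to see the difference from inside a process is to check how it was launched: sandboxed apps are given a per-app container, while unsandboxed apps see the shared user directories. A sketch, noting that the APP_SANDBOX_CONTAINER_ID environment variable is a widely observed indicator rather than a documented API:

```swift
import Foundation

// Sandboxed macOS apps are launched with an APP_SANDBOX_CONTAINER_ID
// environment variable (an unofficial but widely observed indicator).
let isSandboxed =
    ProcessInfo.processInfo.environment["APP_SANDBOX_CONTAINER_ID"] != nil

// For a sandboxed app this URL resolves inside a per-app container under
// ~/Library/Containers/<bundle id>/; for an unsandboxed app it is the
// shared ~/Library/Application Support, readable by any process running
// as the same user.
let appSupport = FileManager.default.urls(
    for: .applicationSupportDirectory, in: .userDomainMask)[0]

print("Sandboxed: \(isSandboxed)")
print("Application Support: \(appSupport.path)")
```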
Pereira Vieito attributes the issue to the fact that the ChatGPT app for macOS is offered solely through OpenAI’s website, meaning the Mac App Store’s sandboxing requirement never applied:
“OpenAI chose to opt-out of the sandbox and store the conversations in plain text in a non-protected location, disabling all of these built-in defenses.”
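Opting out of the sandbox does not force plain-text storage, though; an app distributed outside the Mac App Store can still encrypt its data at rest. This is not OpenAI’s actual implementation, but a minimal sketch of what that could look like with Apple’s CryptoKit framework, with key handling simplified (a real app would keep the key in the Keychain rather than generating it in-process):

```swift
import Foundation
import CryptoKit

// Encrypt a transcript before it touches disk. Key handling is simplified
// for the sketch; a real app would store the key in the Keychain.
let key = SymmetricKey(size: .bits256)
let transcript = Data("user: hello\nassistant: hi".utf8)
let fileURL = URL(fileURLWithPath: "/tmp/chat.enc")

do {
    // AES-GCM packs nonce + ciphertext + auth tag into one combined blob.
    let sealed = try AES.GCM.seal(transcript, using: key)
    try sealed.combined!.write(to: fileURL)

    // Reading it back requires the key; a plain `cat` yields only ciphertext.
    let box = try AES.GCM.SealedBox(combined: Data(contentsOf: fileURL))
    let decrypted = try AES.GCM.open(box, using: key)
    print(String(decoding: decrypted, as: UTF8.self))
} catch {
    print("Crypto error: \(error)")
}
```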
It’s unclear at this time whether any users were actually affected by the apparent oversight, but the general tenor of social media and pundit commentary was one of shock.
In the comments section of an article published on The Verge, for example, user GeneralLex posted that they discovered the unencrypted chat logs in the app’s memory:
“I used Activity Monitor to dump the ChatGPT executable from memory and found that, horror of horrors, chat log is in plain text, unencrypted in memory!”
A simple mistake?
The real question is why this happened. We know how it happened, and the issue has since been resolved, but the why remains unknown.
Presumably, this was done so that OpenAI could easily access the chat logs for further development of ChatGPT. According to the app’s terms of use, users have to explicitly opt out of sharing their data with OpenAI.
But why didn't Apple intercede on behalf of users before the app went live, and why didn't OpenAI recognize that it was leaving sensitive, unencrypted data on users' machines?
Cointelegraph reached out to OpenAI and Apple for more information but didn’t receive an immediate response from either.