Apple has joined the list of companies prohibiting its employees from using generative AI tools like OpenAI’s ChatGPT. The tech giant’s decision is motivated by fears that confidential information entered into these systems may be leaked or collected without consent.
According to a report from The Wall Street Journal, Apple employees have been explicitly cautioned against using OpenAI’s ChatGPT and GitHub’s AI programming assistant Copilot. Bloomberg reporter Mark Gurman also noted that ChatGPT had been on Apple’s list of restricted software “for months.”
Apple’s concern is well-founded. By default, OpenAI retains all interactions between users and ChatGPT in order to train its AI systems. These conversations can be accessed by moderators for compliance monitoring and to enforce the company’s terms of service.
In April, OpenAI introduced a feature that allows users to disable chat history. However, even with this setting enabled, OpenAI still retains conversations for 30 days, during which it may review them “for abuse” before permanently deleting them.
Given the versatility of ChatGPT in tasks such as code improvement and idea generation, Apple’s caution is understandable. The company worries that employees may unknowingly input confidential project details into the system, potentially exposing them to OpenAI’s moderators. While researchers have shown it is possible to extract training data from certain language models through their chat interfaces, there is currently no evidence suggesting ChatGPT is vulnerable to such attacks.
Apple is not alone in implementing such a ban; other companies, including JPMorgan, Verizon, and Amazon, have taken similar measures.
Apple’s decision to ban its employees from using ChatGPT is notable, considering that OpenAI recently launched an iOS app for the tool. The app, which supports voice input and is available for free in the US, allows users to access ChatGPT on their mobile devices. OpenAI plans to expand the app’s availability to other countries soon, including an upcoming Android version.
As privacy concerns continue to grow in the context of AI systems, companies are increasingly taking precautions to safeguard sensitive information. While ChatGPT offers various benefits, the potential risks associated with data leakage necessitate careful usage and consideration, particularly within organizations dealing with confidential projects.