Samsung, one of the largest electronics conglomerates in the world, recently lifted its ban on the AI chatbot ChatGPT in an effort to improve productivity and keep up with the latest technology. However, less than three weeks after the ban was lifted, several employees accidentally leaked sensitive company information to the chatbot.
The leaked information reportedly includes measurements and other confidential details of an in-development semiconductor, as well as yield data from the company’s Device Solutions semiconductor business unit. According to a Korean report, one employee uploaded program code designed to identify defective equipment and asked for ‘code optimization’, another shared a meeting recording with the bot to ‘auto-generate’ the minutes, and a third copied the problematic source code of a semiconductor database download program into ChatGPT to ask for a fix.
As per the ChatGPT FAQs, “Your conversations may be reviewed by our AI trainers to improve our systems,” which means the leaked information may now be accessible to OpenAI. This incident has raised concerns about the use of AI chatbots in workplaces and the potential risks of data breaches.
In response to the incident, Samsung has taken ‘emergency measures’, including limiting the upload capacity to 1024 bytes per question and warning employees that if a similar accident occurs despite these information-protection measures, access to ChatGPT may be blocked on the company network. Additionally, the company is reportedly considering building an in-house AI service to prevent such incidents in the future.
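A per-question byte cap like the one Samsung reportedly imposed could be enforced with a simple client-side check before a prompt is sent. The sketch below is purely illustrative; the function name and behavior are assumptions, not Samsung’s actual implementation.

```python
MAX_PROMPT_BYTES = 1024  # the per-question cap reported in the article


def prompt_within_limit(prompt: str, limit: int = MAX_PROMPT_BYTES) -> bool:
    """Return True if the prompt, UTF-8 encoded, fits within the byte limit.

    Encoding first matters: multi-byte characters mean that byte length
    can exceed character count.
    """
    return len(prompt.encode("utf-8")) <= limit


# A short question passes; a pasted source file would typically not.
print(prompt_within_limit("How do I fix this off-by-one error?"))  # True
print(prompt_within_limit("x" * 2048))                             # False
```

A check like this only reduces how much data can leak per request; it does not prevent employees from pasting sensitive snippets that happen to fit under the cap, which is presumably why Samsung paired it with policy warnings.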
This incident highlights the importance of data security and the risks of using AI chatbots in the workplace. While these tools can improve productivity and efficiency, they also pose a significant risk to sensitive company information. Companies must implement strict policies and guidelines to ensure their safe and responsible use.