Concerned about potential data leaks, Samsung recently told its employees that it is temporarily banning the use of ChatGPT and other generative artificial intelligence tools. The decision came after three leaks involving the tool were identified in the space of 20 days.
According to internal memos leaked to the media, Samsung is concerned that data sent to the external servers of AI platforms such as Google Bard and Microsoft Bing may be impossible to retrieve and delete, and that private information could consequently be exposed to other users.
To gauge the issue, Samsung surveyed its employees’ use of AI tools and found that 65% of respondents considered such services a security risk. The company has also stated its ambition to develop its own AI tools to prevent leaks of internal information.
These in-house tools will be used to translate and summarize documents, assist with software development, and explore ways to keep critical company information from leaving the company. By building its own internal AI technologies, Samsung intends to keep its data private and under its control.
The internal message also reminded employees to strictly follow Samsung’s security procedures. Failure to do so could expose confidential company information, and offenders could face disciplinary action, up to and including dismissal.
Given these developments, it is clear that Samsung is taking data security seriously. The company recognizes the risks posed by AI tools it does not control and is taking steps to mitigate them.
While some employees may be disappointed by the temporary ChatGPT ban, it is a necessary step to protect the company’s data. It will be fascinating to see how Samsung’s in-house AI tools compare with existing third-party offerings, and how widely they are adopted across the business.