According to a recent Reuters report, Alphabet, Google’s parent company, is promoting its Bard chatbot globally while simultaneously advising its own employees to be cautious when using chatbots, including Bard itself.
Sources familiar with the matter say Alphabet has warned employees not to enter confidential information into AI chatbots, a measure intended to protect proprietary data within the company. Google has confirmed the policy exists, citing its long-standing commitment to data security.
In addition, Alphabet has cautioned its engineers against directly using computer code generated by the chatbot. According to some sources, Bard can produce undesired code suggestions that require further review before use. Despite this limitation, Bard can still be a useful tool for programmers. Google has reaffirmed its commitment to transparency, emphasizing the need to be upfront with users about the technology’s limitations.
Google’s caution reflects a wider trend in corporate security practices. Many companies around the world, including industry giants such as Samsung, Amazon, and Deutsche Bank, have introduced safeguards governing the use of AI chatbots, aiming to reduce risk and protect corporate data. Apple is believed to have adopted a similar approach, though the company did not respond to a request for comment.
A recent survey by the professional networking site Fishbowl found that 43% of professionals, including C-level executives, use AI tools such as ChatGPT or similar platforms, and that a significant proportion do so without informing their employers. The survey, which polled more than 12,000 respondents, underscores the growing adoption of AI technologies across industries.