Alphabet has advised employees not to enter its confidential materials into AI chatbots, the people said and the company confirmed, citing long-standing policy on safeguarding information.
A Google privacy notice updated on June 1 states: "Don’t include confidential or sensitive information in your Bard conversations."
Some companies have developed software to address such concerns. For instance, Cloudflare, which defends websites against cyberattacks and offers other cloud services, is marketing a capability for businesses to tag and restrict some data from flowing externally.
Matthew Prince, CEO of Cloudflare, said that typing confidential matters into chatbots was like "turning a bunch of PhD students loose in all of your private records."
The chatbots, among them Bard and ChatGPT, are human-sounding programs that use so-called generative artificial intelligence to hold conversations with users and answer myriad prompts. Human reviewers may read the chats, and researchers found that similar AI could reproduce the data it absorbed during training, creating a leak risk.
Alphabet also alerted its engineers to avoid direct use of computer code that chatbots can generate, some of the people said.
Asked for comment, the company said Bard can make undesired code suggestions, but it helps programmers nonetheless. Google also said it aimed to be transparent about the limitations of its technology.
A growing number of businesses around the world have set up guardrails on AI chatbots, among them Samsung, Amazon.com, and Deutsche Bank, the companies told Reuters. Apple, which did not return requests for comment, reportedly has as well.
Some 43% of professionals were using ChatGPT or other AI tools as of January, often without telling their bosses, according to a survey of nearly 12,000 respondents, including employees of top U.S.-based companies, conducted by the networking site Fishbowl.
It "makes sense" that companies would not want their staff to use public chatbots for work, said Yusuf Mehdi, Microsoft's consumer chief marketing officer.
"Companies are taking a duly conservative standpoint," said Mehdi, explaining how Microsoft's free Bing chatbot compares with its enterprise software. "There, our policies are much more strict."
Microsoft declined to comment on whether it has a blanket ban on staff entering confidential information into public AI programs, including its own, though a different executive there told Reuters he personally restricted his use.