
Slack Criticized for Deceptive AI Training Policy

Slack users are up in arms over Salesforce’s AI push, raising concerns about big tech using personal and company data to train AI.

Like many companies, Slack uses customer data to train some of its new AI services. But if you don’t want Slack to use your data, you have to email the company to opt out.

And the terms of that arrangement were tucked away in what looks like an outdated, convoluted privacy policy that nobody was reading. That was Slack’s situation until an annoyed user posted about it on a developer-focused community site, and the post went viral.

It all began last night, when a note on Hacker News raised the question of how Slack trains its AI services, with a straightforward link to its privacy principles; no further explanation was needed. That post kicked off a lengthy conversation that surfaced something apparently new to current Slack users: Slack opts people in to its AI training by default, and the only way to opt out is to email a specific address.

The Hacker News thread then sparked further discussions and questions on other platforms: Why does the privacy principles page never mention “Slack AI” by name, a relatively new, generically named product that lets users search for answers and summarize conversation threads, among other things, not even to say whether the policy applies to it? And what does Slack mean by “AI models” and “global models”?

The confusion over where and how Slack applies its AI privacy principles, combined with users’ surprise and annoyance at having to email a company that makes a big deal of saying “You control your data” just to opt out, does not reflect well on Slack.

The terms may not be new, but the shock certainly is. Pages on the Internet Archive show that the terms have been in effect since September 2023. (The company has been asked to confirm.)

As the privacy policy states, Slack uses customer data to train “global models,” which power search results, channel recommendations, and emoji recommendations. Slack says there are limits on how it can use that data.

Slack has platform-level machine-learning models for search results and channel and emoji recommendations. “We do not build or train these models in a way that they could learn, memorize, or be able to reproduce some part of customer data,” a company representative told TechCrunch. However, the policy does not appear to address the company’s broader goals for training AI models, or how far those extend.

According to the company, customers who opt out of data training will still benefit from Slack’s “globally trained AI/ML models.” But even in that scenario, it remains unclear why the company is using customer data to power features like emoji recommendations in the first place.

The company also said it does not use customer data to train Slack AI.

Slack AI, an add-on purchased separately, uses large language models (LLMs), but those LLMs are not trained on customer data. Because Slack AI relies on LLMs hosted entirely within Slack’s own AWS infrastructure, customer data stays in-house and is not shared with any LLM provider. According to a spokesperson, this ensures that customer data remains in that organization’s control and is used exclusively by that organization.

Some of this may be cleared up sooner rather than later. Replying on Threads to a critical assessment from writer and engineer Gergely Orosz, Slack engineer Aaron Maurer said the page needs to be updated to reflect “how these privacy principles play with Slack AI.”

Maurer noted that the terms were written before Slack AI existed and reflect the company’s work on search and recommendations. Given all the confusion around what Slack currently does with AI, the wording will be worth revisiting in any upcoming updates.

The problems at Slack are a stark reminder that, in the fast-moving world of AI development, user privacy should come first and a company’s terms of service should spell out clearly how and when data is used, and when it is not.
