Slack Criticized for Deceptive AI Training Policy

Slack users are upset with Salesforce’s AI push, raising concerns about big tech using personal and company data to train AI.

Like many companies, Slack uses customer data to train some of its new AI services. However, to stop the company from using your data, you must email Slack and request an opt-out.

The rules governing that arrangement were buried in what looks like an outdated, convoluted privacy policy that nobody was reading. That was Slack’s situation until a disgruntled user posted about the terms on a developer-focused community site, and the post went viral.

It all began last night, when a post on Hacker News questioned how Slack trains its AI services, pointing directly to its privacy principles. No further explanation was needed: the post kicked off a lengthy discussion that revealed something many existing Slack users had missed. Slack automatically enrolls them in its AI training, and the only way to opt out is to email a specific address.

From there, the Hacker News thread sparked further discussion and questions on other platforms: Why is “Slack AI,” a relatively new product with a generic name that lets users search for answers and summarize conversation threads, among other things, never mentioned by name on the privacy principles page, not even to indicate whether the policy applies to it? And why does Slack refer to both “AI models” and “global models”?

The combination is damaging Slack’s image: users are confused about how the company applies its AI privacy principles, and they are astonished and annoyed that opting out means emailing a company that makes a point of telling them, “You control your data.”

The terminology may not be new, but the shock certainly is. Pages on the Internet Archive show that the terms have been in place since at least September 2023. (The company has been contacted to confirm.)

As the privacy policy states, Slack uses user data to train “global models,” which power search results, channel recommendations, and emoji suggestions. Slack says there are limits on how it can use that data.

Slack has platform-level machine-learning models for search results and channel and emoji recommendations. A company representative told TechCrunch, “We do not build or train these models in a way that they could learn, memorize, or be able to reproduce some part of customer data.” What the policy leaves unclear is the broader scope of the company’s ambitions for training AI models.

Customers who opt out of data training will still benefit from Slack’s “globally trained AI/ML models,” the company says. Even so, it is unclear why the company is using user data to power features like emoji suggestions in the first place.

The company also stated that Slack AI is not trained on customer data.

Slack AI, an add-on purchased separately, uses large language models (LLMs), but those LLMs are not trained on customer data. Because Slack AI runs on LLMs hosted entirely within Slack’s own AWS infrastructure, customer data stays in-house and is never shared with any LLM provider. A spokesperson said this ensures that customer data remains under the organization’s control and is used exclusively by that organization.

Some of this may be cleared up sooner rather than later. Responding to a critical assessment posted on Threads by engineer and writer Gergely Orosz, Slack engineer Aaron Maurer said the page needs to be updated to reflect “how these privacy principles play with Slack AI.”

Maurer noted that the terms were written before Slack AI existed and reflect the company’s earlier work on search and recommendations. Given all the confusion about what Slack is doing with AI today, the terminology will be worth revisiting in any upcoming updates.

Slack’s troubles are a stark reminder that, in the fast-moving world of artificial intelligence, user privacy should come first, and that a company’s terms of service should spell out explicitly how and when data is used, and when it is not.

Ruth Okarter

Ruth is a seasoned news reporter and editor who brings her sharp eye and passion for storytelling to Protechbro.com. With a background in English and literary studies, Ruth crafts compelling narratives that unpack the complexities of the ever-evolving tech landscape.
