On Wednesday, Bluesky, a decentralized alternative to X (previously Twitter), updated its trust and safety policies.
The company is developing and piloting a range of initiatives designed to address harassment, spam, fake accounts, video safety, and bad actors.
Bluesky announced that it is building new tooling to detect when a single person creates and manages multiple new accounts, a pattern common among malicious users and harassers.
This could reduce harassment campaigns in which bad actors spin up numerous personas to target their victims.
Another new effort will help identify “rude” replies and surface them to server moderators.
Similar to Mastodon, Bluesky will support a network in which self-hosters and other developers can run their own servers that communicate with Bluesky’s server and with other members of the network.
This federation capability is currently in early access. In the future, however, server moderators will be able to decide how to handle users who post rude replies.
Meanwhile, Bluesky will gradually reduce the visibility of these replies in its own app. It also says that repeated rude labels on content will lead to account-level labels and suspensions.
To curb the use of lists to harass others, Bluesky will remove individual users from a list if they block the list’s creator.
Starter Packs, shareable lists that help new users find people to follow on the platform, recently gained similar functionality (see the TechCrunch Starter Pack).
Bluesky will also review lists with abusive names or descriptions to reduce the chance that people harass others by adding them to a public list with a toxic or abusive name or description.
The app will hide lists that violate Bluesky’s Community Guidelines until the list owner makes changes that comply with Bluesky’s policies.
The company did not share specifics, but it said lists remain an area of active discussion and development. Users who continue to create abusive lists will face further consequences.
In the months ahead, Bluesky will also shift moderation reports from email to in-app notifications.
To combat spam and other fake accounts, Bluesky is launching a pilot program that will try to automatically detect when an account is fake, scamming, or spamming users.
Paired with human moderation, the company says the goal is to be able to take action on accounts within “seconds of receiving a report.”
One of the more intriguing changes is how Bluesky plans to comply with local laws while preserving free speech. It will use geography-specific labels to hide a piece of content from users in a particular region in order to comply with that region’s laws.
The company said in a blog post that this allows Bluesky’s moderation service to stay flexible in creating a space for free expression while ensuring legal compliance, so that Bluesky can continue operating as a service in those geographies.
“This feature will be implemented country-by-country, and we will endeavor to provide users with information regarding the source of legal requests whenever it is permissible.”
To address potential trust and safety issues with its recently added video feature, the team is introducing the ability to turn off autoplay, mandatory labeling of videos, and the ability to report videos.
It is still determining what additional features may be needed and will prioritize them based on user feedback.
The company says its overall framework for addressing abuse weighs how often an issue occurs against how severe it is.
It prioritizes high-frequency, high-harm issues, as well as edge cases that could cause serious harm to a small number of users.
Bluesky says that even though the latter may affect only a small number of people, it causes enough harm to warrant action to prevent the abuse.
Users can raise concerns through reports, emails, and mentions of the @safety.bsky.app account.