NEW YORK: Mark Zuckerberg touts Meta’s Twitter-mimicking app Threads as a ‘friendly’ haven for public discussion online, arguing that it stands in clear contrast to the more hostile Twitter owned by billionaire Elon Musk.

“We are definitely focusing on kindness and making this a friendly place,” Meta CEO Mark Zuckerberg said Wednesday, shortly after the service launched.

Maintaining the ideal vision for Threads, which attracted over 70 million users in its first two days, is another story.

Certainly, Meta Platforms is no novice at managing the internet’s hordes of rage-baiting, obscenity-posting users. The company has said it will hold users of its new Threads app to the same rules it maintains on Instagram, its photo- and video-sharing social media service.

The Facebook and Instagram owner has also embraced an algorithmic approach to serving up content, which gives it greater control over the type of fare that does well as it steers away from news and toward entertainment.

But Threads’ integration with other social media services such as Mastodon, and the appeal of microblogging to news junkies, politicians and other fans of debate, pose fresh challenges for Meta, which is trying to chart a new path through them.

For starters, the company has no plans to extend its existing fact-checking program to Threads, spokeswoman Christine Pai said in an emailed statement Thursday, eliminating a distinguishing feature of how Meta has managed misinformation in its other apps.

Pai added that posts on Facebook or Instagram rated as false by fact-checking partners (which include a unit of Reuters) will carry their labels over if posted on Threads as well.

Meta declined to answer when asked by Reuters why it was taking a different approach to misinformation on Threads.

In a New York Times podcast on Thursday, Instagram head Adam Mosseri acknowledged that Threads “facilitates public discussion” more than Meta’s other services and is therefore more likely to attract a news-focused crowd, but said the company would aim to focus on lighter subjects such as sports, music, fashion and design.

Nonetheless, Meta’s ability to distance itself from controversy was quickly called into question.

Within hours of launch, Threads accounts reviewed by Reuters were posting about the Illuminati and “billionaire satanists,” while other users compared each other to Nazis and fought over everything from gender identity to violence in the West Bank.

Conservative personalities, including the son of former President Donald Trump, complained of censorship after labels appeared warning would-be followers that they had posted false information. Another Meta spokesperson said those labels were an error.

Into the Fediverse

Further content moderation challenges await once Meta links Threads to the so-called fediverse, where users of servers operated by other, non-Meta entities will be able to communicate with Threads users. Meta’s Pai said Instagram’s rules would apply to those users as well.

“If an account or server, or many accounts from a particular server, are found violating our rules, they would be blocked from accessing Threads, meaning that server’s content would no longer appear on Threads, and vice versa,” she said.

Still, researchers who specialize in online media said the devil would be in the details of how Meta approaches those interactions.

Alex Stamos, director of the Stanford Internet Observatory and a former head of security at Meta, posted on Threads that the company would face bigger challenges in performing key types of content moderation enforcement without access to back-end data about users who post banned content.

“With federation, the metadata that major platforms use to tie accounts to a single attacker or detect large-scale fraud is not available,” Stamos said. “This will make it even more difficult to stop spammers, troll farms and economically motivated abusers.”

In his posts, he said he hoped Threads would limit the visibility of fediverse servers hosting large numbers of abusive accounts and impose tougher penalties on those posting illegal content such as child pornography.

Even so, the interactions themselves raise challenges.

“Things get really weird and complicated when you start thinking about content that is illegal,” said Solomon Messing of the Center for Social Media and Politics at New York University. He cited the examples of child exploitation, nonconsensual sexual imagery and arms sales.

“If you come across such material while indexing content (from other servers), do you have a responsibility beyond just blocking it from Threads?” he asked.
