In today’s digital world, social media platforms play a major role in shaping how people receive, interpret, and share information. Large platforms such as Meta (Facebook and Instagram) and TikTok carry a major responsibility to curb the spread of misinformation, which directly shapes the public’s understanding of news and other important issues. This blog will examine the current policies Meta and TikTok use and evaluate how effective each approach is at addressing misinformation.

Meta (Facebook and Instagram):

Over time, Meta has developed a multi-layered approach to misinformation that combines technology, human review, and outside partnerships. Previously, Meta’s most important tool was its third-party fact-checking program, which allowed independent organizations to review content, decide what was misleading, and have it removed from Meta’s apps. In 2025, however, Meta announced a major change: it would largely stop using the third-party fact-checking system in the United States. Meta would instead put more responsibility on automated systems, explaining that the shift was mostly driven by concerns about bias. Meta believed that leaving more of the work to technology would address those concerns and leave the company less restricted in deciding what counts as misleading content.

Meta also has a newer tool, a Community Notes-style system, which lets users add corrections to posts they believe are misleading. This removes the need for a single authority to decide what is true or false and opens the process to a wider range of contributors with different viewpoints, so no single bias dominates. Meta’s approach here encourages collective judgment rather than centralized control. I think this system balances free expression with accuracy, which seems like it would solve most of the obvious problems.

Even with these new systems, though, there are still real concerns. A major issue is that misinformation can spread dramatically before it is even flagged; even if the content is eventually labeled as misleading, that does not solve the need for an immediate response. Another concern is that removing professional fact-checkers may hurt consistency in identifying false claims. According to the DISA article “Implications of Meta’s Shift to Community-Based Content Moderation for Misinformation,” community-based moderation can sometimes be influenced by group bias or coordinated efforts, which may affect the reliability of corrections.

Honestly, from my perspective as a user of both apps, misinformation still tends to spread very quickly before warning labels even appear. The labels are helpful, but they often come just a little too late.

I think a possible improvement would be to reintroduce stronger fact-checking partnerships while still allowing community input, creating a hybrid system that combines user participation with expert verification. Additionally, Meta could be more transparent by explaining how community notes work and offering more insight into what happens after content is flagged as misleading.

TikTok:

TikTok uses a somewhat different strategy that relies heavily on its algorithm-driven structure. Instead of focusing mainly on user input, TikTok prioritizes early detection. According to TikTok’s Community Guidelines page, the platform removes or limits content containing false information almost immediately, with a special focus on public safety and health.

A crucial part of TikTok’s system is its ability to limit the spread of misinformation before it goes viral. Because the platform combines automated detection tools with user reports, it can identify potentially harmful content fairly quickly. This matters most during high-risk events such as elections or natural disasters, when TikTok increases its monitoring efforts to catch misinformation earlier in its spread.

In 2025, TikTok introduced a community-based system similar to Meta’s, called Footnotes, which allowed eligible users to add context to videos they believed were misleading. Contributors had to meet certain requirements, including a minimum account age and a record of guideline compliance. A correction only became visible if users with differing opinions agreed that it was helpful. Again, this type of system is designed to reduce bias and bring in more viewpoints.
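The “visible only if users with different opinions agree” rule is a kind of bridging algorithm. The sketch below is a heavily simplified, hypothetical illustration of that idea, not TikTok’s (or Meta’s) actual implementation: the function names, the viewpoint clusters, and the thresholds are all my own assumptions.

```python
# Hypothetical sketch of a "bridging" visibility rule for a community note.
# Everything here (names, clusters, thresholds) is illustrative, not TikTok's
# real Footnotes code.
from dataclasses import dataclass

@dataclass
class Rating:
    helpful: bool
    viewpoint_cluster: str  # e.g., inferred from a rater's past rating behavior

def footnote_is_visible(ratings, min_ratings=5, min_helpful_share=0.6):
    """Show a footnote only if raters from *different* viewpoint clusters
    each mostly found it helpful (a crude bridging rule)."""
    if len(ratings) < min_ratings:
        return False
    clusters = {r.viewpoint_cluster for r in ratings}
    if len(clusters) < 2:  # agreement within a single group is not enough
        return False
    # Require a helpful majority inside every represented cluster.
    for c in clusters:
        in_cluster = [r for r in ratings if r.viewpoint_cluster == c]
        share = sum(r.helpful for r in in_cluster) / len(in_cluster)
        if share < min_helpful_share:
            return False
    return True

ratings = [
    Rating(True, "A"), Rating(True, "A"), Rating(True, "B"),
    Rating(True, "B"), Rating(False, "B"), Rating(True, "B"),
]
print(footnote_is_visible(ratings))  # True: both clusters mostly agreed
```

The key design point is the cross-cluster requirement: a note that only one “side” likes never surfaces, which is what is supposed to reduce single-group bias.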

TikTok also works with fact-checkers and external experts to review trending content, so circulating information is continually checked. According to TikTok Newsroom, the company has published reports showing how much content it removes for violating its misinformation policies, which shows that enforcement is still active and ongoing.

I think TikTok currently has the best system for quickly limiting the spread of misinformation; however, false content can still slip through even with fast-acting systems. The main downside is that TikTok is a video-based app, so one short, engaging, or emotionally charged clip can still be shared widely before it is deleted. From real-world observation, TikTok’s system often feels faster at catching misinformation, but some spread still occurs.

One suggestion that I believe would improve TikTok is expanding educational tools that help users recognize misinformation on their own. Instead of relying mainly on removal and algorithmic control (which I do think is helpful), TikTok could add short prompts or learning features that explain why certain content may be misleading and how to avoid spreading it further.

Final Thoughts:

Overall, Meta and TikTok take somewhat different approaches to the same problem. Meta relies more on a combination of user input and algorithmic moderation, while TikTok focuses on fast detection and quick removal. Both platforms have made progress in addressing misinformation, but neither system is perfect, and I believe a combination of the two approaches would work even better. I like the idea of community notes because they give everyone a voice, but we still need experts detecting misleading content; at the end of the day, users should have a voice while professionals do their job. It is very hard to completely eliminate misinformation, since one click can put anything out there, but I do believe these platforms are finding quicker and more effective ways to stop it from spreading widely.
