The idea of freedom of expression on the internet was born with a certain romanticism. During the 1990s and early 2000s, there was an almost utopian belief that online networks would become a new kind of global public square, open and horizontal, where anyone could speak and be heard. That vision has not disappeared, but it has become far more complex as private platforms began to control this space. Today, discussing freedom of expression in online communities is, in practice, a discussion about power. Who decides what can be said? Based on what criteria? And with what consequences?

According to a UNESCO report, the internet has drastically expanded individuals’ ability to express themselves while also introducing new mechanisms of control, ranging from state censorship to private moderation carried out by technology companies. This tension between freedom and control sits at the center of nearly every major case.

One of the most emblematic episodes took place in the United States after the events of January 2021 at the Capitol. Platforms like Twitter and Facebook permanently banned then-president Donald Trump. For some, it was a necessary measure to contain incitement to violence. For others, it set a dangerous precedent: private companies gaining the power to silence elected political leaders. This case made explicit something that had previously only been implied: the rules of global public discourse are, to a large extent, in the hands of corporations.

That power is not neutral. Research shows that content moderation is guided by internal policies that are often not transparent, raising questions about criteria and bias. It is not just a matter of removing illegal content, but of interpreting context, intent, and impact, which is deeply subjective.

But the problem is not only moderation. In some cases the opposite, its absence, has led to equally troubling scenarios.
The social network Gab emerged as a refuge for users banned from mainstream platforms, promising near-absolute freedom. The result was a highly polarized environment with a strong presence of extremist discourse and limited diversity of viewpoints. Studies suggest that such spaces tend to become echo chambers, where people mainly consume ideas that reinforce their existing beliefs. This kind of environment not only limits debate but can amplify radicalization. Another relevant study found that bots on social networks can intensify conflicts by spreading emotionally charged, inflammatory content, increasing exposure to negative narratives. In other words, freedom without any mediation can, paradoxically, reduce the quality of discourse itself.

On the other hand, there are cases where online freedom of expression has had deeply positive impacts. Contemporary social movements would hardly exist in the same way without these platforms. The Arab Spring, for instance, used social media as a tool for mobilization and exposure. More recently, movements like #MeToo have shown how online communities can give voice to experiences that had been silenced for decades, creating real pressure for institutional change.

These examples reveal a less discussed aspect: digital freedom of expression is not only about the right to speak, but about who actually gets to be heard. In contexts where traditional media or power structures exclude certain groups, digital platforms act as amplifiers for historically marginalized voices.

But even in these positive cases, dilemmas emerge. The same mechanism that allows the exposure of abuses can also be used to spread misinformation at scale. The spread of fake news has become one of the greatest contemporary challenges precisely because it exploits the open logic of networks.

There is also a more subtle, but perhaps more important, element: algorithms.
They do not censor directly, but they decide what appears and what disappears in the flow of information, profoundly shaping people’s perception of reality. As academic studies point out, these systems tend to favor content that generates more engagement, which often means polarization and extreme discourse.

At its core, the debate about freedom of expression in online communities has moved beyond a purely legal question. It has become structural. It is not just about the right to speak, but about platform architecture, economic incentives, and social dynamics. Perhaps the most uncomfortable question is not whether we should have freedom of expression online, but who actually controls its limits, and to what extent we are aware of that while using these platforms every day.

---

If in the first part the central question seemed to be “who can speak,” the natural continuation of this investigation is to understand “who gets to keep speaking,” and under what conditions. Because in practice, freedom of expression in online communities does not end with posting something. It continues in how that content circulates, how it is received, amplified, or buried.

A case that helps clarify this dynamic is what happened on Reddit with the r/The_Donald community. For years it functioned as a highly active hub for political mobilization. Supporters argued it was a legitimate exercise of political expression. Critics pointed to repeated violations of platform rules, including harassment and the spread of misinformation. In 2020, Reddit banned the community, citing its inability to maintain basic standards of interaction.

What is interesting here is not just the ban itself, but the domino effect. Some users migrated to alternative platforms, many with more permissive policies. The result was a fragmentation of public debate. Instead of a central space where opposing ideas confront each other, multiple isolated environments emerged.
This raises a delicate issue: in trying to control toxic speech, platforms may end up pushing certain groups into even less moderated spaces, where radicalization can grow without counterbalance.

This phenomenon also appears in studies of political misinformation. Research from the Massachusetts Institute of Technology found that false news spreads faster than true news on social media, largely because it is more novel and emotionally engaging. This creates a perverse structural incentive: it is not just about the right to speak, but about what type of content the system itself rewards.

And here comes a point rarely discussed outside technical circles: moderation is not only human, it is algorithmic. Platforms like YouTube and TikTok rely heavily on automated systems to detect and limit problematic content. The issue is that these systems operate on statistical patterns, not true contextual understanding. This leads to frequent errors, from the wrongful removal of journalistic content to the persistence of harmful material that slips through unnoticed.

A striking example emerged during the war in Syria, when videos documenting human rights violations were automatically removed by content detection systems. Organizations such as Human Rights Watch warned that this was effectively erasing crucial evidence that could be used in future investigations. The attempt to “clean” the platform ended up interfering with the historical record of real events.

At the same time, there are stories where online freedom of expression has quite literally saved lives. During the COVID-19 pandemic, digital communities became essential sources of information and support. Healthcare professionals shared real-time experiences, exposed shortages, and pressured governments. In many countries, these voices only gained visibility through social media, bypassing slower or more controlled institutional channels.

But again, the picture is not purely positive.
The same freedom allowed a surge of conspiracy theories and misinformation about vaccines. Platforms had to act quickly, removing content and promoting official sources like the World Health Organization. Once more the tension reappears: at what point does limiting content become collective protection, and when does it cross into censorship?

There is also a less visible but highly relevant layer: the psychological impact of unrestricted expression. Research shows that online environments with weak moderation tend to drive away ordinary users, especially women and minorities, who are more frequently targets of harassment. This means that allowing “everything” can actually reduce the diversity of active voices.

In the end, freedom of expression in online communities is not just being exercised, it is constantly being negotiated. Every moderation policy, every algorithm tweak, every decision to ban or amplify content redefines what it means to “be able to speak” in that space. And perhaps the most uncomfortable realization is that this negotiation happens largely out of public view, through internal company decisions, lines of code, and automated systems that very few people truly understand.

If the internet is still a global public square, it is no longer as spontaneous as it seems, and perhaps it never really was. The question that remains is not only about the right to express oneself, but about who is quietly shaping the reach and impact of that expression, and whether we are truly participating in a free space or simply navigating a carefully regulated environment without fully realizing it.

