When the Algorithm Decides for You

You open Facebook “just for a moment” and, before you know it, half an hour has passed. On Instagram, the videos seem to read your mind. On X, everyone seems to be outraged. It's no coincidence. Behind it all is the algorithm, a sort of automatic filter that decides what you see, what you don’t see, and in what order you see it.

An algorithm, explained without technical jargon, is like a doctor’s prescription or a set of instructions: “if this happens, do that.” On social media, these instructions say things like: “if a video gets many comments very quickly, show it to more people,” or “if this person watches cat videos until the end, show them more cats.” It’s as if you had a personal librarian who keeps putting in front of you the books they think will make you come back to the library every day.
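To make that idea concrete, here is a minimal sketch in Python of what such “if this happens, do that” rules could look like. Everything in it is invented for illustration (the signal names, the threshold of 100 comments, the topics); no platform publishes its real rules.

```python
# Purely illustrative: the kind of "if this, then that" rules a feed might
# apply. Signal names and thresholds are invented for this example.

def should_boost(post, viewer):
    """Decide whether a post should be shown to more people."""
    # "If a video gets many comments very quickly, show it to more people."
    if post["comments_in_first_hour"] > 100:
        return True
    # "If this person watches cat videos until the end, show them more cats."
    if "cats" in viewer["watched_to_the_end"] and "cats" in post["topics"]:
        return True
    return False

example_post = {"comments_in_first_hour": 250, "topics": ["cats", "humor"]}
example_viewer = {"watched_to_the_end": ["cats", "cooking"]}
print(should_boost(example_post, example_viewer))  # prints: True
```

Real systems replace these hand-written rules with statistical models trained on millions of interactions, but the underlying logic is the same: signals in, a decision about visibility out.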

The big platforms live off your attention because they sell advertising based on it. That’s why their algorithms observe everything you do: how long you stay on a video, which posts you comment on, who you follow, what you scroll past in a second, and what you share with your friends. With that data, they build a fairly detailed profile of your interests, fears, and anxieties. From there, they try to predict what content will keep you hooked, and with that logic, your feed becomes something entirely personalized, where the main criterion is not “the most important” but “what keeps you inside the platform.”
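As a hedged illustration of that logic, the sketch below scores each candidate post by predicted engagement signals (watch time, comments, shares) and sorts the feed by that score rather than by importance. The signals, weights, and profile fields are assumptions made up for this example, not any platform’s actual formula.

```python
# Illustrative only: rank a feed by predicted engagement rather than by
# importance. Signal names and weights are invented for this example.

def engagement_score(post, profile):
    """Higher score = more likely to keep this user on the platform."""
    score = 0.0
    score += 2.0 * post["expected_watch_seconds"]  # time spent on the post
    score += 5.0 * post["expected_comments"]       # comments weigh heavily
    score += 3.0 * post["expected_shares"]         # shares spread the content
    # Boost topics the user's profile marks as engaging (interests, fears, etc.)
    if post["topic"] in profile["engaging_topics"]:
        score *= 1.5
    return score

def build_feed(candidate_posts, profile):
    """Order posts by what keeps the user inside, not by what matters most."""
    return sorted(candidate_posts,
                  key=lambda p: engagement_score(p, profile),
                  reverse=True)

profile = {"engaging_topics": {"outrage_politics", "cats"}}
posts = [
    {"topic": "local_news", "expected_watch_seconds": 8,
     "expected_comments": 1, "expected_shares": 0},
    {"topic": "outrage_politics", "expected_watch_seconds": 20,
     "expected_comments": 12, "expected_shares": 6},
]
for p in build_feed(posts, profile):
    print(p["topic"])  # outrage_politics first, local_news second
```

Note what the sorting criterion is: nothing in the score measures accuracy or public relevance, only the probability that you keep scrolling.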

This is where the concept of algorithmic bias comes into play: instead of functioning neutrally, these systems produce systematically unfair or distorted outcomes.

A particularly clear example is content about Palestine. Human rights organizations have documented that, in times of high tension, posts supporting the rights of the Palestinian people have been deleted, hidden, or disproportionately penalized on Facebook and Instagram, even when they did not incite hate or violence. Academic research on the protests in Sheikh Jarrah in 2021, for example, describes how activists perceived an “algorithmic censorship”: livestreams cut off, accounts limited, posts that stopped circulating, and enormous difficulty in appealing those automated decisions.

A study conducted in Cuba in 2019 showed that content about the U.S. embargo posted on Instagram from our country received 50% fewer reactions than the same message posted on the same account from another country.

Algorithmic bias affects us all because it amplifies extreme positions and emotional reactions. Messages that outrage, frighten, or anger tend to generate more comments and shares, so algorithms tend to promote them, which can give the impression that the world is permanently on the brink of collapse. At the same time, misinformation benefits from this dynamic: if a piece of false news touches emotional chords and spreads quickly, the system doesn't have time to stop it before it reaches millions of people.

There are also less visible but equally important effects: we form a distorted idea of what the "majority" thinks because we confuse our bubble with the whole of society; we make personal decisions based on incomplete or biased information; and certain groups are systematically underrepresented or misrepresented, which affects their ability to make themselves heard.

Understanding algorithmic bias does not mean falling into paranoia or abandoning social networks, but rather looking at them more consciously. Being able to take a step back, to ask "What am I not seeing?" and to seek other windows to the world is, today, a way to protect your autonomy and critical perspective amid the digital noise.

(Taken from Granma) 
