CrossOver podcast – episode 1 “The keyboard fighters”
They claimed on social media that they would storm the European Parliament in Brussels. In reality, like good “slacktivists”, they only fought on the Internet.
At the crossroads of algorithmic investigation and field reporting, and through the story of the Freedom Convoy, our guests tell us about the considerable gap between the virtual and the real worlds…
For this first crossing of algorithms, we are pleased to talk to Hind Fraihi, a reporter at Apache (Belgium), and Guillaume Kuster, who works on algorithms as CEO of Check First (Finland). With them and Divina Frau-Meigs from the association Savoir Devenir (France), you will enter the incredible world of information production in the hands of artificial intelligence. By mixing data journalism with field investigation, and using the monitoring tool for popular search engines and social media developed within the European Crossover project, our speakers will try to unveil the mystery around the recommendation algorithms that shape media agendas and our vision of the news.
How do they work?
What are their objectives?
How can they be controlled? That’s what this podcast is all about.
With Divina Frau-Meigs, Sébastien Gaillard, Hind Fraihi and Guillaume Kuster.
Sebastien: You are listening to the podcast of the Crossover project. Crossover is a European project that explores the amazing behind-the-scenes algorithms that make the world go round. It’s a citizen information project. In this episode, “Keyboard Fighters,” we’re going to talk about sorting algorithms, recommendation algorithms, and the Freedom Convoy. Do you remember, in early 2022, the Freedom Convoy, with those trucks that blocked the roads in Canada to protest against Covid measures?
Sebastien: According to the social networks, we in Belgium were going to do the same. It was supposed to be a major event: the blocking of the European Parliament, action, chaos. But pschht! Nothing. All bark and no bite. The vehicles were rare and mostly French. The Belgians stayed behind their screens. So, what happened? What if it was another trick of the algorithms? Let’s start by joining Divina Frau-Meigs for the investigation.
Divina: Hi, I’m Divina Frau-Meigs and I’m the president of Savoir Devenir, an association specialized in Media and Information Literacy. With us, two of our partners of the Crossover project, Hind Fraihi and Guillaume Kuster. Hello, Hind.
Hind: Hi Divina, I’m Hind. I work for Apache, an investigative media outlet in Flanders, and I am the coordinator of Crossover for Apache.
Divina: And you led the investigation on the Freedom Convoy that you’re going to tell us about.
Guillaume, what about you, on the side of algorithms?
Guillaume: Hi Divina, I’m Guillaume Kuster, I’m the president of Check First which is a methodology and software company. We have built monitoring tools for large platforms like YouTube and Facebook for this Crossover project.
The Dashboard is the public part of the monitoring tools we build around content recommendation algorithms. What you need to understand is that Crossover is a project that monitors the major platforms.
Divina: Together, we’re going to try to understand the huge mismatch that can occur between what social networks make us believe and the reality. How did the rumor around the Freedom Convoy come about? Why were hundreds of police officers and dozens of journalists mobilized in anticipation of violence?
Hind: There were actually more journalists and police present than there were protesters.
Mostly cars with French and Dutch license plates were present. But I have to say that, online, there was a lot more movement than what we actually saw.
Divina: On the ground and not on social media. How did it go? What was the atmosphere like? Was it organized? Was it crazy?
Hind: The online mobilization was great, very lively. But the real-life demonstration was a flop. There weren’t many people, and the people who were there, the protesters, you could say they are nomadic protesters who travel from protest to protest, and they have a very conspiratorial mindset, I must say. They distrust the government, the media, and schools too. I also saw demonstrators who travel with their little children from demonstration to demonstration, to protest against Covid measures in schools as well.
Divina: And then online there was this rumor that there was going to be big violence, kind of like what happened in Canada. Did you find that on the ground?
Hind: No, we didn’t find this fighting spirit on the ground. On the ground, it was more a spirit of solidarity, a spirit, as I said, of distrust, but also a passive spirit, because there were not many participants.
Divina: So these online calls for mobilization are not always a success; they don’t always work in real life, it seems.
Hind: They are fighters… in Flemish we say “klavier vechter”, fighting from the computer keyboard. Most of them are very passive, so they belong rather to the “slacktivism” movement: it’s just click, click, be supportive and very combative online. But real fighters offline? Not at all. So we see a disconnection between online and offline mobilization.
Divina: That was the first survey. And then, if I understood correctly, you searched by hashtags, such as #Convoidelaliberte. Were there other hashtags?
Hind: Yes, there were other hashtags. #nietmijnregering, which means “not my government”, or #anticor (antibodies).
Divina: And so you had this idea to investigate the convoy? When this massive mobilization was announced that we were all waiting for, you used a tool called the Dashboard.
Guillaume: That’s the originality of the project: we have computers that simulate users, located in people’s homes, unlike most research projects, where the machines sit in data centers or places that could be recognized by Google and so on. Here, we’re in people’s homes, simulating real people. We do it several times a day: once an hour for Twitter, for example, or twice a day for YouTube.
We search for about thirty predefined terms, we mentioned some of them with Hind earlier, and at regular intervals we record all this data and see in what order the results are presented and what they are. You know, the part that is automatically filled in by Google is not always the same depending on who you are, your browsing history, your geographic location, the type of computer or phone you’re using. Our role at Crossover is first to collect all of that, and then to allow people like Hind or Disinfo Lab to analyze the results and draw conclusions.
We try to see whether the results differ depending on whether you live in Namur or in Brussels. I remind you that Crossover takes place in Belgium, which is a very interesting country: it is relatively small, but several languages are spoken there, and there are strong foreign influences from the Netherlands and from France. We observe all that and see whether we spot movements suggesting that in Antwerp you get a different type of result than in Brussels or in Hainaut, for example, which is in Wallonia, French-speaking Belgium.
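The methodology Guillaume describes, timed “snapshots” of ranked results taken from different locations, then compared, can be sketched in a few lines. This is purely illustrative and not the Crossover codebase; the function names, locations, and result identifiers are all hypothetical.

```python
import json
from datetime import datetime, timezone

# Hypothetical sketch of the idea described above: record which results a
# simulated user saw for a given term, in order, and compare rankings
# between two locations. Not the real Crossover collector.

def make_snapshot(location: str, term: str, ranked_results: list[str]) -> dict:
    """Record one timed observation of ranked results for a search term."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "location": location,       # e.g. "Antwerp", "Brussels", "Hainaut"
        "term": term,               # one of the ~30 predefined search terms
        "results": ranked_results,  # result identifiers, best-ranked first
    }

def rank_differences(snap_a: dict, snap_b: dict) -> dict[str, int]:
    """For results present in both snapshots, how many positions apart are they?"""
    pos_a = {r: i for i, r in enumerate(snap_a["results"])}
    pos_b = {r: i for i, r in enumerate(snap_b["results"])}
    return {r: abs(pos_a[r] - pos_b[r]) for r in pos_a if r in pos_b}

# Two simulated users in different cities see the same videos in a different order.
antwerp = make_snapshot("Antwerp", "vaccine", ["video_a", "video_b", "video_c"])
brussels = make_snapshot("Brussels", "vaccine", ["video_c", "video_a", "video_b"])
print(json.dumps(rank_differences(antwerp, brussels)))
```

Accumulating such snapshots several times a day, per platform and per location, is what lets analysts later ask whether Antwerp and Hainaut were served different realities.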
Divina: Yes, Guillaume, the Dashboard has the particularity of covering several social media, search engines, etc. How many at the moment?
Guillaume: On the dashboard we have YouTube, Twitter, Facebook, Reddit, which is used in Belgium, and Odysee, a kind of YouTube without any filter, on which we find a lot of conspiracy theories, conspiracy theorists, flat-earth enthusiasts, etc.
And on the side of Google, we do two things. First, we follow what we call auto-completion: when you type a search term like “the coronavirus is…” and let Google finish the sentence, you get search suggestions, and we collect those suggestions. Second, we follow the order in which articles are presented in Google News (Google Actualités): when we search for “vaccine”, which articles are shown to us and in which order.
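Collecting those completions can be sketched as follows. The suggest endpoint URL below is the widely known unofficial one, and the sample payload is hardcoded for illustration; neither is confirmed to be what Crossover actually uses.

```python
import json
from urllib.parse import urlencode

# Illustrative sketch only. Google's (unofficial) suggest endpoint returns
# JSON shaped like ["query", ["suggestion 1", "suggestion 2", ...]].
# Endpoint and payload here are assumptions, not the Crossover collector.

def suggest_url(prefix: str, lang: str = "fr") -> str:
    """Build a query URL for the (unofficial) autocomplete endpoint."""
    params = urlencode({"client": "firefox", "hl": lang, "q": prefix})
    return f"https://suggestqueries.google.com/complete/search?{params}"

def parse_suggestions(raw_json: str) -> list[str]:
    """Extract the ordered list of completions from a suggest-style response."""
    payload = json.loads(raw_json)
    return payload[1]

# A hardcoded sample response (not fetched live) with that documented shape:
sample = '["le coronavirus est", ["le coronavirus est-il fini", "le coronavirus est grave"]]'
print(parse_suggestions(sample))
```

Storing the ordered suggestion list at each collection time is what makes it possible to say, later, that a completion appeared, climbed, or vanished.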
Divina: Hind, so you decided to write from these hypotheses about the convoy, the invasion of the capital, etc. You wrote that the movement was quite paranoid, quite badly organized and that it seemed to emanate from abroad. What made you say that?
Hind: We drew conclusions on the ground from what we saw, which were then supported by the Crossover dashboard data. And one of the first conclusions is that the Belgian freedom fight is an imported product.
The Crossover survey shows that the convoy was more alive in French-speaking Belgium than in Dutch-speaking Belgium. And according to our analysis, the content shared prior to the Brussels demonstration was fed by French media. For example, a highly recommended channel on YouTube was Russia Today France (RT France), controlled by Russia, followed by France 24.
And among the recommended videos, content related to anti-vaccine opposition stood out most. The conclusion was similar for the Dutch-speaking part, for the Flemish: the content recommended on other platforms also showed foreign influences, especially from the Netherlands.
Divina: When you did your survey and followed your assumptions, you didn’t go to all the social media, all the groups? Which ones did you focus on?
Hind: We followed a lot of platforms, but we focused our investigation on Twitter, YouTube and Facebook groups. We saw a lot of foreign influences for the French speakers, but also for the Flemish, especially from the Netherlands, with recommendations from the newspaper De Telegraaf, for example, but also from vloggers who are skeptical about vaccines.
And I have to add that our conclusion was also confirmed by the Brussels police who registered 130 vehicles coming from France and the Netherlands. So not really a big participation of Belgians.
Divina: Very good and so, clearly, the algorithms are at the heart of the matter. What’s going on behind the scenes of these social networks and algorithms? Have you guys been digging under the hood?
Guillaume: Looking under the hood is clearly not possible. Knowing how an algorithm works, why one piece of content is highlighted over another: that is covered by industry secrecy. These secrets are well kept by the platforms, whatever they are, whether YouTube, Facebook or others. What you have to understand about their recommendation algorithms is that their role is to keep users on the site as long as possible.
In the case of YouTube, for example, or Facebook, the programmers will tell a machine, an artificial intelligence: we want people to see as many ads as possible, and to do that, they have to see as many videos, as much content, as possible. So we have to make them stay by showing them things that interest them.
How is this done? There is so much content available on YouTube or Facebook that it can’t be done manually. An artificial intelligence profiles users and, according to what it understands of each user’s behavior, shows them certain content rather than other content. This means that today, even within these companies, such as Meta or YouTube, no one really knows how to describe precisely why a given video has been shown to a given user.
They can describe the main trends: “yes, we show engaging content, we favor content from channels that users have already visited, etc.”. But there is a lot of mystery, and there are also documents that have leaked from these companies (internal documents that should not have come out, especially from Facebook), in which we see engineers sounding the alarm to management and saying: “if we have to answer to the European Union on these questions of algorithms, we won’t really know where the data is or how we arrive at these conclusions, so we’re a little bit embarrassed to answer.” It’s a big mystery.
But to come back very quickly to the intentionality of spreading misinformation, I don’t know if we can say that there is intentionality on the part of the platforms. As we were saying earlier, artificial intelligences are at the origin of content recommendations, and they do what they are asked to do. They detect that there is an interest in catchy or conspiratorial content, and so on. And if we see that Jean-Michel in Namur likes to watch videos from CGTN, because it’s a bit different from what he’s used to seeing, the artificial intelligence is going to serve him a bit more of it. So the whole question is how to make sure that this problematic content is presented less, rather than amplified, by the algorithms. That’s why we’re trying to open a dialogue with platforms like YouTube and Meta.
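The feedback loop Guillaume describes, where a system tuned for engagement serves more of whatever a user already watches, can be reduced to a toy example. This is not any platform's real algorithm; the scoring rule, field names, and catalog are invented for illustration.

```python
# Toy illustration of engagement-driven recommendation (NOT a real
# platform's algorithm): items are ranked by predicted watch time,
# boosted when the channel is already familiar to the user.

def score(video: dict, user_history: set[str]) -> float:
    """Predicted engagement: base appeal, boosted if the channel is familiar."""
    familiarity_boost = 1.5 if video["channel"] in user_history else 1.0
    return video["avg_watch_minutes"] * familiarity_boost

def recommend(videos: list[dict], user_history: set[str], k: int = 2) -> list[str]:
    """Return the titles of the k highest-scoring videos for this user."""
    ranked = sorted(videos, key=lambda v: score(v, user_history), reverse=True)
    return [v["title"] for v in ranked[:k]]

catalog = [
    {"title": "calm news recap", "channel": "public_tv", "avg_watch_minutes": 4.0},
    {"title": "shocking convoy video", "channel": "fringe_channel", "avg_watch_minutes": 6.0},
    {"title": "cooking show", "channel": "food_channel", "avg_watch_minutes": 5.0},
]

# A user who already watches the fringe channel is served more of it first:
print(recommend(catalog, user_history={"fringe_channel"}))
```

Nothing in this loop is “intentional” misinformation; the amplification falls out of optimizing a single engagement objective, which is exactly the problem the dialogue with platforms is about.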
Divina: Hind, so just to wrap up this first part of our Crossover podcast, how does a journalist defend themselves and deal with these revelations made by the Dashboard?
Hind: The Crossover Dashboard is a good tool for measuring online tensions, but it can’t be used on its own, obviously. There has to be some journalistic work to follow.
Divina: And do you have an opportunity to talk to Guillaume and his team to refine it?
Hind: Oh yes!
Guillaume: We talk to each other, of course! (laughs)
Divina: Well, I think this notion of dialogue is maybe the one we’ll keep for the end of this first episode of our Crossover podcast.
Sebastien: Now that the survey has enlightened us on “how it works”, let’s see how not to be led astray by the algorithms. Divina Frau-Meigs, the first point is that, when you hear these kinds of stories, you think that everybody should know how algorithms work.
Divina: It’s essential indeed, not to understand mathematical formulas, don’t worry, but to know how algorithms work, yes. Their missions are to predict, to recommend, to do marketing too. And it is very important, in order to regain control over our digital life, to understand their effects, the way they influence our information, our consumption, our relationships.
Sebastien: So the second point is the notion of manipulation, because we have the impression that social networks are there to manipulate us.
Divina: Yes, but be careful not to fall into paranoia. Let’s be clear about that: manipulation is not the goal, it’s the business of the platforms and social networks. Their raison d’être is to make us stay online as long as possible so that they earn money. And for that, all means are good. Algorithms push the content that draws an audience and keeps us glued to our screens as long as possible. It’s as simple as that, even at the risk of sacrificing the quality or relevance of the information.
But where we find the roots of the paranoia is that algorithms are also used by states and all sorts of “rogue actors”, without the knowledge of social networks and users. This is what we call cyberwar, with interference often coming from abroad. And there, disinformation and conspiracy theories activate people’s fears and worries. And this can create a real gap between reality and virtuality by boosting conspiracy theories, as in the case of the Freedom Convoy, which was inflated online but flopped in real life.
Sebastien: And there are some accounts that purposely put out sensationalist content to create the buzz, so it’s hard not to take the bait.
Divina: It’s true, it’s not easy and nobody likes to be the fish that gets trapped. But that doesn’t mean that we can’t do something, alone or together.
In media and information literacy, we are fighting to regain control over information. On search engines, we can considerably slow down the recommendation algorithms: for example, by erasing our browsing history, or by browsing in private mode or via a VPN.
On social media, it’s a bit more complicated. But if you open your community to people with different opinions, if you force yourself to read different sources, then you will have access to much less biased information.
The algorithms try to anesthetize us, so we have to get out of the comfort zone they provide. The worst enemy of algorithms, in fact, is curiosity. It’s up to us to go and look at the results on the third or fourth page to discover something else. It’s up to us to snoop around, to share less trendy results, to resist the siren song of the most famous of algorithms: “since you liked this, you’ll like that”.
The answer is also political. The European Commission is working on it. So are our governments. But for this to happen quickly, we, the citizens, must demand transparency of algorithms, in particular about the revenue that online advertising allows platforms to collect. It is up to us to remain vigilant.
Sebastien: This podcast was produced by Savoir Devenir and INA within the framework of Crossover, a project financed by the European Union.
Sophia Hamadi and Pascale Garreau were responsible for the conception; Gabriel Fadavi for technical direction and editing; writing assistance by Jean-François Gervais and Sébastien Gaillard.
Find the notes for this podcast, the Dashboard and all the episodes at Crossover.social