Lyotard warned us decades ago in his seminal work The Postmodern Condition about science’s legitimation crisis. Not that science lacks legitimacy entirely, but that it has abandoned what he considered “beautiful” legitimation. Instead of pursuing truth or improving humanity’s condition, modern science operates under what Lyotard called the performativity principle: science must be profitable, generating returns on investment like any other business—or preferably more, to ensure continued funding.
For those who find “performativity” too philosophical, Feyerabend offered a blunter assessment:
20th-century science abandoned its philosophical pretensions to become big business. Rather than threatening society’s established order, science became one of its strongest pillars.
Now, describing science as a “threat to society” sounds alarming—until you consider what kind of society we’re talking about. If society functions as an ideological system designed to reproduce inequality, supporting structures that keep a few people at the pyramid’s peak while they control resources and power relationships (Lyotard’s reproduction thesis)… then we need Science to be more threatening.
Examples of science serving the status quo rather than human progress are everywhere. One of my favorites: citation indexes.
Citation Indexes
Citation indexes (CIs) are essentially ranking lists in which scientific journals are ordered by their impact factor, a measure of how "important" their articles are to the scientific community. This sounds helpful enough, promising to show "a journal's true place in the scholarly research world" and to "measure research influence and impact at the journal and category levels" (in the words of Thomson Reuters, publisher of the JCR rankings).
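For concreteness, the JCR's headline metric, the two-year impact factor, is simple arithmetic: citations received in a given year to articles published in the two preceding years, divided by the number of citable items published in those two years. A minimal sketch in Python (the function name and the numbers are illustrative, not real journal data):

```python
def impact_factor(citations_this_year, citable_items_prev_two_years):
    """Two-year impact factor: citations received this year to items
    published in the previous two years, divided by the number of
    citable items published in those same two years."""
    if citable_items_prev_two_years == 0:
        raise ValueError("journal published no citable items")
    return citations_this_year / citable_items_prev_two_years

# Hypothetical example: 210 citations in 2024 to articles from
# 2022-2023, and 100 citable items published in 2022-2023.
print(impact_factor(210, 100))  # -> 2.1
```

Note that the metric says nothing about any individual article: a handful of highly cited papers can carry an entire journal's score.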
It sounds nice, but it stinks. Just as the JCR ranks journals, those journals transitively determine which researchers are considered "good": the ones whose work gets published in high-ranking journals. The result is an argument-from-authority loop, whether intended or not.
Citation index rankings promote inequality

First, top-tier journals are expensive, creating barriers to global access and becoming a powerful source of inequality. There are countless researchers without access to the most impactful articles because of journals' pricing: each paper costs around $30, and serious research requires reading many papers. If these articles are arguably the best scientific work available, then researchers without access face significant disadvantages in developing brilliant, innovative results, especially considering science as a cumulative process where each advance builds on previous knowledge.

Additionally, citation indexes systematically render countless researchers worldwide invisible through misrepresentation. Their work gets excluded from mainstream research not because of its quality, but because of where it is published. Indirectly but equally important, exclusion happens because of language barriers (the vast majority of first-quartile journals publish in English) and researchers' institutional connections or lack thereof.

Of course, no one explicitly excludes these researchers. But the symbolic violence of this segregation is brutal for two reasons: first, it is explained and legitimized in terms of research quality rather than structural inequality; second, the segregation operates with relative invisibility, making it harder to challenge or even recognize.
An alternative to citation indexes?
The system presents itself as a meritocracy while functioning as a mechanism of exclusion.
Criticism has been dethroned by pseudo-democratic, pseudo-intersubjective mechanisms that steer the consumption of literature and entertainment. Habermas complains about intellectuals' lack of authority to direct public discussion. Science, an engine of change by definition, seems to be one of the few places resisting this democratizing wave: it maintains the argument from authority in the form of peer-review committees with shamanic powers to commune with the deities of Knowledge and decide what is good and what is not.
That is even worse when you know that those peer reviews can sometimes be fabricated, or simply laughably bad, existing only to justify taking money from young researchers' pockets.
I am admittedly no fan of mass pseudo-democratic mechanisms, which are easily swayed and cooked up by advertising and filter bubbles. But it is clear that we need to give voice to horizontal, open peer-review systems where anyone can be a peer. National research certification systems could also take into account more open and modern impact measures, better aligned with what science and research should mean.

