Impoverishing science

The university and research system has been trapped for years by an idea as seductive as it is misguided: that academic evaluation will be better if it is perfectly "objective." The recipe seems clear: list criteria, break them into many parts, and give each part an exact score. That way, anyone, even someone unfamiliar with the field, could apply the scale and obtain a result. Total precision. Guaranteed objectivity. In theory, at least.

When we talk about academic evaluation, however, we are not referring only to the assessment of publications in international journals. We are talking about a system that permeates all areas of academic life: the recruitment and promotion of research staff, access to competitive projects, the evaluation of career paths, and the way in which knowledge is produced and validated.

The reality, however, is the opposite of what was promised. The obsession with breaking everything down is turning evaluation into a bureaucratic, mechanical exercise: endless rubrics, boxes to tick, decimals that try to measure what is qualitative by definition. Meanwhile, the essential element disappears: the expert eye.

Good research isn't detected by adding up points. Valuable work is recognized for its novelty, rigor, vision, and depth, and these qualities can only be grasped by someone who truly knows the field, who can compare a result with others and situate it within a scientific tradition. That perspective cannot be replaced by a score sheet, because scientific quality doesn't work like a spreadsheet.

Excessive quantification, moreover, is not neutral: it generates perverse effects. It incentivizes producing more, even when the output is routine. It rewards those who best adapt to the point system, not necessarily those who do the best science. And it shapes researchers' behavior: they come to prioritize what is measurable and quickly publishable at the expense of risky or long-term research.

Little by little, this model impoverishes science itself. It limits intellectual freedom and favors people and work that conform to prevailing models rather than the most innovative ones. The result is knowledge that is less open and less able to tackle complex problems.

All of this has led to a paradox: in the name of objectivity, we have built a less fair and less useful system, because an evaluation that fails to recognize the true value of what it judges is a failed evaluation, both for the scientific community and for society. This is not a call to return to opacity or discretionary decisions. Transparency is essential, but it does not require turning everything into numbers; it requires explaining how expert judgment is exercised, not eliminating it.

If we want a university and research system that rewards good research and fosters excellence, we must recover an obvious truth that the bureaucratic debate has sought to obscure: to evaluate well means to know, to be an expert. And knowledge cannot always be broken down into scores.