Revista de Derecho Público: Teoría y Método
Marcial Pons Ediciones Jurídicas y Sociales
Vol. 1 | 2020 pp. 271-312
Madrid, 2020
DOI: 10.37417/RPD/vol_1_2020_35
© José María Rodríguez de Santiago
ISSN: 2695-7191
Recibido: 31/12/2019 | Aceptado: 29/01/2020
Editado bajo licencia Creative Commons Attribution 4.0 International License

THE ASSESSMENT OF LEGAL RESEARCH IN SPAIN.
IS EINSTEIN BETTER THAN KELSEN?

LA EVALUACIÓN DE LA INVESTIGACIÓN JURÍDICA EN ESPAÑA.
¿ES EINSTEIN MEJOR QUE KELSEN?

José María Rodríguez de Santiago

Catedrático de Derecho Administrativo
Universidad Autónoma de Madrid

ABSTRACT: This paper first examines the constitutional provisions that must guide public research assessment procedures. Freedom of research (Art. 20(1)(b) CE) requires these procedures to be “suitable” for science as well as for science itself to be involved in them. After presenting the concepts used in the international debate on research assessment (bibliometrics, quantitative or qualitative criteria, peer review, etc.), the paper outlines the various types of research assessment procedures in place under Spanish law according to the subject of assessment (research works, researchers’ CVs, research projects and institutional assessments). The paper then looks into the characteristics of law that are relevant from the perspective of legal research assessment. The closing remarks include a set of proposals revolving around the idea that public authorities must create organizational structures and procedures allowing every scientific community to set the criteria to be applied to research assessment within its scholarly field.

KEYWORDS: research assessment; universities; higher education; freedom of research; sexenio; accreditation; research projects; law; legal science; legal research.

RESUMEN: En este trabajo se analizan, en primer término, las directivas constitucionales que deben informar los procedimientos estatales de evaluación de la investigación. La libertad de investigación científica [art. 20.1 b) CE] exige que esos procedimientos sean “adecuados” a la ciencia y que en ellos participe la ciencia misma. Tras la presentación de los conceptos utilizados en el debate internacional sobre la evaluación de la investigación (bibliometría, criterios cuantitativos o cualitativos, peer review, etc.), se exponen los diversos tipos de procedimientos de evaluación de la investigación existentes en el Derecho español en función del objeto evaluado (trabajos de investigación, currículum de investigadores, proyectos de investigación y evaluación institucional). Se tratan, a continuación, las características propias de la ciencia del Derecho que son relevantes desde la perspectiva de la evaluación de la investigación jurídica. Y se concluye con algunas propuestas que giran en torno a la idea de que sobre el Estado pesa la responsabilidad de crear organizaciones y procedimientos que sirvan a cada una de las comunidades científicas para fijar los criterios que serán aplicados en la evaluación de la investigación que se realiza en cada una de esas disciplinas.

PALABRAS CLAVE: evaluación de la investigación; Universidades; libertad de investigación científica; sexenios; acreditaciones; proyectos de investigación; ciencia jurídica.

SUMMARY: 1. OPENING REMARKS: WHY IS THERE A PUBLIC ASSESSMENT OF RESEARCH? —2. CONSTITUTIONAL PROVISIONS AFFECTING THE PUBLIC ASSESSMENT OF RESEARCH: 2.1. The objective dimension of freedom of research (Art. 20(1)(b) CE): evaluations suitable for science, evaluations involving research or science itself and evaluations assessed based on their effects on research. 2.2. The reserva de ley principle or statutory reservation (Art. 53(1) in connection with Art. 20(1)(b) of the Spanish Constitution): the core aspects of research assessment must be regulated by an act of Parliament or statute. 2.3. The fundamental right to personal data protection (Art. 18(4) CE).—3. RESEARCH ASSESSMENT: SUBJECT, CRITERIA, INSTRUMENTS AND PROCEDURES: 3.1. The “evaluators’ jargon” 3.2. Quantitative and qualitative criteria. 3.3. Research assessment tools: bibliometric indicators and peer review. 3.4. The assessment procedure.—4. TYPES OF RESEARCH ASSESSMENT IN SPANISH LAW: 4.1. Direct assessment of research works. 4.2. Research project assessment. 4.3. Assessment of researchers. 4.4. Institutional assessment of research.—5. CHARACTERISTICS OF LAW DEEMED SIGNIFICANT FROM THE PERSPECTIVE OF RESEARCH ASSESSMENT: 5.1. Legal science hinges on hermeneutics and mostly deals with legislative texts. 5.2. Ties between legal science and the practice of law. 5.3. The subject of legal science is segmented because it hinges on domestic legal orders. 5.4. Legal research is tied to the language of legal provisions. 5.5. Publication formats for research works. 5.6. Law is a discipline “of mostly individual authorship”.—6. QUALITY CRITERIA IN LEGAL RESEARCH ASSESSMENT.—7. CONCLUSIONS AND PROPOSALS.—8. REFERENCES

1.  OPENING REMARKS: WHY IS THERE A PUBLIC ASSESSMENT OF RESEARCH?

The 1998 Sorbonne Declaration and the Bologna Declaration of 1999 triggered a convergence process towards the European Higher Education Area (EHEA). This path towards convergence gave rise to the notion of “quality assurance” 1 within Member States’ higher education law, which is almost inextricably linked to assessments and evaluation procedures. Assessment, alongside quality assurance, is aimed at achieving two core objectives pursued by the EHEA: i) enhancing competitiveness and ii) promoting the internationalization of the European higher education system.

Teaching and research assessment methods and procedures predated the EHEA project (i.e., they came first in time and, indeed, at the level of ideas) and were closely related to New Public Management stances, aimed at detaching public authorities from decision-making and control of productive activity, whilst focusing on output 2 and on performance-oriented approaches. When applied to higher education, these approaches brought along teaching performance assessment and research evaluation methods. As elected public officials lost decision-making powers (i.e., there was less substantive input from democratically elected public authorities), accountability vis-à-vis parliaments and society as a whole took over, particularly regarding the allocation of financial resources. Accountability, efficiency and transparency in the allocation of scarce public resources became guiding principles governing academic activity through assessments and evaluation procedures 3. Furthermore, academic assessments proved effective in legitimizing political decision-making. In order to allow for sound public policy-making in the field of academic research (and thus for widely accepted public policies), the results and information obtained from academic evaluation procedures have to be disclosed, thereby bringing academic performance into the open 4.

The aforesaid approaches and lines of reasoning provided research assessment advocates with the terminology for their claims: institutional autonomy (which allows institutions to be held accountable for results alone), competition and competitiveness, performance-based resource allocation, efficiency and transparency, etc. Those against these approaches have also come up with their own response to what they call “evaluitis” 5. In their view, scientific research has historically moved forward on the basis of researchers’ intrinsic motivation or inner drive, self-determination and independence 6; research assessment replaces society’s reliance on scholars with a degree of control that undermines creativity and creates perverse incentives, including those that privilege quantity over quality in academic production. Also, performance evaluation in academia fails to appropriately reflect academic output and hinders research through red tape and burdensome, recurring requirements 7.

The aforesaid criticism also focuses on the perverse incentives that research assessment may create for the other cornerstone of university activity: teaching. Privileging research (which brings domestic and international prestige or recognition) may lead to teaching (which always has more of a local scope) being undervalued, thereby causing a decrease in teaching quality. This is not inconceivable if, for instance, a given university rewards the researchers it considers its best with a reduced teaching load 8. Recent studies have shown i) that the quantity of research has no impact on teaching whatsoever, neither positive nor negative, whereas ii) the quality of research is apparently related to the quality of teaching, simply because top scholars tend to perform remarkably well in any academic activity 9.

This paper does not address teaching evaluation (involving, for instance, the assessment or review of university curricula), which poses specific issues and has a rationale of its own, separate from that of research assessment, not least because academic freedom in universities must be weighed against students’ right to quality education, education being not only a fundamental right but also a duty imposed on public authorities 10. Teaching evaluation and research assessment are thus rooted in different theoretical premises.

This paper does not discuss whether or not academic research should be assessed. Rather, it examines the best possible way to evaluate performance in academia and, in particular, the best approach to assessing the quality of legal research. Nevertheless, I do wish that activities more heavily state-funded than research were subject to such close scrutiny!

2.  CONSTITUTIONAL PROVISIONS AFFECTING THE PUBLIC ASSESSMENT OF RESEARCH

Research assessment by public authorities is subject to a set of constitutional requirements, mainly stemming from: a) freedom of research (Art. 20(1)(b) of the Spanish Constitution –Constitución Española or CE–); b) the Gesetzesvorbehalt or reserva de ley principle 11, under which the core aspects of research assessment must be governed by an Act of Parliament or statute: Art. 53(1) in connection with Art. 20(1)(b) CE; and c) the fundamental right to personal data protection (Art. 18(4) CE) which may be affected by the information flows within research assessment procedures.

2.1.  The objective dimension of freedom of research (Art. 20(1)(b) CE): evaluations suitable for science, evaluations involving research or science itself and evaluations assessed based on their effects on research

The legal consequences (regarding the granting of academic degrees, access to funding or career advancement of researchers) tied by the applicable provisions to the results of public research assessments clearly direct or steer academic research. Any professor who does not wish to be left out of the Spanish academic system is fully aware that every six years, he/she must submit five of his/her scholarly works for assessment by the National Commission for the Evaluation of Research Activity (Comisión Nacional Evaluadora de la Actividad Investigadora or CNEAI). This National Commission’s criteria will necessarily affect the professional activity of each of the “field” 12 members within the “social sub-system” 13 of scientific research. The soft law standards applied by the National Quality Assurance and Accreditation Agency (Agencia Nacional de Evaluación de la Calidad y Acreditación or ANECA) to applications for accreditation have a decisive impact on the research decisions to be made by a professor on tenure track who wants to become a Full Professor. University research groups tend to focus on topics listed under programs funded by the National Research Agency (Agencia Estatal de Investigación or AEI), which is responsible for the prior and ex post assessment of the research projects submitted by those groups. Tying university funding to research results or performance is also an incentive that affects researchers.

In my view, the subjective dimension of the fundamental right to scientific research, aimed at safeguarding creativity (i.e., the independent quest for scientific findings) as well as the reporting of results by researchers (i.e., placing and contextualizing findings within the ongoing discourse that encourages scientific progress) 14, is not the best approach to research assessment analysis 15; particularly considering that the most significant evaluation procedures are –at least formally– optional or voluntary. Research assessment criteria applied by public authorities will hardly be noticeable (in terms of direction or driving force) within the sphere of creativity and self-determination of those scholars who do not apply for sexenios (i.e., six-year research assessments that can lead to an acknowledgment of research performance and a small salary increase), who refuse to pursue the tenure track by applying for ANECA accreditations, or who do not request funding for research projects. However, these scholars might not be able to bypass the driving force or guiding effect of the criteria applied by the (often) private publishers and owners of the journals where researchers wish to publish their works following a peer review.

Most research assessment procedures are explained not only by the public authorities’ commitment to the citizens’ sphere of self-determination and freedom (the subjective dimension of the fundamental right guaranteed under Art. 20(1)(b) of the Spanish Constitution, CE), but also by the duty, undertaken by public authorities, to promote knowledge and scientific research (Art. 44(2) CE). Indeed, public authorities have “institutionalized” scholarly output or scientific creation through public entities, procedures and funding 16. From the objective dimension of freedom of research, it can be inferred that these instruments of public power (organization, procedure and funding), with their driving force or guiding effect, must be regulated so as to prevent public interference with “inherently scientific or scholarly activities”. Moreover, the aforesaid objective dimension also prevents the public advancement of research from undermining science itself.

The foregoing has the following implications: i) research assessment should not interfere with the development of scientific knowledge subject to science’s own rulebook 17, i.e., research evaluation must be “suitable” for science; ii) scientific research itself should be involved in the prior design of assessment procedures and in the very performance of evaluations, and iii) public authorities are required (a) to monitor or follow up (on an ex post basis) on the impact of research evaluations on scientific research and, where appropriate, (b) to remedy any possible flaws.

On the “suitability” of research assessment, it is worth noting that evaluation procedures are not inherently alien to science. For centuries, PhDs have been awarded following a peer review process of an original research work. Also, peer reviews of contributions submitted to journals or publishers arose from the social sub-system of scientific research with no public intervention whatsoever. Nonetheless, the suitability standard prevents public evaluations from creating perverse incentives for scientific advancement 18, as is the case with assessment criteria that privilege quantity over quality –the first being easier to evaluate– thereby encouraging “salami slicing” (i.e., dividing one significant piece of research into a number of smaller “slices” simply to increase the number of publications) 19 or the “Matthew effect” (i.e., being rewarded under a given indicator, such as being granted research funding, will probably allow researchers to get more credit on the basis of that indicator and others in the future, for instance, helping them to receive performance-based public funding considering the previously obtained private resources) 20. Suitability to science could also trigger the constitutional requirement that different assessment criteria be applied to different fields or subjects 21. Allocating university funds to the various schools on the basis of the amount of funding obtained through external contracts or agreements (under Art. 83 LOU) would privilege certain fields of research disregarding research quality and solely focusing on the market value of research outputs. If applied as a single or prevailing standard, this assessment criterion is ill-adapted to science. Generally, granting more funds to applied science than to hermeneutics research (a trend noted by certain authors) is detrimental to science itself, and thus unsuitable or inappropriate 22.

In this context, some criticism has recently been drawn by what has been called a “projectization” of science. “Plain old research” has been superseded by research projects 23. Science as a whole takes the form of a project, and projects give rise to a temporary structure (with a set of objectives, stages, expected findings and a budget) perfectly suited for evaluation. This results in a relationship based on reciprocity: assessments lead to the “projectization of science” and projects call for research assessment 24. Obviously, the losers in this context are those fields “of mostly individual authorship” (including law) 25, where the creative process basically entails a long-lasting “polarized thinking or chronic focus” by the researcher on his/her subject matter; and where the relevant findings lead the scholar to ask him or herself the following question: “How could I not come up with this earlier?” 26 This is because ideas, which cannot be scheduled in a research project application, “occur to us when they please, not when it pleases us” 27.

Furthermore, in order to prevent evaluation procedures from giving rise to driving or guiding effects alien to scientific rationality, the involvement of scientists themselves (researchers) is necessary in the prior design of these procedures and, particularly, when laying down the assessment criteria. This is also the only way to ensure that the specificities inherent to every discipline are taken into account in the assessment 28. The regulatory idea can be found in the 2011 Act on Science: the Advisory Body on Science, Technology and Innovation (Consejo Asesor de Ciencia, Tecnología e Innovación), in its capacity as a participatory body mostly engaging 29 the scientific or research community in science or research-related matters, is responsible for promoting the design of “thorough assessment mechanisms 30”.

Similarly, pursuant to the objective dimension of freedom of research, scientists’ opinions and considerations on their peers’ research activity must be decisive in the very performance of assessments (subject to previously established procedures and criteria governing applications for tenure or promotion, project funding or the granting of sexenios). Peer review is an essential instrument at the service of academic self-governance 31 and, according to the foregoing, it is also a constitutional requirement for most evaluations (namely, for any assessment that cannot be performed through automatic indicators).

Instruments of public power with a driving force or guiding effects (organization, procedure and funding) used by public authorities to fulfil their duty of promoting research (Art. 44(2) CE) must be “well-suited” for science. This guarantee of suitability requires public authorities to monitor –on an ex post basis and from this suitability perspective– any applicable research assessment procedures. The abovementioned guarantee of suitability also requires public authorities to remedy any unexpected flaws in the said assessment procedures. This (modern day) ex post monitoring by public authorities 32, “to verify compliance of the applicable rules and regulations with the set objectives” 33, is, within this context, a fundamental right requirement. Accordingly, public assessment of research must be periodically subject to evaluation, in order to examine all of the wanted and unwanted effects 34. The information stemming from these “evaluation of research assessments” (meta-assessment) 35 must be taken into account by lawmakers, by the competent rulemaking bodies, and by the public authorities conducting the assessment procedures, so that every body (pun intended) can make the relevant adjustments within their scope of powers 36.

2.2.  The reserva de ley principle or statutory reservation (Art. 53(1) in connection with Art. 20(1)(b) of the Spanish Constitution): the core aspects of research assessment must be regulated by an act of Parliament or statute

The regulatory density of research assessment provisions is governed by a set of constitutional principles that can be somewhat conflicting. On the one hand, the reserva de ley principle or statutory reservation enshrined in the Constitution regarding fundamental rights (Art. 53(1) in connection with Art. 20(1)(b) CE), which gives effect to the rule of law and the constitutional principle of democracy, requires lawmakers (the legislature) to make any relevant decisions (there is no need for further elaboration at this point) regarding the implementation of the fundamental right to freedom of research. On the other hand, as noted above, the provisions passed by political bodies with rulemaking powers (both parliaments and governments) cannot be so comprehensive as to leave no room for assessment criteria stemming from research or science itself 37. This is why, as a matter of principle, there is no problem (quite the opposite, in fact) with the specific substantive criteria applied in research evaluations being laid down somewhere other than in statutory or regulatory provisions 38.

From the perspective of the objective dimension of freedom of research, researchers from each field (by means of organizations or entities and procedures sufficiently regulated by statute) must provide the relevant assessment standards. Finally, the principle of separation of powers also plays a role in this regard. This principle should be construed as an efficiency requirement or directive for the fulfilment of public duties, i.e., public bodies should perform those tasks for which they are better suited considering the resources at their disposal 39. Self-evidently, administrative organizational structures and procedures are, from this viewpoint, the most appropriate instruments for scientists or researchers themselves to lay down research assessment criteria clearly and predictably (yet flexibly). In my view, a soft law administrative provision (such as those published on ANECA’s website) prepared by a panel of researchers within a public agency is “better suited” for science, due to its origins and ultimate purpose, than a statutory or regulatory provision.

In the abstract, it is somewhat unclear which matters are actually subject to the reserva de ley principle (i.e., reserved for a statute) under Art. 53(1) CE in connection with Art. 20(1)(b). Apparently, lawmakers –through statutory provisions– should at least define the subject of assessment (for instance, a researcher’s scholarly work or a given university’s research performance) and the legal consequences thereof (with regard to university funding, career advancement of researchers, etc.); lawmakers should also lay down the core rules on the composition and functioning of research assessment bodies along with any relevant procedural provisions 40.

For example, one could assume that the reserva de ley principle requirements are fulfilled regarding the national accreditation for tenure (see Art. 57 of the Constitutional (Organic) Act on Universities, LOU). In contrast with the density of that regulation, sexenios, which, as shown below, are cornerstones of the public duty to promote research, remain governed (at least regarding their core aspects) by regulatory, i.e., non-statutory, provisions 41. Additionally, the Campus of International Excellence Program assessments (Programa Campus de Excelencia Internacional), of great importance for Spanish universities, are subject to ministerial orders barely covered or supported by substantive statutory provisions; thus, it could be argued that these assessments do not comply with the reserva de ley principle requirements either 42.

2.3.  The fundamental right to personal data protection (Art. 18(4) CE)

Over the last few years, a new fundamental rights perspective revolving around the fundamental right to the protection of personal data (Art. 18(4) CE) has made a strong entrance into the law of public assessment of research. This right is placed under significant strain by the large amounts of information requested within administrative procedures conducted for research assessment purposes. Researchers who filed an electronic application for the so-called “sexenios de transferencia” or “knowledge transfer sexenios” (i.e., six-year assessments that can lead to an acknowledgment of successful knowledge transfer) at the first call of 2018 (implemented “on an experimental basis”) 43 have already experienced the impact of the General Data Protection Regulation 44 and of Constitutional (Organic) Act 3/2018, of December 5, on Personal Data Protection and the Guarantee of Digital Rights on the electronic summary of that application. In accordance with the newly enacted provisions, the said electronic summary reports the identity of the controller, the purposes of data processing, the recipients of personal data, the rights of data subjects, etc.

This paper will not further examine the many issues posed by the fundamental right to data protection in the context of research assessment 45. However, given the evaluitis of the Spanish and European higher education system (although, as noted above, there are some admissible purposes for research assessment), the “data minimization” principle 46 could, generally speaking, streamline research assessment procedures. The regulatory design of evaluations should smoothly coordinate them so that the results of a given assessment are useful for other assessments, thereby sparing researchers the burden of filling out forms with information they have already submitted.

By way of example, it is worth mentioning some of the regulatory ideas guiding the “Funding scheme for public universities in the Region of Madrid for 2006-2010” 47. Part of the funding allocated to incentivize research performance results from the ratio between the number of sexenios granted to each university’s faculty and the number of sexenios that could have possibly been granted 48. The “Multi-annual funding scheme of the Valencia Regional University System for 2010-2017” 49 applies a similar mechanism regarding research performance-based funding 50. Considering that sexenios result from an ever-improving qualitative assessment, I think it is certainly right to use this research performance indicator for this purpose. It also spares researchers from re-submitting information for an additional assessment which would otherwise have to be conducted for the granting of performance-based funding to universities.
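Purely by way of illustration (the exact weighting and the other components of each funding scheme are set out in the corresponding agreements and are not reproduced here), the research performance component built on this indicator can be sketched as a simple ratio for each university $u$:

$$R_u = \frac{\text{sexenios actually awarded to university } u\text{'s faculty}}{\text{sexenios that the same faculty could in theory have obtained}}$$

A university whose tenured staff hold, say, 600 sexenios out of a theoretical maximum of 1,000 would thus score $R_u = 0.6$, and the performance-linked tranche of funding would be distributed in proportion to that score.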

The data minimization principle suggests using few (yet meaningful) research quality indicators in any research evaluation where those indicators may be significant, so that no additional information needs to be submitted 51.

3.  RESEARCH ASSESSMENT: SUBJECT, CRITERIA, INSTRUMENTS AND PROCEDURES

Properly, assessment can be defined as an actual scientific practice entailing a systematic and transparent evaluation of an item or subject of assessment 52, and particularly a research-related subject if we are dealing with research assessment. Generally, the results of the assessment encourage those under evaluation to initiate learning and improvement processes, following, where appropriate, a meaningful debate and analysis involving stakeholders and parties concerned; especially if the assessed activity is that of an institution, such as a university as a whole. On top of that, assessments allow parties to be held accountable for a given activity (to whoever provides the funding, for instance), and they play a major legitimizing role for decision-making (e.g., regarding funding decisions), thereby rendering the decisions acceptable for the addressees 53. This legitimizing effect is mostly achieved if the criteria driving the results of the assessment, and the application thereof, are sufficiently transparent.

3.1.  The “evaluators’ jargon”

For several years now, the theory and practice of research evaluation (and, most notably, of teaching performance assessment) have given rise to a true “evaluators’ jargon” intelligible only to the initiated. This slang, or, better said, its shrillness (which is often borderline pompous or simply hilarious), is partly responsible for the alienation that many researchers feel towards this new approach to performance evaluation in academia. In the 2019 call for Docentia (a so far voluntary teaching performance assessment tool) at the Universidad Autónoma de Madrid, there was a glossary including terms such as “functional learning” (“aprendizaje funcional”), “gamification or game-based learning” (“gamificación or ludificación”) and “flipped classroom” (“aula invertida”). Faculty members whose teaching performance was subject to assessment were asked to report how the “student-teacher feedback worked”, and they were also requested to “produce any relevant evidence in case of qualitative feedback”.

Nevertheless, debate and analysis on any topic obviously require a distinct and nuanced terminology that allows the various aspects of the matter at stake to be identified in sufficient detail. Below, I will try to examine this terminology in purposefully plain language.

Research assessments can be conducted by public bodies (on which we focus) or by private persons or entities (such as the evaluation performed by an academic journal of a manuscript submitted for publication). They can also be performed prior to the actual research output (such as project assessments for funding decisions), during the research activity (to monitor progress) or on an ex post basis (to evaluate a research project’s degree of fulfilment of the relevant objectives). Additionally, research assessments can be classified as internal (self-evaluations conducted by certain universities as the first stage of a more complex assessment procedure) or external (such as the accreditation procedures handled by ANECA for professors on tenure track, in which the researcher’s output is assessed). From the viewpoint of the subject of evaluation, assessments can be broken down into: i) direct assessment of research works (a doctoral dissertation or the five research contributions submitted for the sexenio application); ii) evaluation of research projects (which do not amount to actual research, but rather to work plans for research activity); iii) assessment of researchers (for accreditation or promotion purposes), and iv) institutional assessment (where the research activity of a university school or a university as a whole is subject to evaluation) 54. The latter classification will serve as an outline for the next section.

Assessments can have positive consequences, such as giving rise to the aforesaid debates and improvement processes. They can also have negative effects (to be monitored, found and remedied by public authorities), such as indirectly encouraging research malpractice due to the pressure and competition triggered by research assessment procedures 55; or underrating teaching if academic standing is made entirely dependent on the results of research assessments 56. However, we will now discuss the legal consequences that the rules governing research assessments tie to the outcome of these procedures. As stated above, the reserva de ley principle of Art. 53(1) in connection with Art. 20(1)(b) CE requires lawmakers (through a statutory provision) to decide on this core aspect of research assessment. The legal consequences tied by the rule to the positive or negative result of an assessment are the following: being awarded a PhD or a promotion (accreditations); achieving tenure; being granted funding for research projects or increased financing based on performance for universities or university schools 57; obtaining productivity bonuses (salary increases attached to the granting of sexenios) or other incentives such as a reduced teaching load (Art. 68(2) LOU), etc.

3.2.  Quantitative and qualitative criteria

In my view, the most significant aspect of research assessment procedures in terms of suitability for science or knowledge relates to the criteria applied to evaluate the subject of assessment. It should be obvious that a useful assessment criterion for a scientific discipline such as pathological anatomy (for example, the number of citations in an English-only journal database) can even be a damaging standard for evaluating research quality within Hispanic philology (due to the inherent ties between the language of publication and the language of the research field). If the assessment criteria are inappropriate or ill-adapted, the public direction of science and research through assessment procedures will be dysfunctional or even harmful to the advancement of knowledge. And this is not allowed by Art. 20(1)(b) CE (freedom of research) read in conjunction with Art. 44(2) CE (providing for the public duty of promoting science and research). I believe that the current debate on research assessment, and particularly on humanities research assessment, should be largely focused on this aspect 58.

Any criteria or standards that can be expressed numerically (for instance, the number of publications or the amount of funding obtained from agreements under Art. 83 LOU) should be considered quantitative criteria. Self-evidently, the advantages of these criteria are that they are easily manageable, they lighten the workload attached to the assessment, they enable comparability and, where accepted by the relevant scientific community, they strongly legitimize decisions. However, remarkably, an increase in research activities sometimes has diminishing marginal utility; for example, an extremely high number of PhD holders could be evidence of poor doctoral supervision or low-quality dissertation advisors 59. Qualitative criteria (e.g., assessing whether the reasoning contained in a monograph is precise and understandable) call for a description of the relevant contents, they make evaluations more difficult, and they are often more vulnerable, since they are exposed to competing subjective judgments 60.

3.3.  Research assessment tools: bibliometric indicators and peer review

For the purposes of this paper, “assessment tools” refers to the specific methods used to evaluate the relevant subject of assessment. Among these tools, we will deal with indicators, i.e., mostly quantitative instruments that can be expressed numerically, as well as with peer review, which could be considered the qualitative tool par excellence.

Self-evidently, indicators (for instance, the number of citations of a journal where an article has been published) do not assess actual quality. Rather, they refer the assessment to other variables that may be related (although not necessarily) to quality 61. Nevertheless, it would be inaccurate to claim that these indicators have nothing to do with quality: a paper is published in a journal with a high impact factor only following a highly stringent assessment 62. Bibliometric indicators are based on the application of mathematical and statistical methods to bibliographic information. Both the Science Citation Index (SCI) and the Social Sciences Citation Index (SSCI), subsequently created for the social sciences, are very well known. A journal’s impact factor shows the yearly average number of citations received by the papers published in that journal. Due to their shortcomings, these indicators have drawn criticism from scientific communities. As a result, increasingly sophisticated indicators have been created, such as, for instance, the h-index or Hirsch index 63.
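For readers less familiar with these metrics, the two indicators just mentioned can be stated compactly (these are the standard definitions found in the bibliometric literature, not criteria taken from any Spanish assessment rule). The two-year impact factor of a journal $J$ in year $y$ is

$$\mathrm{IF}_{y}(J) = \frac{\text{citations received in year } y \text{ by items published in } J \text{ in years } y-1 \text{ and } y-2}{\text{number of citable items published in } J \text{ in years } y-1 \text{ and } y-2}$$

A researcher’s h-index, in turn, is the largest $h$ such that at least $h$ of his/her publications have each received at least $h$ citations; for instance, a scholar whose papers have been cited 10, 7, 5, 3 and 1 times has an h-index of 3.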

Generally, these indicators are based on data recorded in English-language journals, and they disregard monographs and book chapters, which are frequently used as publication channels in the humanities and social sciences 64. Thus, many social science and humanities scholars strongly reject these indicators for the assessment of their scholarly output in those fields. Citation impact data often privilege “hot topics” as well as prestigious and renowned scholars. Accordingly, the impact of younger researchers and of alternative (i.e., unconventional) topics or specialized research (as opposed to broader, less specific works) is downplayed. Also, note that a bad article can be highly cited, albeit for critical purposes 65. A wave of rejection of the dictatorship of bibliometrics gave rise to the 2012 San Francisco Declaration on Research Assessment 66, an outcry against journal-based metrics which, according to the Declaration, were originally created as a tool to help librarians identify journals to purchase but ended up being treated as measures of the scientific quality of research.

University rankings pose similar (or even more serious) issues. The most famous are the world university rankings published by Shanghai Jiao Tong University 67 and the Times Higher Education World University Rankings 68. There are also ratings aimed at comparing disciplines or university schools. Much has already been said about how the criteria underlying these comparative lists are ill-founded or unsubstantiated, and enough has been written about the lack of transparency of the procedures used to prepare these rankings. Nevertheless, they most certainly trigger competition both at the national and international level 69. The U-Multirank 70 ranking, created within the framework of a European Commission initiative, is based on a methodological approach which has expressly shifted away from other approaches that tend to favour UK and US universities and colleges. There is certainly no objection to the information provided by a ranking based on valid and transparent criteria for comparison, or on assessments performed by researchers or scientists. It is quite obvious, however, that public authorities cannot uncritically embrace private rankings and ratings as a basis for public decision-making 71.

As stated above, peer review is the paramount example of qualitative research assessment, amounting to an expression of the institutionalized self-monitoring of research 72. When performing this role, evaluators or peer reviewers act as gatekeepers to the positive outcomes of the assessment procedure (a publication, an accreditation or the award of project funding). The success of peer review requires a set of standards acknowledged by the relevant scholars and to which they should feel bound 73. A “mid-range distance” 74 between peer reviewers and evaluated scholars is also necessary for peer review to function well. However, this is not always feasible in small scientific communities. Whoever requests a peer review should provide the evaluator with good predefined forms or standardized reporting templates (appropriately breaking down the relevant criteria) for him/her to prepare the report. In turn, peer reviewers must specifically answer the questions included in those forms or checklists 75. The Internet allows for, and is in fact giving rise to, alternative forms of online peer review.

Peer review draws criticism for its limited intersubjectivity 76 and for its bias against interdisciplinarity and innovation 77. If peer reviewers take their role as guardians of existing standards too seriously, innovations can be delayed and paradigm shifts can be halted 78. We need to reflect on the selection of peer reviewers, since it can allow whoever requests the peer review report to somehow pre-determine the result of the evaluation. Perhaps the peer review process would improve if review-writing were carefully followed up on by the requesting body, if the peer reviewer were informed about the decisions made on the basis of the reports, or if the evaluator received other reports drafted on the same subject as his/hers for comparison purposes 79.

It is essential to ensure the peer reviewer’s objectivity and impartiality. It is debatable whether Arts. 23 and 24 of the Act on the Legal Regime applicable to Public Authorities –Ley 40/2015, de 1 de octubre, de Régimen Jurídico del Sector Público or LRJSP– (on abstention and recusal) apply directly to peer reviewers involved in the administrative procedures examined herein. It is unclear whether these scientists are “authorities” or “public staff” within the meaning of Art. 23 LRJSP. However, the regulatory rationale underlying these provisions, i.e., preventing conflicts of interest in order to ensure impartiality, should also govern these procedures, even if through other specific rules tailored to research and science, such as those laid down in codes of ethics or in the assessment forms used by public authorities conducting research assessment procedures.

Within this context, it is also worth examining the anonymity of peer reviewers. Reviewers would rather draft their reports anonymously in order to perform their review more freely. This is objectively advantageous for science. Conversely, reviewed scholars seek as much information as possible on their reviewers, in order to get a grasp of the evaluator’s competence or expertise 80. On top of that, anonymity obviously prevents reviewers from being challenged. On this matter, regarding public resource allocation procedures, Art. 5(2) of the 2011 Act on Science is worded as follows: “The anonymity of peer reviewers will be preserved in any peer review process, although they will be identified in the relevant administrative file in order for the parties concerned to exercise any rights to which they may be entitled”. Admittedly, the latter part of the sentence does not have the most expressive wording. One could argue that anonymizing peer review is a rule “well-suited” for science, encouraging scientific creation and thus sufficiently meaningful as to be weighed against the concerned party’s right of access to information regarding the reviewer’s identity, as in other information requests that may undermine “the secrecy required in decision-making procedures” under Art. 14(1)(k) of Act 19/2013, of December 9, on transparency, access to public information and good governance. Where anonymity prevents an evaluator from being challenged, an alternative guarantee should be provided for, such as requiring two reviews.

Public authorities have a major duty: ensuring, in cooperation with science and researchers, that the criteria and methods used in assessment procedures are “well-suited” for science. When deciding on these criteria and methods, which are central to performance evaluation in academia, the following aspects should be taken into account. First, the type of assessment being carried out, since there is a large difference between assessing the overall performance of a given university (where quantitative criteria and bibliometric indicators might play a more prominent role) and evaluating the scholarly output of a researcher applying for tenure or promotion (where relying exclusively on quantitative standards should be outright excluded) 81. Second, it could be accepted that assessments be designed on the basis of a cost-benefit analysis (i.e., having regard to the cost of conducting the procedure and the benefits obtained from the inclusion of more complex or costly tools 82); the easy application of bibliometric indicators does not justify the use of bibliometrics in every case, but it would also be unreasonable to set aside bibliometric indicators in favour of the most expensive and time-consuming assessment methods. Finally, scientific communities are responsible for engaging in the design of reasonable and appropriate methods to assess research within each of the relevant fields 83.

Further elaborating on these ideas can improve research assessment results. If relying solely on the number of a researcher’s publications can trigger negative effects (such as the aforesaid “salami slicing” technique), that figure can be replaced by a capped number of the researcher’s most relevant publications. The 2015 amendment to the accreditation procedure included this idea. The said amendment provided that the application whereby researchers (seeking to achieve tenure or a promotion) initiate this accreditation procedure should include “the four contributions that the applicant considers to be the most relevant throughout his/her research career, in order for the competent committees to assess the quality and impact of the researcher’s output within his/her field of expertise” 84. A reasonable, well-balanced use of indicators shows that some bibliometric indicators can provide useful support to reports issued within the context of the so-called “informed peer review”: reviewers are asked i) to assess the set of bibliometric data according to their value within their field, and ii) to subsequently issue their report based on this premise 85. For certain kinds of assessments, such as research project evaluation, more efficient assessment procedures could be designed, for example two-phase procedures (sketched below), made up of a first stage where clear cases are identified (applications that will certainly be granted project funding and those for which there is no doubt that funding will be denied), along with a group of borderline applications for which a subsequent, more comprehensive peer review is requested. These two-phase procedures could also be made up of a first stage where applicants are requested to submit summarized proposals; only the best ones are selected, and their applicants then submit an extended application subject to an additional peer review 86.
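Purely as an illustration of the triage logic described above (the thresholds, scores and names used here are hypothetical and do not correspond to any actual AEI or CNEAI procedure), the first stage of such a two-phase procedure could be sketched as follows:

# Illustrative sketch of the first stage of a two-phase project evaluation.
# Thresholds and scores are hypothetical; real procedures define their own criteria.
from dataclasses import dataclass

@dataclass
class Proposal:
    proposal_id: str
    screening_score: float  # quick score from the simplified first-stage review (0-10)

def triage(proposals, accept_threshold=8.5, reject_threshold=4.0):
    """Split proposals into clear accepts, clear rejects and borderline cases.
    Only the borderline cases are sent to the costlier full peer review."""
    clear_accept, clear_reject, full_review = [], [], []
    for p in proposals:
        if p.screening_score >= accept_threshold:
            clear_accept.append(p)
        elif p.screening_score < reject_threshold:
            clear_reject.append(p)
        else:
            full_review.append(p)
    return clear_accept, clear_reject, full_review

# Example: only the middle group triggers the second, more comprehensive stage.
accepted, rejected, to_review = triage([
    Proposal("P-001", 9.1),
    Proposal("P-002", 3.2),
    Proposal("P-003", 6.7),
])
print(len(accepted), len(rejected), len(to_review))  # 1 1 1

The point of the sketch is simply that the expensive qualitative review is reserved for the cases where it actually adds information, which is the efficiency rationale behind two-phase designs.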

3.4.  The assessment procedure

The notion of assessment procedure should designate only the administrative procedure (a broader category that encompasses, and certainly exceeds, any peer review process); in other words, the orderly series of proceedings and actions preceding and substantiating administrative decisions 87 that can be traced back to a public authority and for which that public authority can be deemed responsible. Throughout this procedure, public authorities must ensure that all the information required for a correct assessment is duly gathered and that the parties concerned are heard. The hearing stage following the experts’ assessment is expressly provided for, for instance, in the national accreditation procedure for tenure 88, as well as in (at least) some research project funding procedures 89 and in the procedure for the validation of official qualifications 90 (although the latter is a teaching evaluation procedure, not involving research assessment). As pointed out above, science itself and researchers should be involved in the administrative procedure in order to submit comments and share their views and criteria. However, the competent public body conducting the procedure must remain responsible for other aspects related to the general interest 91: collecting all the information necessary to make a decision, meeting the relevant deadlines, ensuring the evaluators’ impartiality or objectivity, and protecting the legal rights and interests of the evaluated scholars.

4.  TYPES OF RESEARCH ASSESSMENT IN SPANISH LAW

As has been stated, the subject of assessment procedures is not always exactly the same. Whereas sometimes a research work is directly evaluated (a doctoral dissertation, for example), the subject of an assessment procedure could very well be the entire research career of a researcher seeking tenure, a university’s research output altogether (which would amount to an institutional evaluation), or a research project’s quality and feasibility (thus, a research project proposal or application would be undergoing evaluation). This section focuses on the specific subjects of assessment.

4.1.  Direct assessment of research works

This subsection discusses the three most important research items subject to direct assessment: doctoral dissertations (also referred to as doctoral theses), the five major contributions produced for each sexenio application (i.e., six-year research assessments that can lead to an acknowledgment of research performance and a small salary increase), and the manuscripts submitted to public publishers or journals for publication or for public awards.

The oldest research assessment procedure is the one evaluating doctoral dissertations, i.e., scholarly works aimed at obtaining a PhD or doctorate degree. The applicable rules and regulations solely require that doctoral dissertations be “original research contributions” 92, whose defence or presentation should be authorized by a university collegiate body or committee 93 and whose scientific value must be assessed by a board made up of a variable number of members (often three or five) 94, who should be doctors with expertise in the field. The evaluation of doctoral dissertations amounts to a non-anonymous peer review subject to often implicit quality standards allegedly known by the researchers in each field. Obviously, the approval and evaluation of a doctoral dissertation should also allow research malpractice by the doctoral candidate to be identified (and, where appropriate, penalized). When a politician takes office, journalists often inquire into, and disclose, possible research malpractice. This screening for research malpractice should become an essential element of the academic evaluation leading to the award of a PhD.

Research works are also subject to direct assessment in the procedure for the granting of sexenios 95. The researcher applying for a sexenio picks five of his/her contributions from at least the last unassessed six-year period and voluntarily submits them for assessment by the CNEAI’s advisory bodies. In 1989, these sexenios or six-year research assessments were evaluations that could only give rise to a mere productivity bonus for tenured faculty 96. However, they are currently a cornerstone of the Spanish university system with major legal consequences, including the following: in order to be eligible to sit on the accreditation committees (for tenure applications), a researcher must have been granted two or three sexenios, as appropriate 97; three sexenios are required to be a member of the CNEAI’s advisory committees 98; and Associate Professors (profesores titulares) and Full Professors (catedráticos) are entitled to reduce their maximum teaching load from 24 to 16 ECTS credits if they have been granted three or four sexenios, respectively 99.

As has been duly noted before, the organizational and procedural regulation of these assessments, laid down in regulatory (non-statutory) provisions 100, does not seem to meet the requirements of the constitutional reserva de ley principle enshrined in Art. 53(1) CE in connection with Art. 20(1)(b). The rules governing the granting of sexenios are so important within research activity, both subjectively (for researchers) and objectively (for research advancement in Spain), that any interpretative criterion, at least of Art. 53(1) CE (the requirement that fundamental rights be governed by statutory provisions) should lead to the conclusion that the main organizational and procedural pillars of research assessment (and the effects thereof) should be governed by an Act of Parliament or statute.

In my view, there are several salient aspects in terms of the organizational and procedural regulation of this assessment. First, it should be clearly set forth that the granting of a sexenio must not be rejected without the active involvement of an expert in the same field as the evaluated researcher (either belonging to the advisory committee or drafting an external report). What best suits the rules of research or science is that a negative assessment should only be issued by a researcher whose expertise in the relevant field allows him or her to know the discipline’s rulebook from the inside 101. Furthermore, there is no reason for this administrative procedure not to include a hearing for the evaluated researcher to submit comments on a negative assessment 102. This hearing should resemble the one provided for in the more recent regulation of the accreditation procedure 103. The decision rejecting the sexenio can obviously be challenged before the administrative authority which made it and before the courts.

The quality criteria or standards governing the assessment of the top five contributions submitted by each researcher are laid down in a yearly CNEAI Resolution issued in parallel to the call for applications. At least regarding the assessment of researchers under Field 9 (“Law and Legal Science”), the definition of these quality criteria is fairly appropriate. These criteria are as follows: “originality, rigour, methodology and impact” of the scholarly works will be taken into account; also, those works that “bring knowledge and provide conceptual and analytical instruments to render legal rules more effective” will be given particular consideration. However, “merely descriptive works or the repetition of previous contributions will be disregarded” 104. Admittedly, these criteria are well-suited for science and, in particular, for legal science. The grounds for the assessment decisions should obviously focus on giving reasons for any positive or negative evaluations subject to the aforesaid criteria.

The Spanish Supreme Court has recently ruled on the subject of assessment for the granting of sexenios (“the works or contributions included in the abbreviated CV”, i.e., the top five contributions picked by the researcher). The Supreme Court has construed the applicable regulation as meaning that the assessment will depend on the content of the submitted contributions, and not on the publication “in journals or other media included in (certain) indices or lists” 105. It is worth focusing on the contents, not on the strict application of journal indices that can be inappropriate or ill-adapted to assess certain disciplines.

However, the contents should not always be fully separated from the medium that carries them. As discussed below, in a different context, the results of scientific research do not consist solely of the ideas expressed. Research findings should be part of a communication process in which ideas are received and passed on. The quality of scientific research depends to a great extent on correctly matching the findings with the communication context in which they are made known 106. A scientific idea disseminated in a non-scientific journal cannot be assessed as a research finding.

The foregoing shapes, in different ways and according to each discipline or field of research, the activity of evaluators in the procedures for the granting of sexenios. Within the law domain, as in the case settled by the Supreme Court Judgment of June 12, 2018, no international journal index can be used in a way that precludes consideration of research works published elsewhere (such indices might exist in other disciplines). Nevertheless, a non-prestigious journal that is not widely circulated and does not assess manuscripts is also an inadequate basis for establishing whether a contribution to science has been made. If there is a guarantee that the journal carries out a strict editorial selection process, the sexenio evaluator need not perform any additional content assessment: such an evaluation has already been carried out within the non-public research system. Conversely, the sexenio evaluator will have to assess the contents published in the more standard media for legal research not subject to peer review, such as monographs (for which, obviously, the publisher’s standing could be taken into account), chapters of co-authored or collective books, etc.

Generally, the sexenio procedure increasingly ensures high-quality assessments of research quality (redundancy intended). Thus, in line with the aforesaid data minimization principle –i.e., to use few (yet meaningful) research quality indicators in any research evaluation where those indicators may be significant, so that no additional information needs to be submitted 107–, it is appropriate to use this indicator (through formulas that relate the number of sexenios actually awarded to the number of sexenios that could have been granted) to lighten the burden of producing additional information for other assessments, such as universities’ research performance assessments for Regional Governments’ funding decisions, evaluations of university schools’ scholarly output for internal funding decisions (within the university) involving grants or other financial resources, etc. The specificities of every discipline are taken into account in each evaluator’s assessment, and the sexenio indicator respects the formal equality of all disciplines.

Finally, the evaluation of manuscripts submitted for publication in journals and by publishing houses, or entered for awards, also qualifies as a direct research assessment procedure. These evaluations should probably be considered public procedures as long as the journal owners 108 or the award-giving bodies are public (take, for instance, the extraordinary doctorate award given by many universities). Broadly speaking, the public nature of these entities simply reinforces the requirement that these procedures abide by the rules of science.

4.2.  Research project assessment

The evaluation of research projects entails the prior quality assessment of a work plan aimed at achieving knowledge-related results. This prior assessment is intended to allocate the relevant financial resources to the best proposals, which are selected following a competitive procedure 109. Failing to select only the best proposals as submitted (having regard to resource availability), and instead increasing the number of funded (selected) projects by deciding, ex officio, to allocate less funding than requested to all or most of the selected proposals, amounts to a misallocation of resources. Self-evidently, reducing the funds granted to the (truly) best proposals can have a negative effect on their schedules or plans of action.

The National Research Agency (AEI) plays a very prominent role within these procedures, but it is not the only relevant actor 110. Applications are pre-assessed upon submission; following this pre-assessment, applicants who fail to meet those requirements “that do not involve a scientific or technical evaluation 111” are excluded. The scientific evaluation of applications relies on some of the aforesaid mechanisms aimed at streamlining procedures. This evaluation can take place in one or two stages. In the single-stage case, applicants submit at the beginning of the procedure all the information required to assess the proposal. In two-phase evaluations, applicants submit a simplified set of documents during the first stage; at this point, the assessment is solely based on essential criteria (for instance, the “originality of the proposal and its disruptive nature”), so that applications failing to meet these requirements are excluded outright. At the second stage, applicants are requested to submit additional documents, which are assessed under a set of general criteria discussed below. The relevant committees subsequently issue the proposed decisions (draft resolutions) in light of the expert reports on each of the applications or proposals. These committees may be made up of a majority of researchers (along with public officials, whether or not holding political positions), but, surprisingly, this composition is not mandatory 112. As a general rule, the applicants must be heard prior to the final decision 113.

The criteria or standards applied to assess the applications (both numerically and in terms of reasoning) are conceived in quite an abstract manner, namely: the proposal’s quality and feasibility, the research team, and the scientific-technical, social and economic impact of the results 114. These criteria are the same for all fields of research. Apparently, this poses no problem whatsoever, since any research project, regardless of the discipline, is always a prospective work plan that can generally be assessed under the said abstract criteria (quality, feasibility, research team...). Arguably, “economic impact” is the only criterion that could privilege certain fields of research.

During the implementation of research projects, there are partial assessments (“follow-ups” to check partial compliance with the projected results or set milestones), as well as an ex post evaluation of the findings. Negative results of these assessments and follow-ups “may be taken into account when the party concerned submits a new funding application (…) in the aspects regarding the research team or the proposal’s quality 115”. It would be desirable for researchers to be informed of these unfavourable assessments, and for the negative consequences arising therefrom not to rest on an unknown “blacklisting” of researchers by the AEI.

4.3.  Assessment of researchers

The most significant procedures aimed at evaluating a scholar’s research career are accreditation procedures, competitive procedures to gain tenure and internal promotion procedures within universities.

The 2001 Constitutional (Organic) Act on Universities (LOU) laid down the requirement that applicants seeking a position as Assistant Lecturer (ayudante doctor), Senior Lecturer (contratado doctor) or Private University Faculty (profesor de universidad privada) obtain ANECA accreditation (or an accreditation issued by the equivalent regional public body 116), which follows a positive assessment of the applicant’s CV. The 2007 amendment to the LOU 117 extended the same requirement to tenured positions: applicants must be accredited by ANECA before competing for a position as Associate Professor (profesor titular) or Full Professor (catedrático) 118. The soft law criteria provided by the said public agency to assess scholars’ research careers followed the same structure in every case: on the one hand, there was a list of merits or achievements (in terms of research, teaching, management, etc.), each of which was credited with points; on the other, a minimum number of points was established for each position (whether tenured or tenure-track). The specific evaluation of research was not quality blind, but –generally speaking– quantity prevailed over quality 119.

The system has remained unchanged since then for tenure-track positions. However, in 2015 120, the assessment procedure for accreditation to tenured positions (Associate Professor or Full Professor) was modified 121. This amendment was expressly intended to underscore qualitative criteria; as provided in the Explanatory Preamble of Royal Decree 415/2015 (RD 415/2015), the point was to conduct “a more well-balanced assessment, striking a fair balance between the quantitative and qualitative aspects of the applicant’s merits or achievements 122”.

To that end, the number of evaluation committees was increased in order to bring the experts “closer” to the fields of research of the evaluated scholars. Additionally, the point system was replaced 123 by a letter rating system based on the quality of the merits or achievements (“A” being outstanding, “B” being good…), provided that a minimum quantity threshold is met. Also, and this clearly shows how keen the amendment is on highlighting quality, applicants were required to identify “the four contributions that the applicant considers to be the most relevant throughout his/her research career, in order for the competent committees to assess the quality and impact of the researcher’s output within his/her field of expertise 124”. Those applying for the position of Full Professor (catedrático) must put forward further “specific merits”. Under this newly enacted regulation, Full Professors are no longer merely older, more prolific Associate Professors; they now attain a specific position as faculty members with a record of leadership and of recognition external to the higher education institution where they carry out their teaching and research activity. Being a “main researcher” in research projects (Investigador Principal or IP) or a doctoral dissertation advisor, participating in international academic networks, or obtaining funding through knowledge transfer agreements are now achievements of paramount importance to become a Full Professor.

The amendment was truly undermined by the soft law document passed in 2017 by ANECA’s Law evaluation committee, titled “Assessable merits” (“Méritos evaluables 125”). This document fails to set a minimum threshold of merits or achievements on the basis of which the quality of contributions could subsequently be rated under the said letter system. Quite the opposite: the document provides, ad nauseam, a set of “compulsory merits” required to get an “A” in research (Full Professor position: 6 monographs, 20 book chapters, 20 articles; Associate Professor position: 3 monographs, 10 book chapters, 10 articles). Other “compulsory merits” to get a “B” in research are, for Full Professors, 4 monographs, 15 book chapters and 15 articles, and, for Associate Professors, 2 monographs, 6 book chapters and 6 articles.

Ultimately, this piece of soft law essentially ignores the clear direction set by the hard law provision (RD 415/2015) for the discretionary decision-making of public authorities. On top of that, this soft law document reformulates or reshapes (in quantitative terms) the committees’ obligation to assess the quality of the submitted achievements 126.

The university competitive procedures to achieve tenure-track or tenured positions (concursos de acceso, in Spanish) also qualify as procedures for the assessment of scholars’ research careers. This is particularly true regarding tenured positions (Associate Professors and Full Professors). In each of these concursos, and especially for the latter positions, universities make a long-term decision regarding the future approach, content and significance of a given discipline within the institution 127. These competitive procedures or concursos are governed by Art. 62 LOU, which refers to each university’s regulations to further provide for the procedural and composition rules applicable to evaluation committees. Paragraph 4 of Art. 62 LOU more specifically defines the subject of assessment: 1) the applicant’s academic, teaching and research background; 2) the applicant’s teaching and research project (what he/she has to offer in terms of teaching and research to the university opening the call for positions), and 3) the applicant’s public speaking and debate abilities in the relevant field. The specific research assessment criteria for the applicants are laid down by the committee deciding on each concurso 128.

Over the last few years, spontaneously and somewhat in disregard of the applicable legislation (probably as a collateral effect of the economic downturn), universities have begun conducting a new procedure to assess scholars’ research careers: the procedure for calling the misleadingly designated “promotion positions” (in Spanish, “plazas de promoción”). During the long economic recession, no new positions were opened up, thus preventing faculty members from being promoted. Accordingly, many Senior Lecturers (contratados doctores) are still in that position despite having long been accredited for an Associate Professor position. The same is true of Associate Professors seeking a Full Professor position: they achieved their full professorship accreditation long ago, yet they remain Associate Professors.

Universities are implementing internal “promotion” procedures (or position “upgrades”) which, in their simplest version, work as follows: a given associate professorship –for example, that of whoever has been accredited for a Full Professor position the longest– is turned (in the budget) into a full professorship by increasing the resources allocated to that position. A call is then issued for this new Full Professor position through an allegedly competitive concurso (although the position is not vacant, but rather held by an Associate Professor); the university opening up the position hopes that the Associate Professor holding it will be the successful applicant. If this happens, everything works as expected and the Associate Professor becomes a Full Professor. However, if a better applicant is awarded the position, he/she will take the spot and the university will be forced to find new budgetary accommodation for the incumbent who has been set aside and must remain an Associate Professor. The system’s expectation is that this rarely or never occurs.

Self-evidently, this procedure poses serious concerns from the perspective of civil service law and budgetary legislation. Internal (vertical) promotion (Art. 18 of the Public Employment Act, TRLEBEP) revolves around positions that are opened up based on service needs, not solely for internal promotion purposes. Furthermore, it is obvious that tenured faculty positions to be filled through a competitive procedure or concurso (Arts. 62(1) LOU and 70 TRLEBEP) cannot already be filled, because if another applicant is awarded the position, a financial obligation arises (to accommodate the displaced incumbent) for which there was no budget allocation.

These procedures have more sophisticated versions, where the positions to be “promoted” or, better said, “upgraded” are chosen after assessing the achievements of the voluntary applicants to this internal procedure (and not only by determining who has been accredited for the longest time). Within these sophisticated versions, the following aspects are taken into account: research merits or achievements (sexenios, having been a main researcher or IP in research projects...), teaching and management experience, etc. 129 Seniority in the accreditation (i.e., for how long an applicant has been accredited) is often significant, as is, quite simply, length of service at the university. Understandably, placing great weight on seniority in the accreditation (and, above all, on length of service) without even tying it to a sustained research activity creates an incentive which is “unsuitable” or “ill-adapted to science”, and which can jeopardize the achievement of the desired result (namely, that the faculty member holding the position ends up being awarded the “upgraded” position). The further the merits or achievements considered for internal “promotion” move away from those taken into account in the competitive concurso, the higher the risk that the desired result will not be achieved.

4.4.  Institutional assessment of research

Institutional research assessment procedures evaluate research carried out by collective units. This subsection discusses i) the assessment of universities’ research output regarding performance-based funding; ii) the assessment of institutional projects (Excellence Campus or Campus de Excelencia), and iii) the evaluation of research groups.

The performance-based funding scheme for universities (concerning research, in particular) is well known and is implemented in many European countries 130. Ever since the 1990s, there have been funding cooperation agreements or contratos-programa in Spain 131 entered into between a university and a regional public authority with powers in higher education matters. Under these contratos-programa, the relevant public authority provides extraordinary financial support if the relevant university accomplishes certain improvements in terms of research and, more importantly, in terms of teaching quality 132. The aforesaid “Funding scheme for public universities in the Region of Madrid for 2006-2010” 133 and the “Multi-annual funding scheme of the Valencia Regional University System for 2010-2017” 134 apparently also give rise to competition between the various universities struggling for limited financial resources within the same budget item 135.

The assessment criteria regarding research activity are often the following: the amount of resources obtained in public competitive procedures as well as through private funding subject to Art. 83 LOU; the sexenios obtained by researchers; and the number of doctoral dissertations and publications. As has been stated, given the amount of research activity subject to assessment, the data being used should be meaningful (actually evidencing the value of research output) and readily available. In the case of competitive procedures, the applied criteria should also be objective 136, in order to avoid undue disadvantages for certain disciplines, as happens when an unreasonable value is given to the amount of private funding obtained (which is always lower for the humanities) or to the number of listed publications in English (which should not be used for fields intrinsically tied to a specific language, as is the case with most legal disciplines).

Project assessment also has an institutional dimension. This dimension is embodied in a competitive procedure whereby university projects are awarded funding for future strategic action to enhance teaching or research activity, with a view to transforming campuses by internationalizing them through networks and alliances. Amongst these procedures, the most famous is the so-called Campus of International Excellence Program or Programa Campus de Excelencia Internacional 137, which has been in place since 2009. The assessment and selection procedure is two-phased. At the first stage, the applying universities submit an overview of their projects, which is subsequently reviewed by a technical committee 138. The selected overviews are granted funds to prepare the “Duly extended and specified project for transformation into a campus of international excellence” (in Spanish, “proyecto de conversión a campus de excelencia internacional debidamente desarrollado y concretado”). These extended projects are assessed by an international committee, which decides which projects are awarded the funds to implement the projected actions 139.

The project assessment criteria are identified and scaled (roughly assigning points to each criterion) in the call: teaching improvement, scientific improvement, definition of fields of specialization, internationalization, etc. 140. The time periods granted to prepare and submit the project overviews are surprisingly short, which is clearly contrary to the rules of science. Within one month (in summer, as in the 2011 call) 141, or within a 5-day period (!) (as in the 2015 notice) 142, one can barely sketch a few general intuitions for a strategic university project. These time periods prevent internal debate within universities, as well as the elaboration and reflection needed to guarantee the success of any proposal related to future scientific activity.

The abovementioned institutional research assessment procedures comprise the evaluation of research groups, i.e., units made up of researchers with shared lines of work 143. The assessment of this type of research activity should take into account an obvious rule of thumb: a group is not just the sum of its parts; rather, it is the quality of the combined collective work that must be assessed.

5.  CHARACTERISTICS OF LAW DEEMED SIGNIFICANT FROM THE PERSPECTIVE OF RESEARCH ASSESSMENT

Below, we specifically discuss legal research or, better said, research in the field of legal science. The design of legal research assessment procedures, and particularly the establishment of evaluation criteria, must have regard to some of this discipline’s specificities. Such distinct characteristics will determine, as a premise or starting point, whether these criteria and procedures are “well-suited” for law 144.

5.1.  Legal science hinges on hermeneutics and mostly deals with legislative texts

The set of ideas on sovereignty and the State that can be traced back to Jean Bodin, and which historically led to understanding law as a set of rules stemming from a sovereign State, is essential to outline the notion of legal science. As opposed to the natural and empirical disciplines, which inquire about facts, the science of law is concerned with what ought to be rather than with what is. Accordingly, the science of law has its own methodology aimed at understanding legislative texts (or texts containing rules or legal provisions), driven by the notion of system and shaped by the distinct features of unity, order and coherence 145. Legal scholarly work is based on forms of thought or lines of thinking seeking to describe and classify legal material for the purpose of analysing and explaining it 146. Legal research also hinges upon lines of reasoning intended to identify coherence or inconsistencies, to come up with enriching proposals and solutions, etc. Admittedly, there are major differences between civil law or administrative law scholars and legal theory scholars in how they work with legal rules. It is also true that empirical studies (focused on reality, on “what is”), as conducted by the behavioural disciplines and by economic analysis of law scholars, have recently made a strong entrance in the field of legal research, particularly regarding rulemaking theories 147. In spite of these nuances, the statement that the science of law is largely based on hermeneutics and on legislative texts (or texts containing rules or legal provisions) still holds true today.

Whereas the natural sciences advocate a linear understanding of progress (the new supersedes the old), law (and the humanities) are governed by the “coexistence of competing ideas” drawn from an increasingly large knowledge base. Generally, the findings of legal research can be challenged over and over. It is hard to seek a “final result” in legal research. Rather than aging or growing old, legal knowledge is constantly expanding 148.

Law is largely a “science of books” 149, which does not often require expensive infrastructure 150. In light of this feature, assessing legal research based on the amount of private funding obtained from third parties is “ill-adapted” to legal science. If this criterion governed the internal allocation of funds or scholarships amongst university schools, law and the humanities would be severely and unduly disadvantaged. If this criterion were the basis for the granting of Regional Government funds to universities (as is the case with the “Funding scheme for public universities in the Region of Madrid for 2006-2010” 151 and with the “Multi-annual funding scheme of the Valencia Regional University System for 2010-2017” 152), polytechnic universities or institutes of technology would be unduly rewarded over more humanities-oriented colleges. Those disciplines requiring more expensive infrastructure (which, to some extent, are also better funded by third parties) need more resources to perform their research activity. However, this difference should be taken into account for the basic allocation of funds, not for any additional performance-based funding decisions.

5.2.  Ties between legal science and the practice of law

Law, just as medicine, falls within a group of disciplines taught and studied in schools with the aim of providing professional training, i.e., for students to be qualified to practice a profession. They are academic disciplines characterized by a close relationship between theory and practice. Legal scholars are, on the one hand, members of the social sub-system of scientific research; and, on the other, they belong to the legal social sub-system along with judges, lawyers and some public officials 153.

The natural sciences allow for a distinction between basic research (also designated as fundamental or pure research) and applied fields of research. Within law itself, however, there is always a spectrum, ranging from attempts at grand, highly abstract theories to legal studies and commentaries designed to solve specific problems. Law is a system of principles and rules aimed at solving social problems. Thus, although the science of law sometimes takes the abstract high ground, law is inevitably defined by the link between theory and practice: legal research starts from a theoretical foundation, whether large or small, but it should ultimately prepare (even remotely) or propose practical solutions 154.

There are certain implications for research assessment that can be inferred from this feature of legal science. As a “science of books”, law will not attract large amounts of private funding to universities. But the ties between theory and practice in law do place legal researchers in a position to enter into knowledge transfer agreements subject to Art. 83 LOU 155. The amount of private funding obtained through these agreements will not be as high as that involved in the abovementioned applied research. However, the number of agreements can be considered a well-suited indicator for legal research.

Furthermore, from the standpoint of research quality assessment, the ties between theory and practice also determine that “the more abstract, the better” is not necessarily true for law. Systematizing positive law in close connection with the actual practice of law can be an outstanding work of legal research.

Additionally, the said link between theory and practice makes it possible to assess positively an academic proposal that is actually used by a court to settle a case-law issue, or by the legislator to solve a problem by enacting a legal provision.

5.3.  The subject of legal science is segmented because it hinges on domestic legal orders

As opposed to mathematics, physics or chemistry, the subject of study of legal science (legal provisions handled in a scientific manner) is not “transnational”. Rather, it is segmented into mostly domestic legal orders 156, i.e., legal frameworks largely dependent on the power of the State enacting the relevant legal provisions. The progressive globalization, internationalization and Europeanization of domestic law is most certainly affecting this feature of law, which is in any event absent in Roman law, public international law and EU law. Nonetheless, most of the traditional legal disciplines still deal with domestic legal systems.

Obviously, this essentially domestic nature should dictate the dissemination criteria used to assess the publishing houses or journals where legal research is published. Most likely, a prestigious German journal on tax law will be more widely disseminated than a Spanish publication on the same subject, but the Spanish journal will in turn have greater dissemination than a Swiss publication written only in French. Data on journal dissemination, or the fact that scholarly articles are not included in English-language journal databases, say nothing about the quality of the articles published therein. Different criteria are needed to perform this quality assessment.

5.4.  Legal research is tied to the language of legal provisions

Legal science’s subject of study is shaped by language (legislative texts or texts containing rules or legal provisions), and thus there are inextricable ties between legal research and the cultural context of the language in which legal provisions are drafted 157. This is not the case with medicine or astrophysics. In contrast with the natural sciences or economics, in disciplines that largely depend on national provisions it is uncommon (and it would often be fairly unreasonable) to publish articles in English. Yet again, using publication in journals listed in databases of mostly English-written works as a quality criterion is generally unsuitable or ill-adapted to legal research.

It is worth discussing research works published in languages spoken in Spain other than Castilian (Spanish); these are also official languages in their respective Autonomous Regions. Self-evidently, scholars who publish their works in Catalan or Basque will have fewer readers and citations than those who publish in Spanish. However, it is equally obvious that such a difference in readers and citations says nothing about quality 158. For instance, a scholar whose publications deal with Catalan civil law is definitely targeting a smaller scientific community than the Spanish civil law audience, considering the abovementioned link between the subject of legal research (civil law applied only in Catalonia) and the cultural context of the Catalan language.

However, regarding publications on national law written in co-official regional languages other than Castilian Spanish, we could further discuss whether the findings of a research work amount only to the ideas actually expressed on paper. These ideas should be framed within a communication process in which they can be received and subsequently communicated again. The quality of scientific research largely depends on correctly matching the findings with the communication context through which they are disseminated 159. This match can be undermined if the language used reduces the scope of the scientific community interested in the subject-matter.

5.5.  Publication formats for research works

Within the field of experimental and natural sciences, research results are typically disseminated through academic articles published in prestigious journals. Legal findings, however, are often published in book format, most notably in monographs, commentaries and collective books, handbooks and commemorative volumes or tribute books.

The essential qualitative shifts and the most “creative leaps” in legal science are most likely to be found in monographs 160, whose purpose is to become reference works that put forward legal topics (having regard to the state of the art), further elaborate on them, and (at least in terms of definition or speculation) scientifically “close” them. No one should doubt that, from the perspective of scientific achievement, a good legal monograph is worth more than a good article. Handbooks are heterogeneous products: at one end, there are handbooks that qualify as original, innovative intellectual works acknowledged as a reference in the field 161; at the other, there is room for more modest works that succeed as teaching support tools but should not be considered research works.

Within the social sub-system of Spanish legal science, it is currently hard to find reliable indicators to assess the quality of research pieces published in books. In order to assess quality reliably, each research work should be subject to peer review (in the sexenio assessment procedure, for instance). The publisher’s prestige or standing 162 is not a criterion that can easily be applied to every work that gains access to that publisher, particularly if the author who intends to publish the manuscript also offers to finance the publication in whole or in part. Within the Spanish legal context, the assessments included in book reviews (recensiones) are not reliable 163; they rarely give rise to a critical debate with the author about his/her work. The contributions to commemorative volumes are very much in need of a specific quality assessment, as a result of the standard practice followed in the drafting of these tribute works. This situation might change if more and more publishers, or collections within publishing houses, subject manuscripts to peer review prior to the editor’s final publication decision.

5.6.  Law is a discipline “of mostly individual authorship”

Legal research has a long-standing tradition of individual authorship which, in terms of public support, has needed only the allocation of sufficient basic resources to survive 164. This tradition (along with some free-riding) explains why co-authorship has prompted some “suspicion” 165 from the standpoint of legal research assessment. Nevertheless, the system must learn to separate the wheat from the chaff; it should not only refrain from penalizing, but should actively encourage, those co-authored works “naturally resulting from genuinely multidisciplinary research” 166 that help advance scientific knowledge.

6.  QUALITY CRITERIA IN LEGAL RESEARCH ASSESSMENT

Is there a closed list of universally accepted quality criteria within legal research assessment? When drafting an assessment report, we tend to be sure about our conclusion, i.e., we are positive about whether a research work is very good, just good, poor, or awful. However, it is hard to state specifically the criteria guiding that conclusion. Above all, there is no guarantee that two reports by two well-respected legal scholars on the same work will agree both on the conclusion and on the supporting arguments and criteria.

We legal scholars still need to reflect on what we do within a debate expressly held with a twofold purpose: i) to further standardize or formally establish the criteria applied to assess the quality of our work, and ii) to achieve a greater consensus about such criteria 167. If we want to be subject to proper assessment, i.e., if we want to be correctly evaluated, we should make this effort, since it is reasonable to believe that nobody will be able to do it better than us. It does not suffice to sit back and complain about how the quality of our research is assessed under inappropriate or unsuitable criteria. To a large extent, our task is to render transparent the criteria that are already being implicitly applied 168.

Legal researchers could probably agree without much difficulty on a few (highly abstract) criteria to assess the quality of legal research. These criteria would be placed on a scale ranging from the more substantive to the more formal: originality (which pertains to the very essence of research, i.e., new scientific knowledge); systematic relevance (for example, because a concept is created that makes it possible to connect a large body of positive law); theoretical weight (although, as noted above, the ties between theory and practice are inherent to legal science); use of rigorous and correct arguments and lines of reasoning; clarity of exposition and precise language 169; appropriate structure; use of relevant references and correct citations, etc. 170.

As has been stated, in general terms, and at least regarding the assessment of researchers under Field 9 (“Law and Legal Science”), it is safe to say that the quality criteria that have long been used within sexenio assessment procedures are well-defined. These criteria are as follows: “originality, rigour, methodology and impact” of the scholarly works will be taken into account; also, those works that “bring knowledge and provide conceptual and analytical instruments to render legal rules more effective” will be given particular consideration, as well as “case law analyses based on rulings settling related cases aimed at clarifying case law criteria and the evolution thereof”. However, “merely descriptive works or the repetition of previous contributions will be disregarded” 171. This could be a good starting point to state the quality standards of our disciplines.

7.  CONCLUSIONS AND PROPOSALS

Public authorities competent for research assessment matters are responsible for creating organizational structures and for designing research assessment procedures that are well-suited for science. Within this statement, “well-suited for science” means that research assessment must be appropriate for each and every scientific discipline. Considering that only each individual discipline can come up with “the best” assessment criteria, the aforesaid organizational and procedural public responsibilities translate into the following obligation: public authorities (lawmakers or rulemaking bodies) must create organizations and design procedures allowing the various scholarly fields to set the criteria applicable to the public assessment of research. Public authorities should create and open up organizational and procedural structures within their administrative bodies so as to allow scientific communities to contribute knowledge that only they possess; this knowledge is much needed by public authorities to fulfil their duty of promoting scientific research.

Moreover, each of these scientific communities should be aware, as a group, of their responsibility in drawing up assessment criteria and standards well-suited for their research works and researchers. If the representatives of a given scholarly field are unable to agree on their quality standards, this scientific community will no longer have the right to complain (which is not uncommon) about the unsuitability of the criteria applied thereto 172.

This can have some implications in practice. For instance, ANECA may enter into agreements with professor associations from each discipline (if deemed sufficiently representative) in order for each of them: i) to draft a document laying down the criteria applicable to accreditations or sexenio assessments; ii) to prepare appropriate peer review forms for each assessment; or even iii) to nominate their own scholars for the evaluation committees. In the absence of associations of scholars, the competent public authorities should create other appropriate fora to allow the representatives of scientific disciplines to fulfil these tasks. The point is to enhance or refine something that, to a great extent, already exists. There is no doubt that the general criteria for sexenio assessments within the legal domain were drafted by legal scholars, and it is also obvious that the document titled “Assessable merits” (which drew some criticism above) was prepared by ANECA’s Law evaluation committee. We should further advance the idea that public authorities create procedures and organizational structures whilst scientific disciplines (aware of their collective responsibility) use them to get involved in assessment procedures by contributing their know-how.

As for the assessments of researchers’ CVs (accreditations and concursos), which obviously do not allow the evaluation committee or the external expert to access all the contents of the researcher’s work, scientific communities should also be responsible for preparing lists of prestigious journals and publishers in each field; these would serve as quality indicators for publications. If this task is not performed by each field of knowledge, the results are often questionable and are in fact challenged 173. These lists can also be used to calculate the number of publications to be taken into account in institutional assessment procedures. It would be very useful (and it would reduce the burden of submitting information within assessment procedures) for the documentation and database services of university schools to work with these lists, so that the number of publications in the relevant journals by the researchers of every school is constantly updated.

Using both the quality criteria applicable to research works within each field and the indicators on prestigious journals and publishers makes it possible to draft suitable forms for the abovementioned “informed” peer review processes, where experts are required to substantiate their assessment in a sufficiently precise manner, expressly addressing the criteria pertaining to their disciplines.

In order to lighten red tape and give full effect to the data minimization principle, institutional assessments require a design that allows for using a few (yet meaningful) data points obtained in other assessments, such as the sexenio assessments of university researchers. It is essential to use indicators that respect the formal equality of all disciplines, among others: the lists of journals drafted by each discipline (rather than indicators that reward certain disciplines over others), or the number of agreements entered into under Art. 83 LOU (rather than the amount of private funding obtained therefrom).

It is worth concluding by listing a few specific proposals regarding Spanish law on research assessment that touch on points of detail: i) the approval and evaluation of doctoral dissertations should also make it possible to identify (and, where appropriate, penalize) research malpractice by the doctoral candidate; ii) failing to select only the best proposals as submitted (having regard to resource availability), and instead increasing the number of funded (selected) projects by deciding, ex officio, to allocate less funding than requested to all or most of the selected proposals, amounts to a misallocation of resources; iii) main researchers within research projects should be informed of negative ex post assessments; otherwise, no negative (and unknown) consequences should be drawn from these evaluations in future calls or applications; iv) within sexenio assessments, in case of negative evaluations, the parties concerned must be granted a hearing; also, the granting of a sexenio cannot be denied unless a researcher from the same field as the applicant has been involved in the procedure, either as a member of the committee or as an external expert; v) ANECA’s soft law provisions applicable to accreditations for tenured positions (associate professorships and full professorships) should be redrafted so that they appropriately assess quality (and not quantity), as expressly required by RD 415/2015; vi) the procedure regarding the so-called “promotion positions” (plazas de promoción or “positions to be upgraded”) must be rethought and redesigned altogether. Perhaps the underlying problem of this procedure can only be solved by the enactment of a statutory provision or Act of Parliament.

8.  REFERENCES

Vicenç AGUADO I CUDOLÀ, “La selección de los cuerpos docentes universitarios: el sistema de acreditación”, Revista de Educación y Derecho, no. 10, 2014, pp. 1 et seq.

Juan Manuel ALEGRE ÁVILA, “El nuevo sistema de selección del profesorado universitario funcionario”, Revista Española de Derecho Administrativo, no. 135, 2007, pp. 437-457.

Carlos-Alberto AMOEDO-SOUTO, “Infrafinanciación cronificada, condicionalidad financiera y autonomía universitaria: notas para un abordaje jurídico”, in Fernando LÓPEZ RAMÓN, Ricardo RIVERO ORTEGA and Marcos M. FERNANDO PABLO (coord.), Organización de la Universidad y la ciencia, Madrid, 2018, pp. 163 et seq.

Luis ARROYO JIMÉNEZ, “Las bases constitucionales de la actividad administrativa de adjudicación de derechos limitados en número”, in Luis ARROYO and Dolores UTRILLA (dirs.), La administración de la escasez. Los fundamentos de la actividad administrativa de adjudicación de derechos limitados en número, Madrid, 2015, pp. 90 et seq.

Simon CADEZ, Vlado DIMOVSKI and Maja ZAMAN GROFF, “Research, teaching and performance evaluation in academia: the salience of quality”, Studies in Higher Education, 42:8, 2017, pp. 1455-1473.

Claus-Wilhelm CANARIS, El sistema en la Jurisprudencia, Spanish translation of the 2nd German edition (Systemdenken und Systembegriff in der Jurisprudenz of 1983), Madrid, 1998.

Jacques CHEVALLIER, El Estado postmoderno (translation into Spanish from the French original of 2008), Bogota, 2011.

Mercè DARNACULLETA, “Libertad de investigación científica y promoción de la ciencia en beneficio del interés general”, in Fernando LÓPEZ RAMÓN, Ricardo RIVERO ORTEGA and Marcos M. FERNANDO PABLO (coord.), Organización de la Universidad y la ciencia, Madrid, 2018, pp. 232 et seq.

Silvia DÍEZ SASTRE, La formación de conceptos en el Derecho público. Un estudio de metodología académica: definición, funciones y criterios de formación de los conceptos jurídicos, Madrid, 2018.

Gabriel DOMÉNECH, “Por qué y cómo hacer análisis económico del Derecho”, Revista de Administración Pública, no. 195, 2014, pp. 99-133.

— “Que innoven ellos. Por qué la ciencia jurídica española es tan poco original, creativa e innovadora”, InDret 2/2016.

— “Malas prácticas universitarias (I): la recensión”, 2016. Available at: https://almacendederecho.org/malas-practicas-universitarias-i-la-recension/

— “Malas prácticas universitarias (II): la interdisciplinariedad”. 2016. Available at: https://almacendederecho.org/malas-practicas-universitarias-ii-la-interdisciplinariedad/

Antonio Eduardo EMBID TELLO, La libertad de investigación científica. Una interpretación integrada de sus dimensiones subjetiva y objetiva, Valencia, 2017.

— “Calidad normativa y evaluación ex-post de las normas jurídicas”, Revista General de Derecho Administrativo, no. 50, 2019.

Michael FEHLING, “Wissenschaftsfreiheit (Art. 5 Abs. 3 GG)”, in Rudolf DOLZER and Klaus VOGEL (dir.), Bonner Kommentar zum Grundgesetz, Heidelberg, 2004, pp. 1 et seq.

Severiano FERNÁNDEZ RAMOS, “Tramos de investigación y transparencia”, in Fernando LÓPEZ RAMÓN, Ricardo RIVERO ORTEGA and Marcos M. FERNANDO PABLO (coord.), Organización de la Universidad y la ciencia, Madrid, 2018, pp. 273 et seq.

Bruno S. FREY, “Evaluitis – Eine neue Krankheit”, Center for Research in Economics, Management and the Arts, working paper no. 18, 2006.

Luis Ignacio GORDILLO PÉREZ, “Los contratos-programa y la Universidad”, Revista Vasca de Administración Pública, no. 74, 2006, pp. 183-236.

Fernando GURREA CASAMAYOR, “Los contratos-programa entre las Comunidades Autónomas y las Universidades: el modelo adoptado por la Comunidad Autónoma de Aragón”, Revista Aragonesa de Administración Pública, no. 18, 2001, pp. 319-356.

Andreas LIENHARD, Thierry TANQUEREL, Alexandre FLÜCKIGER, Fabian AMSCHWAND, Kari BYLAND and Eva HERRMANN, Forschungsevaluation in der Rechtswissenschaft. Grundlagen und empirische Analyse in der Schweiz, Bern, 2016.

Niklas LUHMANN, Grundrechte als Institution, Berlin, 1965.

Ute MAGER, “Freiheit von Forschung und Lehre”, in Josef ISENSEE and Paul KIRCHHOF, Handbuch des Staatsrechts der Bundesrepublik Deutschland, volume VII, Heidelberg, 2009, pp. 1075 et seq.

Araceli MANGAS, “La evaluación de la investigación jurídica en España”, El Cronista del Estado Social y Democrático de Derecho, no. 23, 2011, pp. 60-71.

Hartmut MAURER, Staatsrecht, Munich, 1999.

Alba NOGUEIRA, “Doce notas y una reflexión sobre el modelo de Universidad y empleo público docente que propician los criterios de acreditación en Derecho”, in Fernando LÓPEZ RAMÓN, Ricardo RIVERO ORTEGA and Marcos M. FERNANDO PABLO (coord.), Organización de la Universidad y la ciencia, Madrid, 2018, pp. 283 et seq.

Michael OCHSNER, Sven HUG and Ioana GALLERON, “The future of research assessment in the humanities: bottom-up assessment procedures”, Palgrave Communications, 3:17020, 2017. DOI: 10.1057/palcomms.2017.20.

Julia ORTEGA BERNARDO, “La transferencia de conocimiento en las Universidades: razones y claves de su articulación jurídica”, in Fernando LÓPEZ RAMÓN, Ricardo RIVERO ORTEGA and Marcos M. FERNANDO PABLO (coord.), Organización de la Universidad y la ciencia, Madrid, 2018, pp. 372 et seq.

Santiago RAMÓN Y CAJAL, Reglas y consejos sobre la investigación científica. Los tónicos de la voluntad, Madrid, 2008 (second printing of the 1920 sixth edition).

José María RODRÍGUEZ DE SANTIAGO, “Libertad de investigación científica y sexenios”, Revista catalana de dret públic, no. 44, 2012, pp. 225-252.

Pablo SALVADOR CODERCH, Albert AZAGRA MALO and Carlos GÓMEZ LIGÜERRE, “Criterios de evaluación de la actividad investigadora en Derecho civil, Derecho privado y análisis del Derecho”, InDret, 3/2008.

Diana SANTIAGO IGLESIAS, “Algunas claves para el éxito del procedimiento de innovación en el ámbito universitario”, in Fernando LÓPEZ RAMÓN, Ricardo RIVERO ORTEGA and Marcos M. FERNANDO PABLO (coord.), Organización de la Universidad y la ciencia, Madrid, 2018, pp. 474 et seq.

Íñigo SANZ RUBIALES, “La Universidad: entre el servicio público y la competencia”, in Fernando LÓPEZ RAMÓN, Ricardo RIVERO ORTEGA and Marcos M. FERNANDO PABLO (coord.), Organización de la Universidad y la ciencia, Madrid, 2018, pp. 49 et seq.

Helmut SCHULZE-FIELITZ, “Was macht die Qualität öffentlich-rechtlicher Forschung aus?”, Jahrbuch des Öffentlichen Rechts der Gegenwart, no. 50, 2002, pp. 1-68.

Gunnar Folke SCHUPPERT, Verwaltungswissenschaft. Verwaltung, Verwaltungsrecht, Verwaltungslehre, Baden-Baden, 2000.

Margrit SECKELMANN, Evaluation und Recht, Tübingen, 2018.

Marc TORKA, Die Projektförmigkeit der Forschung, Baden-Baden, 2009.

Hans-Heinrich TRUTE, Die Forschung zwischen grundrechtlicher Freiheit und staatlicher Institutionalisierung, Tübingen, 1994.

Gabriele VESTRI, “El acceso a la docencia-investigación en el sistema universitario español”, in Fernando LÓPEZ RAMÓN, Ricardo RIVERO ORTEGA and Marcos M. FERNANDO PABLO (coord.), Organización de la Universidad y la ciencia, Madrid, 2018, pp. 153 et seq.

Max WEBER, La ciencia como profesión, Madrid (Espasa Calpe), 1992 (Spanish version of the conference delivered by the author in 1919).

WISSENSCHAFTSRAT, Empfehlungen zur Bewertung und Steuerung von Forschungsleistung, 2011.

WISSENSCHAFTSRAT, Perspektiven der Rechtswissenschaft in Deutschland. Situation, Analysen, Empfehlungen, 2012.

WISSENSCHAFTSRAT, Peer Review in Higher Education and Research. Position Paper, 2017.


1 Regarding Spain, see Art. 35 (quality assurance) of Constitutional (Organic) Act 6/2001, of December 21, on Universities (Ley Orgánica 6/2001, de 21 de diciembre, de Universidades, LOU). Other abbreviations used in this paper: AEI - Agencia Estatal de Investigación (National Research Agency); ANECA - Agencia Nacional de Evaluación de la Calidad y Acreditación (National Quality Assurance and Accreditation Agency); CE - Constitución Española (Spanish Constitution); CNEAI - Comisión Nacional Evaluadora de la Actividad Investigadora (National Commission for the Evaluation of Research Activity); SCI - Science Citation Index; SSCI - Social Sciences Citation Index; LPAC - Ley 39/2015, de 1 de octubre, de Procedimiento Administrativo Común de las Administraciones Públicas (Act on the Standard Administrative Procedure for Public Authorities); LRJSP - Ley 40/2015, de 1 de octubre, de Régimen Jurídico del Sector Público (Act on the Legal Regime applicable to Public Authorities).

2 See, in a nutshell, Gunnar Folke SCHUPPERT, Verwaltungswissenschaft. Verwaltung, Verwaltungsrecht, Verwaltungslehre, Baden-Baden, 2000, pp. 999 et seq.; Jacques CHEVALLIER, El Estado postmoderno (translation into Spanish from the French original of 2008), Bogota, 2011, pp. 124 et seq.

3 In this connection, see Margrit SECKELMANN, Evaluation und Recht, Tübingen, 2018, pp. 12-13, 339.

4 SECKELMANN, 2018: 49-60.

5 See the title of Bruno S. FREY’s work, “Evaluitis – Eine neue Krankheit”, Center for Research in Economics, Management and the Arts, working paper no. 18, 2006, available at: http://www.crema-research.ch/papers/2006-18.pdf

6 See Mercè DARNACULLETA, “Libertad de investigación científica y promoción de la ciencia en beneficio del interés general”, in Fernando LÓPEZ RAMÓN, Ricardo RIVERO ORTEGA and Marcos M. FERNANDO PABLO (coord.), Organización de la Universidad y la ciencia, Madrid, 2018, pp. 232 et seq. This work rejects the “utilitarian approach to research” underlying research evaluation procedures and the modern notion of researcher (in contrast with a more scholarly or scientific notion). Academic utilitarianism “substantially limits the freedom of research to be (allegedly) encouraged thereby”.

7 Along these lines, see WISSENSCHAFTSRAT, Empfehlungen zur Bewertung und Steuerung von Forschungsleistung, 2011, pp. 2-9. The reports and publications of this advisory body for the German Bund and Länder regarding public policy on academic research are available at: https://www.wissenschaftsrat.de

8 On these potentially perverse incentives, see WISSENSCHAFTSRAT, 2011: 31.

9 Referring to a study carried out at the University of Maribor (Slovenia): Simon CADEZ, Vlado DIMOVSKI and Maja ZAMAN GROFF, “Research, teaching and performance evaluation in academia: the salience of quality”, Studies in Higher Education, 42:8, 2017, pp. 1455-1473.

10 However, broadly speaking (in line with Art. 1(1) LOU), the public service provided by universities includes research and study along with teaching. On this matter, see Íñigo SANZ RUBIALES, “La Universidad: entre el servicio público y la competencia”, in Fernando LÓPEZ RAMÓN, Ricardo RIVERO ORTEGA and Marcos M. FERNANDO PABLO (coord.), Organización de la Universidad y la ciencia, Madrid, 2018, pp. 49 et seq., in particular, pp. 62-63.

11 “Reserva de ley” has often been translated as “statutory reservation”, “statutory requirement”, “requirement for a statute” or “to be defined by an Act of Parliament”. This notion refers to a principle under which a matter must be regulated by a statutory provision or an Act of Parliament. See http://legalspaintrans.com/legal-translation/how-to-translate-reserva-de-ley-into-english-using-a-descriptive-strategy/ or http://transblawg.eu/2014/12/18/gesetzesvorbehalt/ for further details.

12 As for applying for sexenios, the scientific community is divided into 11 fields. Concerning the most recent call, see the Resolution issued by CNEAI on November 14, 2018 disclosing the specific criteria approved for each of the fields of assessment. Available at: https://www.boe.es/diario_boe/txt.php?id=BOE-A-2018-16138

13 Niklas LUHMANN, Grundrechte als Institution, Berlin, 1965, pp. 23 and 186 et seq., should be credited for this expression.

14 According to SECKELMANN, 2018: 226-227, creativity and reportability are the two defining features of scientific research. Regarding the notion of “creativity”, Antonio Eduardo EMBID TELLO, La libertad de investigación científica. Una interpretación integrada de sus dimensiones subjetiva y objetiva, Valencia, 2017, pp. 173 et seq., suggests “merging” the subjective and objective dimensions of freedom of research.

15 Nevertheless, SECKELMANN, 2018: 425-429, for instance, looks at research assessment procedures as an interference with the fundamental right to scientific production on the basis of the proportionality principle (suitability or adequacy, necessity and weighing or proportionality stricto sensu).

16 The approach follows-up on that used in José María RODRÍGUEZ DE SANTIAGO, “Libertad de investigación científica y sexenios”, Revista catalana de dret públic, no. 44, 2012, pp. 225-252, particularly, pp. 227-235.

17 In this vein, see Michael FEHLING, “Wissenschaftsfreiheit (Art. 5 Abs. 3 GG)”, in Rudolf DOLZER and Klaus VOGEL (dir.), Bonner Kommentar zum Grundgesetz, Heidelberg, 2004, pp. 1 et seq., specifically, pp. 40 and 62; Ute MAGER, “Freiheit von Forschung und Lehre”, in Josef ISENSEE and Paul KIRCHHOF, Handbuch des Staatsrechts der Bundesrepublik Deutschland, vol. VII, Heidelberg, 2009, pp. 1075 et seq., in particular, §12 and 25; and Hans-Heinrich TRUTE, Die Forschung zwischen grundrechtlicher Freiheit und staatlicher Institutionalisierung, Tübingen, 1994, pp. 80 et seq.

18 SECKELMANN, 2018: 432.

19 On this matter, see, for example, SECKELMANN, 2018: 378.

20 Described by Robert K. MERTON in 1968 and named after the Gospel of Matthew, Chapter 25, verse 29: “For to every one who has will more be given, and he will have abundance; but from him who has not, even what he has will be taken away”.

21 This has been expressly stated by the German Federal Constitutional Court (first senate) in its Judgments of October 26, 2004 (regarding the Brandenburg Regional Act on Universities), par. 171; and of February 17, 2016 (regarding the review of curricula), par. 60. On this topic, see SECKELMANN, 2018: 384; she graphically claims that one size does not fit all.

22 In this connection, see EMBID TELLO, 2017: 239.

23 See the work of Marc TORKA, Die Projektförmigkeit der Forschung, Baden-Baden, 2009, among others, p. 9.

24 SECKELMANN, 2018: 242 and 352.

25 In sum (regardless of further analysis provided herein), see Helmut SCHULZE-FIELITZ, “Was macht die Qualität öffentlich-rechtlicher Forschung aus?”, Jahrbuch des Öffentlichen Rechts der Gegenwart, no. 50, 2002, pp. 1-68, specifically, p. 12.

26 This is how Santiago RAMÓN Y CAJAL described intellectual work in Reglas y consejos sobre la investigación científica. Los tónicos de la voluntad, Madrid, 2008 (second printing of the 1920 sixth edition), particularly, pp. 49-52, 55, 66.

27 Max WEBER, La ciencia como profesión, Madrid (Espasa Calpe), 1992 (Spanish version of the conference delivered by the author in 1919), p. 62.

28 WISSENSCHAFTSRAT, Empfehlungen zur Bewertung und Steuerung von Forschungsleistung, 2011, p. 35.

29 The Advisory Body on Science, Technology and Innovation (Consejo Asesor de Ciencia, Tecnología e Innovación) is also a participatory body for economic and social stakeholders (Art. 9(1) of Act 14/2011, of June 1, on Science, Technology and Innovation), but at least two thirds of the Advisory Body’s members should be “prominent members” of the scientific, technology or innovation community (Art. 9(3) of Act 14/2011).

30 See Art. 9(2)(e) of Act 14/2011, of June 1, on Science, Technology and Innovation.

31 See, among others, Andreas LIENHARD, Thierry TANQUEREL, Alexandre FLÜCKIGER, Fabian AMSCHWAND, Kari BYLAND and Eva HERRMANN, Forschungsevaluation in der Rechtswissenschaft. Grundlagen und empirische Analyse in der Schweiz, Bern, 2016, p. 115; WISSENSCHAFTSRAT, Peer Review in Higher Education and Research. Position Paper, 2017, p. 9.

32 See, among others, Antonio Eduardo EMBID TELLO, “Calidad normativa y evaluación ex-post de las normas jurídicas”, Revista General de Derecho Administrativo, no. 50, 2019.

33 See Art. 130(1) of Act 39/2015, of October 1, on the standard administrative procedure for public authorities (LPAC).

34 WISSENSCHAFTSRAT, 2011: 34.

35 This terminology can be found in LIENHARD et al., 2016: 101-102.

36 Along these lines, SECKELMANN, 2018: 428.

37 Within German law, see the two judgments cited above: Judgments of the German Federal Constitutional Court (first senate) of October 26, 2004 (regarding the Brandenburg Regional Act on Universities), par. 171; and of February 17, 2016 (regarding the review of curricula), par. 60.

38 However, EMBID TELLO, 2017: 253, has an opposing view.

39 See, for instance, Hartmut MAURER, Staatsrecht, Munich, 1999, p. 371.

40 Along these lines, see SECKELMANN, 2018: 398-404.

41 On this matter, see RODRÍGUEZ DE SANTIAGO, 2012: 235-239.

42 Art. 62 of Act 2/2011, of March 4, on Sustainable Economy, identifies the projects that have priority within the program, while the procedure and organization of project assessment are governed by Ministerial Order EDU/1539/2011, of June 2, including the call for funding applications for 2011 regarding the excellence sub-program of the Campus of International Excellence Program and implementing the procedure to grant the Campus of International Excellence award and to enter into agreements with Autonomous Regions (Regional Governments) within the scope of Ministerial Order EDU/903/2010, of April 8.

43 See i) Resolution issued by the National Commission for the Evaluation of Research Activity (Comisión Nacional Evaluadora de la Actividad Investigadora or CNEAI) on November 14, 2018 disclosing the specific criteria approved for each of the fields of assessment, and ii) Resolution issued by the State Secretariat for Universities, Research, Development and Innovation providing for the procedure and time periods to apply for research assessment by CNEAI.

44 Regulation (EU) 2016/679 of the European Parliament and of the Council, of 27 April 2016, on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).

45 On this topic, see SECKELMANN, 2018: 421-424.

46 Art. 5(1)(c) of the General Data Protection Regulation.

47 Modelo de financiación de las Universidades públicas de la Comunidad de Madrid (2006-2010). Available at: http://www.madrid.org/cs/BlobServer?blobkey=id&blobwhere=1220468875910&blobheader=application%2Fpdf&blobheadername1=Content-Disposition&blobheadervalue1=filename%3DModelofinanciacion.pdf&blobcol=urldata&blobtable=MungoBlobs

48 See pp. 16 and 23 of the document cited in the previous footnote.

49 Plan plurianual de financiación del sistema universitario público valenciano 2010-2017. Available at: https://gerencia.ua.es/es/documentos/documentos433/plan-plurianual-de-financiacion-2010-2017.pdf

50 See pp. 9 and 28 of the document cited in the previous footnote.

51 In this vein, see WISSENSCHAFTSRAT, 2011: 39.

52 In this connection see LIENHARD et al., 2016: 64; see also, SECKELMANN, 2018: 159.

53 On this matter, see SECKELMANN, 2018: 59-60.

54 These rankings can be found, for instance, in LIENHARD et al., 2016: 12-15, 66-72.

55 See WISSENSCHAFTSRAT, 2011: 31.

56 See WISSENSCHAFTSRAT, 2011: 32.

57 On this aspect, in Germany, see WISSENSCHAFTSRAT, 2011: 24-27.

58 On this discussion, see, among others, Michael OCHSNER, Sven HUG and Ioana GALLERON, “The future of research assessment in the humanities: bottom-up assessment procedures”, Palgrave Communications, vol. 3, 17020, 2017, doi: 10.1057/palcomms.2017.20.

59 See WISSENSCHAFTSRAT, 2011: 38.

60 In this connection, see LIENHARD et al., 2016: 171.

61 See LIENHARD et al., 2016: 170.

62 See LIENHARD et al., 2016: 106.

63 This index was created in 2005 by Jorge E. HIRSCH. A researcher’s h-index is the largest number h for which h of his/her scholarly works have each received at least h citations. For example, Stephen W. HAWKING had an h-index of 62: he had published 62 papers that had been cited at least 62 times each. On this matter, see Pablo SALVADOR CODERCH, Albert AZAGRA MALO and Carlos GÓMEZ LIGÜERRE, “Criterios de evaluación de la actividad investigadora en Derecho civil, Derecho privado y análisis del Derecho”, InDret, 3/2008, pp. 14-15.
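By way of illustration only (the computation below is not taken from the works cited above), the definition can be restated as a minimal sketch in Python, assuming nothing more than a list with the number of citations received by each publication:

def h_index(citations):
    # citations: number of citations received by each of a researcher's publications
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # at least 'rank' works have 'rank' or more citations
        else:
            break
    return h

# Hypothetical example: five papers cited 10, 8, 5, 4 and 3 times yield an h-index of 4
print(h_index([10, 8, 5, 4, 3]))  # prints 4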

64 See SECKELMANN, 2018: 381; and OCHSNER et al., 2017: 9.

65 These critical remarks can be found in EMBID TELLO, 2017: 256, 265.

66 DORA: San Francisco Declaration on Research Assessment; available at: https://sfdora.org/

67 https://cwur.org/2019-2020.php

68 https://www.timeshighereducation.com

69 See WISSENSCHAFTSRAT, 2011: 23-24.

70 https://www.umultirank.org/

71 See SECKELMANN, 2018: 462.

72 WISSENSCHAFTSRAT, 2011: 16.

73 In this connection, LIENHARD et al., 2016: 116.

74 SECKELMANN, 2018: 374.

75 OCHSNER et al., 2017: 9.

76 OCHSNER et al., 2017: 9.

77 LIENHARD et al., 2016: 120.

78 In this vein, WISSENSCHAFTSRAT, Peer Review in Higher Education and Research, Position Paper, 2017, p. 8.

79 WISSENSCHAFTSRAT, 2017: 28.

80 WISSENSCHAFTSRAT, 2017: 11.

81 See WISSENSCHAFTSRAT, 2017: 39.

82 In this connection, see WISSENSCHAFTSRAT, 2017: 24.

83 Along these lines, LIENHARD et al., 2016: 124.

84 See the 2nd indent of Art. 14(2)(a) of Decree 1312/2007, of October 5, on the national accreditation to achieve tenure, following the amendment provided by Decree 415/2015, of May 29.

85 See EMBID TELLO, 2017: 265-266; WISSENSCHAFTSRAT, 2011: 40.

86 In this vein, WISSENSCHAFTSRAT, 2017: 25.

87 See Art. 70(1) (on the administrative file) of the Act on the standard administrative procedure for public authorities (LPAC).

88 Art. 57(3) LOU.

89 See Art. 17 of the Resolution issued by the State Secretariat for Universities, Research, Development and Innovation along with the Chair of the National Research Agency (AEI), opening the call for an expedited processing of the 2019 funding allocation procedure for R&D&i projects “Challenges-Cooperation” (“Retos-Colaboración”) of the Public Research, Development and Innovation Program for Social Challenges within the framework of the National Scientific, Technical and Innovation Research Program 2017-2020 (Plan Estatal de Investigación Científica y Técnica y de Innovación 2017-2020).

90 See Art. 25(5) of Decree 1393/2007, of October 29, on official university studies.

91 In this connection, see SECKELMANN, 2018: 404-407.

92 See Arts. 38 LOU and 13(1) of Decree 99/2011, of January 28, on official doctoral studies.

93 See Art. 2(6) of Decree 99/2011, of January 28, on official doctoral studies.

94 For example, in the Universidad Autónoma de Madrid (UAM), see Art. 2(2) of the procedural rules on the board of examiners (also designated as doctoral dissertation reading committee or, in Spanish, “tribunal”), presentation or defence, and evaluation of doctoral dissertations in UAM (passed in 2012 and amended several times since then).

95 On this procedure, see the recent work of Severiano FERNÁNDEZ RAMOS, “Tramos de investigación y transparencia”, in Fernando LÓPEZ RAMÓN, Ricardo RIVERO ORTEGA and Marcos M. FERNANDO PABLO (coord.), Organización de la Universidad y la ciencia, Madrid, 2018, pp. 273 et seq.

96 See Art. 2(4) of Decree 1086/1989, of August 28, on the remuneration of university faculty members.

97 Art. 6(2) of Decree 1312/2007, of October 5, on the national accreditation to achieve tenure.

98 Art. 9(2) of the CNEAI Internal Regulations (enacted by Ministerial Order ECI/3184/2005, of October 6).

99 Art. 68(2) LOU.

100 In particular, in the CNEAI Internal Regulations (enacted by Ministerial Order ECI/3184/2005, of October 6); and in Ministerial Order, of December 2, 1994, providing for the procedure to assess research activity implementing Decree 1086/1989, of August 28, on the remuneration of university faculty members.

101 On this topic, see RODRÍGUEZ DE SANTIAGO, 2012: 245-247.

102 This hearing is neither provided for in Art. 9 of the said Ministerial Order of December 2, 1994, nor granted in practice.

103 See Art. 57(3) LOU.

104 See Section 4 of the criteria for Field 9 of the Resolution issued by CNEAI on November 14, 2018 disclosing the specific criteria approved for each of the fields of assessment. Available at: https://www.boe.es/diario_boe/txt.php?id=BOE-A-2018-16138

105 Supreme Court Judgment of June 12, 2018 (cassation appeal no. 1281/2017), legal bases 5 and 6.

106 See SCHULZE-FIELITZ, 2002: 16.

107 See WISSENSCHAFTSRAT, 2011: 39.

108 The Revista de Administración Pública and the Revista Española de Derecho Constitucional are published by the Center for Political and Constitutional Studies (Centro de Estudios Políticos y Constitucionales). The journal titled Documentación Administrativa is published by the National Institute for Public Administration (Instituto Nacional de Administración Pública). As is well-known, there are many other examples.

109 The regulatory approach is taken from Ministerial Order CNU/320/2019, of March 13, approving the regulatory framework for the granting of public financial aid within the National Program for Knowledge Generation and Scientific and Technological Advancement of the R&D&i System and within the framework of the Public Research, Development and Innovation Program for Social Challenges within the National Scientific, Technical and Innovation Research Program 2017-2020 (Plan Estatal de Investigación Científica y Técnica y de Innovación 2017-2020) addressed to research and knowledge dissemination bodies.

110 See Art. 16 of the said Ministerial Order CNU/320/2019.

111 See Art. 18 of Ministerial Order CNU/320/2019.

112 See Art. 19 of Ministerial Order CNU/320/2019.

113 See paragraphs (3) and (4) of Art. 21 of Ministerial Order CNU/320/2019.

114 See Annex III to Ministerial Order CNU/320/2019.

115 See Art. 27(4) of Ministerial Order CNU/320/2019.

116 See Arts. 50(a) (Assistant Lecturer), 52(a) (Senior Lecturer) and 72(2) (Private University Faculty) of the LOU. This accreditation procedure was further implemented by Decree 1052/2002, of October 11, regulating the procedure for the assessment and subsequent certification thereof by the National Quality Assurance and Accreditation Agency (Agencia Nacional de Evaluación de la Calidad y Acreditación or ANECA), for the purpose of hiring university faculty. The assessment criteria were laid down by two soft law instruments: i) Annex IV to the Resolution issued by the General Directorate for Universities on February 18, 2005 modifying certain aspects related to the ANECA assessment application procedure for the hiring of university faculty, as well as the assessment criteria, set forth in Resolutions issued by the General Directorate for Universities on October 17, 2002 and June 24, 2003; and ii) a document titled “Principles and guidelines for the application of assessment criteria” (“Principios y orientaciones para la aplicación de los criterios de evaluación”) available on ANECA’s website.

117 Made by Constitutional (Organic) Act 4/2007, of April 12, amending Constitutional (Organic) Act 6/2001, of December 21, on Universities. The accreditation to achieve a tenured position as Associate Professor (profesor titular) or Full Professor (catedrático) is governed by Arts. 57, 59 and 60 LOU, implemented by Decree 1312/2007, of October 5, on the national accreditation to achieve tenure.

118 On this matter, see Juan Manuel ALEGRE ÁVILA, “El nuevo sistema de selección del profesorado universitario funcionario”, Revista Española de Derecho Administrativo, no. 135, 2007, pp. 437-457.

119 For instance, see Araceli MANGAS, “La evaluación de la investigación jurídica en España”, El Cronista del Estado Social y Democrático de Derecho, no. 23, 2011, pp. 60-71, specifically, pp. 63-64; Gabriel DOMÉNECH, “Que innoven ellos. Por qué la ciencia jurídica española es tan poco original, creativa e innovadora”, InDret 2/2016, pp. 15-19; EMBID TELLO, 2017: 259.

120 The amendment was performed through Decree 415/2015, of May 29, modifying Decree 1312/2007, of October 5, on the national accreditation to achieve tenure.

121 On the precedents leading to the amendment, see Vicenç AGUADO I CUDOLÀ, “La selección de los cuerpos docentes universitarios: el sistema de acreditación”, Revista de Educación y Derecho, no. 10, 2014, pp. 8-9.

122 On the negative effects on researchers arising from the instability and fluctuations of assessment criteria, see Diana SANTIAGO IGLESIAS, “Algunas claves para el éxito del procedimiento de innovación en el ámbito universitario”, in Fernando LÓPEZ RAMÓN, Ricardo RIVERO ORTEGA and Marcos M. FERNANDO PABLO (coord.), Organización de la Universidad y la ciencia, Madrid, 2018, pp. 474 et seq., specifically, pp. 477-478.

123 In favour, however, of a “strict and almost mathematical scaling system”, Gabriele VESTRI, “El acceso a la docencia-investigación en el sistema universitario español”, in Fernando LÓPEZ RAMÓN, Ricardo RIVERO ORTEGA and Marcos M. FERNANDO PABLO (coord.), Organización de la Universidad y la ciencia, Madrid, 2018, pp. 153 et seq., in particular, p. 160.

124 See the 2nd indent of Art. 14(2)(a) of Decree 1312/2007, following the amendment provided by Decree 415/2015, of May 29.

125 Available at: http://www.aneca.es/Programas-de-evaluacion/Evaluacion-de-profesorado/ACADEMIA/Criterios-de-evaluacion-noviembre-2017

126 See Alba NOGUEIRA, “Doce notas y una reflexión sobre el modelo de Universidad y empleo público docente que propician los criterios de acreditación en Derecho”, in Fernando LÓPEZ RAMÓN, Ricardo RIVERO ORTEGA and Marcos M. FERNANDO PABLO (coord.), Organización de la Universidad y la ciencia, Madrid, 2018, pp. 283 et seq., in particular, pp. 288-289.

127 WISSENSCHAFTSRAT, Perspektiven der Rechtswissenschaft in Deutschland. Situation, Analysen, Empfehlungen, 2012, p. 47.

128 As for Universidad Autónoma de Madrid (UAM), see, for instance, rule 5(4) on competitive procedures to achieve tenure (Resolution issued by the Rector or President on March 23, 2009 publishing the Decision of the University’s Governing Board dated March 13, 2009).

129 See, by way of example, the 2019 call of the Universidad Autónoma de Madrid: https://www.uam.es/UAM/Convocatorias-Internas/1446772807610.htm?language=es

130 Concerning the German Leistungsorientierte Mittelvergabe (LOM) system, see WISSENSCHAFTSRAT, 2011: 23 et seq.

131 On this topic, see Fernando GURREA CASAMAYOR, “Los contratos-programa entre las Comunidades Autónomas y las Universidades: el modelo adoptado por la Comunidad Autónoma de Aragón”, Revista Aragonesa de Administración Pública, no. 18, 2001, pp. 319-356; and Luis Ignacio GORDILLO PÉREZ, “Los contratos-programa y la Universidad”, Revista Vasca de Administración Pública, no. 74, 2006, pp. 183-236.

132 See, for instance, Art. 88 of the Recast Text of the Andalusian Regional Act on Universities (enacted through Legislative Decree 1/2013, of January 8); Art. 53 of Regional Act 5/2005, of June 14, on the University System of Aragon; and Art. 48 of Regional Act 3/2003, of March 28, on Universities of Castilla y León.

133 See the document cited in footnote 47.

134 See the document cited in footnote 49.

135 On the ever-increasing importance placed on conditional funding of Spanish universities, see Carlos-Alberto AMOEDO-SOUTO, “Infrafinanciación cronificada, condicionalidad financiera y autonomía universitaria: notas para un abordaje jurídico”, in Fernando LÓPEZ RAMÓN, Ricardo RIVERO ORTEGA and Marcos M. FERNANDO PABLO (coord.), Organización de la Universidad y la ciencia, Madrid, 2018, pp. 163 et seq.

136 On this matter, see Luis ARROYO JIMÉNEZ, “Las bases constitucionales de la actividad administrativa de adjudicación de derechos limitados en número”, in Luis ARROYO and Dolores UTRILLA (dirs.), La administración de la escasez. Los fundamentos de la actividad administrativa de adjudicación de derechos limitados en número, Madrid, 2015, pp. 90 et seq.

137 Concerning the 2011 call, see Ministerial Order EDU/1539/2011, of June 2, including the call for funding applications for 2011 regarding the excellence sub-program of the Campus of International Excellence Program and implementing the procedure to grant the Campus of International Excellence award and to enter into agreements with Autonomous Regions (Regional Governments) within the scope of Ministerial Order EDU/903/2010, of April 8.

138 Art. 6 of the said Ministerial Order EDU/1539/2011, of June 2.

139 Art. 7 of Ministerial Order EDU/1539/2011, of June 2.

140 Art. 5 of Ministerial Order EDU/1539/2011, of June 2.

141 Art. 4(1) of Ministerial Order EDU/1539/2011, of June 2.

142 Section 5.1 of the Resolution issued by the State Secretariat for Education, Vocational Training and Universities calling for funding applications for the consolidation of excellence projects for universities.

143 See, for instance, the Universidad Autónoma de Madrid regulations on research groups: https://www.uam.es/UAM/Grupos-de-Investigaci%C3%B3n/1242647861998.htm?language=es&nodepath=Grupos%20de%20Investigaci%C3%B3n

144 This approach can be found in LIENHARD et al., 2016: 34.

145 See Claus-Wilhelm CANARIS, El sistema en la Jurisprudencia, translation into Spanish of the 2nd German edition (Systemdenken und Systembegriff in der Jurisprudenz of 1983), Madrid, 1998, pp. 20-21, 21-26, passim.

146 See the purpose of legal concepts highlighted by Silvia DÍEZ SASTRE, La formación de conceptos en el Derecho público. Un estudio de metodología académica: definición, funciones y criterios de formación de los conceptos jurídicos, Madrid, 2018, pp. 142 et seq.

147 On this topic, see, in a nutshell, Gabriel DOMÉNECH, “Por qué y cómo hacer análisis económico del Derecho”, Revista de Administración Pública, no. 195, 2014, pp. 99-133, in particular, pp. 112-114 and 120 et seq.

148 In this vein, see WISSENSCHAFTSRAT, Perspektiven der Rechtswissenschaft in Deutschland. Situation, Analysen, Empfehlungen, 2012, p. 29.

149 WISSENSCHAFTSRAT, 2012: 14.

150 OCHSNER et al., 2017: 3; LIENHARD et al., 2016: 170-171.

151 See pp. 17 and 23 of the document cited in footnote 47.

152 See pp. 14 and 29 of the document cited in footnote 49.

153 WISSENSCHAFTSRAT, 2012: 5.

154 See SCHULZE-FIELITZ, 2002: 13-14.

155 On this matter, see Julia ORTEGA BERNARDO, “La transferencia de conocimiento en las Universidades: razones y claves de su articulación jurídica”, in Fernando LÓPEZ RAMÓN, Ricardo RIVERO ORTEGA and Marcos M. FERNANDO PABLO (coord.), Organización de la Universidad y la ciencia, Madrid, 2018, pp. 372 et seq.

156 In this vein, see LIENHARD et al., 2016: 34.

157 WISSENSCHAFTSRAT, 2012: 70.

158 See LIENHARD et al., 2016: 38, 163, 173, although multilingualism in Switzerland has a different legal framework from that of regional languages in Spain.

159 See SCHULZE-FIELITZ, 2002: 16.

160 See SCHULZE-FIELITZ, 2002: 18.

161 Let’s think of Derecho civil de España by Federico DE CASTRO (the first volume was published in 1943); or of Derecho constitucional. Sistema de fuentes by Ignacio DE OTTO (the first edition was released in 1987), among other salient examples. In Germany, it is almost a cliché to cite, in this context, the Grundzüge des Verfassungsrechts der Bundesrepublik Deutschland, by Konrad HESSE (the first edition dates back to 1967 and the 20th edition was released in 1995).

162 SALVADOR CODERCH et al., 2008: 54-55, suggest using this criterion.

163 See Gabriel DOMÉNECH, “Malas prácticas universitarias (I): la recensión”, 2016. Available at: https://almacendederecho.org/malas-practicas-universitarias-i-la-recension/

164 In this vein, see SCHULZE-FIELITZ, 2002: 12; WISSENSCHAFTSRAT, 2012: 38; OCHSNER et al., 2017: 5.

165 Criterion 2 applicable to Field 9 (“Law and Case Law”) provided in the Resolution issued by CNEAI on November 14, 2018 disclosing the specific criteria approved for each of the fields of assessment (within the procedure for the granting of sexenios) is worded as follows: “The number of authors of a contribution should be justified in light of the topic addressed, as well as its complexity and length. Applicants must state, giving reasons, what their substantive contribution to the co-authored work was”.

166 See Gabriel DOMÉNECH, “Malas prácticas universitarias (II): la interdisciplinariedad”, 2016. Available at: https://almacendederecho.org/malas-practicas-universitarias-ii-la-interdisciplinariedad/

167 See SCHULZE-FIELITZ, 2002: 3.

168 In this connection, see OCHSNER et al., 2017: 6.

169 RAMÓN Y CAJAL, 1920: 136-137, suggests following Gracián’s advice in research works: “Thou shalt speak as in wills, where the fewer the words, the fewer the quarrels”.

170 In this connection, see SCHULZE-FIELITZ, 2002: 26 et seq.; LIENHARD et al., 2016: 169 et seq.

171 See Section 4 of the criteria for Field 9 of the Resolution issued by CNEAI on November 14, 2018 disclosing the specific criteria approved for each of the fields of assessment. Available at: https://www.boe.es/diario_boe/txt.php?id=BOE-A-2018-16138

172 With similar approaches, see LIENHARD et al., 2016: 169; WISSENSCHAFTSRAT, 2012: 8, 48; and OCHSNER et al., 2017: 9, who refer to this way of setting criteria (flowing from each scientific community up to the public bodies in charge of research assessment) as a “bottom-up approach”.

173 In the field of administrative law, for example, and notwithstanding its methodological rigour, one could question the ranking of legal journals prepared by the Spanish Foundation for Science and Technology (Fundación Española para la Ciencia y Tecnología, FECYT) and published in 2019. This ranking includes neither InDret nor the Revista Española de Derecho Administrativo, perhaps because participation in the assessment procedure is voluntary.