Keeping a Better World in Mind

A dean's blog by Andrew Karolyi.

Charles Field Knight Dean of the Cornell SC Johnson College of Business

Harold Bierman, Jr. Distinguished Professor of Management


Responsible research in business and management comes alive

The aim of research must be to distill foundational knowledge into best practices, the most innovative of which can and should inform progress. Among the core guiding principles for research in business and management is the value placed on pluralism in its theories, grounded, as befits a social science, in different assumptions about how humans behave. Practiced across multiple perspectives, this pluralism becomes multidisciplinary collaboration, and it makes room for alternative models of business and its role in society. Business school deans, journal editors, sponsors and funders, accreditation agencies, and certainly corporate stakeholders all value multiple research contributions, both basic and applied.

We can only be responsible when we are keenly conscious of the wide-ranging effects of our work. What we who lead business schools need to do better is improve the visibility, and prove the relevance, of the masterful research under way, through creative publishing and related methods of dissemination. Our work as leaders in business education ripples continually through communities and systems.

Multidisciplinary research has always been a Cornell strength, and this range and diversity of expertise and outlooks, still expanding in our college, is certain to grow in value as we continue to develop as educators of responsible business leaders. Our combined strengths and commitments will move us forward as a society and as a force for good. We see this more every day.

This is why our newest development is such a timely and meaningful event. The Paul Rubacha Department of Real Estate—established with the generosity of alumnus Paul Rubacha ’72, MBA ’73 and jointly led by our SC Johnson College of Business and Cornell’s College of Architecture, Art, and Planning—will build on the existing Baker Program in Real Estate. The new department weaves together responsible design, development, operations, and financing, continuing the Cornell way of breaking down traditional barriers to create opportunities for our students and for the industries that need their forward-thinking expertise. I cannot wait to see what the future holds for this new initiative.

Creating new rewards for use-inspired research

If these statements about guiding principles look familiar, it is because they are a distillation of the seven principles of responsible science outlined in a 2017 position paper, “A Vision of Responsible Research in Business and Management: Striving for Useful and Credible Knowledge,” by a then-new organization, the Responsible Research in Business and Management (RRBM) Network. In full disclosure, I am an original signatory to the RRBM position paper and currently serve as an elected member of its working board. The position paper paints a picture of a future—Vision 2030—in which business schools and scholars worldwide will have successfully transformed their research toward responsible science, producing useful, credible knowledge that addresses problems important to business and to how business links to society. The vision rests on a core belief: business is a force for good in society if it is informed by responsible research. If I have piqued your interest, go read the position paper.

I see a lot of commonality between this vision and the one our very own Cornell SC Johnson College of Business set out for itself in 2016.

In addition to defining a set of seven principles to guide responsible research in business and management, RRBM is pushing to change the current business research ecosystem toward one that emphasizes business and societal relevance. Changing the incentives and culture around publication will be essential for promoting credible and impactful research.

Why are these issues top of mind for me this month?

This was all the subject of the 2022 Responsible Research Summit: Rewarding Responsible Research, hosted at the Wharton School in June. After a couple of years of virtual gatherings, we were back in person and in full force. This year’s agenda featured eleven deans from universities across North America, Europe, and Asia, the heads of AACSB and the UN Global Compact’s Principles for Responsible Management Education, and other distinguished corporate speakers, faculty members, and journal editors. Two sessions I thoroughly enjoyed were by the two co-founders of B Lab, the nonprofit network behind the B Corp movement, which certifies companies based on their social and environmental impact, and by Sally Susman, Chief Corporate Affairs Officer and Executive VP of the Pfizer Foundation, who described Pfizer’s coordinated support for disaster and humanitarian relief and recovery efforts around the world over the past few years. Andrew Jack, Global Education Editor at the Financial Times, joined us to lead a segment on exemplary impact research by business school professors, featuring several lead authors of articles that received the first FT Responsible Business Education Awards. Our own Professor Chris Marquis was shortlisted for the award for his book, Better Business, on the B Corp movement and how it is changing capitalism today.

I moderated a panel of fellow deans, “The Deans’ Perspective on Social Impact,” with Susan Christoffersen, dean of the Rotman School of Management at the University of Toronto, and Bill Boulding, dean of the Fuqua School of Business at Duke University. The session seemed to energize attendees; the focus was on which incentives deans can change to encourage faculty to conduct more impactful research, and which tools, resources, and institutions deans can invest in to make it easier for faculty already willing to do so. My favorite part of the Q&A came when we were asked how we, as leaders, define and communicate the impact and relevance of research to our current and prospective donors, alumni, students, and other external stakeholders. Some asked whether research centers and institutes that depend on external funding deliver more credible and impactful research. I am not sure we answered all of these questions satisfactorily. The attendees of the RRBM Summit are eager to make change, and to make it fast. I get it, even as we recognize the institutional constraints that make change hard.

I had another favorite session beyond my own. Our own award-winning Sachin Gupta sat on the RRBM fireside chat with journal editors, on the question “How can journals encourage and use ‘use-inspired’ research in service of society?” Sachin is editor-in-chief of the Journal of Marketing Research (JMR), and he offered keen insights alongside the lead editors of Cornell’s own Administrative Science Quarterly (USC Marshall’s Christine Beckman), Manufacturing and Service Operations Management (MIT Sloan’s Georgia Perakis), and the Journal of Public Policy and Marketing (Florida State’s Maura Scott). Sachin did Cornell proud, showcasing the innovative new social-impact initiatives at JMR.

Our own commitment to the Principles of Responsible Management Education

UN PRME executive director Mette Morsing served on the steering committee of the RRBM event. I have been pleased to interact with Mette more often lately, given my new board seat at UN PRME and a panel we shared at the June 2022 PRME Global Forum. In her introduction to that event, Mette reported the results of a PRME student poll as a call to action: “Business and leadership students want to understand more about climate change and the environment, and they want responsible management education to be more innovative and active, engaging.” Noted! You can watch that panel, Innovative Pedagogies for Faculty in Challenging Times, here.

It’s with great pride that I close this post with a note on our college’s official signatory status to the UN Principles for Responsible Management Education (PRME). We now belong to a growing global cohort of 800+ business schools and colleges devoted to embedding the 17 UN Sustainable Development Goals (SDGs) into every aspect of business education. It’s ambitious and necessary, and it has already engaged inspiring partners. I’m particularly pleased to have been elected to the PRME Board, where we all intend to share expertise and best practices as we take on the challenge of our time. Mette Morsing is a dedicated leader. “We’re seeing willingness from deans and students,” she said recently, “not just the single ethics professor. When rankings start paying attention, more will happen.” Mette also cautioned us, in no uncertain terms, that finance programs in business schools and colleges are being compelled to develop ways to teach the new skills, to “revise the ways we educate our students on this.” As a finance professor, I sat up and listened to these cautions directed at my own discipline.

With the push from the RRBM and UN PRME movements, it is hard to ignore that the world wants to see change in academia and in business education. We have accepted this charge with our own inaugural Sharing Information on Progress report and welcome the structure it provides to keep our work on track.

And I couldn’t leave without mentioning:

The field of climate finance research is growing, and not a minute too soon. To this end, John Tobin-de la Puente and I have issued a call for research into financing nature. I used this new paper as the basis for my keynote address at the Western Finance Association meetings in Portland, also in June. You can read it here.

The inaugural Cornell University ESG Investing Research Conference just happened in mid-July. Sponsored by the Parker Center for Investment Research, the Center for Sustainable Global Enterprise, and the Investing@Cornell and Business of Sustainability interdisciplinary themes, the conference featured finance scholars from around the world currently working on ESG investing research and connected them with investment professionals managing ESG assets. There were great presentations by scholars from Harvard Business School, CU Boulder, the Federal Reserve Bank of New York and the Federal Reserve Board, UVA’s Darden School, and the London Business School, among others. There was a policy panel with leaders from Calvert Research & Management, NineDot Energy, and Moody’s, as well as investment panels with leaders from BlackRock, Fidelity, ClearBridge Investments, American Century, Avantis Investors, and Amundi Asset Management. The paper “Is History Repeating Itself? The (Un)predictable Past of ESG Ratings,” presented by MIT Sloan’s Florian Berg (with Frankfurt School of Finance & Management’s Kornelia Fabisik and Zacharias Sautner), won the Avantis Investors Best Paper Prize, as voted by the 100+ attendees.

Finally, I’m pleased to join the editorial board of a brand-new journal, Accountability in a Sustainable World Quarterly (ASWQ), sponsored by the Center for Accounting Research and Education (CARE) at Notre Dame’s Mendoza College of Business. ASWQ aims to meet the “immediate need for dialogue among academics and practitioners about sustainability, accountability, data and measurement, related assurance, high quality information to inform (responsible) investment decisions, and accountability in setting of personal, corporate, and public sector goals.” The journal will launch this coming November.




Responsible Research in Business & Management (RRBM) Dare to Care Dissertation Scholarships

RRBM and its co-sponsors are offering up to eight scholarships of $10,000 each to doctoral students in business schools to conduct dissertation research that follows the principles of responsible research. The research topic should focus on economic inequality or on racial, gender, or other forms of social justice in organizations, thereby contributing to one or more of the United Nations’ Sustainable Development Goals.

Call for Applications

Applications accepted beginning: November 1, 2021

Application deadline: December 1, 2021

Award decision: March 1, 2022

The program’s purpose is to support young scholars taking on the grand challenges of our world through responsible research in business and management.

Possible Research Topics and Methods

The Selection Committee welcomes dissertation research that will generate knowledge or ideas to reduce income inequality, increase racial and gender equity, or address other forms of social justice that enhance stakeholder well-being, with a special focus on the role of business organizations. Research that contributes to meeting one or more of the United Nations’ Sustainable Development Goals related to these social justice issues is of special interest to this dissertation scholarship program.

This scholarship program supports dissertation research that is interdisciplinary and that involves stakeholders in the research process. We encourage the use of multiple methods, including qualitative (case studies, observations, text analysis), quantitative (surveys, archival empirical), and experimental (lab and field), as explained in the principles of responsible research. Intervention field studies (e.g., randomized controlled experiments) that robustly test theory-informed ideas and treatments to address the aforementioned justice issues are especially valuable.

The Eligible Applicant

  • Is a doctoral candidate (generally after the qualifying exam) at the beginning stage of the dissertation research;
  • Is studying in a business school in any of the disciplines as long as the research falls within the domain of the research topics described above;
  • Is familiar with the RRBM Principles of Responsible Research (e.g., as an endorser of the position paper, an attendee of RRBM webinars, or through other engagements);
  • Is encouraged to attend the Philosophical Foundation of Responsible Research course, offered online September to mid-November 2021. The course covers uncertainties in scientific reasoning, inductive risks, values in science, objectivity and responsibility, science and policy, science and society, and progress in science: foundational ideas of responsible research. The final assignment of this course is to develop a research idea related to the UN Sustainable Development Goals. Registration for the 2021 course offering has closed. Missed the course? You can still apply for the scholarship; we recommend reviewing FAQ #5 below and taking some time to review the course syllabus and recommended reading.

The Application & Proposal Content

Eligible applications must be submitted online and include (I) a proposal; (II) two letters of recommendation; and (III) the applicant’s CV. Applications should adhere to the detailed guidelines available for download here.

Applications will be accepted beginning November 1, 2021 and must be received by December 1, 2021.

Evaluation of Proposals

Proposals will be evaluated using the Seven Principles of Responsible Research, ensuring that the proposed research meets the standards of high relevance to the research domain specified in this program and strong methodological rigor with promise of credible findings. Additional information regarding the evaluation process and Selection Committee is available here.

Award winners will be announced March 1, 2022.


Including Responsible Research and Innovation (RRI) in the development and implementation of Horizon Europe

Why RRI in Horizon Europe?

The European Union has an ambition to be a global leader in sustainable and value-driven research and innovation. The European Union, including the upcoming framework programme Horizon Europe, builds upon the United Nations’ Sustainable Development Goals and has committed to the European Green Deal, where ‘the full range of instruments available under the Horizon Europe programme will support the research and innovation efforts needed’. It is stated that ‘Conventional approaches will not be sufficient. Emphasising experimentation, and working across sectors and disciplines, the EU’s research and innovation agenda will take the systemic approach needed to achieve the aims of the Green Deal. The Horizon Europe programme will also involve local communities in working towards a more sustainable future, in initiatives that seek to combine societal pull and technology push’. Responsible Research and Innovation (RRI) is an approach that can support this agenda.

RRI refers to an approach rolled out in Framework Programmes 7 and 8 that emphasises the ongoing process of aligning research and innovation with societal values, needs and expectations [1]. RRI guides researchers/innovators and research/innovation conducting and funding organisations in anticipating the implications of their work, including citizens and stakeholders upstream, and reflecting and responding to society’s values and concerns. In this way, co-design and co-responsibility for the outcomes of research and innovation can be facilitated, increasing the societal uptake and acceptability of research and innovation. The last decade of the RRI agenda also provides important resources for operationalising the Open Science agenda in its broad sense, beyond Open Access and Open Data, and the co-creation and citizen science agendas. Experiences from the concluded and ongoing RRI projects (in total, 2633 projects were tagged as RRI by January 2020) should thus inform research and innovation investments as they are outlined in the first Strategic Plan for Horizon Europe, including the sub-chapters on more specific actions. RRI can:

  • make research and innovation more societally legitimate, when it is developed in line with societal values
  • help research and innovation be an instrument for meeting the sustainability goals
  • in this way ensure broader societal support for research and innovation investments that are necessary to keep Europe as a competitive region globally

Making RRI more visible in Horizon Europe – practical measures

RRI is included in Horizon Europe as operational objective 2(c) (Article 2 of the Specific Programme implementing Horizon Europe): ‘promoting responsible research and innovation, taking into account the precautionary principle’. Recital 26 of the regulation for Horizon Europe states: ‘With the aim of deepening the relationship between science and society and maximising benefits of their interactions, the Programme should actively and systematically engage and involve citizens and civil society organisations in co-designing and co-creating responsible research and innovation agendas and contents, promoting science education, making scientific knowledge publicly accessible, and facilitating participation by citizens and civil society organisations in its activities. It should do so across the Programme and through dedicated activities in the part “Strengthening the European Research Area”.’ The document Orientations towards the first Strategic Plan for Horizon Europe also gives a mandate for RRI: ‘Engaging and involving citizens, civil society organisations and end-users in co-design and co-creation processes and promoting responsible research and innovation will improve trust between science and society, and the uptake of scientific evidence-based public policies and innovative solutions.’ These mandates must be followed up with concrete actions. In essence, RRI should be specifically outlined as a requirement of research and innovation in each programme line of Horizon Europe and should be funded as a research and innovation action in its own right under Reforming and Enhancing the European R&I system.

This means:

  • Drafting committees of each programme line should at an early point consider what RRI measures are appropriate for their respective programmes. The level of integration of RRI aspects should be proportional to the potential societal implications of the research and innovation funded in the lines. RRI should be included as assessment criteria and KPIs in (i) the agenda-setting for the Work Programmes; (ii) the definition of calls and guidance for applicants; (iii) the review process and grant agreements; (iv) the monitoring processes and (v) impact evaluation.
  • Budgets must be devoted to RRI actions in projects funded under each programme line.
  • In this work, the drafting committees should consider seeking the support of RRI experts, who can (before a European network is formed) be found among the major Horizon 2020 RRI projects (see for instance https://www.rri-practice.eu/participants-and-networks/affiliated-networks-and-related-projects/ ). Once Horizon Europe is running, such a support system should be organised as a permanent structure.
  • It must be clear that citizen science, open science and co-creation are aspects of RRI, but responsibility in research and innovation also includes being anticipatory, inclusive, reflexive and responsive, and includes considerations of ethics, fairness (social, gender, etc.) and sustainability. Open science, citizen science and co-creation agendas should be considered in this broader perspective and reference to RRI should be made.
  • In order to maintain the investment in RRI competence from Horizon 2020, to be used for high quality RRI engagement in Horizon Europe, a dedicated space (with an appropriate budget) for RRI competence building and further research should be allocated to Reforming and Enhancing the European R&I system, together with citizen science and open science.
  • A hub on RRI should be funded by the EC in order to ensure quality in the mainstreaming of RRI, co-creation, public engagement and citizen science in the whole framework programme. This hub should build on and further cultivate the RRI knowledge base. It should advise, train, consult, assess and provide quality control, and be a resource for those who include RRI related activities in Horizon Europe programmes and projects. It should also provide experts for the assessment of these aspects of research and innovation proposals and project activities, and for relevant committees and boards.
  • There should be RRI National Contact Points (NCPs) in each Member State, which should guide and assess the operationalisation of RRI in Horizon Europe, allowing for learning processes at programme and project levels.

These recommendations build on evidence and experiences from previous projects on RRI and public engagement, and on extended dialogues with RRI experts, funders and policy makers.

[1] https://ec.europa.eu/programmes/horizon2020/en/h2020-section/responsible-research-innovation



Handbook of Bioethical Decisions, Volume II, pp. 441–472

Research Assessments Should Recognize Responsible Research Practices. Narrative Review of a Lively Debate and Promising Developments

Noémie Aubert Bonn & Lex Bouter (open access; first online 29 June 2023)


Part of the Collaborative Bioethics book series (volume 3)

Research assessments have been under growing scrutiny in the past few years. The way in which researchers are assessed has a tangible impact on decisions and practices in research. Yet, there is an emerging understanding that research assessments as they currently stand might hamper the quality and the integrity of research. In this chapter, we provide a narrative review of the shortcomings of current research assessments and showcase innovative actions that aim to address these. To discuss these shortcomings and actions, we target five different dimensions of research assessment. First, we discuss the content of research assessment, thereby introducing the common indicators used to assess researchers and the way these indicators are being used. Second, we address the procedure of research assessments, describing the resources needed for assessing researchers in an ever-growing research system. Third, we describe the crucial role of assessors in improving research assessments. Fourth, we present the broader environments in which researchers work, explaining that omnipresent competition and employment insecurity also need to be toned down substantially to foster high quality and high integrity research. Finally, we describe the challenge of coordinating individual actions to ensure that the problems of research assessments are addressed tangibly and sustainably.

  • Research assessment
  • Research culture
  • Research environment
  • Research integrity
  • Research careers


Brief Introduction to Research Assessments

Throughout their careers, researchers face dilemmas and need to make decisions regarding the ethics and the integrity of their work. Earlier chapters in this volume illustrate the substantial challenges and dilemmas involved and the impact that researchers’ decisions can have on research, knowledge, and practices. But decisions are not limited to research practices; they also need to be made about researchers themselves. Deciding which researchers should receive grants, which are allowed to start a career in academia, which are promoted, and which obtain tenure are complex issues that shape the way in which research systems operate.

In this chapter, we provide an overview of the complexities of research assessments. More specifically, we provide a critical overview of the problems that current research assessments generate and showcase innovative actions introduced with a view to improving the process. We start by briefly introducing research assessments and the debate on whether they are fit for purpose. We then discuss problems of research assessments on five different dimensions: the content; the procedure; the assessors; the environments; and the coordination between these dimensions (Fig. 27.1).

Fig. 27.1 The five dimensions of researcher assessments addressed in this chapter, shown as a stacked Venn diagram: content (what), procedure (how), assessors (who), environments (why), and coordination.

Research assessments entail important decisions about what matters (i.e., what should be valued in academic careers and research outputs), about who decides what matters, and about how what matters can be measured. In addition to the inherent complexity, the decisions needed for research assessments depend on several stakeholders with their own distinct interests. Given the profound complexity, the high stakes, and the many actors involved in such decisions, it is no surprise that research assessments raise substantial controversies. Before introducing the problems and latest innovations in research assessments, we thought that it may help to provide a quick historical snapshot of the evolution of the discourse. This historical snapshot is high-level initially, but we will detail and document each point in greater depth throughout this chapter.

Scientists have scrutinised the attribution of success in academic research for well over half a century (Hagstrom, 1975; Merton, 1957; Zuckerman & Merton, 1971), yet we can pin the beginning of the debate on research assessments to the 1980s, when growing investments in research led to a substantial growth of the academic workforce (Alberts et al., 2015). This growth introduced a stronger need for fair distribution of research resources, for example in funding allocation, hiring, tenure, and promotion. Publication metrics which had made their appearance some years earlier – namely publication counts, the H-index, citation counts, and journal impact factors – started being used in research assessments as an opportunity for broad-scale, rapid, and comparative research assessment that provides a greater sense of objectivity than traditional qualitative peer review (Gingras, 2016). Quite rapidly, however, it became clear that the newly adopted metrics also influenced the publication practices of researchers in less desirable ways. Early metrics focused on quantity, for instance by using the number of scientific papers researchers published as an indicator of success. This focus on quantity invited high volumes of lower-quality scholarly outputs (Butler, 2003). To address this problem, journal impact factors and citation counts started being used in assessments, asking researchers to place impact before volume. This change had the desired effect and redirected scholarly output towards prestigious high-impact journals (Larivière & Sugimoto, 2018a). With occasional exceptions, assessors and researchers overall appeared satisfied with the new methods until the early 2000s. The beginning of the twenty-first century brought with it a vivid interest in meta-research, research integrity, and bibliometrics. Researchers started understanding that research was vulnerable to misconduct and inaccuracies (Ioannidis, 2005; Martinson et al., 2005), and that research assessments could influence research in harmful ways (Abbasi, 2004). Not only did impact metrics influence the types of research being done, but they also moved research away from important integrity and quality aspects such as reproducibility and open science (Moher et al., 2018). At the same time, researchers were growing more aware of the high-pressure, highly competitive environment they worked in and the impact this had on their work (Anderson et al., 2007; De Vries et al., 2006). Consequently, researchers and research communities joined forces to address these challenges and started demanding change in the way researchers are assessed.

The San Francisco Declaration on Research Assessment (DORA; American Society for Cell Biology, 2013), The Metric Tide (Wilsdon et al., 2015), and the Leiden Manifesto (Hicks et al., 2015) were among the first key documents to specifically address and raise awareness of the faults of current assessments. Mostly focused on metrics, these pioneering works were followed by position statements from numerous groups and organizations that broadened the issue towards research climates, research careers, and research integrity. In Table 27.1, we showcase a selection of position statements and documents from general and broad-reaching groups. The 11 documents displayed in Table 27.1 are only a tiny selection of the booming number of position papers, initiatives, perspectives, and recommendations now available from different research institutions, research funders, learned associations, and policy groups. Consequently, it would be fair to say that the debate on research assessments has gained strong momentum, and that substantive changes are likely underway.

Problems and Innovative Actions

Changing research assessments is a complex endeavour that requires multiple stakeholders, coordination, and fine-tuning. In the following sections, we introduce a selection of key problems with current research assessments and describe a number of promising actions currently being taken to address these problems and improve research assessments.

Problems with research assessments can arise on several interconnected dimensions, some of which are incredibly difficult to tackle. As a starting point, it is essential to address problems with the indicators and approaches contained in the assessments themselves. But although the content of assessments is a necessary starting point, it is not the only dimension that needs attention to make research assessments fully fit for purpose. The procedure followed and the assessors responsible for assessing researchers are also important in enabling changes. Even if the indicators, the procedure, and the assessors are optimal, the research culture plays an additional role in ensuring that changes to research assessments indeed improve the practices and decisions of researchers. Consequently, the environment in which researchers work, albeit complex and difficult to address directly, also needs a place in initiatives that aim to change assessments and help foster better research. Finally, good coordination of efforts is needed to ensure that the changes are profound, coherent, and sustainable.

In the following section, we describe key problems and innovative actions on the content, procedure, assessors, environments, and coordination of research assessments. Table 27.2 summarizes the main points addressed.

Reflection on research assessments should necessarily start with the elements of researchers’ professional behavior that are assessed and their impact on the quality and relevance of research. Understanding the problems with the core elements used within research assessments is an important starting point to better understand what needs to change.

The problems related to the content of research assessments are too numerous to cover in a book chapter. For simplicity, we selected five key issues that we believe play an important part in the current discourse on research assessments: (i) the exaggerated focus on research outputs; (ii) the valuation of quantity over quality; (iii) the inadequacy of currently used metrics; (iv) the narrow definitions of impact; and (v) the obstacles current research assessments impose on diversity.

An Exaggerated Focus on Research Outputs

The Problem

When looking at research assessments in practice, it is clear that these depend almost exclusively on research outputs, most notably on scholarly papers published in international peer-reviewed journals. This focus on outputs is not surprising. Considering that a large proportion of research is funded by public investments, it is natural to expect that researchers generate products (in this case research reports) that will ultimately enable tangible benefits for society. Yet, the way in which research outputs are currently measured is problematic in a number of ways.

For one, the exaggerated emphasis on research outputs means that current assessments are oblivious to most of researchers’ commitments. Publishing papers, as important as it is, is far from the only activity researchers spend their time and effort on (Ziker, 2014). Teaching and providing services — the two other pillars of academic careers — and other essential tasks such as mentoring, reviewing, or team contributions almost always take second place or are even ignored in research assessments (Schimanski & Alperin, 2018). And within the pillar of ‘research’, many activities and processes that would provide invaluable information on how the research is conducted are largely ignored in current output-oriented assessments, creating a culture “that cares exclusively about what is achieved and not about how it is achieved” (Farrar, 2019). For example, the detailed methods, the approaches, the specific contributions, or the translation of research into practice are rarely considered in research assessments (Aubert Bonn & Pinxten, 2021b). This lack of consideration for research processes risks losing sight of procedural concepts thought to be highly important in advancing science, such as quality, integrity, and transparency (Aubert Bonn & Pinxten, 2021a).

Innovative Action

In the past few years, there has been increasing awareness that linking research assessments almost exclusively to research outputs may be problematic (Farrar, 2019). Principle 5 of the Hong Kong Principles and recommendations 3 and 5 of DORA directly address this issue, stating that a broader range of research activities should be considered in research assessments. One concrete first step towards solving this problem is giving greater visibility to the range of activities that are part of researchers’ daily tasks. The Open Science badges — preregistration, open data, open materials — are a good example of a simple change that allows readers, and eventually assessors, to quickly capture the open science practices behind published works (Kidwell et al., 2016). The presence of reporting guidelines, such as those available on the EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network (EQUATOR network, n.d.), can also summarize details and procedures and provide information on the transparency and reproducibility of the work. The increasing availability of open and transparent peer review, along with initiatives such as Publons (Publons, n.d.) and ORCID (Open Researcher and Contributor ID) (ORCID, n.d.) that give visibility to peer-review commitments, are other examples that can help enrich the indicators used to assess researchers. The Contributor Roles Taxonomy (CRediT), which provides more information on the roles and responsibilities that researchers take, is another example we will discuss further in Sect. 27.2.1.5 (Alperin et al., 2019; CASRAI).

Broader indicators are increasingly visible in more formal assessment procedures. For instance, the Academic Careers Understood through MEasurement and Norms (ACUMEN) portfolio provides a template that considers indicators from a very diverse array of activities (European Commission, 2019). While ACUMEN remains largely quantitative, its broad coverage of research activities is a good reminder that assessments can be much more comprehensive. The European Commission’s Open Science Career Assessment Matrix (OS-CAM) is a similar assessment model that includes a broad array of research activities such as teaching, supervision and mentoring, and professional experience, and even has an explicit section on research processes (European Commission, 2017). We will discuss other ways of broadening assessments, such as narrative CVs and portfolios, in Sect. 27.2.1.4.

Quantity Over Quality

Another important problem of researcher assessments is their tendency to value quantity over quality. Many researchers feel encouraged to publish as many papers as possible and are sometimes offered tangible incentives, such as financial rewards, to publish more (Hedding, 2019; Muthama & McKenna, 2020). Assessing researchers on the number of published papers does indeed lead to more publications, but it tends to do so to the detriment of research quality (Butler, 2003; Moed, 2008). It can also encourage questionable research practices such as ‘salami slicing’ — “the spreading of study results over more papers than necessary” (Embassy of Good Science, 2021) — and can tempt researchers to favour journals where acceptance rates are high rather than journals suited for their work or journals with thorough peer-review procedures. Unsurprisingly, the longing for quantity also works in favour of predatory publishers and paper mills, whose business model targets authors desperate to publish regardless of quality (Hedding, 2019; Vogel, 2017).

To address this problem, research and funding institutions are increasingly modifying their assessment procedures to focus on impact rather than quantity. Nevertheless, the impressive numbers of peer-reviewed publications or books very often stated in researchers’ biographies remind us that productivity is still considered an important indicator of accomplishment within the research community and culture. Quantity indicators also remain key to institution-level assessments, a point we will discuss further in the Coordination section.

The obvious solution to reducing the focus on quantity is to look more at quality. But even though ways to assess quality are starting to emerge, the endeavour is more complex than it may seem. For example, Eyre-Walker and colleagues showed that, when scientists assess a published paper without knowing the journal in which it was published, they are generally inconsistent and unable to judge its intrinsic merit or to estimate the impact factor of the journal in which it appeared (Eyre-Walker & Stoletzki, 2013). However, assessing the quality of publications is not the only way assessments can deviate from quantity indicators. In the past few years, several research and funding institutions diverted assessments away from quantity by asking researchers to select only a subset of their work — generally three to five key accomplishments or contributions (e.g., publications, events, changes in practice, committee participation, etc.) — and to describe why these accomplishments matter (see for example Cancer Research UK, 2018). Focusing on a limited number of outputs enables a more in-depth assessment, which is likely to refocus the assessors’ attention away from quantity and towards content, meaning, and quality.

Inappropriate Use of Metrics

As we mentioned above, most research assessments swapped volume metrics for impact metrics to incite researchers to publish in more prestigious journals. Among those, the journal impact factor, citation counts, and the H-index raise important challenges.

Of all the impact-informed metrics available, the journal impact factor is probably the most widely used in current research assessments. In a review of their use in North American academic review, promotion, and tenure documents, McKiernan and colleagues found that 40% of research-intensive institutions explicitly mention journal impact factors (McKiernan et al., 2019). The journal impact factor of a given year is the ratio between the number of citations received in that year for publications in that journal that were published in the two preceding years and the total number of “citable items” published in that journal during the two preceding years (Larivière & Sugimoto, 2018b; Wikipedia, 2021). The journal impact factor was designed to help librarians select the journals they should subscribe to; it was never intended to influence researcher evaluations. On the contrary, Eugene Garfield — widely known as the father of journal impact factors — explicitly warned against using journal impact factors for assessing individual scholarly articles (Garfield, 1998). Nevertheless, the seductive power of a single metric that promised to quantify the ‘value’ of journal articles quickly won over research assessments. Unfortunately, impact factors introduced substantial problems of their own. First, the mere fact that journal impact factors became recognized as a measure of success reduced their objectivity as a measure of success, a phenomenon known as Campbell’s law (Hatch & Schmidt, 2020). In fact, journal impact factors incite strategic responses from researchers, many of which are now considered questionable research practices. These include, among others, selective reporting, ‘spin’, p-hacking, HARKing (hypothesizing after results are known), and non-publication of negative results (de Rijcke et al., 2015; Gingras, 2016; Larivière & Sugimoto, 2018a; Wouters, 2014). Journal impact factors further suffer from fundamental weaknesses that allow them to be gamed relatively easily (Ioannidis & Thombs, 2019). In addition, impact factors are a journal-level metric and are therefore not a valid measure of the impact of individual papers or of the authors of those papers. Indeed, the distribution of citations in a journal tends to be so skewed that impact factors provide little information on the number of citations individual papers in that journal can expect (Brito & Rodriguez-Navarro, 2019; Larivière et al., 2016). Finally, by the way journal impact factors are calculated, they ignore slow citation (i.e., citations two or more years after publication), thereby potentially biasing against innovative research (Schmidt, 2020). Despite these fundamental flaws, journal impact factors are still widely used in researcher assessments and are frequently described as an indicator of the quality of individual research papers (Aubert Bonn & Pinxten, 2021b).
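To make the definition above concrete, the prose can be restated as a formula. This is simply a rewriting of the definition given in the text, with y denoting the assessment year:

$$
\mathrm{JIF}_{y} \;=\; \frac{\text{citations received in year } y \text{ to items published in years } y-1 \text{ and } y-2}{\text{number of citable items published in years } y-1 \text{ and } y-2}
$$

So, for example, a journal that published 200 citable items across the two preceding years, and whose items from those years were cited 800 times in year y, would have an impact factor of 4.0 for that year.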

Without even entering the colossal debate on the relationship between citation metrics and research quality, it is relevant to consider actual citation counts, which are also frequently used in researcher assessments despite requiring more time to accumulate. Citations are problematic in different yet connected ways. To begin, numbers of citations provide no information on the reasons a paper is cited. Citations used to provide background information, to build an argument, to support a theory, to raise a problem, or to criticize a paper all count in the same way (Larivière & Sugimoto, 2018b). Citations can also be manipulated, for example through peer-reviewer or editor requests, or by forming citation cartels (Baas & Fennel, 2019; Fong & Wilhite, 2017). They are also prone to biases unrelated to the intrinsic merit of a paper (Urlings et al., 2021). And finally, citing statements are often only partially, and sometimes not at all, supported by the cited article, suggesting that researchers often cite papers without reading or even downloading them (Drake et al., 2013).

The H-index — or Hirsch index, after its inventor Jorge E. Hirsch — is another indicator frequently used in research assessments. The calculation is quite simple: a researcher has an H-index of x when x is the largest number such that she or he published at least x papers that were each cited at least x times. In other words, the H-index combines impact and productivity to provide information at an individual level. Nonetheless, the H-index is also strongly criticized. First, the misleading simplicity of a single number to judge researchers is already problematic, especially when comparing researchers from different fields of expertise. Furthermore, although the H-index combines paper and citation counts, it can never be higher than the total number of papers a researcher has published, regardless of the number of citations those papers have (e.g., a researcher with 10 papers cited 10 times each will have a higher H-index than a researcher with 9 papers cited 100 times each) (Larivière & Sugimoto, 2018b). Similarly, as an ever-growing metric, the H-index gives senior researchers a clear advantage that makes them largely invincible when compared to junior researchers, even after they stop being active in research. Jorge E. Hirsch himself stated that the H-index could “fail spectacularly and have severe unintended negative consequences” (Hirsch, 2020, p. 4), and several metrics experts have deemed it inappropriate for measuring researchers’ overall impact (Waltman & van Eck, 2012). Despite all this, the H-index continues to be used often in research assessments.
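Because the H-index is defined algorithmically, a short sketch may make the definition, and the 10-versus-9-papers example above, easier to see. This is a minimal illustrative implementation of the standard definition, not any particular database’s method:

```python
def h_index(citations: list[int]) -> int:
    """Return the largest h such that at least h papers
    have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:  # the paper at this rank still supports h = rank
            h = rank
        else:
            break
    return h

# The chapter's example: productivity caps the H-index.
print(h_index([10] * 10))   # 10 papers, 10 citations each -> 10
print(h_index([100] * 9))   # 9 papers, 100 citations each ->  9
```

As the two calls show, the second researcher’s 900 citations cannot lift the H-index past 9, which is exactly the productivity ceiling the text describes.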

Although many other metrics exist, the journal impact factor, citation counts, and the H-index are the three most frequently used in researcher assessment. On top of their individual flaws, an overarching criticism of these metrics is that they fail to capture the core qualities they aim to measure. More specifically, while several institutions use these metrics as a proxy for the quality and impact of the work (McKiernan et al., 2019), they provide very little information that could validly be interpreted as quality or impact (Aubert Bonn & Pinxten, 2021b). Instead, these metrics provide information on the visibility, the attention, and the citation patterns within academia (Larivière & Sugimoto, 2018b; Sugimoto & Larivière, 2018). Garfield himself qualified citations as an indicator of “the utility and interest the rest of the scientific community finds in [the work]” (Garfield, 1979, p. 372), not as a measure of quality. Knowing that impact-informed metrics are even believed to “discourage rigorous procedures, strict replication/confirmation studies and publication of negative, nonstatistically significant results”, it is important to rethink how we use — or at least interpret — impact metrics (Lindner et al., 2018).

Once again however, reinterpreting the role of impact metrics on research assessments requires changes at the core of research communities. Researchers who have spent decades building a career on inadequate indicators may find it daunting to give up their high rankings to adopt a new system in which they may rank less excellent or even poorly. Increased awareness, discussion, and mobilisation are still needed.

The San Francisco Declaration on Research Assessment (DORA, 2021) strongly advocates against using the impact factor in individual research evaluations, supports the consideration of the value and impact of all research outputs, and argues that evaluations of scientific productivity must be transparent and explicit. Along the same lines, the Leiden Manifesto and The Metric Tide call for the development and adoption of better, fairer, more transparent, and more responsible metrics (Hicks et al., 2015; Wilsdon et al., 2015). These three initiatives, recently joined by the Hong Kong Principles for assessing researchers (Moher et al., 2020), play a crucial role in raising awareness of the shortcomings of widely used research metrics. Awareness is only the first step towards actual change, but these initiatives have brought together a community that supports it. DORA already has nearly 20,000 signatories, over 2,000 of which are organizations. And changes are indeed starting to happen at the level of research institutions, funders, and policy. For instance, several research institutions now make sure that metrics are not used in isolation, but only as a complement to reflective, qualitative peer review (examples of institutions that have concretized these changes are available in the repository ‘Reimagining academic assessment: stories of innovation and change’, developed by DORA in collaboration with EUA and SPARC Europe (DORA, 2021)).

As part of the Horizon 2020 program, the European Commission also created an Open Science Policy Platform in which several expert groups were created to discuss better research assessments and indicators. These include the Working Group on Rewards, the Expert Group on Indicators, and the Mutual Learning Exercise on Open Science – Altmetrics and Rewards (Open Science Policy Platform, 2017 ).

New metrics are also becoming available to help balance research assessments. Simple paper downloads, for example, may capture readers who do not cite works, such as non-academic users of the work (Winker, 2017). More complex composite metrics have also been built. Altmetrics are a prime example of the diversification of the elements that can be captured on a single piece of work. Altmetrics include a wide array of inputs, such as open peer-review reports, social media capture, citations on Wikipedia and in public policy documents, mentions on research blogs, mass media coverage, and many more aspects that help provide a broader overview of how the work is being used. The PlumX metrics, although governed by different calculations, work in similar ways. These innovative metrics are gaining increasing visibility on publishers’ websites, but their use in formal researcher assessment is still very limited.

Narrow Views of Impact

In addition to the overreliance on outputs and the problem of inadequate metrics we delineated above, the indicators currently used in research assessments can be criticized for providing a very narrow view of research impact. Two main dimensions deserve discussion here.

The first dimension concerns the impact research has on practice, policies, or society. As we previously mentioned, researchers are often expected to dedicate a portion of their time to the key pillar of ‘services’, yet their involvement in services is almost entirely absent from researcher assessments (Schimanski & Alperin, 2018). In addition, in the rare instances where services are considered in review, promotion, and tenure assessments, the consideration almost exclusively targets services provided within the institution or the research community — such as participation on university boards or editorial boards — rather than services provided to the public or to society (Alperin et al., 2019). Citation-based metrics only consider recognition and visibility within the scientific (and citing) community and provide only a restricted view of academic impact (Lebel & Mclean, 2018). Impact on practice, policy, and society is not captured, and is even obscured, by these narrow metrics. For example, the need to publish in high-impact-factor journals often translates into a need to publish in English-language international journals, a decision that can reduce the societal impact of locally relevant research projects (Gingras & Mosbah-Natanson, 2010). Academic environments themselves, through their funding objectives, missions, and expectations, value discovery but largely disregard how discoveries can best be implemented in practice (El-Sadr et al., 2014).

A second dimension that deserves reconsideration is the impact research has on knowledge advancement. In fact, current assessments tend to conflate impact with ground-breaking findings (Aubert Bonn & Pinxten, 2021b). While this idea has long been embedded in the notion of scientific discovery, it also undermines the importance of non-ground-breaking work in advancing knowledge. Borrowing the words of Ottoline Leyser, chief executive officer of UK Research and Innovation:

It is worth remembering that the term “ground-breaking” comes from construction. There is often a ground-breaking ceremony, but then the building must be erected. This comes only after much preparation, from determining the ideal location to securing all the planning permissions. Likewise, for every ground-breaking discovery, a huge amount of work has paved the way, and follow-up work to solidify the evidence and demonstrate reproducibility and generality is essential. High-quality work of this sort is rarely recognized as excellent by the scientific enterprise but is excellent nonetheless, and without it, there would be no progress. (Leyser, 2020: 886)

The overemphasis on ground-breaking discovery has shaped a research system in which replication studies and negative results are largely invisible despite their crucial value in solidifying knowledge (Bouter & Riet, 2021 ; Ioannidis, 2018 ; Munafò et al., 2017 ).

To better capture the impact that research has on practice, policies, society, or research itself, research assessors need to broaden the scope of the indicators they use. We already mentioned that alternative metrics can help capture interest that would otherwise be missed. Another notable effort that may help capture societal impact is the Research Quality Plus (RQ+) evaluation approach used at the International Development Research Centre (IDRC) in Canada (Ofir et al., 2016). Although emphasising expected impact in a funding application is sometimes criticized as artificial and highly theoretical (Brooks, 2013; Kirschner, 2013), RQ+ provides a structured method through which societal impact can be estimated before the research takes place. Since RQ+ is used for evaluating research proposals, it is not directly applicable to assessing researchers’ past accomplishments. Nonetheless, it might be a good model to inspire areas of impact that could be considered in future research assessments.

To capture the impact that research has in building knowledge, several research institutions and funders have started adopting narrative CVs in which researchers are encouraged to describe, in their own words, the impact of their work. A good example of these narrative CVs is the Résumé for Researchers provided by the Royal Society in the UK (Royal Society, n.d.). In the Résumé for Researchers, applicants are given unstructured space to discuss their contributions to the generation of knowledge, the development of individuals, the wider research community, and the broader society. These open descriptions enable assessors to consider a broader, more diverse, and more personal perspective on impact that might otherwise remain invisible. While narrative CVs are harder to write and more demanding to assess than quantitative metrics, they are increasingly being adopted by research institutions. Several other funders, such as the Health Research Board Ireland, the Dutch Research Council, and the Swiss National Science Foundation, are also experimenting with open and narrative CVs (Hatch & Curry, 2020).

Obstacles to Diversity

In addition to the issues presented above, current research assessments also often fail to promote diversity and inclusion in research. Gender inequalities, for example, are seen in both citation metrics and publication outputs (Beaudry & Larivière, 2016; Larivière et al., 2013), and even more so under the disrupted working conditions of the COVID-19 pandemic (Minello, 2020; Viglione, 2020). Women are also more likely to be strongly involved in teaching, in the hands-on facets of research, or in other contributions that are essential to science but less likely to result in first- or last-author publications (Astegiano et al., 2019; Macaluso et al., 2016). Similar issues afflict ethnic groups and geographic regions, not only in funding opportunities and access (Check Hayden, 2015), but also in the fair attribution and recognition of their work (Powell, 2018; Rochmyaningsih, 2018). The same hurdles are faced by researchers with disabilities, even when policies are in place to tackle the injustice (Brock, 2021). Consequently, research assessment’s excessive reliance on publication metrics may further aggravate diversity and inclusion problems in academia.

But diversity and inclusion are not only about disadvantaged groups. Diversity of skills, contributions, and career profiles is also an essential aspect that is largely ignored in current assessments and inclusion policies. Indeed, research assessments tend to evaluate researchers individually and to expect them to fit a one-size-fits-all model of success in research (Aubert Bonn & Pinxten, 2021b). This individual and uniform model of assessment contradicts the highly collaborative, differentiated, and complementary roles that are intrinsic to research (Bothwell, 2019). Overlooking the still growing differentiation of research tasks disregards the unique contributions of non-leading members of research teams as well as the essential role of research support staff (Payne, 2021). Individual assessments and uniform expectations also increase competition between researchers, a feature known to be highly problematic and often mentioned as a cause of research misconduct and questionable research practices (Anderson et al., 2007; Aubert Bonn & Pinxten, 2019).

The lack of diversity in research is a priority on the agenda of several large funders and research organisations. The Athena Swan Charter, for example, plays an important role in encouraging research institutions to achieve gender inclusivity (Athena Swan Charter, n.d.). Several institutions already have internal policies, quotas, and initiatives to promote greater diversity in hiring and promotion, though some of these policies have sparked heated debate in the past (“College oordeelt over voorkeursbeleid TU Eindhoven”, 2020; Dance, 2019). Going one step further, Indiana University – Purdue University Indianapolis (IUPUI) decided not only to encourage activities that promote equity, diversity, and inclusion, but also to recognise their inherent value by considering them in researchers’ tenure and promotion applications (“IUPUI approves new path to promotion and tenure for enhancing equity, inclusion and diversity”, 2021). Despite these important initiatives, the effect that the indicators used in assessing researchers have on diversity and inclusion is rarely addressed, and there is a growing realisation that diversity and inclusion should feature more prominently in research assessments (Labib & Evans, 2021).

The role an individual plays in the research team has also received increasing attention in the past few years. Assessors realise that knowing how researchers collaborate can provide invaluable information. As a result, interesting initiatives that give greater visibility to the team aspect of research are starting to emerge. The Contributor Role Taxonomy (CRediT), for example, provides an added level of granularity to authorship and helps clarify the dynamics, roles, and responsibilities in team research (Allen et al., 2019; CASRAI, n.d.). Although contributor roles have not yet fully secured their place in research assessments, more and more journals provide contributorship sections in the papers they publish. Whether the future of academia is one in which contributor roles supplant authorship, however, remains to be seen (McNutt et al., 2018; Smith, 1997). Another interesting initiative in the recognition of teamwork is the Diversity Approach to Research Evaluation (DARE; Bone et al., 2020). The DARE approach provides tools to measure and understand how collaborators connect and deal with diversity. While the approach is more informative than evaluative, knowing more about the dynamics of research teams is a starting point for gathering information on the characteristics of strong research teams.
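To make contributorship concrete, the minimal sketch below shows how CRediT roles might be recorded alongside an author list. The names and the dictionary layout are hypothetical illustrations; only the role labels themselves are drawn from the published CRediT taxonomy.

```python
# Hypothetical contributorship record using CRediT role labels.
# Names and data structure are illustrative, not any journal's actual format.
contributions = {
    "A. Researcher": ["Conceptualization", "Methodology", "Writing - original draft"],
    "B. Statistician": ["Formal analysis", "Software", "Visualization"],
    "C. Technician": ["Investigation", "Data curation"],
    "D. Supervisor": ["Supervision", "Funding acquisition", "Writing - review & editing"],
}

# Unlike a bare author list, such a record lets assessors see who did what,
# making essential but less visible contributions (e.g., data curation) countable.
for person, roles in contributions.items():
    print(f"{person}: {', '.join(roles)}")
```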

There is also a growing belief that the lack of diversity in the profiles of individuals who succeed in academia may weaken effective teamwork (Aubert Bonn & Pinxten, 2021c). Diversifying the profiles of academic employment, therefore, may help build research climates in which success comes from joint efforts rather than from competition between individuals. One early example of such an initiative is the Open University in the UK, where researchers are given more flexibility to focus on different pillars of their work (Parr, 2015). As a result, researchers can pursue a career in which knowledge exchange is valued ahead of their teaching and research achievements. The recently implemented career track at Ghent University, Belgium, and the Dutch Recognition and Rewards Programme are two other well-known initiatives addressing the need to diversify researchers’ profiles (Ghent University Department of Personnel & Organization, 2018; VSNU et al., 2019). The position paper ‘Room for everyone’s talent’ from the Dutch Recognition and Rewards Programme nicely illustrates how such diversification may take shape. Specifically, researchers have the opportunity to select a unique combination of key areas in which they wish to specialise and be assessed. These key areas include research, education, impact, leadership, and patient care. While all researchers are expected to demonstrate sufficient competence in the research and education areas, they can choose the extent to which they favour these and any other areas, and can change areas of specialisation at different stages of their careers.

Finally, the initiative contains a clear acknowledgement of the need to reward team efforts. The Netherlands’ highest research awards, the Spinoza and Stevin Prizes, are now also open to team applications, marking another step forward in the recognition of research as teamwork (Hoger Onderwijs Persbureau, 2019).

Changing researcher assessments is a complex endeavour that extends far beyond the elements and indicators assessed. It is also important to discuss the time and resource commitments that research assessments imply.

Researchers need to invest substantial time in building a prestigious CV and in applying for research funding. While the peer-review process through which research is funded is most likely essential for good-quality research, the low success rate of current funding schemes (typically 5–10% of applications are granted) suggests that a great deal of effort is ultimately wasted. Past research has shown that many researchers consider the preparation of funding proposals to be the most “unnecessarily time-consuming and ultimately most wasteful aspect of research-related workload” (Schneider et al., 2014, p. 41) and that researchers wish they could spend less of their time on it (Aubert Bonn & Pinxten, 2020a). In fact, Herbert and colleagues estimated that the time spent preparing grant proposals for the Australian National Health and Medical Research Council in 2012 (Herbert et al., 2013) reached 550 working years of researchers’ time — the equivalent of 66 million Australian dollars (around 42.5 million Euros at the time of writing). Considering the low success rate of these applications, competitive funding channels come with phenomenal investments of research time. Building a tenure dossier and applying for different research positions is also no small task, and since grants and non-tenured research positions are typically short-term, the time investment involved is substantial.
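A back-of-the-envelope calculation based on the figures above gives a sense of scale. The first line simply divides the two reported totals; the second assumes, purely for illustration, the 5–10% success rates mentioned earlier rather than the NHMRC’s actual rate that year:

\[
\frac{66{,}000{,}000\ \text{AUD}}{550\ \text{researcher-years}} = 120{,}000\ \text{AUD per researcher-year}
\]

\[
(1 - 0.10) \times 550 = 495 \quad\text{to}\quad (1 - 0.05) \times 550 \approx 523\ \text{researcher-years spent on unfunded proposals}
\]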

In turn, the colossal demand for research money and opportunities leads to growing numbers of applications, which rise faster than investments in research funding (Rockey, 2012). This growing demand puts pressure on funders, who face an excess of applications to review and who, in turn, require peer reviewers and selection committee members — most of the time researchers themselves — to invest their already scarce research time in the review process (Aubert Bonn & Pinxten, 2020b; Gingras, 2016).

Given the high demand for funding and career opportunities, it is difficult to reduce the volume of research assessments. Nevertheless, there are ways to reduce the time and resources invested and so alleviate the burden on both researchers and assessors. One such initiative is the post-peer-review lottery of funding applications, which proposes that, after a first thorough quality check to select proposals that are sound and methodologically adequate, assessors select the winning applications randomly rather than through lengthy deliberation. This radical idea would not only increase the efficiency of research funding assessments (Gross & Bergstrom, 2019), but would also guard against the ‘natural selection of bad science’ by allowing unusual and unfashionable topics with a high risk of negative findings to be funded (Smaldino et al., 2019). The lottery approach may even help reduce career insecurity in academia, a point we will discuss further in Sect. 27.2.5 (ISE task force on researchers’ careers, 2020). Another way to reduce the burden of research assessment is to lower the frequency at which researchers are evaluated. Longer-term funding and research contracts could help in this matter, while further alleviating worries about the insecurity of research careers. Similarly, less frequent evaluation of employed researchers would reduce the evaluative burden. Ghent University is currently experimenting with this change in its new career track, moving from a three-year review interval to a five-year one starting in 2020 (Ghent University Department of Personnel & Organization, 2018).
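As a rough illustration of the two-stage logic behind such funding lotteries, here is a minimal sketch in which the quality threshold, the field names, and the number of grants are all hypothetical assumptions, not any funder’s actual procedure:

```python
import random

def fund_by_lottery(proposals, quality_threshold, n_grants, seed=None):
    """Two-stage selection: a quality check first, then a random draw.

    `proposals` is a list of dicts with hypothetical keys 'id' and
    'quality_score' (the score a peer-review panel would assign).
    """
    # Stage 1: keep only proposals judged sound and methodologically adequate.
    eligible = [p for p in proposals if p["quality_score"] >= quality_threshold]

    # Stage 2: draw the winners at random instead of ranking the eligible pool,
    # so unfashionable but sound proposals have the same chance as any other.
    rng = random.Random(seed)
    winners = rng.sample(eligible, k=min(n_grants, len(eligible)))
    return winners

# Illustrative use with made-up proposals and a small grant budget.
proposals = [{"id": i, "quality_score": random.uniform(0, 1)} for i in range(100)]
for p in fund_by_lottery(proposals, quality_threshold=0.7, n_grants=5, seed=42):
    print(f"Funded proposal {p['id']}")
```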

The assessors themselves rarely feature on the agenda for changes to research assessments, despite their direct relevance to assessment processes. In particular, when reflective and qualitative peer review takes precedence, a great deal of subjectivity is introduced into the assessment process. Subjectivity is not a bad thing in itself, but it leaves substantial room for personal biases and involuntary discrimination in research assessments. For instance, assessors will naturally be tempted to cherry-pick the information that confirms their already-formed opinion (confirmation bias), to base their assessment on easily accessible anecdotal information (accessibility bias), or to let contextual aspects, such as the reputation of the universities listed on an applicant’s CV, shape their views of individual candidates (halo effect; see, e.g., Clauset et al., 2015; Kwon, 2021), to name only a few (Hatch & Schmidt, 2020). In addition, many assessment procedures ask assessors to judge highly abstract concepts, such as ‘excellence’ or ‘high impact’, so differences in interpretation, misunderstandings, and unfortunately biases can easily arise (Hatch, 2019).

Diversity is an important keyword if we want to reduce the influence of biases. Indeed, guidelines explicitly recommend that research and funding organisations strive to ensure that reviewer pools and hiring committees contain diverse profiles (Science Europe, 2020). Diversity should target not only gender and ethnicity, but also the profiles and seniority of assessors. For example, there is an increasing realisation that the input for researcher assessments, for example the reference letters used, should come from superiors as well as from those supervised or managed by the researcher being evaluated (i.e., 360° feedback; Vitae, n.d.). Other ways to reduce biases in research assessments have been proposed, for example omitting photos of the candidate from the application or moving educational history, with its potentially biasing university names, to the end of the evaluation, but the efficacy of such approaches remains largely undocumented (Hatch & Curry, 2019). Finally, training assessors to ensure that they have a clear understanding of the assessment process and providing unambiguous definitions of the key concepts being assessed (e.g., impact, excellence, quality) can help reduce biases (Hatch, 2019; Science Europe, 2020).

A few universities and organisations are starting to implement these recommendations. For example, Tampere University now informs and trains evaluators across campus about responsible evaluation practices (DORA, 2021). Similarly, the Health Research Board (HRB) Ireland has started raising awareness, training staff, and providing guidance for reviewers as a way to minimise gender inequalities and reduce unconscious biases (Health Research Board, 2019), much like the Dutch Recognition and Rewards Programme, in which training and instructions are provided to assessment committees (VSNU et al., 2019). Others have started defining the terms they use to assess researchers. For instance, Universities Norway added clear definitions of the key concepts needed in assessments (DORA, 2021), while the ‘Room for everyone’s talent’ position paper explicitly defines the concept of impact. Such initiatives are still scarce and not yet evaluated, but there is growing awareness of the need to inform, train, and support those who assess researchers.

Research Environments

We know that the environments in which researchers operate are problematic, since they impose intense pressure on researchers to perform and publish (Metcalfe et al., 2020; Nuffield Council of Bioethics, 2014; The Wellcome Trust and Shift Learning, 2020). Changing research assessments can likely help reduce the ‘publish or perish’ culture. Yet other elements in researchers’ environments are also important to consider if we are to avoid wasting the huge efforts invested in changing research assessments.

First, the lack of stability in research careers is an essential aspect to consider. At the moment, there is a huge discrepancy between the number of junior (temporary) and senior (permanent) positions in academia, and only between 3% and 20% (depending on country estimates and faculties) of young researchers will be able to pursue the academic career to which they aspire (Alberts et al., 2014; Anonymous, 2010; Debacker & Vandevelde, 2016; Larson et al., 2014; “Many junior scientists”, 2017; Martinson, 2011; van der Weijden et al., 2016). In turn, this lack of stability creates an unhealthy working environment in which stress, mental health issues, and burnout thrive (Levecque et al., 2017; “The mental health of PhD”, 2019; Padilla & Thompson, 2016). Furthermore, the scarcity of senior positions creates a perverse hyper-competition between junior scientists who wish to survive in academia. Hyper-competition not only worsens the situation, but is also known to be an important driver of questionable research practices (Anderson et al., 2007; Aubert Bonn & Pinxten, 2019).

Beyond these interpersonal issues, the support, resources, and infrastructures that researchers receive are also essential to ensure that changes in research assessments are implemented effectively. Currently, junior researchers and PhD students often feel unsupported (Heffernan & Heffernan, 2019; Van de Velde et al., 2019), and the transition towards new expectations can generate frustration if the resources to fulfil those expectations are lacking. For example, expecting researchers to preregister their research protocols or to make their data open and FAIR (i.e., Findable, Accessible, Interoperable, and Reusable (Wilkinson et al., 2016)) is a great step towards better research, but it comes with substantial needs for adequate infrastructure, training, and, most importantly, researchers’ time. Similarly, open access publication is increasingly demanded by funders and institutions, but such mandates need to come with a budget to cover article processing charges, without which inequalities may ensue (Aubert Bonn & Pinxten, 2021a).

There are several initiatives that aim to improve research environments, and in many ways the innovative actions mentioned throughout this chapter would help create a healthier, more collaborative research climate. Yet we would like to provide more details on three types of initiatives that target research environments directly.

First, some initiatives play a crucial role in raising awareness and opening the discussion on the problem. Examples include the Initiative for Science in Europe (ISE) position paper on precarity in academic careers and its associated webinar series (ISE task force on researchers’ careers, 2020), the French movement of ‘Camille Noûs’ from the Cogitamus Laboratory (Cogitamus Laboratory, 2020), and the University and College Union strikes that took place at 74 universities across the UK in early 2020 to denounce — among other things — the casualisation and lack of employment security of research careers (University and College Union, 2020).

Second, more forceful initiatives are also starting to appear. For instance, at the end of 2020, Sweden produced a national bill to change the way it funds research so that a greater share of researchers’ salaries would come from governmental non-competitive funding (Regeringskansliet, 2020). This bill came in response to a thorough investigation which found that the constant search for competitive funding ultimately undermined research quality (Hwang, 2018; Regeringskansliet, 2019). By helping researchers secure a more stable salary, Sweden aims to reduce hyper-competition and lower the employment insecurity of researchers.

The third initiative that is highly relevant when discussing research environments is the Standard Operating Procedures for Research Integrity (SOPs4RI) European Commission project, which runs until 2022 (Mejlgaard et al., 2020). The SOPs4RI project is creating a toolbox of best practices and guidelines to help research and funding institutions build research integrity promotion plans. In doing so, SOPs4RI emphasises that research integrity is a responsibility not only of researchers, but also of the research and funding institutions whose operating procedures should foster healthy research environments. At the same time, the project is empirically developing its own guidelines on topics that are overlooked in existing research guidance documents. One of the guidelines being produced directly targets ways in which institutions can build better, more collaborative research environments that foster research integrity.

Coordination

The final point we find important to discuss is the need for thorough, intense, and continued coordination between the different actors of the research system. To fully address the problems described in this chapter, open dialogue and thorough coordination are needed between researchers, funders, research institutions, and policy makers, as well as other actors such as publishers and metrics providers.

Without coordination between stakeholders, changing research assessments is difficult and unlikely to happen on a large scale. For instance, in many countries, governments use performance-based allocation to fund research institutions, meaning that the share of funding an institution receives largely depends on quantitative output indicators (Jonkers & Zacharewicz, 2016). Although using bibliometric indicators to distribute funding at an institutional level does not mean that universities should assess researchers using the same criteria (Debackere & Glänzel, 2004), the fear of underperforming often leads universities to use these indicators internally at the researcher level (Aubert Bonn & Pinxten, 2021c; Engels & Guns, 2018). Similarly, the way universities are recognised is profoundly influenced by university rankings. Rankings depend strongly on impact factors and other publication metrics, and there is increasing awareness that they have profound flaws and should be interpreted carefully (Gadd, 2020). Yet rankings remain a dominant way of attracting funding, researchers, and students, and most universities take strategic, organisational, or managerial action to improve their rankings (Hazelkorn, 2007). Lack of coordination with metrics providers also plays a role in the problem. In fact, most major metrics belong to for-profit companies whose external agendas differ from those of the research communities (Larivière & Sugimoto, 2018c). Thorough communication with publishers and metrics providers is needed if we hope to shape metrics that align with the objectives of the research communities.

Changing researcher assessments is also difficult to implement in single institutions. In the absence of a common approach to research assessment, there is a worry that researchers who build a profile to succeed in one proactive institution may later be penalised if they move to another research setting in which their profile is undervalued. In other words, the perceived ‘first-mover disadvantage’ favours a stagnant status quo and fosters a feeling of hopelessness that the much-needed changes will ever occur (Aubert Bonn & Pinxten, 2021c).

Ensuring the coordination of all stakeholders around the same objectives — and finding the means to achieve those objectives — is an extremely challenging task. Among others, the European University Association (EUA) briefing and The Metric Tide provide insights into this crucial need for coordinated action on research assessments, without hiding the complexity of the task (Saenen & Borell-Damián, 2019; Wilsdon et al., 2015). Despite the challenge, the best practice examples mentioned throughout this chapter have shown that coordinated changes are possible in practice.

Actors with broad influence and substantial budgets are essential here. For example, the European Commission’s ‘Towards 2030’ vision statement addresses the issue of rankings, calling on research institutions to move beyond current ranking systems for assessing university performance because they are limited and “overly simplistic” (Gadd, 2020). Broad-reaching groups such as the European Commission Open Science Policy Platform, which we mentioned earlier, and DORA also play a role in coordinating changes by uniting different research institutes and member states around a strategic plan of action. In Latin America, the Latin American Forum for Research Assessment (FOLEC) provides a platform for discussion between stakeholders on issues of research assessment (Latin American Forum for Research Assessment (FOLEC), 2020a, 2020b, 2020c). University alliances can also help coordinate changes. For example, in 2019 the consortium Universities Norway put together a working group to build a national framework for research career assessments. The group issued a report in 2021 proposing a toolbox for the recognition and rewards of academic careers (Universities Norway, 2021). The Academy of Finland went through a similar process to create national recommendations for responsible research evaluation (Working group for responsible evaluation of a researcher, 2020), and more and more university associations and academies are following this lead.

Taking a slightly more drastic approach, the major UK research funder Wellcome decided that, from 2021, it would only provide funding to researchers working in organisations that can demonstrate that their researcher assessments are fair and responsible (Gadd, 2020). This strategic decision prompts efforts from both institutions, which would be at a disadvantage if they did not work to ensure their eligibility for Wellcome funding, and researchers, who will push their institutions to remain eligible for this important source of funding.

Finally, the programme ‘Room for everyone’s talent’ described above is an inspiring example of how profound coordination is possible. In ‘Room for everyone’s talent’, five public knowledge institutions and research funders joined forces to ensure that Dutch research institutions would abide by the new assessment models. In addition, in the position paper announcing the new model, the five parties acknowledge their responsibility to take steps towards even tighter coordination. The position paper describes their commitment to connect with international organisations and programmes such as the European University Association, Science Europe, and Horizon Europe to encourage change and harmonisation at a European level.

Way Forward

Changing researcher assessments is difficult and requires huge investments and efforts from a diverse array of stakeholders. We have argued that current research assessments have profound inadequacies, but also that promising pioneering actions are starting to address these inadequacies and to align research assessments with responsible research practices.

To continue moving forward, we need to think of research assessments in their full complexity, addressing not only their content, but also the processes, assessors, environments, and coordination needed for change. For each dimension, we must understand the problem, raise awareness, take action, and coordinate efforts to enable change.

Even though research institutions, research funders, and policy makers have a clear responsibility to enable the change towards more responsible assessments, we, as researchers, also have an important role to play. For one, we should remember the biases and problems of research assessments when acting as peer reviewers or assessors and ensure that we avoid shortcuts and biases as much as we can. But we should also play a role in reshaping the entrenched research culture, helping to raise awareness and mobilise action around us. In the end, when we look at what was accomplished by DORA — which started from a small group of researchers and editors within the research community — it is clear that researchers can help drive the change.

But changing research assessments is not an end in itself. To avoid falling into the same pitfalls we are fighting today, it is essential to understand whether the changes to research assessments actually help deliver high-quality, high-integrity research (Moher et al., 2018). In this regard, research on research assessments is essential to allow us to understand, inform, and realign research assessments towards a better future. In short, we need evidence-based research assessment policies.

Note that this chapter was submitted in the summer of 2021. Given the speed at which initiatives in research assessment are moving, we recognise that this chapter does not include important recent developments, including the Agreement on Reforming Research Assessment and the Coalition for Advancing Research Assessment linked to it, the Future Research Assessment Programme in the UK, numerous advances in piloting narrative CVs, and other core initiatives that gained momentum after the chapter was drafted.

Throughout this chapter, we use the term ‘research assessment’ interchangeably to refer to the assessment of researchers, research teams, research institutes, or research proposals. Given that the term is most commonly used in current discussions to describe the process through which research resources — be it funding, hiring, recognition, tenure, or promotion — are distributed, we use it in this broad sense throughout the chapter.

Although research papers are now the most common output currency for career advancement in academia, other indicators such as patents, books, or conference proceedings are also used in different disciplines. Nevertheless, scholarly papers dominate assessments even in disciplines in which they were not common a few decades ago and in which they have limited relevance for the transmission of knowledge.

Among these problems, we can mention the unequal citation practices across topics and article types, as well as the imbalance between the numerator — which contains all citations to a journal for the given years — and the denominator — which contains only the number of ‘citable items’, thereby excluding editorials, commentaries, news and views, and other items that are increasingly prominent in high-impact-factor journals (Ioannidis & Thombs, 2019; Larivière & Sugimoto, 2018a).
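For reference, the standard two-year impact factor of a journal in year \(y\) makes this imbalance explicit, since the numerator counts citations to all items while the denominator counts only ‘citable’ ones:

\[
\mathrm{JIF}_y = \frac{\text{citations received in year } y \text{ by all items published in years } y-1 \text{ and } y-2}{\text{number of citable items published in years } y-1 \text{ and } y-2}
\]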

In fact, DORA’s first principle states directly that assessors should “not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions” (American Society for Cell Biology, 2013).

Abbreviations

ACUMEN: Academic Careers Understood through MEasurement and Norms

CRediT: Contributor Role Taxonomy

DARE: Diversity Approach to Research Evaluation

DORA: San Francisco Declaration on Research Assessment

EOSC: European Open Science Cloud

EQUATOR: Enhancing the Quality and Transparency Of health Research

EUA: European University Association

FAIR: Findable, Accessible, Interoperable, and Reusable

FOLEC: Latin American Forum for Research Assessment

HRB: Health Research Board Ireland

IDRC: International Development Research Centre

ISE: Initiative for Science in Europe

IUPUI: Indiana University – Purdue University Indianapolis

KNAW: Royal Netherlands Academy of Arts and Sciences

NFU: Netherlands Federation of University Medical Centres

NWO: Dutch Research Council (& Institutes)

ORCID: Open Researcher and Contributor ID

OS-CAM: Open Science Career Assessment Matrix

RQ+: Research Quality Plus

SOPs4RI: Standard Operating Procedures for Research Integrity

UCU: University and College Union

VSNU: Association of Universities in the Netherlands

Abbasi, K. (2004). Let’s dump impact factors. British Medical Journal, 329 (7471). https://doi.org/10.1136/bmj.329.7471.0-h

Alberts, B., Kirschner, M. W., Tilghman, S., & Varmus, H. (2014). Rescuing US biomedical research from its systemic flaws. Proceedings of the National Academy of Sciences, 111 (16), 5773. https://doi.org/10.1073/pnas.1404402111


Alberts, B., Kirschner, M. W., Tilghman, S., & Varmus, H. (2015). Opinion: Addressing systemic problems in the biomedical research enterprise. Proceedings of the National Academy of Sciences, 112 (7), 1912. https://doi.org/10.1073/pnas.1500969112

Allen, L., O’Connell, A., & Kiermer, V. (2019). How can we ensure visibility and diversity in research contributions? How the Contributor Role Taxonomy (CRediT) is helping the shift from authorship to contributorship. Learned Publishing, 32(1), 71–74. https://doi.org/10.1002/leap.1210

Alperin, J. P., Muñoz Nieves, C., Schimanski, L. A., Fischman, G. E., Niles, M. T., & McKiernan, E. C. (2019). How significant are the public dimensions of faculty work in review, promotion and tenure documents? eLife, 8 , e42254. https://doi.org/10.7554/eLife.42254

American Society for Cell Biology. (2013). San Francisco declaration on research assessment . Retrieved from https://sfdora.org/read/

Anderson, M. S., Ronning, E. A., De Vries, R., & Martinson, B. C. (2007). The perverse effects of competition on scientists' work and relationships. Science and Engineering Ethics, 13 (4), 437–461. https://doi.org/10.1007/s11948-007-9042-5

Anonymous. (2010, December, 18). The disposable academic . The Economist. Retrieved from https://www.economist.com/christmas-specials/2010/12/16/the-disposable-academic

Astegiano, J., Sebastián-González, E., & Castanho, C. D. T. (2019). Unravelling the gender productivity gap in science: A meta-analytical review. Royal Society Open Science, 6 (6), 181566. https://doi.org/10.1098/rsos.181566

Athena Swan Charter. (n.d.). Retrieved from https://www.advance-he.ac.uk/equality-charters/athena-swan-charter .

Aubert Bonn, N., & Pinxten, W. (2019). A decade of empirical research on research integrity: What have we (not) looked at? Journal of Empirical Research on Human Research Ethics, 14(4), 338–352. https://doi.org/10.1177/1556264619858534

Aubert Bonn, N., & Pinxten, W. (2020a). Advancing science or advancing careers? Researchers’ opinions on success indicators. In bioRxiv (pp. 2020.2006.2022.165654).


Aubert Bonn, N., & Pinxten, W. (2020b). Rethinking success, integrity, and culture in research (Part 2) – A multi-actor qualitative study on problems of science. In bioRxiv .

Aubert Bonn, N., & Pinxten, W. (2021a). Advancing science or advancing careers? Researchers’ opinions on success indicators. PLoS One, 16 (2), e0243664. https://doi.org/10.1371/journal.pone.0243664

Aubert Bonn, N., & Pinxten, W. (2021b). Rethinking success, integrity, and culture in research (part 1) — A multi-actor qualitative study on success in science. Research Integrity and Peer Review, 6 (1), 1. https://doi.org/10.1186/s41073-020-00104-0

Aubert Bonn, N., & Pinxten, W. (2021c). Rethinking success, integrity, and culture in research (part 2) — A multi-actor qualitative study on problems of science. Research Integrity and Peer Review, 6 (1), 3. https://doi.org/10.1186/s41073-020-00105-z

Baas, J., & Fennell, C. (2019). When peer reviewers go rogue – Estimated prevalence of citation manipulation by reviewers based on the citation patterns of 69,000 reviewers. Paper presented at ISSI 2019. https://ssrn.com/abstract=3339568

Beaudry, C., & Lariviere, V. (2016). Which gender gap? Factors affecting researchers' scientific impact in science and medicine. Research Policy, 45 (9), 1790–1817. https://doi.org/10.1016/j.respol.2016.05.009

Bone, F., Hopkins, M. M., Ràfols, I., Molas-Gallart, J., Tang, P., Davey, G., & Carr, A. M. (2020). DARE to be different? A novel approach for analysing diversity in collaborative research projects. Research Evaluation, 29 (3), 300–315. https://doi.org/10.1093/reseval/rvaa006

Bothwell, E. (2019, October 14). Award Nobels to teams, not individual ‘heroes’, say scientists . Times Higher Education. Retrieved from https://www.timeshighereducation.com/news/award-nobels-teams-not-individual-heroes-say-scientists

Bouter, L. M., & Riet, G. t. (2021). Replication research series-paper 2: Empirical research must be replicated before its findings can be trusted. Journal of Clinical Epidemiology, 129 , 188–190. https://doi.org/10.1016/j.jclinepi.2020.09.032

Brito, R., & Rodríguez-Navarro, A. (2019). Evaluating research and researchers by the journal impact factor: Is it better than coin flipping? Journal of Informetrics, 13 (1), 314–324. https://doi.org/10.1016/j.joi.2019.01.009

Brock, J. (2021, 19 January). “Textbook case” of disability discrimination in grant applications . Nature Index. Retrieved from https://www.natureindex.com/news-blog/textbook-case-of-disability-discrimination-in-research-grant-applications

Brooks, R. (2013, 27 March). Centuries wasted applying for grants? The Conversation. Retrieved from https://theconversation.com/centuries-wasted-applying-for-grants-13111

Butler, L. (2003). Modifying publication practices in response to funding formulas. Research Evaluation, 12 (1), 39–46. https://doi.org/10.3152/147154403781776780

Cancer Research UK. (2018). Improving how we evaluate research: how we’re implementing DORA . Retrieved from https://www.cancerresearchuk.org/funding-for-researchers/research-features/2018-02-20-improving-research-evaluation-dora

CASRAI. (n.d.). CRediT – Contributor roles taxonomy . Retrieved from https://casrai.org/credit/

Check Hayden, E. (2015). Racial bias continues to haunt NIH grants. Nature, 527 (7578), 286–287. https://doi.org/10.1038/527286a

Clauset, A., Arbesman, S., & Larremore, D. B. (2015). Systematic inequality and hierarchy in faculty hiring networks. Science Advances, 1(1), e1400005.

Cogitamus Laboratory. (2020). Camille Noûs . Retrieved from https://www.cogitamus.fr/camilleen.html

College oordeelt over voorkeursbeleid TU Eindhoven. (2020, July 3). College voor de Rechten van de Mens . Retrieved from https://mensenrechten.nl/nl/nieuws/college-oordeelt-over-voorkeursbeleid-tu-eindhoven

Curry, S., Rijcke, S. d., Hatch, A., Pillay, D. G., Weijden, I. V. D., & Wilsdon, J. (2020). The changing role of funders in responsible research assessment: progress, obstacles and the way ahead . Retrieved from https://rori.figshare.com/articles/report/The_changing_role_of_funders_in_responsible_research_assessment_progress_obstacles_and_the_way_ahead/13227914

Dance, A. (2019). How a Dutch university aims to boost gender parity . Nature Career News. https://doi.org/10.1038/d41586-019-01998-7


de Rijcke, S., Wouters, P. F., Rushforth, A. D., Franssen, T. P., & Hammarfelt, B. (2015). Evaluation practices and effects of indicator use—A literature review. Research Evaluation, 25 (2), 161–169. https://doi.org/10.1093/reseval/rvv038

De Vries, R., Anderson, M. S., & Martinson, B. C. (2006). Normal misbehavior: Scientists talk about the ethics of research. Journal of Empirical Research on Human Research Ethics, 1(1), 43–50. https://doi.org/10.1525/jer.2006.1.1.43

Debacker, N., & Vandevelde, K. (2016). From PhD to professor in Flanders . ECOOM Brief (no. 11). Retrieved from https://biblio.ugent.be/publication/8043010

Debackere, K., & Glänzel, W. (2004). Using a bibliometric approach to support research policy making: The case of the Flemish BOF-key. Scientometrics, 59 (2), 253–276. https://doi.org/10.1023/B:SCIE.0000018532.70146.02

Dijstelbloem, H., Huisman, F., Miedema, F., & Mijnhardt, W. (2013). Why science does not work as it should and what to do about it . Retrieved from http://www.scienceintransition.nl/app/uploads/2013/10/Science-in-Transition-Position-Paper-final.pdf

DORA. (2021). Reimagining academic assessment: Stories of innovation and change . Retrieved from https://sfdora.org/dora-case-studies/

Drake, D. C., Maritz, B., Jacobs, S. M., Crous, C. J., Engelbrecht, A., Etale, A., et al. (2013). The propagation and dispersal of misinformation in ecology: Is there a relationship between citation accuracy and journal impact factor? Hydrobiologia, 702 (1), 1–4. https://doi.org/10.1007/s10750-012-1392-6

El-Sadr, W. M., Philip, N. M., & Justman, J. (2014). Letting HIV transform academia — Embracing implementation science. New England Journal of Medicine, 370 (18), 1679–1681. https://doi.org/10.1056/NEJMp1314777

Embassy of Good Science. (2021). Salami publication . Retrieved from https://embassy.science/wiki/Theme:95c69cce-596a-42b5-9d86-e0aabaf00a85#Salami_publication

Engels, T. C. E., & Guns, R. (2018). The Flemish performance-based research funding system: A unique variant of the Norwegian model. Journal of Data and Information Science, 3 (4), 45–60. https://doi.org/10.2478/jdis-2018-0020

EQUATOR network. (n.d.) Enhancing the quality and transparency of health research . Retrieved from https://www.equator-network.org .

European Commission. (2017). Evaluation of research careers fully acknowledging open science practices . Retrieved from Brussels.

European Commission. (2019, 1 August). Academic careers understood through measurement and norms . Retrieved from https://cordis.europa.eu/project/id/266632

European Open Science Cloud. (2021). Draft vision for FAIReR assessments . Retrieved from https://avointiede.fi/sites/default/files/2021-02/eosc_cocreation_vision_for_fairer_assessments.pdf

Eyre-Walker, A., & Stoletzki, N. (2013). The assessment of science: The relative merits of post-publication review, the impact factor, and the number of citations. PLoS Biology, 11 (10), e1001675. https://doi.org/10.1371/journal.pbio.1001675

Farrar, J. (2019). Why we need to reimagine how we do research . Retrieved from https://wellcome.ac.uk/news/why-we-need-reimagine-how-we-do-research

Fong, E. A., & Wilhite, A. W. (2017). Authorship and citation manipulation in academic research. PLoS One, 12 (12), e0187394. https://doi.org/10.1371/journal.pone.0187394

Gadd, E. (2020). University rankings need a rethink. Nature, 587 (523). https://doi.org/10.1038/d41586-020-03312-2

Garfield, E. (1979). Is citation analysis a legitimate evaluation tool? Scientometrics, 1 (4), 359–375. https://doi.org/10.1007/BF02019306

Garfield, E. (1998). Der Impact Faktor und seine richtige Anwendung. Der Anaesthesist, 47 (6), 439–441. https://doi.org/10.1007/s001010050581

Ghent University Department of Personnel & Organization. (2018). Vision statement and principles: New career path and evaluation policy for professorial staff . Retrieved from https://www.ugent.be/en/work/mobility-career/career-aspects/professorial-staff/visionstatement.pdf

Gingras, Y. (2016). Bibliometrics and Research evaluation: Use and abuses . MIT Press.

Gingras, Y., & Mosbah-Natanson, S. (2010). Les sciences sociales françaises entre ancrage local et visibilité internationale. European Journal of Sociology, 51 (2), 305–321. https://doi.org/10.1017/S0003975610000147

Global Young Academy. (2018). Publishing models, assessments, and open science . Retrieved from Halle, Germany: https://globalyoungacademy.net/wp-content/uploads/2018/10/APOS-Report-29.10.2018.pdf

Gross, K., & Bergstrom, C. T. (2019). Contest models highlight inherent inefficiencies of scientific funding competitions. PLoS Biology, 17 (1), e3000065. https://doi.org/10.1371/journal.pbio.3000065

Hagstrom, W. O. (1975). Competition for recognition. In The scientific community (pp. 69–104). Southern Illinois University Press.

Hatch, A. (2019). To fix research assessment, swap slogans for definitions. Nature, 576 (9), 9. https://doi.org/10.1038/d41586-019-03696-w

Hatch, A., & Curry, S. (2019). Research assessment: Reducing bias in the evaluation of researchers . Retrieved from https://elifesciences.org/inside-elife/1fd1018c/research-assessment-reducing-bias-in-the-evaluation-of-researchers

Hatch, A., & Curry, S. (2020). Changing how we evaluate research is difficult, but not impossible. eLife, 9, e58654. https://doi.org/10.7554/eLife.58654

Hatch, A., & Schmidt, R. (2020). Rethinking research assessment: Unintended cognitive and system biases . Retrieved from DORA: https://sfdora.org/wp-content/uploads/2020/11/DORA_UnintendendedCognitiveSystemBiases.pdf

Hazelkorn, E. (2007). The impact of league tables and ranking systems on higher education decision making . https://doi.org/10.1787/hemp-v19-art12-en

Health Research Board. (2019). HRB gender policy . Retrieved from https://www.hrb.ie/fileadmin/user_upload/HRB_Gender_Policy_Nov_2019.pdf

Hedding, D. W. (2019). Payouts push professors towards predatory journals. Nature, 565 , 267. https://doi.org/10.1038/d41586-019-00120-1

Heffernan, T. A., & Heffernan, A. (2019). The academic exodus: The role of institutional support in academics leaving universities and the academy. Professional Development in Education, 45 (1), 102–113. https://doi.org/10.1080/19415257.2018.1474491

Herbert, D. L., Barnett, A. G., Clarke, P., & Graves, N. (2013). On the time spent preparing grant proposals: An observational study of Australian researchers. BMJ Open, 3 (5), e002800. https://doi.org/10.1136/bmjopen-2013-002800

Hicks, D., Wouters, P., Waltman, L., Rijcke, S. d., & Rafols, I. (2015). The Leiden manifesto for research metrics. Nature, 520 , 429–431. https://doi.org/10.1038/520429a

Hirsch, J. E. (2020). Superconductivity, what the h? The emperor has no clothes. Physics and Society, 49 (1), 4–9. Retrieved from https://arxiv.org/abs/2001.09496

Hoger Onderwijs Persbureau. (2019, 7 October). Spinoza Prize to become a team effort . Cursor. Retrieved from https://www.cursor.tue.nl/en/news/2019/oktober/week-2/spinoza-prize-to-become-a-team-effort/

Hwang, S. (2018). Forskningskvalitet, effektivitet och extern finansiering (ISBN 978-91-88749-06-2). Retrieved from Sweden: http://hh.diva-portal.org/smash/record.jsf?pid=diva2%3A1253091&dswid=-5182

Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124, 696–701. https://doi.org/10.1371/journal.pmed.0020124

Ioannidis, J. P. A. (2018). Why replication has more scientific value than original discovery. Behavioral and Brain Sciences, 41 , e137. https://doi.org/10.1017/S0140525X18000729

Ioannidis, J. P. A., & Thombs, B. D. (2019). A user’s guide to inflated and manipulated impact factors. European Journal of Clinical Investigation, 49 (9), e13151. https://doi.org/10.1111/eci.13151

ISE task force on researchers’ careers. (2020). Position on precarity of academic careers . Retrieved from https://initiative-se.eu/wp-content/uploads/2021/02/Research-Precarity-ISE-position.pdf

IUPUI approves new path to promotion and tenure for enhancing equity, inclusion and diversity. (2021, May 10). News at IUPUI . Retrieved from https://news.iu.edu/stories/2021/05/iupui/releases/10-promotion-tenure-pathway-enhancing-diversity-equity-inclusion.html

Jonkers, K., & Zacharewicz, T. (2016). Research performance based funding systems: A comparative assessment (JRC101043). Retrieved from http://publications.jrc.ec.europa.eu/repository/bitstream/JRC101043/kj1a27837enn.pdf

Kidwell, M. C., Lazarević, L. B., Baranski, E., Hardwicke, T. E., Piechowski, S., Falkenberg, L.-S., et al. (2016). Badges to acknowledge open practices: A simple, low-cost, effective method for increasing transparency. PLoS Biology, 14 (5), e1002456. https://doi.org/10.1371/journal.pbio.1002456

Kirschner, M. (2013). A perverted view of “impact”. Science, 340 (6138), 1265. https://doi.org/10.1126/science.1240456

Kwon, D. (2021). Prestigious European grants might be biased, study suggests. Nature News, 593(7860), 490–491. https://doi.org/10.1038/d41586-021-01362-8

Labib, K., & Evans, N. (2021). Gender, diversity, and the responsible assessment of researchers. PLoS Biology, 19 (4), e3001036. https://doi.org/10.1371/journal.pbio.3001036

Larivière, V., & Sugimoto, C. R. (2018a). The Journal Impact Factor: A brief history, critique, and discussion of adverse effects. In arxiv .

Larivière, V., & Sugimoto, C. R. (2018b). Mesurer la science . Les Presses de l’Université de Montréal.

Larivière, V., & Sugimoto, C. R. (2018c). Vue d’ensemble. In Mesurer la science (pp. 145–162). Les Presses de l’Université de Montréal.

Larivière, V., Ni, C., Gingras, Y., Cronin, B., & Sugimoto, C. R. (2013). Bibliometrics: global gender disparities in science. Nature, 504 (7479), 211–213. https://doi.org/10.1038/504211a

Larivière, V., Kiermer, V., MacCallum, C. J., McNutt, M., Patterson, M., Pulverer, B., ... Curry, S. (2016). A simple proposal for the publication of journal citation distributions. bioRxiv . https://doi.org/10.1101/062109 .

Larson, R. C., Ghaffarzadegan, N., & Xue, Y. (2014). Too many PhD graduates or too few academic job openings: The basic reproductive number R0 in academia. Systems Research and Behavioral Science, 31 (6), 745–750. https://doi.org/10.1002/sres.2210

Latin American Forum for Research Assessment (FOLEC). (2020a). Towards a transformation of scientific research assessment in Latin America and the Caribbean: Diagnosis and proposals for a regional initiative . Retrieved from Latin American Council of Social Sciences (CLACSO): https://www.clacso.org/en/diagnostico-y-propuestas-para-una-iniciativa-regional/

Latin American Forum for Research Assessment (FOLEC). (2020b). Towards a transformation of scientific research assessment in Latin America and the Caribbean: Evaluating scientific research assessment . Retrieved from Latin American Council of Social Sciences (CLACSO): https://www.clacso.org/en/una-nueva-evaluacion-academica-para-una-ciencia-con-relevancia-social/

Latin American Forum for Research Assessment (FOLEC). (2020c). Towards a transformation of scientific research assessment in Latin America and the Caribbean: Proposal for a declaration of principles . Retrieved from Latin American Council of Social Sciences (CLACSO): https://www.clacso.org/en/una-nueva-evaluacion-academica-para-una-ciencia-con-relevancia-social-2/

Lebel, J., & McLean, R. (2018). A better measure of research from the global south. Nature, 559 , 23–26. https://doi.org/10.1038/d41586-018-05581-4

Levecque, K., Anseel, F., De Beuckelaer, A., Van der Heyden, J., & Gisle, L. (2017). Work organization and mental health problems in PhD students. Research Policy, 46 (4), 868–879. https://doi.org/10.1016/j.respol.2017.02.008

Leyser, O. (2020). The excellence question. Science, 370 (6519), 886. https://doi.org/10.1126/science.abf7125

Lindner, M. D., Torralba, K. D., & Khan, N. A. (2018). Scientific productivity: An exploratory study of metrics and incentives. PLoS One, 13 (4), e0195321. https://doi.org/10.1371/journal.pone.0195321

Macaluso, B., Lariviere, V., Sugimoto, T., & Sugimoto, C. R. (2016). Is science built on the shoulders of women? A study of gender differences in Contributorship. Academic Medicine, 91 (8), 1136–1142. https://doi.org/10.1097/ACM.0000000000001261

Many junior scientists need to take a hard look at their job prospects. (2017). Nature, 550 , 429. https://doi.org/10.1038/550429a .

Martinson, B. C. (2011). The academic birth rate. Production and reproduction of the research work force, and its effect on innovation and research misconduct. EMBO Reports, 12 (8), 758–761. https://doi.org/10.1038/embor.2011.142

Martinson, B. C., Anderson, M. S., & De Vries, R. (2005). Scientists behaving badly. Nature, 435 (7043), 737–738. https://doi.org/10.1038/435737a

McKiernan, E. C., Schimanski, L. A., Muñoz Nieves, C., Matthias, L., Niles, M. T., & Alperin, J. P. (2019). Use of the journal impact factor in academic review, promotion, and tenure evaluations. eLife, 8 , e47338. https://doi.org/10.7554/eLife.47338

McNutt, M. K., Bradford, M., Drazen, J. M., Hanson, B., Howard, B., Jamieson, K. H., et al. (2018). Transparency in authors’ contributions and responsibilities to promote integrity in scientific publication. Proceedings of the National Academy of Sciences, 115 (11), 2557. https://doi.org/10.1073/pnas.1715374115

Mejlgaard, N., Bouter, L. M., Gaskell, G., Kavouras, P., Allum, N., Bendtsen, A.-K., et al. (2020). Research integrity: Nine ways to move from talk to walk. Nature, 586 , 358–360. https://doi.org/10.1038/d41586-020-02847-8

Merton, R. K. (1957). Priorities in scientific discovery. Reprinted in N. W. Storer (Ed.), The sociology of science: Theoretical and empirical investigations (1973). University of Chicago Press.

Metcalfe, J., Wheat, K., Munafò, M., & Parry, J. (2020). Research integrity: A landscape study . Retrieved from https://www.vitae.ac.uk/vitae-publications/reports/research-integrity-a-landscape-study

Minello, A. (2020). The pandemic and the female academic. Nature . https://doi.org/10.1038/d41586-020-01135-9

Moed, H. F. (2008). UK research assessment exercises: Informed judgments on research quality or quantity? Scientometrics, 74(1), 153–161. https://doi.org/10.1007/s11192-008-0108-1

Moher, D., Naudet, F., Cristea, I. A., Miedema, F., Ioannidis, J. P. A., & Goodman, S. N. (2018). Assessing scientists for hiring, promotion, and tenure. PLoS Biology, 16 (3), e2004089. https://doi.org/10.1371/journal.pbio.2004089

Moher, D., Bouter, L., Kleinert, S., Glasziou, P., Sham, M. H., Barbour, V., et al. (2020). The Hong Kong principles for assessing researchers: Fostering research integrity. PLoS Biology, 18 (7), e3000737. https://doi.org/10.1371/journal.pbio.3000737

Munafò, M. R., Nosek, B. A., Bishop, D. V. M., Button, K. S., Chambers, C. D., Percie Du Sert, N., et al. (2017). A manifesto for reproducible science. Nature Human Behaviour, 1 (1). https://doi.org/10.1038/s41562-016-0021

Muthama, E., & McKenna, S. (2020). The unintended consequences of using direct incentives to drive the complex task of research dissemination. Education as Change, 24, 23. https://doi.org/10.25159/1947-9417/6688

Nuffield Council of Bioethics. (2014). The culture of scientific research in the UK . Retrieved from https://nuffieldbioethics.org/publications/the-culture-of-scientific-research

Ofir, Z., Schwandt, T., Duggan, C., & McLean, R. (2016). Research quality plus (RQ+): A holistic approach to evaluating research . Retrieved from Ottawa. http://hdl.handle.net/10625/56528

Open Science Policy Platform. (2017). OSPP-REC (ISBN 978-92-79-88333-0). Retrieved from Brussels, Belgium.

ORCID. (n.d.). Connecting research and researchers. Retrieved from https://orcid.org .

Padilla, M. A., & Thompson, J. N. (2016). Burning out faculty at doctoral research universities. Stress and Health, 32(5), 551–558. https://doi.org/10.1002/smi.2661

Parr, C. (2015, April, 2). Open University maps new routes to career progression . Times Higher Education. Retrieved from https://www.timeshighereducation.com/news/open-university-maps-new-routes-to-career-progression/2019410.article

Payne, D. (2021, March, 30). Calls for culture change as “them versus us” mindset drives rift between academic and non-academic staff . Nature Index. Retrieved from https://www.natureindex.com/news-blog/calls-culture-change-them-versus-us-drives-rift-between-academics-administrators-research-science

Powell, K. (2018). These labs are remarkably diverse – Here’s why they’re winning at science. Nature, 558 , 19–22. https://doi.org/10.1038/d41586-018-05316-5

Publons (n.d.). Retrieved from https://publons.com/ .

Regeringskansliet. (2019). Statens offentliga utredningar från Utbildningsdepartementet: En långsiktig, samordnad och dialogbaserad styrning av högskolan . Retrieved from Sweden: https://www.regeringen.se/rattsliga-dokument/statens-offentliga-utredningar/2019/02/sou-20196/

Regeringskansliet. (2020). Forskning, frihet, framtid – kunskap och innovation för Sverige . Retrieved from Sweden.

Rochmyaningsih, D. (2018). Showcase scientists from the global south. Nature, 553 , 251. https://doi.org/10.1038/d41586-018-00662-w

Rockey, S. (2012, 9 August). More applications; Many more applicants . Retrieved from https://nexus.od.nih.gov/all/2012/08/09/more-applications-many-more-applicants/

Royal Society. (n.d.). Résumé for researchers . Retrieved from https://royalsociety.org/topics-policy/projects/research-culture/tools-for-support/resume-for-researchers/ .

Saenen, B., & Borell-Damián, L. (2019). EUA briefing – Reflections on university research assessment: Key concepts, issues and actors . Retrieved from Brussels, Belgium: https://eua.eu/resources/publications/825:reflections-on-university-research-assessment-key-concepts,-issues-and-actors.html

Schekman, R., & Patterson, M. (2013). Reforming research assessment. eLife, 2 , e00855. https://doi.org/10.7554/eLife.00855

Schimanski, L., & Alperin, J. (2018). The evaluation of scholarship in academic promotion and tenure processes: Past, present, and future [version 1; peer review: 2 approved]. F1000Research, 7 (1605). https://doi.org/10.12688/f1000research.16493.1 .

Schmidt, R. (2020, August 24). The benefits of statistical noise . Retrieved from https://behavioralscientist.org/the-benefits-of-statistical-noise/

Schneider, S. L., Ness, K. K., Shaver, K., & Brutkiewicz, R. (2014). Federal demonstration partnership 2012 faculty workload survey – Research report . Retrieved from https://osp.od.nih.gov/wp-content/uploads/SMRB_May_2014_2012_Faculty_Workload_Survey_Research_Report.pdf

Science Europe. (2020). Position statement and recommendations on research assessment processes . Retrieved from https://doi.org/10.5281/zenodo.4916155 .

Smaldino, P. E., Turner, M. A., & Contreras Kallens, P. A. (2019). Open science and modified funding lotteries can impede the natural selection of bad science. In OSF Preprints .

Smith, R. (1997). Authorship is dying: Long live contributorship. BMJ, 315 , 696.

Sugimoto, C. R., & Larivière, V. (2018). Measuring research what everyone needs to know . Oxford University Press.

The mental health of PhD researchers demands urgent attention. (2019). Nature, 575 , 257-258. https://doi.org/10.1038/d41586-019-03489-1 .

The Wellcome Trust and Shift Learning. (2020). What researchers think about the culture they work in (MC-7198/01-2020/BG). Retrieved from https://wellcome.ac.uk/reports/what-researchers-think-about-research-culture

Universities Norway. (2021). NOR-CAM – A toolbox for recognition and rewards in academic careers . Retrieved from Oslo: https://www.uhr.no/en/_f/p3/i86e9ec84-3b3d-48ce-8167-bbae0f507ce8/nor-cam-a-tool-box-for-assessment-and-rewards.pdf

University and College Union. (2020, February, 3). UCU announces 14 strike days at 74 UK universities in February and March . Retrieved from https://www.ucu.org.uk/article/10621/UCU-announces-14-strike-days-at-74-UK-universities-in-February-and-March

Urlings, M. J. E., Duyx, B., Swaen, G. M. H., Bouter, L. M., & Zeegers, M. P. (2021). Citation bias and other determinants of citation in biomedical research: Findings from six citation networks. Journal of Clinical Epidemiology, 132 , 71–78. https://doi.org/10.1016/j.jclinepi.2020.11.019

Van de Velde, J., Levecque, K., Mortier, A., & De Beuckelaer, A. (2019, September). Waarom doctorandi in Vlaanderen denken aan stoppen met doctoreren [Why PhD students in Flanders think about stopping their PhDs] . ECOOM Brief (no. 20). Retrieved from http://hdl.handle.net/1854/LU-8630419

van der Weijden, I., Teelken, C., de Boer, M., & Drost, M. (2016). Career satisfaction of postdoctoral researchers in relation to their expectations for the future. Higher Education, 72 (1), 25–40. https://doi.org/10.1007/s10734-015-9936-0

Viglione, G. (2020). Are women publishing less during the pandemic? Here’s what the data say. Nature, 581 , 365–366. https://doi.org/10.1038/d41586-020-01294-9

Vitae. (n.d.). 360 degree feedback from your research team . Retrieved from https://www.vitae.ac.uk/doing-research/leadership-development-for-principal-investigators-pis/developing-yourself-as-a-pi/360-degree-feedback-from-your-research-team .

Vogel, L. (2017). Researchers may be part of the problem in predatory publishing. Canadian Medical Association Journal, 189 (42), E1324. https://doi.org/10.1503/cmaj.109-5507

VSNU, NFU, KNAW, NWO, & ZonMw. (2019). Room for everyone’s talent . Retrieved from The Hague. https://vsnu.nl/recognitionandrewards/wp-content/uploads/2019/11/Position-paper-Room-for-everyone’s-talent.pdf

Waltman, L., & van Eck, N. J. (2012). The inconsistency of the h-index. Journal of the American Society for Information Science and Technology, 63 (2), 406–415. https://doi.org/10.1002/asi.21678

Wikipedia. (2021, 7 July). Impact factor . Retrieved from https://en.wikipedia.org/wiki/Impact_factor

Wilkinson, M. D., Dumontier, M., Aalbersberg, I. J., Appleton, G., Axton, M., Baak, A., et al. (2016). The FAIR guiding principles for scientific data management and stewardship. Scientific Data, 3 (1), 160018. https://doi.org/10.1038/sdata.2016.18

Wilsdon, J., Allen, L., Belfiore, E., Campbell, P., Curry, S., Hill, S., . . . Johnson, B. (2015). The metric tide: Report of the independent review of the role of metrics in research assessment and management . Retrieved from https://re.ukri.org/documents/hefce-documents/metric-tide-2015-pdf/

Winker, K. (2017). Eyeballs on science: Impact is not just citations, but how big is readership? bioRxiv . https://doi.org/10.1101/136689 .

Working group for responsible evaluation of a researcher. (2020). Good practice in researcher evaluation. recommendation for the responsible evaluation of a researcher in Finland (978-952-5995-28-2). Retrieved from Helsinki. https://doi.org/10.23847/isbn.9789525995282

Wouters, P. (2014). The citation: From culture to infrastructure. In B. Cronin & C. R. Sugimoto (Eds.), Beyond Bibliometrics: Harnessing multidimensional indicators of scholarly impact (pp. 47–66). The MIT Press.

Ziker, J. (2014, March, 31). The long, lonely job of Homo Academicus . Retrieved from https://www.boisestate.edu/bluereview/faculty-time-allocation/

Zuckerman, H., & Merton, R. K. (1971). Patterns of evaluation in science: Institutionalisation, structure and functions of the referee system. Minerva, 9 (1), 66–100. Retrieved from http://www.jstor.org/stable/41827004

Download references

Acknowledgements

The authors would like to thank Anna Hatch, whose invaluable feedback helped us improve and enrich this chapter.

Author information

Authors and Affiliations

Noémie Aubert Bonn: Amsterdam University Medical Centers, Amsterdam, The Netherlands; Hasselt University, Hasselt, Belgium

Lex Bouter: Amsterdam University Medical Centers and Vrije Universiteit, Amsterdam, The Netherlands


Corresponding author

Correspondence to Noémie Aubert Bonn.

Editor information

Editors and Affiliations

Institute of Science and Innovation in Medicine (ICIM), Universidad del Desarrollo, Santiago, Chile

Erick Valdés

Juan Alberto Lecaros

Rights and permissions

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.


Copyright information

© 2023 The Author(s)

About this chapter

Cite this chapter

Aubert Bonn, N., Bouter, L. (2023). Research Assessments Should Recognize Responsible Research Practices. Narrative Review of a Lively Debate and Promising Developments. In: Valdés, E., Lecaros, J.A. (eds) Handbook of Bioethical Decisions. Volume II. Collaborative Bioethics, vol 3. Springer, Cham. https://doi.org/10.1007/978-3-031-29455-6_27


DOI: https://doi.org/10.1007/978-3-031-29455-6_27

Published: 29 June 2023

Publisher Name: Springer, Cham

Print ISBN: 978-3-031-29454-9

Online ISBN: 978-3-031-29455-6
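
For readers who manage references programmatically, the citation above can be retrieved in BibTeX form through DOI content negotiation, i.e., by asking the DOI resolver for a machine-readable rendering instead of the landing page. A minimal Python sketch, assuming network access and that the DOI's registration agency supports the `application/x-bibtex` media type (Crossref, which registers Springer DOIs, generally does):

```python
import urllib.request

# This chapter's DOI, as listed above.
DOI = "10.1007/978-3-031-29455-6_27"

# DOI content negotiation: request a BibTeX rendering of the citation
# from the resolver instead of the HTML landing page.
request = urllib.request.Request(
    f"https://doi.org/{DOI}",
    headers={"Accept": "application/x-bibtex"},
)
with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))
```

The same pattern works for any DOI; swapping the Accept header (for example, to `text/x-bibliography`) yields other citation formats where the agency supports them.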



DORA


The changing role of funders in responsible research assessment: progress, obstacles and the way ahead

Position paper · For: Funders · DORA-produced


In partnership with the Research on Research Institute (RoRI), CWTS-Leiden, and the National Research Foundation of South Africa, DORA published a position paper providing a state-of-play of responsible research assessment (RRA) practices among funders. The paper's release coincided with the 2020 Global Research Council (GRC) virtual conference on RRA practices. The paper also presents the findings of a survey of RRA policies and practices in the participating GRC organizations, which are mainly national public funding agencies.

Curry S, de Rijcke S, Hatch A, Pillay D, van der Weijden I, and Wilsdon J (2020). The changing role of funders in responsible research assessment: progress, obstacles and the way ahead. https://doi.org/10.6084/m9.figshare.13227914.v1



Position Paper – Example, Format and Writing Guide

Position Paper

Definition:

A position paper is a written document that presents an argument or stance on a particular issue or topic. It states the author's position on the issue and supports that position with evidence and reasoning. Position papers are common in academic settings, such as Model United Nations conferences and debates, but they are also used in professional and political contexts.

Position papers typically begin with an introduction that presents the issue and the author’s position on it. The body of the paper then provides evidence and reasoning to support that position, often citing relevant sources and research. The conclusion of the paper summarizes the author’s argument and emphasizes its importance.

Types of Position Paper

There are several types of position papers, including:

  • Advocacy Position Paper: Presents an argument in support of a particular issue, policy, or proposal, seeking to persuade the reader to take a particular action or adopt a particular perspective.
  • Counter-Argument Position Paper: Presents an argument against a particular issue, policy, or proposal, seeking to convince the reader to reject a perspective or course of action.
  • Problem-Solution Position Paper: Identifies a problem and presents a solution, seeking to convince the reader that the proposed solution is the best course of action.
  • Comparative Position Paper: Compares and contrasts two or more options, policies, or proposals, arguing that one is better than the others.
  • Historical Position Paper: Examines a historical event, policy, or perspective and presents an argument based on analysis of the historical context.
  • Interpretive Position Paper: Provides an interpretation or analysis of a particular issue, policy, or proposal, persuading the reader to adopt a particular understanding of the topic.
  • Policy Position Paper: Outlines a specific policy proposal and argues in support of it; it may also address potential objections and offer responses to them.
  • Value Position Paper: Argues for or against a particular value or set of values, contending that it is more important or better than the alternatives.
  • Predictive Position Paper: Makes predictions about future events or trends and argues why those predictions are likely to come true; it may also suggest how to prepare for or respond to them.
  • Personal Position Paper: Presents an individual's personal perspective or opinion on an issue, often drawing on personal experiences or beliefs to support the argument.

Position Paper Format

Here is a format you can follow when writing a position paper; a short code sketch that turns this outline into a reusable skeleton follows the list:

  • Introduction: Provide a brief overview of the topic, give some background on the issue, and state the purpose of the position paper.
  • Definition of the problem: Describe the problem or issue the paper addresses, explain its causes and effects, and provide evidence to support the claims made.
  • Historical perspective: Outline how the issue has evolved over time and what previous attempts have been made to address it.
  • The organization's stance: Present the organization's position, the evidence and rationale behind it, and address any counterarguments or alternative perspectives.
  • Proposed solutions: Offer solutions or recommendations, explain how they align with the organization's stance, and provide evidence of their effectiveness.
  • Conclusion: Summarize the organization's position and restate the proposed solutions or recommendations, encouraging further discussion and action on the issue.
  • References: List the sources used to support the claims made in the position paper.
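
As mentioned above, the outline can be turned into a fill-in skeleton mechanically. A minimal, purely illustrative Python sketch (the SECTIONS list and the skeleton helper below are our own naming, not part of any standard tool):

```python
# Illustrative sketch only: render the format above as a fill-in,
# plain-text outline. The section names mirror the list in this guide.
SECTIONS = [
    "Introduction",
    "Definition of the problem",
    "Historical perspective",
    "The organization's stance",
    "Proposed solutions",
    "Conclusion",
    "References",
]

def skeleton(title: str) -> str:
    """Build a numbered outline with a placeholder under each heading."""
    lines = [title, "=" * len(title), ""]
    for number, name in enumerate(SECTIONS, start=1):
        lines.append(f"{number}. {name}")
        lines.append("   (draft text here)")
        lines.append("")
    return "\n".join(lines)

print(skeleton("Working Title of the Position Paper"))
```

Running it prints a numbered scaffold with a placeholder under each heading, ready to be pasted into a draft.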

How to Write Position Paper

Here are the steps to write a position paper:

  • Choose your topic: Select a topic you are passionate about or knowledgeable in; it could relate to social, economic, environmental, political, or other issues.
  • Research: Conduct thorough research on the topic to gather relevant information and supporting evidence, such as scholarly articles, reports, books, and news coverage.
  • Define your position: Once you have gathered sufficient information, identify the main arguments and formulate your position, weighing both the pros and cons of the issue.
  • Write an introduction: Open with a brief introduction that gives background on the topic and highlights the key points the paper will discuss.
  • Present your arguments: In the body of the paper, present your arguments in a logical, coherent order, supporting each with evidence from your research.
  • Address opposing views: Acknowledge the opposing views on the issue and provide counterarguments that explain why your position is more valid.
  • Conclusion: Summarize your main points, reiterate your position, and suggest solutions or actions that could address the issue.
  • Edit and proofread: Finally, edit and proofread the paper to ensure it is well written, clear, and free of errors.

Position Paper Example

An example position paper structure is as follows:

  • Introduction:
      • A brief overview of the issue
      • A clear statement of the position the paper is taking
  • Background:
      • A detailed explanation of the issue
      • A discussion of the history of the issue
      • An analysis of any previous actions taken on the issue
  • Argument:
      • A detailed explanation of the position taken by the paper
      • A discussion of the reasons for the position taken
      • Evidence supporting the position, such as statistics, research, and expert opinions
  • Counterarguments:
      • A discussion of opposing views and arguments
      • A rebuttal of those opposing views and arguments
      • A discussion of why the position taken is more valid than the opposing views
  • Conclusion:
      • A summary of the main points of the paper
      • A call to action or recommendation for action
      • A final statement reinforcing the position taken by the paper
  • References:
      • A list of sources used in the paper, cited in an appropriate citation style

Purpose of Position Paper

Here are some of the most common purposes of position papers:

  • Advocacy: Promote a particular point of view or advocate for a specific policy or action.
  • Debate: Debaters write position papers to clarify their argument and marshal evidence for their claims.
  • Negotiation: Establish each party's position on a particular issue as part of a negotiation.
  • Education: Educate the public, policymakers, and other stakeholders about complex issues through a clear, concise, evidence-based argument.
  • Decision-making: Help decision-makers reach informed decisions about policies, programs, or initiatives on the basis of a well-reasoned argument.
  • Research: Serve as a starting point for further research on a particular topic or issue.

When to Write Position Paper

Here are some common situations when you might need to write a position paper:

  • Advocacy or lobbying: If your organization is advocating for a specific policy change or trying to influence decision-makers, a position paper can articulate the organization's position and supply evidence for its arguments.
  • Conferences or debates: In academic or professional settings, you may be asked to write a position paper presenting your perspective on a particular topic; the exercise helps you clarify your thoughts and prepare for a debate or discussion.
  • Public relations: A position paper can showcase your organization's expertise and thought leadership on a particular issue.
  • Internal communications: Within an organization, a position paper can communicate a particular stance or policy to employees or stakeholders.

Advantages of Position Paper

There are several advantages to writing a position paper, including:

  • Organizing thoughts: Writing a position paper requires careful consideration of the issue at hand, and organizing your thoughts and arguments helps clarify your own position.
  • Demonstrating expertise: A well-researched, well-written position paper can establish your credibility and expertise in a given field, which is why position papers are common in academic and professional settings.
  • Advocacy: Position papers can help persuade others to adopt your position, whether you are advocating for a particular policy or a specific point of view.
  • Facilitating discussion: By presenting different perspectives on an issue, position papers can foster dialogue and lead to a better understanding of the topic at hand.
  • Providing a framework for action: By outlining specific steps to address an issue, a position paper can guide decision-making and policy development.


RRBM Honor Roll

With the Honor Roll, RRBM aims to recognize scholarly articles, monographs, policy papers, and books that reflect credible science useful to society. All publications receiving this honor are prominently displayed on the RRBM website, and authors are encouraged to display the RRBM Honor Roll emblem on their CVs. Schools, journals, and publishers can also track their Honor Roll publications as one metric of the likely societal impact of their research.

Starting in July 2022, recent publications may be nominated to the RRBM Honor Roll. The Selection Board looks for works that are both rigorous and relevant, with the aim of encouraging research that achieves both goals. RRBM does not seek to add another layer of review to vouch for the rigor of published work: that is the job of editors and reviewers. Publications selected for the Honor Roll carry the added distinction of being relevant and more likely to have a positive societal impact. The Honor Roll is not a competition or an award; in an ideal world, most academic publications would qualify as responsible research. It is a recognition that the work meets RRBM principles for contributing to a better society. We encourage you to peruse the listing of selected publications.


Criteria for “RRBM Honor Roll”

Recently published work is assessed according to the spirit of two usefulness-focused principles articulated in RRBM's Vision for Responsible Research in Business and Management:

  • Service to Society: Development of knowledge likely to benefit business and the broader society, locally and globally, for the ultimate purpose of creating a better world.
  • Impact on Stakeholders: Research that is likely to have an impact on diverse stakeholders, especially research that contributes to better business and a better world.

Please submit your nomination of responsible research published from January 2020 onward to Editor-in-Chief Ron Hill through the nomination form on the RRBM website.

Selection Board of the Responsible Research in Business and Management Honor Roll

The Editor-in-Chief is Ron Hill from American University.

Each board member is selected for their interest in the larger societal good and their scholarly profile; together, the members represent all major areas of research in the business domain.

The full list of Selection Board members is available on the RRBM website.


If you have any questions regarding the Honor Roll, the selection process, or the Selection Board, send an email to honorroll[at]rrbm.network.
