


The problem


Open science presents the opportunity to radically change the way we evaluate, reward and incentivise science. Its goal is to accelerate scientific progress and enhance the impact of science for the benefit of society. By changing the way we share and evaluate science, we can provide credit for a wealth of research output and contributions that reflect the changing nature of science.
The assessment of research proposals, research performance and researchers serves different purposes, but often seems to be characterised by a heavy emphasis on publications, both in terms of the number of publications and the prestige of the journals in which they appear (citation counts and impact factors). This emphasis does not correspond to our goal of achieving societal impact alongside scientific impact. The predominant focus on prestige fuels a race in which participants compete on the number of publications in prestigious journals or monographs with leading publishers, at the expense of attention to high-risk research and a broad exchange of knowledge. Ultimately this inhibits the progress of science and innovation, and the optimal use of knowledge.

The solution

  • Ensure that national and European assessment and evaluation systems encourage open science practices and timely dissemination of all research outputs in all phases of the research life cycle.
  • Create incentives for an open science environment for individual researchers as well as funding agencies and research institutes.
  • Acknowledge the different purposes of evaluation and what the 'right' criteria are for each. Amend national and European assessment and evaluation systems in such a way that the impact of scientific work on science as well as on society at large is taken into account.
  • Engage researchers and other key stakeholders, including communications platforms and publishers, across the full spectrum of academic disciplines. Set up assessment criteria and practices that enable researchers to understand exactly how they will be assessed and how open practices will be rewarded. 


Concrete actions

  • National authorities and the European Commission: acknowledge that national initiatives are reaching their limits, and that this is an area for a harmonised EU approach.
  • National authorities, European Commission and research funders: reform reward systems, develop assessment and evaluation criteria, or decide on the selection of existing ones (e.g. DORA for evaluations and the Leiden Manifesto for research metrics), and make sure that evaluation panels adopt these new criteria.
  • Research Performing Organisations, research funders and publishers: further facilitate and explore the use of so-called alternative metrics where they appear adequate to improve the assessment of aspects such as the impact of research results on society at large. Experiment with new approaches for rewarding scientific work.
  • Research communities, research funders and publishers: develop and adopt citation principles for publications, data and code, and other research outputs, which include persistent identifiers, to ensure appropriate rewards and acknowledgment of the authors. 
  • Research communities and publishers: facilitate and develop new forms of scientific communication and the use of alternative metrics. 


Expected positive effects

  • An end to the vicious circle that forces scientists to publish in ever more prestigious journals or monographs, and greater recognition for other forms of scientific communication;
  • A wider dissemination of a wider range of scientific information that benefits not only science itself but society as a whole, including the business community;
  • A better return for the parties that fund research.

14 Comments

  1. Anonymous

    LIBER, the Association of European Research Libraries / Kristiina Hormia-Poutanen

    Concrete actions:

    LIBER: to form an Open Science metrics group to participate in the development of Open Science metrics; raise awareness among libraries, share best practice, and integrate “mature” indicators into library statistics processes.

  2. Anonymous

    Sonia Frota, University of Lisbon

    I believe the goal of open science requires a combination of assessment criteria that guarantees high quality and impact combined with fair open access. There is no necessary contradiction between open science and the number of citations or other kinds of metrics presently in use. However, assessment and metrics can and should be improved to integrate different kinds of indicators, including those already commonly accepted (e.g. JIF, h-index) and others that measure the impact of research and researchers on the research community (taking advantage of digital communities such as ResearchGate and others, which encompass data sharing and interaction among researchers; or Publons, which values peer review, one of the foundations of excellence in science) and on society in general (taking into account the research mission/goals stated by the research project or institution). Metrics should be transparent with respect to the databases used and the disciplines they apply to (e.g., Thomson Reuters already considers 'relative' metrics, which are discipline-based or even institution-based). Crucially, researchers need to realize that by getting more involved in open science they are improving their assessment and reward, both through metrics and through the better fulfillment of their mission to science and society.
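
    An indicator like the h-index mentioned above is at least simple to recompute transparently once per-paper citation counts are available. The minimal Python sketch below is purely illustrative: the citation counts are hypothetical placeholders, and in practice they would come from an (ideally open) citation database.

    # Minimal sketch: h-index from per-paper citation counts.
    # The counts used in the example are hypothetical placeholders.
    def h_index(citations):
        """Largest h such that at least h papers have >= h citations."""
        counts = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    example_counts = [21, 14, 9, 6, 6, 3, 1, 0]  # hypothetical citation counts
    print(h_index(example_counts))  # prints 5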

  3. Anonymous

    Re: first concrete action:

    I cannot see that there are far-reaching national initiatives in this area at all!

     

    Georg Botz

  4. Anonymous

    From my point of view there is still too heavy a stress on impact indicators in this suggestion to improve the current assessment and reward system. I think that the distinction between impact, excellence, and quality of research that we discussed during the conference breakout session should be taken into account here. Defining concrete methods to assess not only impact, but also excellence and especially quality of research could really revolutionise the current research assessment and reward system.

    Michela Vignoli

    http://www.year-network.com/

    http://list.scape-project.eu/confluence/display/AORW/Austrian+Open+Research+Wiki

  5. Anonymous

    We must change the assessment system, so I am pleased this is a theme in the Amsterdam Call for Action. Where researchers are assessed on the journal impact factor of their publications (or on the prestige of the publisher or publication) when they apply for grants or employment, they are largely steered towards publishing in established channels, which are often subscription based. As funders and employers we cannot expect open access journals to flourish if we do not stop looking at where, instead of what, the researcher has published. / Lisbeth Söderqvist, Swedish Research Council

  6. Anonymous

    With regard to the impact of research results on society at large, it is important not to limit research to this impact alone. The idea of Open Science should not be co-opted to limit the scope and freedom of science and the humanities. Scientific freedom is the prerequisite of scientific achievement. Free societies depend on a free discussion of ideas, and progress often results from new insights that challenge old ideas. Therefore, the identification of the most original researchers and the best projects, purely along quality criteria, serves the public interest. Impact can arise as a welcome externality of research. But no evidence exists to support the claim that research that aims at impact creates larger societal benefits than research that does not aim at impact.

  7. Three problems with the text as it stands:

    • It is still very much “journal article focussed”; this can be easily accommodated
    • It does not acknowledge differences between the goals of various stakeholders: changing the reward system is primarily an issue for organisations that employ researchers; for funders things are different according to what kind of research they fund (bottom-up versus topic-driven)
    • It refers to alt-metrics as if that were a system that exists, has been tested for reliability and is accepted by the science community; this is not the case, so the text should list the development of such a system as one of the goals, rather than the application of whatever is being tried out at present

    Martin Stokhof on behalf of the Open Access Working Group of the European Research Council

    Proposed changes:

    Open science presents the opportunity to improve the way we evaluate, reward and incentivise science. Its goal is to accelerate scientific progress and enhance the impact of science for the benefit of society. By changing the way we evaluate the results of science, we can provide credit for a wealth of research output and contributions that reflect the changing nature of science.
    The assessment of research proposals, research performance and researchers serves different purposes, but often is characterised by a heavy emphasis on publications, both in terms of the number of publications and the prestige of the journals and of the publishers where the publications appear (citation counts and impact factors of journals in some disciplines, reputation of publishers or conferences in others). This emphasis does not facilitate reaching our goals to achieve societal impact alongside scientific impact. The predominant focus on internal recognition fuels a race in which the participants compete on the number of publications in prestigious journals or in collections or monographs with leading publishers, without enough attention to high-risk research and a broad exchange of knowledge. Ultimately this can delay the progress of science and innovation, and may negatively affect the optimal use of knowledge.

    The solution

    • Ensure that national and European assessment and evaluation systems encourage open science practices and timely dissemination of all research outputs in all phases of the research life cycle.
    • Create incentives for an open science environment for individual researchers as well as funding agencies and research institutes.
    • Acknowledge the different purposes of evaluation and what 'right' criteria are. Amend national and European assessment and evaluation systems in such a way that, where appropriate, the impact of scientific work on society at large be taken into account.
    • Engage researchers and other key stakeholders, including communications platforms and publishers, across the full spectrum of academic disciplines. Set up assessment criteria and practices that are suitable for the different disciplines, thus enabling researchers to understand exactly how they will be assessed and how open practices will be rewarded. 


    Concrete actions

    • National authorities and the European Commission: acknowledge that national initiatives are reaching their limits, and that this is an area for a harmonised EU approach.
    • National authorities, European Commission and research funders: reform reward systems, develop complementary assessment and evaluation criteria, or decide on the selection of existing ones (e.g. DORA for evaluations and the Leiden Manifesto for research metrics), and make sure that evaluation panels adopt these new criteria where appropriate.
    • Research Performing Organisations, research funders and publishers: further facilitate and explore the use of so-called alternative metrics where they appear adequate to improve the assessment of aspects such as the impact of research results on society at large. Experiment with complementary new approaches for rewarding scientific work.
    • Research communities, research funders and publishers: develop and adopt citation principles for publications, data and code, and other research outputs, which include persistent identifiers, to ensure appropriate rewards and acknowledgment of the authors. 
    • Research communities and publishers: facilitate and develop new forms of scientific communication and the use of alternative metrics. 


    Expected positive effects

    • An end to the vicious circle that forces scientists to give priority to publishing in prestigious journals or monographs with high-ranking publishers, and reinforcement of the recognition for other forms of scientific communication;
    • A wider dissemination of a wider range of scientific information that benefits not only the science community itself but society as a whole, including the business community;
    • A quicker and hopefully better return for the parties that fund research.
  8. Anonymous

    Re first concrete action, and maybe responding to Georg's comment above, what "initiatives" are meant here?  Research evaluation initiatives?  Initiatives to change the way research is evaluated?  Something else?  Needs to be clearer perhaps.

    If it means "initiatives to change the way research is evaluated", then the text needs to acknowledge that such initiatives, like DORA, need to reflect the global nature of research, so not just at EU level.  EU action is possible, but only in this wider context.

    Neil Jacobs / Jisc, UK.

  9. Anonymous

    Fourth concrete action: "develop and adopt citation principles for publications, data and code, and other research outputs, which include persistent identifiers, to ensure appropriate rewards and acknowledgment of the authors".  Not just "authors" (unless that word has a very broad meaning).  Perhaps "contributors" would be better, to reflect the broad range of relationships that a person might have with a research output?  Neil Jacobs / Jisc, UK
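
    One way to picture this is a minimal machine-readable citation record in which the output carries a persistent identifier and "contributors" (rather than only authors) are listed with their roles. The Python sketch below is purely illustrative: the identifier, names and titles are placeholders, and the role labels only loosely follow the CRediT contributor-role taxonomy.

    # Purely illustrative citation record for a research output, with a
    # placeholder persistent identifier and contributors listed with roles.
    record = {
        "identifier": {"type": "DOI", "value": "10.xxxx/placeholder"},
        "type": "dataset",
        "title": "Example survey dataset (placeholder)",
        "year": 2016,
        "contributors": [
            {"name": "A. Researcher", "roles": ["Conceptualization", "Data curation"]},
            {"name": "B. Engineer", "roles": ["Software"]},
        ],
    }

    def format_citation(rec):
        """Render a simple human-readable citation string from the record."""
        names = "; ".join(c["name"] for c in rec["contributors"])
        return (f'{names} ({rec["year"]}). {rec["title"]} [{rec["type"]}]. '
                f'{rec["identifier"]["type"]}: {rec["identifier"]["value"]}')

    print(format_citation(record))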

  10. Anonymous

    Concrete actions:

    Researchers: consider using new forms of scientific communication and alternative venues instead of basing decisions about publication outlets solely on prestige.

  11. Anonymous

    Michael Matlosz, President Science Europe:

    The juxtaposition in the text between societal and scientific impact assessment cannot be endorsed. Countless game-changing, paradigm-shifting innovations stem from research that was assessed purely on the basis of scientific merit.

    Today there is no firm evidence as to what assessment methodologies or programme designs are more effective at achieving long-term impacts. What counts as positive impact depends on the time frame considered and on the point(s) of view adopted for the assessment. Not all impacts are bound to be positive for the whole of society or world, and at all times. There usually are trade-offs.

    The parts of the text that juxtapose scientific impact assessment and societal impact assessment should be rephrased. The text should instead advocate for the diversity of programme designs, research goals and evaluation methodologies, and for a better use of the opportunities for evidence-gathering and knowledge-sharing afforded by new technologies and community dynamics.

    There should be no mention of ‘right’ evaluation criteria: criteria should always be context-dependent and tailored to the objectives of the policies, programmes, grants or institutions that use them. If this is not clarified, the document will encourage bad practice instead of good practice. If this is clarified, then a call for a broader and increasing variety of evaluation criteria is welcome.

    DORA makes the point that individual assessments should not be metrics-based. This is not simply about creating new metrics ‘alongside’ the impact factor: this is about ensuring that metrics never substitute for human judgement. DORA flagged a problem within the research system which should not be made worse by creating even more short-cuts to short-circuit human qualitative judgement. As a result, the call made in the text above for RPOs to develop ‘alternative metrics’ needs to be deeply reviewed and to be made about methodological diversity.

    I agree with the call to develop citation principles and the necessary technical implementation facilities for a wider variety of outputs, including data.

    Given the great methodological difficulties, fundamental questions and major implications around alternative metrics, developing them should be a process led by research organisations and communities. The text should not call for publishers to develop them.

  12. Anonymous

    Natalia Manola on behalf of OpenAIRE:

    The EC needs to facilitate the discussion, with research communities at the centre (involving scholarly societies and research infrastructures). The transition towards a fair, novel and flexible rewards system calls for data-driven policy development (pilots and evaluation under different settings).

    Solution:

    • Facilitate the use of a multi-parametric assessment system, with quality factors clearly distinguished and included. Current altmetrics are one step towards assessing the wider impact, but often lack the quality aspect/measure.
    • Distinguish metrics for excellent/high-risk science and metrics for broader societal impact. These may differ over time or across different settings (regions, disciplines) or contexts.
    • Open metrics depend very much on access to interlinked metadata for research outputs, and to the actual content and methods (e.g., software), and should therefore be based on interoperable Open Science/Open Scholarship infrastructures and services.

    Concrete actions:

    • National authorities, European Commission and research funders: promote concrete initiatives to discuss, assess and adopt new harmonized/aligned open metrics and evaluation criteria and procedures, based on usage and impact, both on the research community and on society as a whole. Existing open e-Infrastructures, like OpenAIRE, can provide useful data for this purpose, and can be expanded to provide broader and deeper data for the new metrics and assessment. (Note: the EC's pending H2020 project OpenUP will pilot novel dissemination mechanisms and report on impact evaluation.)
    • Distinguish and explore metrics for excellent/high-risk science and metrics for broader societal impact, as applied to different disciplines or environments, and come up with solid recommendations to be used by research communities, RPOs and funders in the appropriate context.
  13. Anonymous

    In the UK, most major research funders now require that research outputs (including traditional publications and data) are shared openly. The UK Research Excellence Framework requires all submitted publications to be openly available. In both cases, considerable care has been taken to emphasise that no account is taken of journal impact factor (JIF) or other “prestige” measures, yet, as James Wilsdon reports, the belief that such measures count remains entrenched within institutions and amongst researchers.

    There are increasing moves towards global, metrics-based research assessment, and we feel that a potential concrete outcome of the Presidency is to ensure that metrics and indicators – where used – are themselves open, replicable, transparent, and used with care. As an example, we could work to ensure that download metrics are standardised to be COUNTER-compliant.

    Jisc is particularly interested in developing and examining new forms of research metrics and indicators, within work on Open Citation/Semantometrics (http://semantometrics.org) and the emerging field of Data Citation (https://researchdata.jiscinvolve.org/wp/2016/04/14/data-citation-what-is-the-state-of-the-art-in-2016/). We feel that the Presidency should build on existing work wherever possible, and that one of the benefits of open science that might be stressed is the potential for open citations, indicators and metrics that follow from the research itself being open. For example, open access papers and research outputs can be mined and can act as an open corpus to underpin more transparent indicators.
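
    As a toy illustration of that last point: if citation links mined from an open corpus are themselves published as open data, anyone can recompute an indicator from them and verify it. The Python sketch below assumes a tiny, hypothetical set of (citing, cited) identifier pairs; all identifiers are placeholders, not real outputs.

    # Hypothetical open citation links: (citing output, cited output).
    # All identifiers are placeholders; real links might be mined from an
    # open-access corpus and published alongside it.
    from collections import Counter

    citation_links = [
        ("10.xxxx/paper-A", "10.xxxx/paper-C"),
        ("10.xxxx/paper-B", "10.xxxx/paper-C"),
        ("10.xxxx/paper-B", "10.xxxx/dataset-1"),
        ("10.xxxx/paper-D", "10.xxxx/dataset-1"),
        ("10.xxxx/paper-D", "10.xxxx/paper-C"),
    ]

    def open_citation_counts(links):
        """Count how often each output is cited in the open link set."""
        return Counter(cited for _citing, cited in links)

    for output, count in open_citation_counts(citation_links).most_common():
        print(f"{output}: cited {count} times in the open corpus")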

    David Kernohan, Jisc, UK

     

    1. Anonymous

      From The Metric Tide report, here are the dimensions of responsible metrics:

      • Robustness: basing metrics on the best possible data in terms of accuracy and scope;
      • Humility: recognising that quantitative evaluation should support – but not supplant – qualitative, expert assessment;
      • Transparency: keeping data collection and analytical processes open and transparent, so that those being evaluated can test and verify the results;
      • Diversity: accounting for variation by field, and using a range of indicators to reflect and support a plurality of research and researcher career paths across the system;
      • Reflexivity: recognising and anticipating the systemic and potential effects of indicators, and updating them in response.

      Rachel Bruce, Jisc UK