
Evaluation of scientists: reasons, data and methods


Presentation Transcript


  1. Evaluation of scientists: reasons, data and methods Gábor B. Makara Library and Information Centre, Hungarian Academy of Sciences

  2. Scientists' evaluation – the topics • The need for evaluation • The types of evaluation • Data for evaluation • Data selection • Indicators • The peer review process

  3. Evaluations are everywhere • Scientists' evaluation starts at the doctoral schools • Postdoctoral fellowships • Job interviews, search committees, leadership applications, promotion • Performance reviews • Funding decisions • Professorships, Doctor of the Academy title • Prizes

  4. Evaluation goals may be widely different Selection in situations such as • general funding • funding talented young scientists • recognizing outstanding achievement • filling a job opening will call for different criteria. Goals should be defined in advance and kept in focus throughout.

  5. Experienced evaluators are scarce • Good peer reviewers’ time is in great demand, hence the importance of scientometric indicators • Good data and indicators may speed up and simplify evaluation • It is not enough to put good scientists on a panel and tell them to do the evaluation • Scientific evaluation requires guidelines, evaluator training/experience and diligent study of the subjects

  6. However, evaluation is similar to soccer: everyone seems to be an expert.

  7. Types of evaluations Eligibility evaluation - comparing indicators to thresholds, one scientist at a time Competitive (comparative) evaluation - ranking or grouping groups of scientists by indicators Both may use • Indicators of scientific competence, productivity and recognition by peers • Thresholds for the indicators Disciplinary differences will apply
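
A minimal sketch of the two types, with indicator names and threshold values invented for illustration (they are not from the slides): eligibility evaluation checks one scientist at a time against fixed thresholds, while competitive evaluation ranks a group by the same indicators.

```python
# Hypothetical sketch: eligibility check vs. competitive ranking.
# Indicator names and threshold values are invented for illustration only.

candidates = {
    "A": {"papers": 42, "citations": 650, "h_index": 14},
    "B": {"papers": 28, "citations": 900, "h_index": 17},
    "C": {"papers": 55, "citations": 300, "h_index": 11},
}

# Eligibility evaluation: one scientist at a time, compared to thresholds.
thresholds = {"papers": 30, "citations": 400, "h_index": 12}

def is_eligible(indicators, thresholds):
    return all(indicators[k] >= v for k, v in thresholds.items())

for name, indicators in candidates.items():
    print(name, "eligible" if is_eligible(indicators, thresholds) else "not eligible")

# Competitive (comparative) evaluation: rank the whole group by an indicator.
ranking = sorted(candidates, key=lambda n: candidates[n]["citations"], reverse=True)
print("ranked by citations:", ranking)
```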

  8. Eugene Garfield (1979) suggested: "Instead of directly comparing the citation count of, say, a mathematician against that of a biochemist, both should be ranked with their peers, and the comparison should be made between rankings.” “Using this method, a mathematician who ranked in the 70 percentile group of mathematicians would have an edge over a biochemist who ranked in the 40 percentile group of biochemists, even if the biochemist's citation count was higher."
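
A small worked example of Garfield's suggestion, using invented citation counts: each scientist is ranked against peers in the same discipline, and the percentile ranks, not the raw counts, are compared.

```python
# Hypothetical sketch of Garfield's percentile comparison.
# Citation counts are invented; real reference sets would come from a database.

def percentile_rank(value, peer_values):
    """Percentage of peers whose citation count is at or below `value`."""
    below_or_equal = sum(1 for v in peer_values if v <= value)
    return 100.0 * below_or_equal / len(peer_values)

mathematicians = [3, 8, 12, 20, 35, 60, 90]          # peer citation counts
biochemists = [40, 80, 150, 260, 400, 700, 1200]

mathematician_count = 35    # ranks high among mathematicians
biochemist_count = 150      # more citations, but ranks lower among biochemists

print(percentile_rank(mathematician_count, mathematicians))  # ~71st percentile
print(percentile_rank(biochemist_count, biochemists))        # ~43rd percentile
```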

  9. Data in scientific evaluations • Qualifications, licences, training, experience (time), prior positions • Scientific publications • Patents • Contracts and contract income • Grants • Invitations • Prizes

  10. Data on scientific publications • International non-profit databases • Medline, Astrophysics Data System, ... • Commercial databases • Web of Science, Scopus, ... • National or institutional databases • Ad hoc lists, self-evaluations Across subdisciplines these vary tremendously in completeness, accuracy and applicability.

  11. Hungarian Scientific Bibliography Database (Hungarian abbreviation: MTMT) The goal is to construct and maintain a complete and validated collection of scientific and scholarly publications (and citations) of Hungarian scientists and scholars

  12. MTMT characteristics • Scientific and scholarly publications and citations • National (widely used, quasi compulsory) • Publicly available • Attribution to authors and institutions • Author and institutional responsibility: "they should know their own output best" • Complete for given periods (2007-2014, and beyond) • Standardized • Validated

  13. Uses for MTMT • Inventory of scientific output • Public data for funding evaluation of individual scientists • Evaluation of applicants for the Doctor of the Academy (DSc) title • Academy membership elections • Evaluation of research groups at the institutions of the HAS • Evaluation, accreditation of professors and universities? • Gateway to open access repositories

  14. Publication selection for scientists’ evaluation Publications (primary, secondary, ...) • Trustworthy • Scientific • Reporting original research • Separate the reviews from the original research Citations in primary, secondary, ... research papers • Original scientific publications • Scientific reviews (journal, book, conference) • Patents • Dissertations (usually not a primary research publication) • Miscellaneous
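
As a rough illustration of such selection, the following sketch separates original research papers from reviews and other publication types before counting; the records and type labels are invented.

```python
# Hypothetical sketch: separate original research papers from reviews and
# other publication types before counting. Records and labels are invented.

publications = [
    {"title": "P1", "type": "journal article", "original_research": True},
    {"title": "P2", "type": "review", "original_research": False},
    {"title": "P3", "type": "conference paper", "original_research": True},
    {"title": "P4", "type": "dissertation", "original_research": False},
]

originals = [p for p in publications if p["original_research"]]
reviews = [p for p in publications if p["type"] == "review"]

print(len(originals), "original research papers,", len(reviews), "reviews")
```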

  15. Metrics for evaluation • Journal-level metrics used with individual articles • an easily committed error, frequent also in Hungary, where journal impact factor values are transferred to individual research articles and summed over a scientist’s publications in a time interval • Article-level metrics • Citation counts – raw • Citation counts – selected • Citation counts – weighted or classified
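
The sketch below contrasts the criticized shortcut (transferring journal impact factors to individual articles and summing them) with a simple article-level count; the impact factor and citation values are invented.

```python
# Hypothetical sketch: journal-level vs. article-level metrics.
# Impact factors and citation counts are invented for illustration.

papers = [
    {"journal_if": 9.5, "citations": 2},    # high-IF journal, rarely cited article
    {"journal_if": 2.1, "citations": 140},  # modest journal, highly cited article
    {"journal_if": 3.4, "citations": 18},
]

# The criticized shortcut: transfer the journal impact factor to each article
# and sum it over the scientist's publications in a time interval.
summed_if = sum(p["journal_if"] for p in papers)

# Article-level alternative: count citations received by the articles themselves.
total_citations = sum(p["citations"] for p in papers)

print(f"summed journal IF: {summed_if:.1f}  vs.  article citations: {total_citations}")
```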

  16. Evaluation by publications • Numbers, raw • Numbers weighted by prestige of the journals • Citation counts, raw numbers • Weighted citation counts • Downloads • Web links to the publications, raw numbers • Networks of citations • and so on
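
One of the listed signals, networks of citations, can be illustrated with a tiny directed citation graph; raw citation counts then fall out as node in-degrees. The papers and edges are invented.

```python
# Hypothetical sketch: a tiny citation network as a directed edge list.
# "A -> C" means paper A cites paper C; papers and edges are invented.

citation_edges = [("A", "C"), ("B", "C"), ("D", "C"), ("C", "E"), ("D", "E")]

in_degree = {}  # citations received by each paper
for citing, cited in citation_edges:
    in_degree[cited] = in_degree.get(cited, 0) + 1

# Raw citation counts are the in-degree of each node in the network.
print(sorted(in_degree.items(), key=lambda kv: kv[1], reverse=True))
# [('C', 3), ('E', 2)]
```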

  17. Identifying contributors (technicalities) Identifying scientists by names or identifiers ORCID (Open Researcher and Contributor ID) is coming, but not yet here Identifying institutional affiliation ("authorship") MTMT solves both problems locally as authors and institutions are best placed to know and label their own publications. Errors in identification are spotted and corrected by those involved.
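
A minimal sketch of why persistent identifiers help: name strings vary across records, but a single identifier groups them under one author. The names and the ORCID iD below are invented.

```python
# Hypothetical sketch: grouping publication records by a persistent author
# identifier instead of by name strings. Names and the ORCID iD are invented.

records = [
    {"author_name": "Kovács, J.",   "orcid": "0000-0002-0000-0001", "title": "P1"},
    {"author_name": "J. Kovacs",    "orcid": "0000-0002-0000-0001", "title": "P2"},
    {"author_name": "Kovács János", "orcid": "0000-0002-0000-0001", "title": "P3"},
]

by_author = {}
for rec in records:
    by_author.setdefault(rec["orcid"], []).append(rec["title"])

# Three different name strings, one author.
print(by_author)  # {'0000-0002-0000-0001': ['P1', 'P2', 'P3']}
```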

  18. An expensive myth: self-citation distortion The myth: "Author self-citations are used to manipulate impact and to artificially increase the own position in the community. Self-citations are very harmful and must be removed from the statistics." (Wolfgang Glänzel: Seven Myths in Bibliometrics, 2008) • Eliminating self-citations carries a large administrative overhead without adding value to a sound evaluation • The average self-citation rate is around 20% in our sample of 3 million citations
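
A minimal sketch of how a self-citation rate can be estimated, treating a citation as a self-citation when the citing and cited papers share an author; the author lists are toy data, and the roughly 20% figure on the slide refers to the MTMT sample, not to this example.

```python
# Hypothetical sketch: estimating a self-citation rate.
# A citation counts as a self-citation if the citing and cited papers share
# at least one author. Author lists are invented toy data.

citations = [
    {"citing_authors": {"Smith", "Nagy"}, "cited_authors": {"Nagy", "Kiss"}},
    {"citing_authors": {"Brown"},         "cited_authors": {"Nagy", "Kiss"}},
    {"citing_authors": {"Kiss"},          "cited_authors": {"Kiss"}},
    {"citing_authors": {"Lee", "Wong"},   "cited_authors": {"Nagy"}},
]

self_cites = sum(1 for c in citations if c["citing_authors"] & c["cited_authors"])
rate = self_cites / len(citations)
print(f"self-citation rate: {rate:.0%}")  # 50% on this toy sample
```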

  19. Attribution of credit for publications • Are authors and institutions equal contributors? • Can we attribute partial credit? • First, last, multiple first and/or last authorships, corresponding authorship • Collaborations (large groups of scientists, experts) as authors • Authors for collaborations? • Publications with many authors, more than 10, more than 1000, ...
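
One common way to attribute partial credit is fractional counting, where each of the n authors of a paper receives 1/n; this is only one of several possible schemes (first, last and corresponding authorship can be weighted differently) and is sketched here with invented author lists.

```python
# Hypothetical sketch: fractional counting of author credit.
# Each of the n authors of a paper receives 1/n credit; this is only one
# possible scheme, and large collaborations dilute the per-author share.

papers = [
    {"authors": ["A", "B"]},
    {"authors": ["A", "C", "D"]},
    {"authors": ["B"] + [f"collab{i}" for i in range(999)]},  # 1000-author paper
]

credit = {}
for p in papers:
    share = 1.0 / len(p["authors"])
    for author in p["authors"]:
        credit[author] = credit.get(author, 0.0) + share

print({a: round(c, 4) for a, c in credit.items() if a in {"A", "B", "C", "D"}})
# A: 0.5 + 1/3; B: 0.5 + 1/1000; C and D: 1/3 each
```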

  20. Reference values For eligibility evaluation • Define at the sub-field level, e.g. neuro-ophthalmology versus neuroscience: values differ by more than a factor of 5 • Published, fixed threshold values for subfields • Problems with small specialities • Scientists active in two or more widely different subfields • Individual reference values are preferable • Construct a tailor-made reference publication list (András Schubert et al.) • Match publications by subfield and maturity • Compare to the real peers • Requires investment, time, workforce and research
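
A sketch of using field-specific reference values instead of one global threshold; the reference distributions below are invented, whereas a real tailor-made reference list would match publications by subfield and maturity.

```python
# Hypothetical sketch: field-specific reference values instead of one global
# threshold. Reference distributions are invented; in practice they would be
# built from a matched reference publication list (same subfield, same age).

reference_citations = {
    "neuro-ophthalmology": [5, 9, 14, 22, 30],        # small specialty
    "neuroscience":        [40, 70, 110, 180, 300],   # large field
}

def beats_median(candidate_count, field):
    peers = sorted(reference_citations[field])
    median = peers[len(peers) // 2]
    return candidate_count >= median

print(beats_median(25, "neuro-ophthalmology"))  # True: above the subfield median
print(beats_median(25, "neuroscience"))         # False: below the field median
```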

  21. Peer review groups and scientometric indicators • Evaluation is a human activity, helped by scientometric indicators • The scales and thresholds are multidimensional and relative • Algorithms are not available • Peer review is the instrument of evaluation • Transforms a multidimensional measurement process into decision-making (yes/no, ranking or grouping)

  22. Panel "technology" is important • Each dimension is given a scale • marks or points are assigned • additivity is implied, but... • decision by the numbers - bad practice • Open debate • Decision by consensus • Decision by voting on rankings using • points • ranking separately in each dimension • ranking or grouping • handling ties • weighting of the dimensions for the goals of evaluation
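
One possible panel "technology" is sketched below: each proposal receives marks on several dimensions, the dimensions are weighted for the evaluation goal, and the resulting ranking serves as an input to open debate rather than a decision by the numbers. The scores and weights are invented.

```python
# Hypothetical sketch: combining panel marks across weighted dimensions.
# Scores and weights are invented; the ranking is an input to panel debate,
# not a substitute for it ("decision by the numbers" is bad practice).

scores = {  # panel marks per dimension, e.g. on a 1-5 scale
    "proposal_X": {"originality": 5, "track_record": 3, "feasibility": 4},
    "proposal_Y": {"originality": 3, "track_record": 5, "feasibility": 5},
    "proposal_Z": {"originality": 4, "track_record": 4, "feasibility": 3},
}

weights = {"originality": 0.5, "track_record": 0.3, "feasibility": 0.2}

def weighted_score(marks):
    return sum(weights[d] * m for d, m in marks.items())

ranking = sorted(scores, key=lambda p: weighted_score(scores[p]), reverse=True)
for p in ranking:
    print(p, round(weighted_score(scores[p]), 2))
```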

  23. Seven general points, as a summary Evaluation of scientists is inevitable Design evaluation for the goals Use data appropriate to the goals Use selected types of publications and citations Use article-level metrics Choose appropriate reference values Use peer review panels carefully
