The metric evaluation of research impact has attracted considerable interest in recent times. Although the public at large and administrative bodies are much interested in the idea, scientists and other researchers are far more cautious, insisting that metrics are only an auxiliary instrument to qualitative peer-based judgement. The goal of this article is to propose using a well-established construct, the domain taxonomy, as a tool for directly assessing the scope and quality of research. We first show how taxonomies can be used to analyze the scope and perspectives of a set of research projects or papers. We then define the rank of a research team or an individual researcher by those nodes in the hierarchy that have been created or significantly transformed by the researcher’s results. An experimental test of the approach in the data analysis domain is described. Although the concept of a taxonomy may seem too simplistic to capture the full richness of a research domain, its changes and use can be made transparent and open to discussion.
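The ranking idea in the abstract can be sketched in code. The toy taxonomy, function names, and the choice of "shallowest transformed node" as the rank are all illustrative assumptions for this sketch, not the authors' actual method or data:

```python
# A minimal sketch: represent a taxonomy as a nested dict and rank a
# researcher by the shallowest node their results created or transformed
# (a smaller rank means a broader contribution). All names here are
# hypothetical.

TAXONOMY = {
    "data analysis": {
        "clustering": {"k-means": {}, "hierarchical": {}},
        "classification": {"decision trees": {}, "svm": {}},
    }
}

def node_depth(taxonomy, target, depth=1):
    """Return the depth of `target` in the nested-dict taxonomy (root
    level = 1), or None if the node is absent."""
    for name, children in taxonomy.items():
        if name == target:
            return depth
        found = node_depth(children, target, depth + 1)
        if found is not None:
            return found
    return None

def researcher_rank(taxonomy, transformed_nodes):
    """Rank = depth of the shallowest node the researcher's results
    created or significantly transformed; None if no node matches."""
    depths = [d for d in (node_depth(taxonomy, n) for n in transformed_nodes)
              if d is not None]
    return min(depths) if depths else None

# A researcher whose results reshaped "clustering" (depth 2) outranks
# one who only refined "k-means" (depth 3).
print(researcher_rank(TAXONOMY, ["k-means"]))            # 3
print(researcher_rank(TAXONOMY, ["clustering", "svm"]))  # 2
```

Under this assumed convention, transforming a node near the root of the hierarchy signals a contribution to a broader area of the domain than refining a leaf.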
Journal of Classification – Springer Journals
Published: Mar 22, 2018