Journal of Legal, Ethical and Regulatory Issues (Print ISSN: 1544-0036; Online ISSN: 1544-0044)

Research Article: 2022 Vol: 25 Issue: 4S

Usefulness of Ranking Systems and Opportunities to Improve Research Quality

Rowaida Aqrabawi, Al-Ahliyya Amman University

Citation Information: Aqrabawi, R. (2022). Usefulness of ranking systems and opportunities to improve research quality. Journal of Legal, Ethical and Regulatory Issues, 25(S4), 1-9.

Abstract

University rankings can provide essential benefits. When handled sensitively, they tend to strengthen a culture of transparency and to enhance competition between universities, and they allow students to make informed decisions about university placement. Ranking systems also often prompt quality assurance procedures within universities. At the same time, rankings are intensely debated, particularly as to whether they appropriately serve the interests and needs of developing nations.

Keywords

Ranking Systems, Research Quality Performance, Academic Quality

Introduction

Concerns about the reproducibility and impact of research encourage improvement initiatives. Current university ranking systems assess and compare universities on measures of academic and research performance. Although often useful for marketing purposes, the value of ranking systems for analyzing quality and outcomes is unclear. This study aimed to assess the usefulness of ranking systems and to identify opportunities to support research quality and performance improvement.

Methods

A structured review of university ranking systems was conducted to explore research performance and academic quality measures. Eligibility requirements included: inclusion of at least 100 doctoral-granting institutions, current publication on an ongoing basis, inclusion of international universities, a rank calculation methodology published in English, and separately computed ranks. Ranking systems also had to include some measures of research outcomes. Indicators were abstracted and contrasted with basic quality improvement requirements. Analyses of aggregation methods, the validity of research and academic quality indicators, and suitability for quality improvement within ranking systems were also conducted.


Results

A total of 24 ranking systems were identified, and 13 eligible ranking systems were assessed. Six of the 13 rankings focus entirely on research performance. For those reporting weighting, 76% of the total rank is attributed to research indicators, with 24% attributed to academic or teaching quality. Seven systems rely on reputation surveys as well as faculty and alumni awards. Rankings influence academic choice, yet research performance measures are the most heavily weighted indicators. There are no commonly accepted academic quality indicators in ranking systems.

Discussion

No single ranking system provides a thorough assessment of research and academic quality. Using a combined approach of the Leiden, Thomson Reuters Most Innovative Universities, and SCImago ranking systems may give institutions more robust feedback for research improvement. Rankings that rely heavily on subjective reputation and "prosperity" indicators, such as award-winning faculty or alumni who are high-ranking leaders, are not appropriate for academic or research performance improvement initiatives. Future efforts should better investigate measurement of university research performance through comprehensive and standardized indicators. This paper can serve as a general literature reference when one or more university ranking systems are used in efforts to improve academic prominence and research performance.

Considering the value of university innovation, there is a pressing need for outcome studies and quality improvement initiatives in the research enterprise. Keupp, et al., (n.d.) point out that current innovation management is characterized by conflicting expectations, knowledge gaps, and theoretical inconsistencies. These issues may adversely affect the translation of academic research into discovery and tangible societal benefit. Research quality issues exist within university research; over the last 10 years, several studies and commentaries have highlighted the need for improvement in transparency, replicability, and meaningful reporting of research outcomes (Freedman, Cockburn & Simcoe, 2015; Minelli & Baio, 2015).

Many university administrators rely on university ranking systems as indicators of progress over time and in comparison to other institutions. Universities promote improvement in rankings as evidence of progress in the academic and research environments when requesting funding from government sources (Aguillo, 2011). Other universities use ranking systems as evidence of cost-benefit for newly funded initiatives and to support additional funding requests. Consumers use university rankings to assess higher education opportunities both nationally and internationally.

Previous reviews of university rankings found that an emphasis on reputation and institutional resources may not genuinely reflect university quality (Usher & Medow, 2009; Jöns & Hoyler, 2013). A review of five ranking systems by Usher & Medow (2009) focused on the suitability of rankings as representative of academic quality. Their findings show that ranking system indicators are not adequate for informing policy choices or consumer decisions. Proposed academic quality indicators include student entry measures, program completion rates, the proportion of graduates entering work upon graduation, professional training, higher degrees, and the average starting salaries of graduates. Shin & Toutkoushian (2011) concluded that publications and citations were not suitable indicators of scientific institutional worth. Their results suggest that other measures should be implemented when evaluating institutions for quality or for career choice.

Jöns & Hoyler (2013) most recently assessed five world ranking systems and concluded that, while ranking systems have improved in recent years, their tendency to be one-dimensional hinders a broader university assessment.

An assessment of the Shanghai and Times Higher Education rankings conducted 70 simulations to replicate the rankings; the results demonstrate that incorrect weights were used to calculate the overall scores (Pinar, Milla & Stengos, 2013). This lack of replicability underscores the need for ongoing research quality assessment and improvement. The reliability of research affects scientific credibility as well as effective innovation.

Assessment of the validity of research and academic quality indicators in university rankings is regularly neglected; only once in the literature were two ranking systems assessed in this way (Huang, 2012). Drawing on the widely cited definitions of validity by Carmines and Hammersley, validity is the extent to which a measuring instrument accurately represents those features of a phenomenon that it is intended to describe (Guest, MacQueen & Namey, 2012; Karros, 1997).

While academic institutions have an obligation to ensure that research processes and outcomes manage resources efficiently and wisely, standardized research performance assessment instruments for comparison across institutions do not currently exist. Academic institutions and administrators need sound assessment indicators of research and academic quality, and university ranking systems are regularly used for this purpose. The objective of this study is to assess the usefulness of ranking systems for both academic and research performance and quality improvement, through a systematic review of publicly available university ranking systems.

Methods

We conducted a systematic review of university ranking systems using the PRISMA protocol and checklist, and examined applicable measures to identify commonly used indicators for assessing research performance and innovation (Panic, Leoncini, De Belvis & Ricciardi, 2013). The review protocol for this study is available from the authors.

Eligibility Criteria

Ranking systems that include more than 100 doctoral-granting universities in their sample were eligible. Rankings had to be currently published on an ongoing basis and include international universities. Ranking systems were also required to publish their rank calculation methodology in English. Ineligible systems included rankings that were based solely on reputation surveys, did not include research outcome indicators, or ranked institutions solely by subject area.

Searches

A search of publicly available university ranking systems was undertaken between January and March 2017, using web searches and a literature review. Search terms included "university ranking," "research productivity," "measurement," and "ranking university research." Ranking system proprietors and Vice Presidents of Research Administration were also consulted. Our searches were not restricted to a specific field. Databases and search engines used included Web of Science (WOS) and Google Scholar (search strategy: "university ranking," all fields). To reduce selection bias, additional web searches were also conducted broadly with the same search terms to identify any further ranking systems.

Processing/Abstraction

The purpose of each ranking system and the methodologies for calculating ranks were obtained from published statements on each ranking system's website or from publicly available documentation on methodology. Terms such as "the objective of" or "purpose of" each ranking system were used to identify its stated purpose. All indicators stated by the ranking systems to assess research and academics were abstracted and compared across systems. The aggregation methodology was also abstracted and compared from the publicly available methodologies and results.

Analysis

Ranking systems were also assessed on their utility for institutional quality improvement, based on transparency of data and data analysis, consistency of the indicators used in rankings over time, and availability of institution-level data made accessible by the ranking system for others to replicate the ranking calculations.

Results

A total of 24 ranking systems were initially identified through searches. Thirteen ranking systems met the eligibility criteria and were published in 2015 or 2016. Excluded ranking systems were either no longer published, did not include research performance indicators, or did not publish their ranking methodologies. The number of institutions assessed ranges between 500 and 5,000. The oldest ranking system is the Carnegie Classification, established in 1973. All remaining ranking systems were first published between 2003 and 2015. Three ranking systems are administered by universities, two by publications or news agencies, five by consulting or independent groups, and one by a government-established entity.

The purpose of most ranking systems is to identify top institutions for consumers, to classify institutions by their research activity, and to compare institutions within countries and across the globe. Some ranking systems state that they do not intend for the data to be used to compare institution to institution, but rather to give an overall interpretation of each institution's yearly performance.

Four ranking systems explicitly state that their results are intended to assess research quality. The Shanghai and UMR rankings highlight their use in government cost-benefit analysis; RUR, Shanghai, UMR, and Times state that their ranking systems may be used to support government funding requests.

The Carnegie Classification explicitly states that its rankings are not intended to assess research performance. The Carnegie Classification system relies on research and development expenditure data in both STEM and non-STEM fields from the NSF Survey of Research and Development Expenditures at Universities and Colleges. Total staff working in science and engineering research are included from the NSF Survey of Graduate Students and Postdoctorates in Science and Engineering. No measures of research performance are evaluated. The UMR system likewise provides indicators of quality but leaves the definition of quality up to user preferences by allowing a selection of indicators to be chosen.

Nine systems used total publications as an indicator; this is ordinarily defined as the number of peer-reviewed articles indexed in either the Thomson Reuters Web of Science Core Collection database or SCOPUS, produced by Elsevier. On average, 33.8% of ranking scores are allocated to publications and citations or variations of these metrics. In many cases this is not dependent on first-author affiliation, meaning that articles can be counted more than once across different institutions in collaborative works. Peer assessment of both academic and research reputation and cumulative faculty awards together contribute 39.8% of the total ranking score among the systems that report weighting.
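To illustrate how such weighted composites are typically assembled, the sketch below combines indicator scores with fixed weights. The indicator names, weights, and values are hypothetical and do not reproduce any specific ranking system's methodology; they only show the arithmetic behind the weighting percentages discussed above.

```python
# Minimal sketch of a weighted composite score, assuming each indicator has
# already been normalized to a 0-100 scale. Indicator names and weights are
# hypothetical and not taken from any specific ranking system.

weights = {
    "publications": 0.20,
    "citations": 0.14,        # bibliometrics ~33.8% combined in our sample
    "reputation_survey": 0.30,
    "faculty_awards": 0.10,   # reputation + awards ~39.8% combined
    "teaching_quality": 0.26,
}

def composite_score(indicator_scores: dict[str, float]) -> float:
    """Weighted sum of normalized indicator scores (0-100 each)."""
    return sum(weights[name] * indicator_scores.get(name, 0.0)
               for name in weights)

example_university = {
    "publications": 78.0,
    "citations": 65.0,
    "reputation_survey": 54.0,
    "faculty_awards": 30.0,
    "teaching_quality": 71.0,
}

print(round(composite_score(example_university), 1))
```

Because the reputation-related weights dominate in many real systems, small changes in survey responses can move an institution's composite score more than large changes in measured output.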

Ranking systems that rely heavily on publication and citation metrics include the Leiden Ranking, Shanghai, SCImago, URAP, US News and World Report, and the EU U-Multirank systems. The Leiden Ranking provides size-dependent and size-independent variants of all indicators, with the exception of publication output. Citation indicators are additionally normalized for differences between scientific fields. Counting is conducted using both a full counting and a fractional counting method, wherein collaborative publications are given less weight than non-collaborative ones. An algorithm is applied to compute field-normalized impact indicators, as described by Perianes-Rodriguez & Ruiz-Castillo (2017). In the Shanghai ranking system, publications in Nature/Science and Nobel or Fields awards comprise half of the score, demonstrating reliance on highly selective indicators. Rankings are created by scoring the highest institution as 100 and the rest as a percentage of 100. URAP rankings are based entirely on publication and citation metrics, with scores normalized by field of study. CWUR is the only ranking system that uses the index developed by Hirsch (2010) to indicate the broad impact of a university's research based on performance and citation impact. For all but two ranking systems, Leiden and Carnegie, the data used in the calculations are not made available, making replication of the rankings impossible. Leiden and Carnegie both provide downloadable spreadsheets of the ranking indicator data.
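As a rough illustration of the two counting conventions mentioned above, the sketch below credits each publication either fully to every contributing institution (full counting) or in proportion to its share of institutions (fractional counting). The data are invented and the logic is a simplification of the Leiden methodology, which additionally applies field normalization.

```python
# Illustrative sketch of full vs. fractional counting of publications,
# assuming each paper lists the institutions of its authors. The data are
# invented; real ranking systems apply more elaborate normalization.
from collections import defaultdict

papers = [
    {"institutions": ["Univ A"]},                      # single-institution paper
    {"institutions": ["Univ A", "Univ B"]},            # two-way collaboration
    {"institutions": ["Univ A", "Univ B", "Univ C"]},  # three-way collaboration
]

full = defaultdict(float)
fractional = defaultdict(float)

for paper in papers:
    insts = paper["institutions"]
    for inst in insts:
        full[inst] += 1.0                      # each institution gets full credit
        fractional[inst] += 1.0 / len(insts)   # credit is split across collaborators

print(dict(full))        # {'Univ A': 3.0, 'Univ B': 2.0, 'Univ C': 1.0}
print(dict(fractional))  # Univ A ~1.83, Univ B ~0.83, Univ C ~0.33
```

Fractional counting reduces the advantage of institutions that appear on many large collaborative papers, which is why the Leiden Ranking reports both variants.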

Four systems incorporate at least one intellectual property indicator: CWUR, SCImago, CA, and UMR. The Clarivate Analytics Most Innovative Universities is the only ranking system heavily focused on intellectual property indicators and includes indicators based on independent empirical data. A patent success ratio is calculated from patent grants per application. Raw data are not available for validation and replication. The UMR and CWUR include patent applications. The one such performance indicator in SCImago relies on citation metrics (publications cited in patent applications).

Six systems incorporate academic quality through various indicators. The most common is the peer reputation survey, used by QS World, Times, US News and World Report, UMR, and RUR. Student/faculty ratio is used by each of these systems except the US News and World Report. Carnegie, Times, and the UMR also use the total number of doctoral degrees conferred when assessing academic quality. Diversity of faculty and students is likewise used by QS World, Times, UMR, and RUR as an indicator of academic quality.

In the SCImago ranking, web presence measured by Google metrics makes up 20% of the total score. Additionally, Webometrics includes all international universities that have a web presence. The objective is to encourage universities and staff to build their visibility through the number of webpages and external networks originating at institution websites. Citations and publications make up 40% of the score, based on the output of the most cited faculty.

Five ranking systems incorporate reputation surveys as a large part of the ranking calculation. The QS World ranking attributes half of the institution score to academic and employer reputation surveys. Research and academic reputation surveys contribute 33% of the Times ranking score.

An audit by PricewaterhouseCoopers was completed for this methodology, yet there is no independent validation of the self-reported data or explanation of the weighting of the indicator percentages. Raw data are not provided for independent replication or validation. The USN&WR Global Rankings combine surveys of global and regional research reputation (25% of the total score), the results of which are not publicly available.

Standardization and aggregation methods are used in different forms by the ranking systems. All of the assessed systems make efforts to standardize indicators by computing ratios according to faculty numbers or research expenditures. Others normalize citations by field of study to reduce the advantage of highly cited disciplines. Z-scores, fractional counting, and weighted subscales are also used to standardize the ranking scores.
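For readers unfamiliar with these techniques, the sketch below standardizes a raw indicator across institutions with a z-score and also rescales it relative to the top performer (the "highest institution = 100" convention noted earlier). The values are invented and serve only to illustrate the arithmetic.

```python
# Minimal sketch of two common standardization steps: a z-score across
# institutions and a "top scorer = 100" rescaling. Values are invented.
import statistics

citations_per_faculty = {
    "Univ A": 12.4,
    "Univ B": 8.1,
    "Univ C": 15.9,
    "Univ D": 6.3,
}

mean = statistics.mean(citations_per_faculty.values())
stdev = statistics.stdev(citations_per_faculty.values())

# Z-score: how many standard deviations each institution sits from the mean.
z_scores = {u: (v - mean) / stdev for u, v in citations_per_faculty.items()}

# Rescaling: express each raw value as a percentage of the highest value.
top = max(citations_per_faculty.values())
scaled = {u: 100 * v / top for u, v in citations_per_faculty.items()}

print(z_scores)
print(scaled)
```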

All ranking systems refine their analysis before each publication. No ranking system reports specific measures or analyses of its indicator validity. Leiden provides a stability interval to support each individual indicator.
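Leiden's stability intervals are derived by resampling the underlying publications. The sketch below shows one way such an interval could be approximated with a simple bootstrap over a hypothetical set of per-publication citation counts; it is an illustration of the idea, not the exact Leiden procedure.

```python
# Rough bootstrap sketch of a stability interval for a mean-citation indicator.
# The per-publication citation counts are invented; this is not the exact
# procedure used by the Leiden Ranking, only an illustration of the concept.
import random
import statistics

random.seed(42)
citations = [0, 3, 1, 7, 2, 15, 4, 0, 9, 6, 2, 11, 3, 5, 1, 8]

def bootstrap_interval(values, n_resamples=2000, alpha=0.05):
    """Percentile interval for the mean, from resampling with replacement."""
    means = []
    for _ in range(n_resamples):
        sample = random.choices(values, k=len(values))
        means.append(statistics.mean(sample))
    means.sort()
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

print(statistics.mean(citations), bootstrap_interval(citations))
```

A wide interval signals that the indicator is driven by a few highly cited papers, which is exactly the instability the Leiden intervals are meant to expose.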

Discussion

Administrators, funders, and consumers should look for rankings that are consistent over time, cover multiple domains of measurement, and are less reliant on peer reputation. Based on our results, reputation surveys, self-reported and unvalidated data, and non-replicable analyses make an impractical foundation for research improvement assessment and can lead to a wide range of institutional ranks. When rankings are used to support budget requests, or as evidence of return on investment, indicators that provide a balanced approach have the best chance of being truly representative.

When used in tandem, some ranking systems may offer more reasonable rigor and validity. Using the Leiden Ranking system, the Clarivate Analytics Innovation Ranking system, and the SCImago measures for systematic assessment and comparison may be a promising approach for research managers. The U-Multirank is the broadest of the systems examined, yet without the ability to track a university's performance over time rather than in broad categories, trend analysis becomes difficult.

We found that current ranking systems seldom incorporate the promotion of an innovation culture through patents or intellectual property disclosures. Increasing research outputs such as publications and patents can be easily manipulated to boost rankings without actually increasing the contribution to science (Destler, 2016; Bloche, 2016).

Eight of the thirteen systems include indicators to gauge academic quality. These are primarily centered on peer reputation, faculty achievement, student-to-staff ratios, and the total number of doctorates granted in both STEM and non-STEM fields. Valid measures of academic quality are not universally standardized (Usher & Medow, 2009). Many ranking systems are promoted for academic choice or comparison, yet these indicators do not adequately reflect the teaching and learning conditions of students.

Research expenditure is regularly used as an indicator of the strength and quality of an institution's research capabilities. However, no connection has been found between greater research expenditure and better-quality research. A Canadian assessment found a diminishing rate of return between the two factors, and in the US, NIH funding was significantly associated with increased publications, yet not with the development of novel therapeutics (Yin, Liang & Zhi, 2018; Bowen & Casadevall, 2015).

University rankings tend to focus on bibliometric sources that are biased towards English-language journals and are therefore not comprehensive or fully accurate. Peer reputation survey methodology is not published, nor are the data made available, and bias towards better-known institutions may be unavoidable. Also, measures such as the number of Nobel Prize winners can be considered "luxury" indicators, accessible to elite universities but out of reach and unmotivating for most other institutions.

In this review, we investigated the validity and suitability of ranking systems for research performance improvement. Clearly, there is a need for improvement in ranking methodologies. Applying organizational management principles may improve the validity and reliability of university ranking systems and assist with appropriate indicator choices.

We suggest that ideal ranking systems limit the weight of peer reputation to no more than 10% and meet rigor, transparency, and replicability criteria. Current methodologies depend on easily accessible output data sources; reliance on these measures perpetuates the view that a few approaches adequately represent scientific worth, quality improvement, and innovation performance. While we believe this represents a thorough analysis of suitable ranking systems, other institutions may rely on different systems. Consultation with ranking system developers and research administrators has provided support for the included list.
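As a sketch of how an administrator might operationalize these recommendations, the snippet below screens a ranking system's published weighting against the proposed 10% reputation ceiling and the transparency and replicability criteria. The field names and the example values are hypothetical; the thresholds are those proposed above.

```python
# Hypothetical screening of a ranking system against the criteria proposed
# above: peer-reputation weight capped at 10%, plus transparency and
# replicability of the underlying data. Field names are illustrative only.
from dataclasses import dataclass

@dataclass
class RankingSystem:
    name: str
    reputation_weight: float      # share of total score from reputation surveys
    methodology_published: bool   # is the calculation methodology public?
    data_downloadable: bool       # can others replicate the calculation?

def meets_recommendations(system: RankingSystem) -> bool:
    """True if the system satisfies the reputation cap and openness criteria."""
    return (system.reputation_weight <= 0.10
            and system.methodology_published
            and system.data_downloadable)

candidate = RankingSystem("Example Ranking", reputation_weight=0.33,
                          methodology_published=True, data_downloadable=False)
print(meets_recommendations(candidate))  # False: reputation too heavy, no raw data
```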

Conclusion

There is a need for a credible quality improvement movement in research that develops new measures and helps institutions assess and improve performance and societal worth. Quality over quantity should be emphasized to underpin research performance improvement initiatives and outcomes that benefit society through scientific discovery, economic outcomes, and public health impact. Current indicators are inadequate to accurately assess research outcomes and should be enhanced and extended to meet standardized criteria. We recommend that future research assess three components of research outcomes for evaluating research performance within an academic institutional setting: scientific impact, economic outcomes, and public health impact.

Research output is a function of the resources spent and the microeconomic incentive structure. Increased resources alone will not necessarily increase and improve academic output. A significant tool being deployed around the world, and discussed in this paper, is the development and implementation of strategies to assess research output. Many countries that perform well on measures of research output have techniques for assessing it. Countries that introduced these systems have subsequently strengthened them by introducing improved incentives. Policymakers in these countries evidently believe there are substantial benefits from assessing research output through enhanced incentives to produce high-quality work.

Assessments are important as incentives, but at a more fundamental level they provide information on research activity within a country. If there is no transparent and objective method of examining research activity, it is hard to determine whether the research system is "working" and where and how it could be improved.

References

Aguillo, I.F. (2011). University rankings: The web ranking. Higher Learning Research Communications, 2(1).


Bloche, M.G. (2016). Scandal as a sentinel event—Recognizing hidden cost–quality trade-offs. New England Journal of Medicine, 374, 1001–1003.


Bowen, A., & Casadevall, A. (2015). Increasing disparities between resource inputs and outcomes, as measured by certain health deliverables, in biomedical research. Proceedings of the National Academy of Sciences, 112, 11335–11340.


Buela-Casal, G., & Gutiérrez-Martínez, O. (2007). Comparative study of international academic rankings of universities. Scientometrics, 71, 349-365.


Destler, K.N. (2016). Creating a performance culture: Incentives, climate, and organizational change. The American Review of Public Administration, 46(2).


Forms Version. (2013). Application guide for NIH and other PHS agencies. U.S. Department of Health and Human Services-Public Health Service.


Freedman, L.P., Cockburn, I.M., & Simcoe, T.S. (2015). The economics of reproducibility in preclinical research. PLoS Biol, 13(6), e1002165.


Gast, J., Gundolf, K., & Cesinger, B. (2017). Doing business in a green way: A systematic review of the ecological sustainability entrepreneurship literature and future research directions. Journal of Cleaner Production, 147, 44-56.


Goodman, S., & Greenland, S. (2007). Why most published research findings are false: problems in the analysis. PLoS Med, 4(2), e28.


Guest, G., MacQueen, K.M., & Namey, E. (2012). Validity and reliability (credibility and dependability) in qualitative research and data analysis. Applied thematic analysis.


Hirsch, J.E. (2010). An index to quantify an individual's scientific research output that takes into account the effect of multiple coauthorship. Scientometrics, 85, 741-754.


Huang, M.H. (2012). Opening the black box of QS World University Rankings. Research Evaluation, 21(1), 71-78.


Jöns, H., & Hoyler, M. (2013). Global geographies of higher education: The perspective of world university rankings. Geoforum, 46, 45-59.


Karros, D.J. (1997). Statistical methodology: II-Reliability and validity assessment in study design, Part B. Academic Emergency Medicine, 4(2), 144-147.


Minelli, C., & Baio, G. (2015). Value of information: A tool to improve research prioritization and reduce waste. PLoS Med, 12(9), e1001882.


Panic, N., Leoncini, E., De Belvis, G., & Ricciardi, W. (2013). Evaluation of the endorsement of the preferred reporting items for systematic reviews and meta-analysis (PRISMA) statement on the quality of published systematic review and meta-analyses. PloS one, 8(12), e83138.


Perianes-Rodriguez, A., & Ruiz-Castillo, J. (2017). A comparison of the web of science and publication-level classification systems of science. Journal of Informetrics, 11(1), 32-45.


Pinar, M., Milla, J., & Stengos, T. (2013). Research discourses surrounding global university rankings: Exploring the relationship with policy and practice recommendations. Education Economics, 65, 709-723.


Salman, R.A.S., Beller, E., Kagan, J., & Hemminki, E. (2014). This week in medicine. The Lancet, 383(9912), 11–17.

Shin, J.C., & Toutkoushian, R.K. (2011). The past, present, and future of university rankings. University rankings.


Taffe, M.A., & Gilpin, N.W. (2021). Equity, diversity and inclusion: Racial inequity in grant funding from the US National Institutes of Health. ELife, 10, e65697.


Thakur, M. (2007). The impact of ranking systems on higher education and its stakeholders. Journal of Institutional Research, 13, 83-96.


Usher, A., & Medow, J. (2009). A global survey of university rankings and league tables. University rankings, diversity, and the new landscape of Higher Education, 32(1), 1-18.


Yin, Z., Liang, Z., & Zhi, Q. (2018). Does the concentration of scientific research funding in institutions promote knowledge output? Journal of Informetrics, 12(4), 1146-1159.


Zineldin, M., Akdag, H.C., & Vasicheva, V. (2017). Assessing quality in higher education: New criteria for evaluating students' satisfaction. Quality in higher education, 17(2), 231-243.


Received: 08-Feb-2022, Manuscript No. JLERI-21-9111; Editor assigned: 10-Feb-2022, PreQC No. JLERI-21-9111 (PQ); Reviewed: 23-Feb-2022, QC No. JLERI-21-9111; Revised: 07-Mar-2022, Manuscript No. JLERI-21-9111 (R); Published: 21-Mar-2022.
