Journal of Management Information and Decision Sciences (Print ISSN: 1524-7252; Online ISSN: 1532-5806)

Research Article: 2023 Vol: 26 Issue: 3S

Performance Management and the Pitfalls of Measurements

Harry Müller, Ludwigshafen University of Business and Society

Citation Information: Müller, H. (2023). Performance management and the pitfalls of measurement. Journal of Management Information and Decision Sciences, 26 (S3), 1-12.

Abstract

Evaluating performance factors by means of ‘performance controlling’ is of great importance for the management of operational value creation processes, but it is fraught with major problems of measurability. The present article examines this by reference to the higher education system, where the difficulties and limitations of performance measurements can be graphically described. Firstly, the role of performance controlling in general and its problems in practice are outlined. By reference to higher education, and specifically the areas of teaching and research, it is shown that defining good teaching and good research by using straightforward key ratios is often problematical. It is proposed that even apparently simple indicators and tried and tested approaches can lead to counter-productive incentives for lecturers and professors, and this in turn can lead to significant performance control problems. This is not merely the result of special factors in the higher education system; it indicates a fundamental problem of performance controlling. On this basis, the article calls for a more sensitive application of performance measurement methods, in private business as well, and especially in connection with the evaluation and controlling of sustainability.

Keywords

Performance Controlling, Performance Measurement, University Management, Rankings, Evaluation.

Introduction

Role of Performance Controlling

It is claimed that Peter Drucker encapsulated a central objective of business controlling and value-based management in the slogan: what gets measured gets managed. This statement suggests that analytical comprehension and an operational expression of a situation are necessary prerequisites for successful entrepreneurial management. In other words, operational functions should not only be studied by means of verbal analysis of their design and optimisation potential. According to this widespread business management approach, the results of this analysis must then be expressed in the form of numerical indicators which should preferably be scalable. In a management environment which is focussed on clearly defined goals, measurability becomes a requirement for designing business processes which will enable management by objectives (Greenwood, 1981; Odiorne, 1965). Setting goals and determining the extent to which they are achieved means that it must be possible to describe them in the form of indicators, or better still, key ratios.

This creates an interdependent relationship between measurement and management. What gets measured gets managed and therefore gets done (and there are alternative versions of the slogan quoted at the beginning of this article which state that what gets measured gets done). Key ratios and comparisons of goals and results can identify problem areas which need to be addressed. On the other hand, the elements of a problem need to be measurable so that the company management can recognise it as a problem. What cannot be expressed in the form of indicators often simply does not exist for the management. Situations which cannot be operationally described with key ratios tend to be ignored by the reporting system and are therefore not noticed at all by the management, or only as peripheral phenomena.

The central task of the controlling system in this context is initially to provide the management with relevant decision-making information. Even though some more recent controlling concepts go beyond this definition, for example the definition of controlling as a way of ensuring the rationality of the management (Weber & Schäffer, 1999), they nevertheless include it as a significant component. In deciding what subjects are suitable for observation and measurement, the traditional controlling concept is heavily weighted towards costs. The question of an appropriate treatment of overheads in every scenario of the business decision-making process has been at the heart of accounting systems since the days of Schmalenbach (1928), but the relationship between performance and revenue has traditionally been paid less attention. Even though it has not been completely neglected, performance controlling has received much less consideration in theory and practice than cost accounting.

In many cases, however, effective controlling of business results cannot be limited to just the turnover revenue; it also needs to generate deeper insights into the preparatory and auxiliary processes within the company. There is a wide variety of concepts on the cost side in areas such as in-depth deviation analysis, but there is often no such conceptual depth on the performance side, even though there have been a number of developments in the measurement of performance in the last few decades (Arnaboldi & Azzone, 2010; Gleich, 1997; Neely et al., 1995; Seiter, 2006; Simons, 2000).

At the level of reporting systems and key ratios, multi-dimensional approaches such as those in the established balanced scorecard developed by Kaplan & Norton (1996) are now normal. After all, a sensible pursuit of economic principles is impossible without considering performance factors, because focussing simply on cost factors would neglect a possible loss of quality on the performance side. It is therefore all the more important that the data basis for performance controlling should be critically examined and that its quality should be ensured. If performance measurement actually measures the wrong things, the key ratios and balanced scorecard could also lead to wrong management decisions.

Four conceptual measuring levels can be distinguished in performance measurement (Brown & Svenson, 1988; Weber & Großklaus, 1995; Weber & Schäffer, 2016):

1. Input

2. Process

3. Output

4. Outcome

The input factors which contribute to the production process have the most general link with the area of performance which is being considered here. These factors include topics such as working hours or the cost of materials. The process covers the proper implementation of the performance, for example whether all necessary working steps are carried out correctly. The output describes the actual result of the performance, for example the quantity and/or quality of a product or service. And the outcome is aimed at the goals and quantities which lie behind the performance, such as a long-term profitable and stable relationship with the customer.
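
To make the four measuring levels concrete, the following minimal Python sketch tags example indicators with the level they measure. It is purely illustrative; the class and field names are assumptions, not an established model from the literature.

```python
# A minimal, purely illustrative sketch of the four measuring levels.
# The class and field names are assumptions, not an established model.
from dataclasses import dataclass
from enum import Enum


class Level(Enum):
    INPUT = "input"      # factors fed into the process, e.g. working hours
    PROCESS = "process"  # proper execution, e.g. steps carried out correctly
    OUTPUT = "output"    # the direct result, e.g. units produced
    OUTCOME = "outcome"  # the goals behind the performance, e.g. customer loyalty


@dataclass
class Indicator:
    name: str
    level: Level
    unit: str


indicators = [
    Indicator("working hours", Level.INPUT, "h"),
    Indicator("work steps carried out correctly", Level.PROCESS, "%"),
    Indicator("units produced", Level.OUTPUT, "pieces"),
    Indicator("repeat-purchase rate", Level.OUTCOME, "%"),
]

for ind in indicators:
    print(f"{ind.level.value:>7}: {ind.name} [{ind.unit}]")
```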

The question of which level is appropriate for evaluation and controlling of a business process depends on the structure of the process itself, or more specifically: the degree to which the results are observable. A precise description of the outcome would of course be ideal here, but in practice it can only rarely be achieved. In businesses, measuring the output for specific products at the turnover level appears unproblematical, at least at first sight. But quantifying the outcome in the form of customer satisfaction and the potential for further profitable business relations is likely to be difficult, although it definitely appears to be worthwhile. But if we go back along the process chain it becomes apparent that preliminary processes in production, such as the quality of incoming parts or secondary business processes, are hardly observable at the output level. Here, it is necessary to refer to the process level or even the input level (Weber & Schäffer, 2016). It is no accident that the normal case in remuneration is a time-based wage, in other words a factor on the input level, because there is generally no rational way to observe the process quality, output or outcome of the performance rendered by the employees.

The great importance of efficient performance measurement indicators which take the situation into account becomes clear if we consider the tasks which performance controlling needs to fulfil. Performance measurement is a necessary basis for an effective planning and monitoring system, and thus a necessary prerequisite to create incentives to perform the necessary tasks economically, i.e. with an efficient ratio between goals and resources. This enables areas which have no market turnover and are therefore not accessible to direct monetary measurement to be assessed for goal-oriented management (Gleich, 1997; Greenwood, 1981; Rommelspacher et al., 2006). The formulation and calculation of performance factors also helps to provide a better analytical understanding of overheads and forms the basis for an allocation of costs. In addition, performance controlling helps us to recognise the operational control problems in the relevant areas. Performance measurements refer directly to business procedures and processes, whereas elements such as process costs only describe the cost-related consequences.

Measurement Problems

However, effective performance controlling requires suitable indicators which need not be identical with the performance itself, but must express it sufficiently. Potential indicators must meet three central requirements.

1. Observability

2. Validity

3. Incentive Compatibility

Observability means that it must be possible for the performance indicator to be observed and measured by an independent third party (i.e. not only by the person who renders the performance). Validity means that the measurement results must be reproducible and thus verifiable for third parties. Incentive compatibility, which in practice is the most demanding of the three requirements, means that the indicators must offer a suitable way to influence human behaviour effectively in the relevant performance area (i.e. to steer the performance in the desired direction). The central importance of this third requirement can be seen particularly clearly in the system-immanent control problems in the former Socialist centrally administered economies of Central and Eastern Europe (Lachmann, 2004). The negative control results which arose from the lack of incentive compatibility are sometimes characterised by using the pejorative term ‘tonnage ideology’.
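
As a purely illustrative sketch, the three requirements can be thought of as a screening filter applied to candidate indicators. The names below are invented, and in practice each flag would be the outcome of careful analysis rather than a ready-made attribute.

```python
# A hedged, illustrative sketch: screening candidate indicators against the
# three requirements. In reality each flag is the outcome of careful analysis,
# not a ready-made attribute.
from dataclasses import dataclass


@dataclass
class CandidateIndicator:
    name: str
    observable: bool            # measurable by an independent third party
    valid: bool                 # results reproducible and verifiable
    incentive_compatible: bool  # steers behaviour in the desired direction


def usable(ind: CandidateIndicator) -> bool:
    """An indicator is usable only if it meets all three requirements."""
    return ind.observable and ind.valid and ind.incentive_compatible


candidates = [
    # high tonnage was observable and verifiable, but rewarded weight
    # rather than useful output -- the 'tonnage ideology' problem
    CandidateIndicator("tonnes of steel produced", True, True, False),
    CandidateIndicator("hours worked", True, True, True),
]

for c in candidates:
    print(f"{c.name}: {'usable' if usable(c) else 'rejected'}")
```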

This measurement and incentive problem always exists where transactions are not guided by market mechanisms – both in broad areas of the national economy and within individual companies. Outside the commercial and industrial sector, for example in many liberal professions or in non-profit organisations and public companies, the input or process factors mentioned at the start of this article and the associated governance mechanisms tend to be more prevalent. In certain respects, the non-observability of performance in the case of ‘credence goods’ is one reason why such services are assigned to a non-market-based sector and, in some cases, strictly regulated. Examples include medical services, the activities of lawyers, the promotion of culture or the education sector.

Outside the non-profit and public sector, the internal preliminary processes and all secondary business processes in private business companies (i.e. personnel, marketing, finance etc.) are affected by this problem. Internal transfer prices do not solve the problem of measurability – on the contrary, they require the prior definition and measurement of valid performance units which must then, in a second step, be assigned a price value. In addition to these areas, the problem of indicators also arises in company-wide service functions such as quality management or in the systematic development of potential business opportunities and relationships in the company environment, for example with customers, potential employees, public relations etc.

The Example of Higher Education

The fact that the measurability of output indicators is not merely an academic question, and that the depth of the problems is often only understood on closer examination, can be graphically illustrated by considering the university system. The performance rendered in higher education institutions can fundamentally be sub-divided into two or three major areas:

1. Teaching

2. Research

3. Third Mission

Teaching is the means by which universities pass on knowledge to their students; research is the way universities generate new knowledge – with a greater focus on basic research in classical universities and more emphasis on application-based research in universities of applied sciences. The question of whether universities also have a ‘third mission’ in their interaction with society outside the teaching and research environment is controversial, and this question will not be examined further in this article. Here, readers are referred to the relevant literature which has arisen in connection with the Centre for Higher Education of the Bertelsmann Foundation (Hachmeister et al., 2016a; Hachmeister et al., 2016b; Roessler et al., 2015a; Roessler et al., 2015b; Roessler et al., 2016; Schneidewind, 2016) and the not unfounded criticism of this concept (Bacevic, 2017; Shore & McLauchlan, 2012; Watson & Hall, 2015). To examine the problems of performance controlling, the following text will concentrate on the areas of teaching and research, which are undisputed domains for higher education institutions.

The rationality of performance controlling in relation to the research and teaching performance carried out at higher education institutions is widely contested, but in many decision-making and leadership situations it is a necessary prerequisite for transparent management. To assess the quality of research and teaching, make academic appointments, award research or lecturing prizes, assess professorial and lecturer salaries and assign performance-based funds, it is essential to have reliable statements on the output that has been achieved. In addition, the results of performance controlling are also of interest to third parties such as students deciding where to study or companies looking for suitable research partners.

This information is provided in the form of rankings, which in the terminology of controlling are simply an aggregated report based on the results of performance controlling. These reports compile assessments for various dimensions of the performance, which are then used as the criteria for the ranking (Müller, 2013). Teaching-based rankings may be based on the results of evaluations or questionnaires on the quality of the teaching, or alternatively, as a popular method especially in English-speaking countries, they may be linked with the later earnings of the graduates, which then become a performance indicator for the quality of the teaching. In research, the criteria may be based on the number of publications, or the reception of these publications may be determined on the basis of citations. These two key ratios and the total amount of external funding obtained are commonly used as indicators for the quality of the research, although external funding is strictly speaking not an output but an input, and it can only indirectly be interpreted as a measure of performance, if at all.

Each of these ranking criteria can be considered at various observation levels, and these also determine the objects of the ranking. In addition to data capture and evaluation at the level of individual professors and lecturers, the results may also be aggregated at the level of the faculty, the university or the federal state in order to make a statement about the performance of a collective entity.
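
The aggregation logic itself is straightforward, as the following sketch with invented data and entity names shows: indicator values captured per professor are rolled up to the faculty and university level.

```python
# A minimal sketch of the aggregation described above: indicator values
# captured per professor are rolled up to faculty and university level.
# The data and entity names are invented for illustration.
from collections import defaultdict
from statistics import mean

# (university, faculty, professor) -> citation count, purely synthetic
records = [
    ("U1", "Business", "Prof A", 120),
    ("U1", "Business", "Prof B", 40),
    ("U1", "Law", "Prof C", 15),
    ("U2", "Business", "Prof D", 80),
]

by_faculty = defaultdict(list)
by_university = defaultdict(list)
for uni, fac, _prof, citations in records:
    by_faculty[(uni, fac)].append(citations)
    by_university[uni].append(citations)

for key, values in by_faculty.items():
    print(f"faculty {key}: mean citations {mean(values):.1f}")
for uni, values in by_university.items():
    print(f"university {uni}: mean citations {mean(values):.1f}")
```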

In the following sections, the examples of teaching and research will be examined to determine whether the standard indicators which are commonly used in practice, and in some cases have been established for decades, fulfil the above requirements. Do the indicators really measure what they claim to measure?

Teaching performance

The most important instrument of teaching evaluation used in practice in universities is probably the teaching quality evaluation feedback system, which is sometimes prescribed as a compulsory element. The established method here is a survey of students based on a standardised questionnaire. The results are obtained both for individual departments and faculties and also for whole universities, for example by the Centre for Higher Education (CHE) of the Bertelsmann Foundation. They provide an important basis for the CHE university ranking, which is published by the weekly newspaper DIE ZEIT and is said to have a significant influence on the choices of potential students. The significance and reliability of these rankings as reporting and key ratio systems is of course dependent on the quality of the underlying survey data.

In this connection, it is necessary to refer to a relevant study by Felton et al. (2008) which draws on an extensive statistical sample to demonstrate that the results of the teaching evaluation (‘overall quality’) are largely dependent on the ‘easiness’ of the courses (i.e. the students' perception of the ratio between the grades obtained and the amount of learning effort) and the personal attractiveness of the professors or lecturers (‘hotness’). This correlation has been confirmed in other studies (Freng & Webber, 2009; Spooren et al., 2013; Timmerman, 2008). As a generalisation, it can therefore be concluded that the results of the teaching evaluations do not exclusively reflect characteristics immanent to the teaching events, but that to a significant extent they also express the attractiveness of the teaching personnel and the demands made in examination situations.
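
The statistical core of such findings is a simple correlation analysis. The following sketch uses invented numbers (not the Felton et al. data) to show how a strong association between perceived easiness and ‘overall quality’ would surface.

```python
# A sketch of the kind of association reported by Felton et al. (2008),
# using invented numbers, not their data: evaluation scores that move
# with perceived 'easiness' produce a high Pearson correlation.
from statistics import correlation  # requires Python 3.10+

easiness = [2.1, 2.8, 3.0, 3.6, 4.2, 4.5]  # students' perceived easiness
overall = [2.4, 2.9, 3.2, 3.5, 4.4, 4.6]   # 'overall quality' ratings

r = correlation(easiness, overall)
print(f"Pearson r between easiness and overall quality: {r:.2f}")
# A high r would mean the evaluation partly measures course demands,
# not only teaching quality -- the validity problem discussed above.
```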

At first sight this result may simply appear bizarre, but problems arise first of all in connection with incentives for the teaching personnel. The realisation that students seem to prefer good-looking lecturers is not such a problem in relation to the possible incentives, because attractiveness largely refers to personal characteristics which cannot be significantly changed by the persons involved (such as height, facial features etc.). And this phenomenon is also a typical factor in many other professions (Biddle & Hamermesh, 1998; Frieze et al., 1991; Judge & Cable, 2004; Judge, Hurst & Simon, 2009).

A significantly greater problem results from the fact that lower standards in examinations tend to favour a positive evaluation of the teaching personnel. If professors and lecturers change in response to this incentive and reduce their demands in examinations, this can lead to significant disadvantages for the economy as a whole. The individual examination candidate may see this as an advantage which could be reflected in a positive evaluation. But in a wider context, it represents a negative external effect. There are individual economic benefits for the student in the form of less effort, and for the lecturer because of the better evaluation, but the national economy and society as a whole cannot have any interest in lowering the standards in university education and the examination system (Chan et al., 2007; Gaens, 2013; Pressman, 2007). Against this background, there are at least justified reasons to suspect that teaching evaluation based on student questionnaire data (in addition to the positive effects which undoubtedly exist and are not considered here) could also have negative effects from the perspective of society as a whole. In any event, it seems reasonable to consider whether more suitable indicators could be developed for this aspect of performance controlling, and how this could be done.

Research performance

Alongside teaching and lecturing, the area of research, i.e. the task of generating new knowledge, is the second main pillar of university work. Here, too, common methods to measure output have become established over the last few decades. Compared with the evaluation of the teaching, the focus here is more at the university level than the personal level, although methodically it is simply an aggregation. The results of the output measurement are analysed in terms of key ratios and consolidated into rankings which are then keenly studied both by other specialist researchers and by the wider public. Examples include the international ranking published by the science periodical Nature, the economist ranking of the F.A.Z. newspaper, the CHE ranking of the Bertelsmann Foundation, the QS World University Ranking, and in our subject area the business administration ranking published by the Handelsblatt (Dilger & Müller, 2016). In contrast with the teaching evaluation system, there is lively public debate about the validity and relevance of research performance rankings, with occasional prominent calls to boycott the system (Albers, 2009; Erne, 2007; Frey, 2003; Kieser, 2010a; Kieser, 2010b). These rankings have been controversially discussed in German-speaking academic circles, partly because, at least at universities, the results are even more closely linked with academic appointments and the career advancement of researchers than the teaching evaluation results, so the topic is more acutely relevant to career development and recruitment policies.

The performance indicators used for research relate to the number of publications and especially the key figures for citations (Hornbostel, 2006). Citations measure how often a publication is referred to in other publications, and this aims to show how influential the content is deemed to be. They give the publications a quality weighting, either at the level of individual publications or at the level of the periodical itself and its impact factor (Müller, 2012). Quantitative studies of the reception of scientific publications have even developed into a separate discipline, known as ‘bibliometrics’.
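
One of the best-known citation-based weightings, the two-year journal impact factor, illustrates how simple these key figures are at their core. The sketch below uses invented numbers.

```python
# An illustrative sketch of the standard two-year journal impact factor,
# one of the citation-based weightings mentioned above; figures are invented.
def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    """IF(y) = citations received in year y by items published in years
    y-1 and y-2, divided by the number of citable items from y-1 and y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years


# e.g. 300 citations in 2023 to the 150 articles published in 2021 and 2022
print(f"Impact factor: {impact_factor(300, 150):.2f}")  # -> 2.00
```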

The question of how the individual research performance can be expressed with the aid of bibliometric methods, and what other influencing factors contribute to the calculated results, has been examined for the German-speaking business administration field in a study by Müller & Dilger (2016). This study examines the publications by all university lecturers and professors in the German Academic Association for Business Research (VHB). In addition to the typically unequal distribution of publication activities and citations according to the Pareto principle (Pareto, 1964), the most notable result here was the correlation between the fields of research and the position of the researchers in the rankings. In the course of multivariate analysis, it was shown that the content focus of an author had a significant effect on the author's position in the rankings. It was found that this was due to different citation and publication cultures in the different subject areas of business administration, and these cultural differences must not be confused with differences in standards or quality. So it is neither an accident nor a quality indicator that the top positions in research performance rankings in business administration are disproportionately dominated by representatives of specific subject fields.
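
The distortion described here can be illustrated with a small sketch using invented data: ranking authors by raw citation counts favours high-citation subfields, while normalising each count by the subfield mean (one conceivable, hypothetical correction, not the method of the study) changes the order.

```python
# A hedged sketch of the field effect described above: raw citation counts
# favour authors from high-citation subfields, while normalising each count
# by the subfield mean changes the order. All data are invented.
from statistics import mean

authors = [  # (author, subfield, citations)
    ("A", "finance", 200), ("B", "finance", 150),
    ("C", "logistics", 60), ("D", "logistics", 20),
]

field_mean = {
    f: mean(c for _a, sf, c in authors if sf == f)
    for f in {sf for _a, sf, _c in authors}
}

raw = sorted(authors, key=lambda t: -t[2])
normalised = sorted(authors, key=lambda t: -(t[2] / field_mean[t[1]]))

print("raw ranking:     ", [a for a, _f, _c in raw])         # A, B, C, D
print("field-normalised:", [a for a, _f, _c in normalised])  # C, A, B, D
```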

Rankings undoubtedly reflect the quantity and quality of the scientific work of a university lecturer. But the order in the rankings is systematically distorted by the different subject areas, i.e. the rankings make a comparison between factors which are not equal. The question of whether someone in position 5 has achieved greater success than someone in position 105 simply cannot be answered by the ranking system. As a system of output evaluation and controlling, rankings create pseudo-objective results which may then be used to make real (wrong) decisions (Albers, 2009; Kieser, 2012; Müller, 2013; Müller & Dilger, 2016).

Conclusions and Parallels with Sustainability Controlling

The two examples from higher education which have been presented here will now be discussed in relation to the subject of performance controlling in general. On this basis, the following seven propositions are formulated:

1. The examples from research and university teaching highlight a general problem of performance controlling. This is not specific to higher education, it can occur whenever the result of a performance process needs to be measured in situations that do not involve market transactions. If no market-based turnover data is available, indicators are needed which could potentially be vulnerable to distortion.

2. Even apparently simple indicators or conventions that have been common practice for decades must be critically examined. Both the evaluations and the rankings are standard procedures that have been tried and tested, and the teaching quality evaluation process has been carried out thousands of times year after year.

3. Problematical performance indicators can create counter-productive incentives for those who render the performance, and this can lead to negative external consequences. If performance indicators only express part of the performance area, and especially if they do so in a distorted manner, the performance may then be reduced to the aspects covered by the indicators.

4. The real damage is caused by the counter-productive incentives. The problem is not that the measurements themselves may be irrelevant; it lies in the behavioural consequences of the incentives they create, and it is these consequences which harm the business and the wider economy.

5. Improved measurement techniques alone cannot solve this problem. A greater number of indicators or more differentiation and methodical sophistication in the measurement processes (e.g. by statistical normalisation of the identified distortions) do not automatically reduce the problem. The more highly differentiated the methods become, the more difficult it is to detect the problems in validity and incentives.

6. This does not lead us to demand a general elimination of performance controlling. This would only be justified if the damage caused by counter-productive incentives exceeds the benefits of controlling, and at least as a rule this does not seem to be true.

7. But judicious and discerning output controlling must distinguish between the goals of the controlling and evaluation process and possible counter-productive incentives. This requires an exact analysis of the indicators proposed in each individual case to assess their observability, validity and stimulus effects. Performance controlling with imperfect indicators is still sensible if positive results can be expected even taking counter-productive incentives into account.

With regard to the slogan what gets measured gets managed, these conclusions suggest that a certain degree of humility is necessary. The case studies from higher education show that, in addition to the positive effects which are not examined here, performance measurement may also lead to negative business and economic effects, and that the positive and negative outcomes must be carefully balanced against each other. The reasons for these problems do not lie in the specific circumstances of universities as an organisational form; they are more general in character. The output is not assigned inherent price indicators as part of a market process, so it must be quantified on the basis of indicators. But these indicators naturally cannot adequately capture amorphous performance concepts such as ‘good teaching’ or ‘relevant research’, so they invariably lead to problems.

These general findings can be transferred to numerous other situations, and these specifically include controlling and evaluation processes in private business companies. The question of how performance concepts which are naturally vague can be adequately transformed into operational statements is one of the central challenges of performance controlling. Transferring the concept of performance measurement into practical business use means facing up to the question of the observability, validity and incentive compatibility of performance indicators. In private business companies, this also often tends to happen without enough reflection, so the problems outlined here provide an extensive field for application-based research. The concept of performance measurement needs to be defined in more detail in relation to specific application situations, but it must also be critically examined for possible problems.

In relation to the propositions formulated above, parallels with the evaluation and monitoring of sustainability (‘sustainability controlling’) appear especially relevant (Colsman, 2016; Günther & Steinke, 2016; Schaltegger et al., 2006). In contrast with economic profitability goals, sustainability goals are multi-dimensional, often qualitative and frequently not scalable. However, enforcing sustainability goals in a company means that they must be defined in operational terms, in line with the quote from Peter Drucker at the start of this article. But the management and information systems used in sustainability controlling are particularly affected by the problems outlined in the seven propositions.

Firstly, from an economic perspective the sustainability problem arises from external effects. Not all consequences of a market transaction affect the market participants themselves, because ecological or social harm is often suffered by third parties, so sustainability goals must be deliberately taken into account by the company management (because they are not automatically achieved by simply pursuing economic goals). Pursuing sustainability goals therefore secondly requires an information system which supplements the market processes, and the indicators in this information system must be defined individually. Converting sustainability goals into indicators is a complex problem which has been addressed by various sustainability standards. Such initiatives include the OECD guidelines, the guidelines of the Global Reporting Initiative (1999), the UN Global Compact (2000), the Carbon Disclosure Project (2000) and the ISO standard 26000. But the world-wide distribution of these standards in business practice often takes place without considering the problem of measurability in individual cases, so the problem outlined in the second proposition remains.

Thirdly, this may then lead to counter-productive incentives which arise from measuring individual sustainability indicators. For example, if a company evaluates its environmental performance on the basis of pollution emissions and greenhouse gases, this can lead to deficits in the rationality of make-or-buy decisions unless the relevant emission values are measured for each purchased component on a cost centre basis (which is highly unlikely in practice). If supplier companies are only asked for certification of ecological performance (category-based measurement threshold) but internal company processes are quantified (metric measurement threshold), this will normally lead to a decision to outsource problematical value creation stages even though it is not clear whether this will actually reduce the level of net emissions. As a result, this could fourthly even lead to adverse selection if production steps are carried out by the companies which report environmental pollution at the lowest level of the scale. Such a misdirection would correspond to the harm described in proposition four, which results solely from the imperfection of the management and information system. In addition, there may be a conscious or unintentional failure to pursue social and environmental goals which were not covered in the necessarily limited catalogue of indicators. Just as student satisfaction is not the same thing as good teaching, better values in individual sustainability indicators are not the ecological and social ideal; they are merely an incomplete representation of this ideal.
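
The reporting asymmetry behind this incentive can be made explicit in a few lines. The following sketch, with invented figures, shows how outsourcing to a merely certified supplier lowers reported emissions even when real emissions rise.

```python
# A sketch of the mixed measurement problem described above: in-house
# emissions are measured metrically, while suppliers only show a pass/fail
# certificate, so outsourcing lowers *reported* emissions regardless of the
# real effect. All figures are invented for illustration.
IN_HOUSE_EMISSIONS_T = 100.0       # measured per cost centre (metric)
SUPPLIER_CERTIFIED = True          # category-based threshold: certified yes/no
SUPPLIER_REAL_EMISSIONS_T = 120.0  # unknown to the buyer's reporting system


def reported_emissions(outsourced: bool) -> float:
    """Only metrically measured in-house emissions enter the report;
    a certified supplier contributes zero to the reported figure."""
    return 0.0 if outsourced else IN_HOUSE_EMISSIONS_T


print("reported, in-house:  ", reported_emissions(False))  # 100.0
print("reported, outsourced:", reported_emissions(True))   # 0.0
print("real, outsourced:    ", SUPPLIER_REAL_EMISSIONS_T)  # higher than before
```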

Fifthly, in relation to sustainable management it must also be admitted that merely adding more indicators and improving the measuring techniques is unlikely to solve the problem. More data and greater differentiation of the data often impair the verifiability of the results. As the sixth proposition states, however, this is not an argument against the use of sustainability controlling.

There needs to be a sustainability controlling system because there are no alternative market instruments which could achieve the same goals, and in most cases it can be assumed that ecological and social goals would be pursued to a lesser extent without the use of relevant controlling instruments. However, it is certainly possible to imagine cases in which a unilateral concentration on one-dimensional controlling ratios could weaken the intrinsic motivation of the participants to deal with the multi-dimensional aspects of the problem. The problem of counter-productive incentives, which was discussed in relation to the third proposition, also underlines the call for greater sensitivity to the problems of observability, validity and incentive effects which may be associated with the key ratios used in performance measurement. So an analysis of the indicators used in performance controlling in higher education can provide valuable insights which are also applicable to social and ecological business management.

References

Albers, S. (2009). Misleading rankings of research in business. German Economic Review, 10(3), 352-363.

Arnaboldi, M., & Azzone, G. (2010). Constructing performance measurement in the public sector. Critical Perspectives on Accounting, 21(4), 266-282.

Bacevic, J. (2017). Beyond the third mission: toward an actor-based account of universities’ relationship with society. In H. Ergül & S. Coşar (eds.). Universities in the Neoliberal Era: Academic Cultures and Critical Perspectives, 21–39, London: Palgrave Macmillan.

Biddle, J. E., & Hamermesh, D. S. (1998). Beauty, Productivity, and Discrimination: Lawyers' Looks and Lucre. Journal of Labor Economics, 16(1), 172–201.

Brown, M.G., & Svenson, R.A. (1988). Measuring R&D Productivity. Research-Technology Management, 31(4), 11–15.

Chan, W., Hao, L., & Suen, W. (2007). A Signaling Theory of Grade Inflation. International Economic Review, 48(3), 1065–1090.

Colsman, B. (2016). Nachhaltigkeitscontrolling: Strategien, Ziele, Umsetzung. Wiesbaden: Springer Gabler.

Dilger, A., & Müller, H. (2016). Outputanalyse betriebswirtschaftlicher Fachbereiche: Ein zitationsbasiertes Ranking deutschsprachiger Hochschulen. In H. Ahn, M. Clermont, & R. Souren (eds.). Nachhaltiges Entscheiden: Beiträge zum multiperspektivischen Performancemanagement von Wertschöpfungsprozessen, 405–425. Wiesbaden: Springer Gabler.

Erne, R. (2007). On the use and abuse of bibliometric performance indicators: a critique of Hix's ‘global ranking of political science departments’. European Political Science, 6(3), 306–314.

Felton, J., Koper, P. T., Mitchell, J., & Stinson, M. (2008). Attractiveness, Easiness and Other Issues: Student Evaluations of Professors on Ratemyprofessors. com. Assessment & Evaluation in Higher Education, 33(1), 45–61.

Freng, S., & Webber, D. (2009). Turning up the heat on online teaching evaluations: Does “hotness” matter? Teaching of Psychology, 36(3), 189-193.

Frey, B.S. (2003). Publishing as prostitution? – Choosing between one's own ideas and academic success. Public Choice, 116(1-2), 205-223.

Frieze, I. H., Olson, J. E., & Russell, J. (1991). Attractiveness and income for men and women in management. Journal of Applied Social Psychology, 21(13), 1039-1057.

Gaens, T. (2013). Von einem, der auszog, einen Leistungsindikator zu erheben: Durchfallquoten und die Problematik ihrer Bildung. Das Hochschulwesen, 61(6), 200–206.

Gleich, R. (1997). Performance Measurement. Die Betriebswirtschaft, 57, 114–118.

Greenwood, R.C. (1981). Management by objectives: As developed by Peter Drucker, assisted by Harold Smiddy. Academy of Management Review, 6(2), 225-230.

Günther, E., & Steinke, K. H. (eds.). (2016). CSR und Controlling: Unternehmerische Verantwortung als Gestaltungsaufgabe des Controlling. Springer-Verlag.

Hachmeister, C. D., Möllenkamp, M., Roessler, I., & Scholz, C. (2016a). Katalog von Facetten von und Indikatoren für Forschung und Third Mission an Hochschulen für angewandte Wissenschaften. Gütersloh: Centrum für Hochschulentwicklung.

Hachmeister, C. D., Henke, J., Roessler, I., & Schmid, S. (2016b). Die Vermessung der Third Mission: Wege zu einer erweiterten Darstellung von Lehre und Forschung. Die Hochschule, 25(1), 7–13.

Hornbostel, S. (2006). Leistungsmessung in der Forschung. In Hochschulrektorenkonferenz (ed.) Beiträge zur Hochschulpolitik: Von der Qualitätssicherung der Lehre zur Qualitätsentwicklung als Prinzip der Hochschulsteuerung, 219-228, Bonn.

Judge, T. A., & Cable, D. M. (2004). The effect of physical height on workplace success and income: preliminary test of a theoretical model. Journal of Applied Psychology, 89(3), 428.

Judge, T. A., Hurst, C., & Simon, L. S. (2009). Does it pay to be smart, attractive, or confident (or all three)?: relationships among general mental ability, physical attractiveness, core self-evaluations, and income. Journal of Applied Psychology, 94(3), 742–755.

Kaplan, R.S., & Norton, D.P. (1996). The Balanced Scorecard: Translating Strategy into Action. Boston, MA: Harvard Business Press.

Kieser, A. (2010a). Unternehmen Wissenschaft? Leviathan, 38(3), 347-367.

Kieser, A. (2010b). Die Tonnenideologie der Forschung. Frankfurter Allgemeine Zeitung, 9th of June, N5.

Kieser, A. (2012). JOURQUAL - der Gebrauch, nicht der Missbrauch, ist das Problem. Oder: warum Wirtschaftsinformatik die beste deutschsprachige betriebswirtschaftliche Zeitschrift ist. Die Betriebswirtschaft, 72(1), 93–110.

Lachmann, W. (2004). Volkswirtschaftslehre 2: Anwendungen. Berlin, Heidelberg: Springer.

Müller, H. (2012). Zitationen als Grundlage von Forschungsleistungsrankings. Konzeptionelle Überlegungen am Beispiel der Betriebswirtschaftslehre. Beiträge zur Hochschulforschung, 34(2), 68-92.

Müller, H. (2013). Zur Ethik von Rankings im Hochschulwesen: Eine Betrachtung aus ökonomischer Perspektive. Hochschulmanagement, 8(2/3), 41–46.

Müller, H., & Dilger, A. (2016). Wie der Forschungsschwerpunkt den Zitationserfolg beeinflusst: Eine empirische Untersuchung für die deutschsprachige BWL. Betriebswirtschaftliche Forschung und Praxis, 68(1), 36-52.

Neely, A., Gregory, M., & Platts, K. (1995). Performance measurement system design: A literature review and research agenda. International Journal of Operations & Production Management, 15(4), 80-116.

Odiorne, G. S. (1965). Management by Objectives: A System of Managerial Leadership. New York: Pitman.

Pareto, V. (1964). Cours d'Économie Politique. Geneva: Librairie Droz.

Pressman, S. (2007). The Economics of Grade Inflation. Challenge, 50(5), 93–102.

Roessler, I., Duong, S., & Hachmeister, C. D. (2015a). Teaching, Research and more!? Achievements of Universities of applied sciences with regard to the society: Third Mission at UAS. Gütersloh: Centrum für Hochschulentwicklung.

Roessler, I., Duong, S., & Hachmeister, C.D. (2015b). Welche Missionen haben Hochschulen?: Third Mission als Leistung der Fachhochschulen für die und mit der Gesellschaft. Gütersloh: Centrum für Hochschulentwicklung.

Roessler, I., Hachmeister, C. D., & Scholz, C. (2016). Positionierung durch Profilierung-Stärkung der Third Mission an HAW. Gütersloh: Centrum für Hochschulentwicklung.

Rommelspacher, J., Burmester, L., & Goeken, M. (2006). Performance-Measurement- und Analyse-Konzepte im Hochschulcontrolling. In H.-D. Haasis, H. Kopfer, & J. Schönberger (eds.). Operations Research Proceedings 2005: Selected Papers of the Annual International Conference of the German Operations Research Society (GOR), 539–544, Berlin, Heidelberg: Springer.

Schaltegger, S., Bennett, M., & Burritt, R. (2006). Sustainability accounting and reporting. Ecoefficiency in industry and science. Dordrecht: Springer.

Schmalenbach, E. (1928). Buchführung und Kalkulation im Fabrikgeschäft. Leipzig: Gloeckner.

Schneidewind, U. (2016). Die "Third Mission" zur "First Mission" machen? Wuppertal: Wuppertal Institut für Klima, Umwelt, Energie.

Seiter, M. (2006). Performance Measurement. Wissenschaftsmanagement, 12(5), 36–38.

Shore, C., & McLauchlan, L. (2012). ‘Third mission’ activities, commercialisation and academic entrepreneurs. Social Anthropology, 20(3), 267–286.

Simons, R. (2000). Performance Measurement & Control Systems for Implementing Strategy: Text & Cases. Upper Saddle River, NJ: Pearson Education.

Timmerman, T. (2008). On the validity of RateMyProfessors.com. Journal of Education for Business, 84(1), 55-61.

Watson, D., & Hall, L. (2015). Addressing the elephant in the room: are universities committed to the third stream agenda. International Journal of Academic Research in Management, 4(2), 48–76.

Weber, J., & Großklaus, A. (1995). Kennzahlen für die Logistik. Stuttgart: Schäffer-Poeschel.

Weber, J., & Schäffer, U. (1999). Sicherung der Rationalität von Führung als Funktion des Controlling. Die Betriebswirtschaft, 59(6), 731-747.

Weber, J., Schäffer, U., & Binder, C. (2016). Einführung in das Controlling: Übungen und Fallstudien mit Lösungen. Schäffer-Poeschel.

Received: 21-Jan-2023, Manuscript No. JMIDS-23-13202; Editor assigned: 23-Jan-2023, Pre QC No. JMIDS-23-13202(PQ); Reviewed: 06-Feb-2023, QC No. JMIDS-23-13202; Revised: 09-Feb-2023, Manuscript No. JMIDS-23-13202(R); Published: 23-Feb-2023
