Summary by Eileen Z. Taylor
Ph.D. Program in Accounting
University of South Florida, Spring 2004
This paper is a commentary that summarizes studies that have used 'reliance on accounting performance measures' (RAPM) as a central variable. Otley and Fakiolas note that research measuring RAPM has not produced a consistent stream of results, and they submit that this inconsistency stems directly from errors in the measurement and identification of the construct. The paper reviews the history of the RAPM construct, groups and critiques past research on the variable, and offers suggestions for future measurement.
Review of Original Study
The authors trace the measurement of evaluative style, also known as RAPM, to Hopwood's (1972)1 original research. In that study, three styles were identified, based on the degree of reliance placed on meeting budgets.
Budget-Constrained (BC) represented a rigid application of budget-based performance evaluation: no excuse was sufficient for failing to meet the budget.
Profit-Conscious (PC) style was still concerned with meeting budgets, but offered a more flexible approach. In this case, reasonable explanations for deviating from budgets were acceptable. The budget was still important, but it was relegated to a lower level and became just a part of the overall evaluative system.
Non-Accounting (NA) encompassed a heterogeneous variety of other styles, none of which prioritized meeting budgets. Given the heterogeneity of the category, Hopwood did not study it intensively.
A fourth classification, Budget-Profit (BP), was later added. It represented an intermediate point between BC and PC but was not used extensively.
Hopwood used a survey methodology to evaluate the perceived style used by an individual's manager. The questions used by Hopwood (1972) are reproduced in Figure 1 of the paper, along with the questions from related studies by various authors, allowing the reader an objective comparison.
The measurement of these items in the Hopwood (1972) article was twofold: the items were both rated for importance on a five-point Likert scale and ranked. The three highest-ranked items were used to classify evaluative style.
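The two-stage classification can be sketched as follows. Note that the item wordings and the decision rule here are hypothetical simplifications for illustration only; the actual questions appear in Figure 1 of the paper, and Hopwood's precise classification procedure is not spelled out in this summary.

```python
# Illustrative sketch of Hopwood-style classification from survey rankings.
# The item wordings and the decision rule are hypothetical simplifications;
# the actual questions appear in Figure 1 of the paper.

def classify_style(rankings):
    """rankings: evaluation criteria ordered from most to least important.
    (Hopwood also collected 1-5 Likert importance ratings for each item,
    but this simplified rule classifies from the top three ranks alone.)"""
    top_three = set(rankings[:3])
    budget = "meeting the budget"                        # hypothetical wording
    profit = "concern with long-run costs and profits"   # hypothetical wording
    if budget in top_three and profit not in top_three:
        return "BC"  # Budget-Constrained: rigid emphasis on the budget
    if budget in top_three and profit in top_three:
        return "PC"  # Profit-Conscious: budget matters, but flexibly
    return "NA"      # Non-Accounting: budget not a top criterion

rankings = ["meeting the budget",
            "cooperation with colleagues",
            "meeting output targets",
            "concern with long-run costs and profits"]
print(classify_style(rankings))  # -> BC under this simplified rule
```

The point of the sketch is that the classification depends on the rankings, not on the Likert ratings directly; this distinction matters for the group-two studies discussed below.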
Otley and Fakiolas sort subsequent RAPM studies into four groups. The first group represents the closest replications of the Hopwood study. The second group differs in how the items are measured. The third group differentiates between quantitative and qualitative data, and the fourth looks at objective versus subjective differences in accounting data.
Group-one studies include Otley (1978), which failed to replicate Hopwood's results; however, Otley changed the context. Otley concludes that Hopwood's results were driven by an "inappropriate match between accounting data, which assumed independence, and an operating reality, which was highly interdependent." (p. 501)
Brownell (1982), Brownell and Hirst (1986), Hirst (1987), and Pope and Otley (1996) are also included in group one. The current authors conclude that although these papers remain closely related to Hopwood's original questions, variations in wording, methodology (relevance to the managers questioned), and the merging of classifications all limit the ability of these studies to adequately and validly measure RAPM.
The group-two studies also used questions similar to Hopwood's; however, the scoring of the results differs significantly. Rather than using a ranking, these studies rely on the Likert importance ratings to measure the underlying construct.
Brownell (1985), Brownell (1987), Brownell and Dunk (1991), Dunk (1989a,b) and Dunk (1990) are grouped together. They used a summation technique rather than a contrast as used by Hopwood. The validity of this approach is suspect, as many of the studies did not also report evidence of correlations between the items. Otley and Fakiolas posit that the construct these studies measured was not the one Hopwood originally intended.
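The difference between the two scoring approaches can be illustrated with a small sketch. The item names and groupings below are hypothetical, but the example shows why a summation score and a Hopwood-style contrast can treat the same respondents very differently, which is the heart of the validity concern.

```python
# Hypothetical Likert importance ratings (1-5) for two respondents.
# A summation score (as in the group-two studies) adds up budget-item
# ratings; a contrast (closer to Hopwood's intent) compares budget
# items against profit items. Item names here are invented examples.

def summation_score(ratings, budget_items):
    # Group-two style: total importance attached to budget items.
    return sum(ratings[i] for i in budget_items)

def contrast_score(ratings, budget_items, profit_items):
    # Hopwood-style contrast: budget emphasis relative to profit emphasis.
    return (sum(ratings[i] for i in budget_items)
            - sum(ratings[i] for i in profit_items))

a = {"meeting the budget": 5, "budget attainment": 5,
     "long-run profit": 5, "cost control": 5}
b = {"meeting the budget": 5, "budget attainment": 5,
     "long-run profit": 1, "cost control": 1}
budget = ["meeting the budget", "budget attainment"]
profit = ["long-run profit", "cost control"]

# Both respondents look identical under summation (10 each), but the
# contrast separates them: A weights everything equally (contrast 0),
# while B relies on budgets over profits (contrast 8).
print(summation_score(a, budget), contrast_score(a, budget, profit))  # 10 0
print(summation_score(b, budget), contrast_score(b, budget, profit))  # 10 8
```

Under summation, a respondent who rates everything as important is indistinguishable from one who emphasizes budgets alone, which is why the authors question whether the summed measure captures Hopwood's construct.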
The authors designate a third group of studies, which includes Hirst (1983) and Hirst and Yetton (1984). These studies sought to develop an instrument usable in multiple settings (not just manufacturing) and arrived at a focus on the difference between quantitative accounting data and qualitative measures. This work is also the source of the term RAPM (reliance on accounting performance measures), which has become synonymous with 'evaluative style'.
The authors claim that while the Hirst studies may indeed be useful for evaluating reliance on quantitative measures, such measures are not necessarily accounting measures. As such, the instrument deviates from Hopwood's original intention.
Finally, the fourth group includes Govindarajan (1984) and Govindarajan and Gupta (1985). Their approach consisted of the use of "a single item instrument to measure the relative reliance on objective versus subjective approaches..." (p. 505). Although they measure the basis for evaluation as budget, profit, or non-accounting based, their measure suffers because it is a single item, and because it asks about bonuses, thereby limiting its use to firms that offer bonuses as a central reward mechanism.
Otley and Fakiolas then put forth a suggestion for RAPM measurement that they have developed and statistically validated. They discuss the applied validation procedures, and possible implications of the use for such an instrument. They also call for more statistical rigor in the study of RAPM.
Overall, this paper provides a clear history of the measurement of RAPM. Although the study is limited to the measurement of a single variable, it is instructive to see how such measurements develop over time. The paper cautions researchers to take care when adopting or adapting an instrument from a prior study, and reminds them to establish validity when using instruments to measure unobservable constructs.
1 Hopwood, A. G. 1972. An empirical study of the role of accounting data in performance evaluation. Journal of Accounting Research (Empirical Research in Accounting: Selected Studies): 156-182.