Summary by Rosalyn Mansour
Ph.D. Program in Accounting
University of South Florida, Spring 2004
The balanced scorecard (BSC) is a strategic tool that assists management by measuring organizational performance. A scorecard can contain a large number of individual measures, which can cause managers to experience information overload as they try to understand what the measures mean and how they relate to one another. The BSC's organization around four categories (financial, customer relations, internal business processes, and learning & growth) helps ameliorate this problem because grouping related measures together makes their relationships apparent.
Prior research suggests that, if correlated performance measures are not grouped together or otherwise flagged as related, managers will not discount the weight they place on those measures when making a judgment. In other words, "judgmental discounting for information redundancy does not occur unless the judge is alerted to the presence of such relations (p. 533)." This implies that judgments would differ between managers who receive a categorized list of balanced scorecard measures and those who receive an unorganized list. However, the authors argue that the pattern of performance matters:
1. “judgments are likely to be moderated when multiple above-target (or below-target) measures are contained in a single BSC category but,
2. “judgments are unlikely to be affected when multiple above-target (or below-target) measures are distributed throughout the BSC categories (p. 533).”
To test these predictions, experiment 1 was conducted on 78 MBA students who were given case materials and asked to assume the role of a senior executive. Participants evaluated two managers, each running a division of a company, and rated each manager's performance on a 0 to 100 scale (from worst to best). Each participant was given a set of performance measures, an explanation of how the measures were calculated, and a corresponding target for each measure. The organization of the performance measures (categorized into BSC format vs. not categorized) was manipulated between subjects, as was the presentation order (first vs. second) of the two divisions. There was also one within-subjects manipulation: one division was above and the other below its customer relations targets. Otherwise the two divisions' performance measures were identical; only the four customer relations measures differed. The dependent variable was respondents' ratings of managerial performance.
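The logic of this mixed design can be sketched in a few lines: the key question is whether the within-subject gap between ratings of the above-target and below-target divisions is wider under the BSC format than under the unorganized format (a format × performance-pattern interaction). The ratings below are purely illustrative placeholders, not the paper's data.

```python
# Sketch of experiment 1's design: 2 formats (between subjects) x
# 2 division performance levels (within subjects). All rating values
# are hypothetical, chosen only to illustrate the predicted pattern.
formats = ["BSC", "unorganized"]

# Hypothetical mean ratings (0-100 scale) per cell. Assumed pattern:
# the BSC format widens the above/below-target rating gap because the
# four differing customer relations measures sit in one category.
ratings = {
    ("BSC", "above"): 72, ("BSC", "below"): 48,
    ("unorganized", "above"): 65, ("unorganized", "below"): 57,
}

# Within-subject gap (above-target minus below-target) per format.
gaps = {f: ratings[(f, "above")] - ratings[(f, "below")] for f in formats}
print(gaps)  # {'BSC': 24, 'unorganized': 8}

# The format x performance interaction is the difference of the gaps;
# a nonzero value means format moderates the relative evaluations.
interaction = gaps["BSC"] - gaps["unorganized"]
print(interaction)  # 16
```

Under this illustrative pattern the interaction is positive, mirroring the paper's finding for experiment 1; experiment 2's distributed-measures pattern would correspond to roughly equal gaps across formats and an interaction near zero.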
The ANOVA results for experiment 1 appear in a table on page 536 ("ANOVA Results for Experiment One - Manager Evaluations"); the table itself is not reproduced in this summary.
The authors state that this shows "information's organization affects the relative evaluations of the managers for this pattern of performance results where particularly positive/negative performance is concentrated in one BSC category (p. 537)." A supplemental analysis examined a memo each subject completed to justify his or her evaluation. Subjects who received no categorization of the performance measures mentioned an average of 22.6 individual performance measures, while the group that received the BSC categorization mentioned an average of 18.7.
Experiment 2 was a repeated-measures experiment in which a group of graduate managerial accounting students essentially repeated experiment 1, except that the performance measures provided were changed. Instead of varying the four customer relations measures between divisions, the above/below-target measures were changed so that they related to other BSC categories as well. Results were similar to experiment 1, except that "with this pattern of performance results (i.e., with the above/below-target measures distributed across BSC categories), the BSC format did not affect the evaluations of the managers," and furthermore, "dependent on the pattern of performance results, organizing measures into the BSC can affect managerial judgments (p. 538)."