The Use of Performance Information by Local Government Stakeholders in Tanzania

Abeid Francis Gaspar1, Tausi Ally Mkasiwa1

1Department of Accounting and Finance, Institute of Finance Management (IFM), Dar es Salaam, Tanzania

Abstract

This paper investigates the use of performance information by local government stakeholders in Tanzania. The use of performance information has been explored at various levels, such as citizens/communities, local government managers/officials, councillors, legislators, and other decision makers. It was explored using interviews, document analysis and observation in four Tanzanian Local Government Authorities (LGAs). The principal research findings reveal that some performance information may not be collected at all, despite being advocated in the rules and regulations issued by oversight bodies; some may be collected for its intrinsic value; and some may be collected and used for legitimacy and efficiency purposes. It is also revealed that stakeholders' power and interest influence the collection and/or use of performance information for efficiency and/or legitimacy purposes. Unlike previous studies, this paper explores the use of performance information by integrating a number of stakeholders, such as councillors, local government officials, the central government, and Parliament. From a policy perspective, the findings of this study can provide additional insights to reformers and governments in their quest for NPM reforms and their implications for efficiency and legitimacy.

Cite this article:

  • Gaspar, A. F., & Mkasiwa, T. A. (2014). The Use of Performance Information by Local Government Stakeholders in Tanzania. Journal of Finance and Accounting, 2(3), 51-63.


1. Introduction

Performance measurement has been at the core of management reforms mainly due to the need to enhance accountability (Julnes 2006). It has been promoted all over the world as an important tool to improve efficiency, effectiveness, and organizational responsiveness to changing circumstances and customer expectations (Halachmi 2012). Similarly, concern with government performance has existed for a long time (Steinberg 2009). Performance reports are one of the lynchpins of government accountability (McDavid and Huse 2012). Governments around the world have made large investments to develop performance measurement systems, frequently related to notions of accountability (Kloot 1999). New Public Management (NPM) has broadened the concept of accountability to include accountability for performance, which is viewed as the requirement to explain or justify what has been done, what is being done and what has been planned (Eden and Hyndman 1999, Kluvers 2003). NPM has been introduced to solve the problem of inefficiency because of its emphasis on managing outcomes and results as one of the ways to improve performance in the public sector (Arnaboldi and Azzone 2010, Aziz et al. 2012). Public sector entities are required to demonstrate accountability, and managerial systems are seen as one way in which public sector entities can legitimize their operations (Torres et al. 2010).

Many of the recent NPM reforms have emphasized the need for the use of performance information in discharging accountability (Eden and Hyndman 1999). The interest in accountability and performance measurement can be traced back to the latter part of the 19th century (Julnes 2006). However, whether intentionally or unintentionally, it is highly unlikely that the outcomes of NPM and performance measurement will be significant gains for any of the crucial stakeholders in public service provision (Adcroft and Willis 2005). Nevertheless, external legitimacy is crucial because organizations can fail when they lose external legitimacy with key stakeholders, particularly resource providers (Lawton et al. 2000). Public sector organizations are accountable to government, to the public, and to other potential stakeholders for the resources entrusted to them (Eden and Hyndman 1999). The potential stakeholders of a public sector organization may include local citizens, clients, consumers, users, customers of the service producers, the media, elected representatives, the central government, regulatory agencies, managers, and employees (Rantanen et al. 2007).

In Tanzania, the LGA’s stakeholders include local communities, the Ministry of LGAs – Prime Minister’s Office - Regional Administration and Local Government (PMO-RALG), other ministries, such as the Ministry of Education and Vocational Training and the Ministry of Finance and Economic Affairs, Development Partners (DPs), Councillors, the Regional Secretariats (RS, which are central government bodies at a regional level), the Parliamentary Local Authorities Accounts Committee (LAAC), and the Parliament.

Section 18 (1) (b) of the Finance Act (2001) of Tanzania requires government entities, such as Ministries, Departments and Agencies (MDAs) and Local Government Authorities (LGAs), to prepare a statement detailing the classes of outputs and performance criteria to be met when providing those outputs. The budgeting guidelines, which are issued yearly, require preparation of a performance outcome report at the end of the Strategic Planning cycle. The outcome reports are required to focus on assessing the degree to which the institution is meeting its planned objectives or outcomes, as documented in the Strategic Plan. LGAs are also required to submit quarterly progress reports and annual performance reports to the PMO-RALG and other relevant authorities, such as the Ministries. On the other hand, the Local Government Development Grant (LGDG) system, which was introduced in Tanzania in the Financial Year 2004/2005, links the financing of the LGAs with the level of achieved performance. The achieved level of performance of an LGA is determined through the annual evaluation of actual performance against the prescribed performance measures.

Previous studies on the use of performance information have considered a single stakeholder, such as councillors (Askim 2007, 2008, 2009), legislators (McDavid and Huse 2012, Raudla 2012), local government managers/officials (Ammons and Rivenbark 2008, Charbonneau 2010), and citizens/local people (Cohn Berman 2008). The purpose of this paper is to explore the use of performance information by several LGA stakeholders. In so doing, the research attempts to advance knowledge about the interplay between different stakeholders in the use of performance information in LGAs and its impact on the organizations. As Brignall and Modell (2000) expressed it, “it would thus appear appropriate to shift the attention to the power and pressures exerted by different groups of stakeholders and how these affect the use of performance information in organizations” (p. 282). The key research question addressed by this research is:

How do the local government stakeholders use performance information?

Data were collected from multiple sources, including interviews, observation, and documentary analysis. Institutional theory was employed to explore further the main findings which emerged from this qualitative interpretive study. The main findings of this study reveal that some performance information may not be collected at all, despite being advocated in the rules and regulations issued by oversight bodies; some may be collected for its intrinsic value; and some may be collected and used for legitimacy and efficiency purposes.

The remainder of the paper is structured as follows. In the next section, previous research is presented. This is categorized into three areas: the use of performance measurement and performance information, local government stakeholders, and performance measurement and institutional theory. This is followed by a section about the methodology of the research. Then the findings of the study are presented and discussed, using institutional theory to develop the empirical findings in a more generalisable way. Finally, the concluding section summarises the study and outlines some implications for future research into performance measurement.

2. Prior Research

2.1. The Use of Performance Measurement and Performance Information

Performance measurement is widely used within public sector organizations, but there is a lack of evidence regarding its usefulness (Propper and Wilson 2003). Performance information is produced and published within management control processes, and the products take the form of statistics, strategies, budgets, analyses, annual reports, press releases and media articles (Julnes 2006). There are different opinions about the usefulness of performance measurement practices in public sector organizations. Performance measurement is argued to be used in contemporary organizations in order to change, or maintain, the status quo (Vakkuri and Meklin 2006).

On one hand, performance information is used in the monitoring of programs, reporting to elected officials, reporting to internal management, and reporting to citizens or the media (Julnes 2006). It is believed to be essential for managerial control and informed decision making (Wisniewski and Stewart 2004). Performance measurement judges the performance of the program or organization (Guthrie and English 1997). The use of performance measures has increased across a range of dimensions. Financial and non-financial performance measures are calculated and used in the management process to improve the performance of both individuals and the councils themselves (Kloot 1999). For example, performance measurement can assist appraisal by third parties and the appraisal of members of staff, particularly when the work allows for standardization (de Bruijn 2002, Economic Commissions for Africa 2003, Van Dooren 2005, De Bruijn 2007). Performance measurement can also be used in resource allocation and budgetary decision-making through performance based budgeting (Melkers and Willoughby 2005, Van Dooren 2005, Julnes 2006). It also provides input for budgeting by providing justification of the use of resources and development of cost targets. However, if performance information is not included in deliberations about the budget, or not used as an input for appropriation decisions, one may question whether the costs of producing performance information really outweigh the benefits (Raudla 2012). Raudla argued that legislators make only limited use of the formal documents containing performance information. Instead, they rely, for the most part, on informal social networks for gathering the information they consider necessary for budget discussions.
Performance measurement provides input into planning, monitors the implementation of organisation plans, and determines when the plans are unsuccessful, and how to improve them, through facilitating choice between alternative strategies, determination of priorities and changes in policy directions (Guthrie and English 1997, Kloot 1999, Van Dooren 2005, Julnes 2006).

Performance measurement is also used to understand how process performance affects organisational learning (Kloot 1999, Economic Commissions for Africa 2003, Melkers and Willoughby 2005, De Bruijn 2007). It can bring transparency, be used in communication, and in positive/negative sanctioning (de Bruijn 2002, Economic Commissions for Africa 2003, Melkers and Willoughby 2005, De Bruijn 2007). Performance measurement can also be an incentive for output (de Bruijn 2002). In other operations, performance measurement can help to identify operational problems, which can be solved by adjusting existing processes, and can indicate more fundamental problems which require an adjustment to the strategies of the organisation (Kloot 1999, Melkers and Willoughby 2005). Performance information also provides input into implementation (actual results checked against budgets and goals, corrective action guidance), and evaluation (assistance in determining programme effectiveness, examination of other ways to meet objectives, assistance in determination of better ways to implement programmes) (Guthrie and English 1997, Lee Jr and Burns 2002, De Bruijn 2007).

On the other hand, measuring performance can serve as an elegant way of shaping accountability and form the basis for discharging the accountability of both individuals and organizations (Eden and Hyndman 1999, Economic Commissions for Africa 2003, Wisniewski and Stewart 2004, Flynn 2007, Torres et al. 2010). It promotes accountability to stakeholders, particularly in government organisations (Kloot 1999). Despite its appeal as a means for improving government, many governments have not developed performance-measurement systems, and even fewer use these systems to improve decision making (Julnes 2006). Public reporting of targeted performance measures may improve symbolic accountability, but it undermines the usefulness of the reported performance information for performance management (McDavid and Huse 2012). Performance measures may be adopted and simply collected for their intrinsic value (Torres et al. 2010). The issues of effective performance measurement in public sector organisations have frequently been addressed by politicians, academics and the public at large via electronic and written media; however, the improvement is still unsatisfactory (Aziz et al. 2012). The use of performance measurement by departments in local governments may be perceived as pervasive, although survey respondents might be less enthusiastic about measurement effectiveness (Melkers and Willoughby 2005). Many local governments measure and report their performance, but the record of these governments in actually using performance measures to improve services is more modest (Ammons and Rivenbark 2008).

In budgeting, for example, the main reasons for the limited use of performance information include that the documents containing performance information are too long and cumbersome, that the legislative budget process is too time-constrained, and that the parliament has only a limited role in making substantive changes to the budget (Raudla 2012). Other factors which affect the relevance and usefulness of performance information and performance measurement include resources, social norms, interpretive schemes and management heuristics, public service motivation, information availability, organizational culture, and administrative flexibility (Moynihan and Pandey 2010, Vakkuri 2012). Leadership style also matters in the use of performance information in decision-making (Moynihan and Ingraham 2004, Moynihan and Pandey 2010). In addition, the types of measures on which officials rely, the willingness of officials to embrace comparison, and the degree to which measures are incorporated into key management systems, affect the use of performance measures for service improvement (Ammons and Rivenbark 2008).

Approximately 20 years after the first NPM reforms were initiated, it has become clear that performance assessment in the public sector is not without problems or unintended consequences (Van Thiel and Leeuw 2002). Performance measurement prompts game playing, adds to internal bureaucracy, blocks innovations and ambitions, kills system responsibility, and punishes good performance (de Bruijn 2002, Propper and Wilson 2003). The increase of output measurement in the public sector can lead to several unintended consequences that may not only invalidate conclusions about public sector performance, but can also negatively influence that performance (Van Thiel and Leeuw 2002). Employees have discretion about whether, and the degree to which, they engage in performance measurement, but they are influenced by the social context and formal systems in which they work (Propper and Wilson 2003). The use of performance indicators can inhibit innovation and lead to organizational paralysis (Van Thiel and Leeuw 2002).

2.2. Local Government Stakeholders

The management of the focal organization involves its alignment with one or several groups of stakeholders because of organizational dependence on stakeholders (Brignall and Modell 2000). Performance measurement promotes accountability to stakeholders, particularly in public sector organisations (Kloot 1999). An interest in performance measures produced by public sector organizations is held by a number of likely stakeholders, who can be internal or external to the organizations (Kanter and Summers 1987), such as councillors (Wisniewski and Stewart 2004, Askim 2007, 2008, 2009), central government (Wisniewski and Stewart 2004), policymakers and independent government advisors (Harrison et al. 2012), legislators (Eden and Hyndman 1999, Wisniewski and Stewart 2004, McDavid and Huse 2012, Raudla 2012), local government managers/officials (Wisniewski and Stewart 2004, Ammons and Rivenbark 2008, Charbonneau 2010, Moynihan and Pandey 2010, Harrison et al. 2012), inspection and audit agencies (Wisniewski and Stewart 2004), investors and creditors (Eden and Hyndman 1999), clients/consumers/customers of the services provided (Kanter and Summers 1987, Wisniewski and Stewart 2004, Harrison et al. 2012), citizens/local people/representative groups/voters/voter representatives (Eden and Hyndman 1999, Wisniewski and Stewart 2004, Cohn Berman 2008, Moynihan and Pandey 2010, Harrison et al. 2012) and the media (Wisniewski and Stewart 2004). Compared to developed countries, the involvement of stakeholders – such as citizens – in public sector organisations is rather limited in many developing countries (Mimba et al. 2007).

The public/citizens are variously described as the ultimate stakeholder of local government performance information and consumer of government services (Cohn Berman 2008). They benefit because performance information increases transparency, as it becomes easier for them to judge whether governments perform well, or poorly, in terms of how tax revenues are utilized for public services (Askim 2009). Although the public/citizens cannot decide how much tax they have to pay, or the volume or quality of the services, they can control public management when it comes to election time and, thus, performance influences voting behaviour (Brusca and Montesinos 2006). The public/citizens also benefit because it becomes easier for them to choose service providers when there is competition amongst them (Askim 2009).

Governments keep tabs on performance and produce reports that are used for accounting, auditing, budgeting, and management purposes, as well as to comply with legislative mandates. They compile data about revenues and expenditures. They count work that comes in (such as the number of applications and complaints) and the work produced (such as applications processed, tons of refuse collected, lane miles paved) (Cohn Berman 2008). Councillors benefit from local government performance information because it is easier for them to judge whether managers, departments, and programmes perform well, or poorly, in terms of “x-efficiency”— the ways in which they turn inputs into outputs (Askim 2009). More than two-thirds of OECD countries include non-financial performance data in the budget documentation available to managers and policy makers (internal use), and provide reports on their performance to the public (external use) (Torres et al. 2010). For managers, performance information benefits them because they can use that information as a device to control their ‘‘agents’’, the service-producing departments, and to bond their ‘‘principals’’, the councillors (elected representatives to municipal councils) (Askim 2009).

A large number of stakeholders with multiple views about what good performance means exacerbates the public sector performance measurement problem (Harrison et al. 2012). For example, there are different perceptions between governments and the public regarding performance. Cohn Berman (2008) argues that the perceptions of the public are different from those of governments. The public is interested in outcomes and the quality of work performed, and judges performance by its first impression of the way people have been treated (they expect courtesy, respect, and compassion), the accessibility of an office and information, and the cleanliness of the facility. On the other hand, governments report about workloads and costs, and few of them gather data about what interests the public, or report about this. Differences exist even within a single group of stakeholders. For example, more experienced politicians are less interested in performance information than newer ones, but there are no significant differences between legislators from governing and opposition parties (Raudla 2012). Askim (2009) investigated factors that condition the extent to which councillors search for performance information when faced with decision dilemmas. One such factor is within-polity rank: front benchers are more inclined than back benchers to search for performance information. A second factor is education: the best educated councillors are least inclined to search for performance information. A third factor is political experience: inexperienced councillors are most inclined to search for performance information.

Similarly, differences can exist within a single group in different time periods. For example, McDavid and Huse (2012) found that while the legislators had high initial expectations of the use of performance reports, actual usage, measured in two follow-up surveys, showed substantial drops in their expectations. Attempts to meet the performance information needs of these differing stakeholders with a “one size fits all” approach are argued to be unlikely to be successful (Wisniewski and Stewart 2004). It is becoming more usual to say that, when developing and implementing performance measurement systems in the public sector, the starting point and key driver should be stakeholders’ needs and expectations (Linna et al. 2010). Performance measurement systems and their implementation require the identification of stakeholders (Harrison et al. 2012). Because of the multiplicity of stakeholders, the produced information is likely to be biased and limited to favour a particular avenue (Brunsson 1990). Multiplicity of stakeholders causes conflicting objectives, measures, usefulness and applicability of different approaches to performance measurement (Rantanen et al. 2007, Harrison et al. 2012). Similarly, the information which is useful to some stakeholders may have limited relevance to others (Harrison et al. 2012).

Brignall and Modell (2000) argue that to deal with the conflicts inherent in such trade-offs, management may adopt the seemingly irrational or ‘hypocritical’ strategy of providing a particular type of information to mobilize the support of one stakeholder group, while effectively pursuing a divergent course of action that is more in tune with the interests of another group. They argue that even though such a scenario frequently appears to emerge ‘spontaneously’, rather than as a result of active managerial intervention, management may also consciously manipulate the information provided to certain groups of stakeholders, particularly if these exert more limited institutional pressures on the organization. Public sector reforms which are partly stimulated by the growing involvement of stakeholders lead to an increasing demand for performance information but, because of the low institutional capacity and the high level of corruption in developing countries, this increasing demand is not always followed by a sufficient supply of performance information (Mimba et al. 2007).

2.3. Performance Measurement and Institutional Theory

Some papers focus on rational and economic arguments, according to which the role of performance measures is analysed based on their contribution to enhancing efficiency, effectiveness, and internal decision-making, while others consider the concept of institutional pressure as a determinant of performance measure developments (Torres et al. 2010). Despite a range of theories being used to explain the introduction of performance measures into governments (Torres et al. 2010), institutional theory has increasingly been adopted to explore the phenomenon of accounting in different organizations and social settings (Dillard et al. 2004). It adds the interests and power of different stakeholders, which are typically absent or de-emphasized in the rationally instrumental approach, to the organizational analysis (Brignall and Modell 2000). Accounting research from the institutional theory perspective is informed by the Old Institutional Economics (OIE), New Institutional Economics (NIE), and New Institutional Sociology (NIS) theories (Burns and Scapens 2000). Institutional theory has been employed in performance measurement studies in both the public and private sectors (Johnsen 1999, Brignall and Modell 2000, Lawton et al. 2000, Modell 2001, Hussain and Hoque 2002, Modell 2003, Modell 2005). Other areas of study informed by institutional theory are management accounting changes in organizations (Burns 2000, Burns and Scapens 2000, Granlund 2001, Soin et al. 2002, Siti-Nabiha and Scapens 2005), cost allocation processes and techniques (Carmona and Macías 2001, Ahmed and Scapens 2003, Carmona and Donoso 2004), budgeting in governmental organizations and schools (Edwards et al. 2000, Collier 2001, Seal 2003), accounting regulations and the role of accounting in organisations (Bealing Jr et al. 1996, Fogarty 1996, Fogarty et al. 1997, Seal 1999, Lapsley and Pallot 2000, Broadbent et al. 2001, Carpenter and Feroz 2001, Eden et al. 2001, Hines et al. 2001, Kurunmaki et al. 2003, Fogarty and Rogers 2005), accounting and institutionalisation processes (Burns and Scapens 2000, Dillard et al. 2004, Burns and Baldvinsdottir 2005), and external auditing (Basu et al. 1999). This study employs NIS and the concept of power, as defined by Pfeffer (1981), to explain the findings from the investigation about the use of performance information by local government stakeholders, and the impact this has on performance measurement practices in LGAs.

NIS is primarily concerned with interactions between organizational structures, practices, behaviours, and the wider social environment in which organizations operate (Hussain and Hoque 2002). The main propositions of the theory are, firstly, that many elements of formal organizational structures, practices, and characteristics arise as a consequence of the social expectations of appropriate practices (Bealing Jr et al. 1996). Secondly, organizations are motivated to interact with their environment, in ways perceived to be appropriate by the various stakeholders, for the sake of survival and maintenance of legitimacy (Dillard et al. 2004). Finally, behaviours and practices in organisations, both at micro and macro levels, are shaped by ‘coercive, mimetic and normative isomorphic processes’ (DiMaggio and Powell 1983, p.147). Coercive isomorphic processes occur when an organization changes in response to both formal and informal pressures exerted by other organisations on which the organisation is dependent, as well as by the expectations of the public it serves. Mimetic isomorphic processes occur when an organization faces high levels of uncertainty and, hence, imitates other organisations that are perceived to be successful in order to cope with that uncertainty. Normative isomorphic processes involve changes in an organization resulting from the professionalism that influences the behaviours of the individuals working in the organisation.

The early formulations of NIS propound that formal structures and procedures are adopted in order to acquire legitimacy and guarantee the resources required for the survival of the organization, but they are detached from everyday organizational practices, so as not to disturb the normal processes of daily operations. The intentional or unintentional separation between external image and actual structures and procedures has been referred to as ‘decoupling’. Decoupling allows a system in an organizational location to act on both technical and institutional levels (Orton and Weick 1990). It is considered essential for resolving conflicts between legitimacy and efficiency (Meyer and Rowan 1977, Oliver 1991). Accounting practices may be ‘decoupled’ from the core operations of various organizations (Basu et al. 1999, Johnsen 1999, Edwards et al. 2000, Collier 2001, Modell 2003, Siti-Nabiha and Scapens 2005). Some institutional theorists use the terms ‘decoupling’ and ‘loose coupling’ interchangeably to describe a poor connection, or the absence of one, between what organizations display to the outside world, particularly to important stakeholders, and their actual internal operations or systems. Many studies about the implementation of public sector management reforms have used institutional theory to explain the features of these implementations and the gap between the rhetoric and the actual results (Torres et al. 2010).

Some accounting studies employing NIS have suggested that accounting is sometimes employed in organizations as a legitimating device, rather than one for facilitating operations and informed decision making (Bealing Jr et al. 1996, Fogarty 1996, Fogarty et al. 1997, Lapsley and Pallot 2000, Hines et al. 2001, Modell 2001, Ahmed and Scapens 2003, Kurunmaki et al. 2003, Carmona and Donoso 2004). Others have indicated that existing institutions influence the way accounting information is used by organizational actors, and also the introduction of new accounting systems and techniques in organizations (Burns 2000, Granlund 2001, Fogarty and Rogers 2005, Siti-Nabiha and Scapens 2005). Several studies have also demonstrated that institutional pressures (coercive, mimetic and normative) contribute to the development and/or adoption of new accounting practices in organizations (Seal 1999, Carmona and Macías 2001, Carpenter and Feroz 2001, Eden et al. 2001, Hussain and Hoque 2002). However, Powell (1985) argues that new rules are institutionally enforced and adopted by organizations, and are seen as necessary for maintaining operational efficiency.

Later formulations of NIS incorporate power, which is argued to be a limitation of institutional theory, yet a useful concept in discussing the interaction between the institutional environment and organizations (Covaleski and Dirsmith 1988, Abernethy and Chua 1996, Collier 2001, Tsamenyi et al. 2006). The meaning of power referred to in this research is Pfeffer’s (1981) definition, which denotes the ability to get people to do things that they would not otherwise do. The paper also discusses power in institutional theory at two levels: institutional power, which relies on an external legal or regulatory base, and organizational power, which is based on the managerial structure for its external and/or internal power base, supported by a range of rewards and sanctions (Fincham 1992). This paper therefore responds to Collier’s (2001) concern over the lack of attention given in institutional theory to the organizational level of analysis, to issues of power, and to the role of management in reconciling institutional and technical demands.

2.4. A Synthesis of Prior Research

There have been attempts to develop measurement and assessment systems for public sector performance for centuries (Hawke 2012). Over the past several decades, dramatically increased attention has been paid to measuring the performance of the public sector (Figlio and Kenny 2009). Increased utilization of performance information has been on the reform agenda in Western democracies since the 1980s, especially in local governments (Askim 2009). Despite the long period of implementation and refinement, fundamental weaknesses persist in the quality and use of performance information (Hawke 2012). In addition, the role of stakeholders, from a performance measurement perspective, has been little discussed (Wisniewski and Stewart 2004). It is not yet known how stakeholders respond to performance measurement (Figlio and Kenny 2009). Relatively little attention has been paid to date to the issue of who such performance measurement information is for, and to what purposes such information will be put by those using it; yet, particularly in the public sector, the question of who is seen as the end user of the performance measurement information generated is of critical importance (Wisniewski and Stewart 2004). Other authors have investigated the use of performance information by Legislators (Raudla 2012), Local government managers/officials (Ammons and Rivenbark 2008, Charbonneau 2010), Councillors (Askim 2007, 2008, 2009), and Citizens/local people (Cohn Berman 2008). The need to theoretically integrate the use of performance information by various stakeholders is critical. A fruitful research strategy in this respect is to compare performance measurement use pertaining to different groups of stakeholders (Brignall and Modell 2000).

3. Research Methods

The study seeks to analyse the use of performance information by LGA stakeholders and how such information has shaped performance measurement practices in the LGAs in Tanzania. It covers four LGAs, namely SMC, BDC, TMC, and KDC. Pseudonyms have been used in order to maintain the confidentiality of the LGAs, which were selected to facilitate comparison of the cases. In the LGDG results, the performances of BDC and KDC were rated as very poor, while the performances of SMC and TMC were rated as very good. In addition, of the 14 LGAs visited for the purpose of performance evaluation by the Controller and Auditor General, TMC had the highest performance, with a total score of 86%, while BDC was the last but one, with a total score of 53%.

BDC was one of the 6 districts of the Pwani Region of Tanzania. It was the largest LGA, covering an area of 9,842 square kilometres and with a population of 311,740 people. It consisted of 22 Wards, with 23 councillors: 16 elected councillors, 5 women councillors nominated by political parties (special seats), and 2 Members of Parliament. TMC covered 652 square kilometres and had a population of 1,368,881 people. It had a total of 34 councillors, comprising 26 elected councillors, 8 councillors for special seats, and 2 Members of Parliament. Administratively, TMC was divided into 30 Wards and 164 villages. KDC had a population of 438,175 people. SMC had a population of 161,391 people, had 17 Wards, and covered 548 square kilometres.

The study focuses on three stakeholders: Councillors, Central government officials, namely the Prime Minister’s Office Regional Administration and Local Government (PMO-RALG) officials, and members of the Parliamentary committee, the Local Authority Accounts Committee (LAAC). Councillors are either elected to represent a Ward or represent special groups of people, such as women. The ward is the constituency for electing councillors. Councillors oversee the work of technicians (heads of departments at the LGA and the Council Director) at LGA level. They are political leaders elected to serve citizens for a specified period of time, usually 5 years, while the technicians are qualified personnel permanently employed to work in the LGAs. Councillors take an active part in reviewing matters and debating Council issues; they review the Council’s objectives, policies, resource allocation, expenditure and activities, and the efficiency and effectiveness of the Council’s service delivery. The main functions of the PMO-RALG are to ensure that LGAs comply with regulations and policies and provide quality services. Other functions include managing critical interfaces between local government stakeholders, building the capacity of the Regional Secretariats and the LGAs, providing advice to LGAs, and providing a link between LGAs and other ministries as well as central government departments.

The data for the study was gathered through three main methods. Unstructured and semi-structured interviews were conducted with 6, 6, 7, and 7 heads of departments from TMC, BDC, KDC, and SMC, respectively. These heads of departments were drawn from the following departments: Administration and Human Resources, Education and Culture, Health and Waste, Industries and Trade, Urban Planning, Works and Fire Rescue, Planning and Monitoring, and Finance. In order to understand the use of performance information by LGA managers/officials, the study focused on officials/managers who were engaged in performance management in one way or another at the LGA level. Other stakeholders, namely Councillors, Central government officials (PMO-RALG), and members of the Parliamentary committee (LAAC), were also visited. Three members from the PMO-RALG, 7 Councillors, and 8 members of the LAAC, including the Chairperson and Secretary, were interviewed. In total, 46 interviews were conducted with the different groups of local government stakeholders.

The interviewees were mainly asked about the type of performance information they utilized and the way they had been using it. In addition, the respondents were requested to describe the effects of performance information on performance management practices in the LGAs. Initially, unstructured interviews were employed but, as issues of significance emerged, semi-structured interviews with open questions were used. Unstructured interviews allowed issues of interest to emerge from the respondents and facilitated in-depth exploration and flexibility when probing the respondents. Subsequently, the interviews focused on the emerging issues and themes (Glaser and Strauss, 1967). Most of the interviews were audio-taped and later transcribed. Notes were also taken during the interviews. The interview duration with individual respondents ranged from about forty minutes to around one and a half hours.

Various internal and external documents were also analysed to obtain further data on performance measurement and the use of performance information. Documents such as quarterly progress reports (physical and financial), progress reports for development projects, annual performance reports, internal audit reports, annual budgets, external audit reports, LGDG reports and LAAC reports were consulted. Observations were made, particularly through attendance at meetings that were relevant to the research. Ten sessions of the parliamentary committee meetings were attended; these sessions discussed the audited annual reports of some of the LGAs in the country. All relevant issues, events, views and activities observed were recorded immediately.

The data analysis procedures proposed by Corbin and Strauss (1998, 2008) and Glaser and Strauss (1967) were employed in analysing the interview, documentary and observational data, with the objective of identifying, coding and categorizing emerging themes and patterns in the raw data.

4. Findings

There were two types of performance information in the Tanzanian LGAs: performance information that was linked with the LGDG system, and that which was not. These types of performance information were observed to be collected and used under four categories. Some of the performance information was collected and used for external image and the attainment of legitimacy; some was collected and used for rational decision making and the achievement of efficiency; some was collected for its own sake/intrinsic value; and some was not collected at all, despite being advocated in the requirements/rules and regulations. In some cases, the same performance information had multiple uses. Stakeholders’ power and interest emerged as influences on whether performance information was collected at all, and on whether it was collected and/or used for efficiency, legitimacy, or its intrinsic value.

4.1. Performance Information Collected/Used for Attainment of Legitimacy

Under this category, the main purpose of collecting and using performance information was to fulfil legality and accountability requirements, rather than rational decision-making (Torres et al. 2010). The category includes both types of performance information: that which was linked with the LGDG system, and that which was not.

Performance information related to the LGDG system was collected by LGAs’ officials/managers to prove legitimacy to assessors and councillors, the main motives being financial and political respectively. This type of performance information was also used by councillors to legitimize themselves in the eyes of citizens. The LGDG performance information was collected and used more as a legitimating device than as one for facilitating operations and informed decision making (Bealing Jr et al. 1996, Fogarty 1996, Fogarty et al. 1997, Lapsley and Pallot 2000, Hines et al. 2001, Modell 2001, Ahmed and Scapens 2003, Kurunmaki et al. 2003, Carmona and Donoso 2004).

Assessors under the PMO-RALG used different types of performance measures/indicators to assess LGAs’ performance as the basis for grants’ provision. This is in line with Kanter and Summers’s (1987) argument that resource providers provide institutional measures of performance which legitimate the activities of organizations. As proposed by NIS, performance measurement practices under the LGDG were perceived to be important as a consequence of the LGAs’ social expectations of receiving grants (Bealing Jr et al. 1996). The LGDG system was perceived to be important simply because it was used to build a good image and was the basis for grants’ provision, as reflected in the following quotes:

[…] If our performance is not good it causes bad image to those who want to help. Therefore... we have to make sure that we meet those performance measures. That is why I perceive them to be completely ok. […] (HoD, Municipal).

[...] They (assessors) are using only documents. i.e. “is there a plan? Is there a budget? Where are they? Bring them here so that I can see them”. It can be seen that a council has a good performance however, because has those documents. We are asking ourselves: does this really mean assessing LGA performance? [...] (HoD, Municipal)

Being conducted annually, the LGDG system was not perceived as sufficient to measure the LGAs’ performance. In addition, the assessors who were involved in the exercise were perceived to be subjective, and the whole exercise was perceived to be biased, as reflected in the following quote:

[...] There should be another group of assessors in making LGA assessment. Two groups concurrently making assessment will help decision makers to judge “why this group has come to this conclusion while another group has a different conclusion? What should we believe...they will be in a position to know the correct and genuine results of an LGA [...] (HoD, Municipal).

It may not come as a surprise, then, that managerial innovation could be undertaken simply for image and legitimacy (Torres et al. 2010). Despite being used to motivate and encourage managers and employees to do a better job, performance information can backfire, replacing genuine productivity gains with bookkeeping trickery, when too much emphasis is placed upon it (Halachmi 2012). Practices such as manipulation, collusion and gaming were adopted by officials for legitimization when normal practices failed to achieve the targeted performance measures. Managers had stakes in how stakeholders judged their x-efficiency, and may therefore have been tempted to report data in a misleading form so as to exaggerate their performance (Askim 2009). Because of the financial implications of the LGDG system, performance measurement practices related to the LGDG were characterized by high levels of manipulation, gaming, and collusion, as reflected in the following quotes:

[…] Then we need to do everything possible to make sure that the evidence is there… hahaha… (Laughing) by using any means…sometimes we go to them requesting them to sign… they understand and sign for us […] (Internal Auditor, Municipal Council).

[…] If you are assessing me in order to give me fund ... I can construct some documents for you. What you want as evidence is a document... I will make it for you because that is what you want […] (HoD, Municipal Council).

On the other hand, leaders need to demonstrate active use of performance information to retain credibility among other populations of users (Moynihan and Ingraham 2004). Councillors used the LGAs’ performance information to legitimate themselves in the eyes of the citizens. Performance measures which had an impact on fulfilling councillors’ promises to the citizens were of interest to them. For example, the nature of the audit opinion of the LGA had an impact on accessing the Council’s grants, and promises were hard to fulfil when the LGA did not qualify for grants. Councillors favoured performance measures that met the interests of citizens in preference to institutional legitimacy (Lawton et al. 2000). To Councillors, performance measurement was about meeting their constituents’ expectations (dealing with problems or kero (dissatisfaction in public services delivery)) and the fulfilment of promises given during elections; it was about showing the constituents that they were delivering. Citizens benefited because performance information increased transparency, as it became easier for them to judge whether LGAs performed well or poorly (Askim 2009). Councillors therefore focused on the extent to which the Council addressed the problems/kero in their locality. Performance measures which made Councils eligible for grants were perceived to be important because funds obtained through the LGDG were used in building schools, the construction of wells and roads, and other activities, as reflected in the following quote:

[…] As a councillor I am responsible to the people, I represent them, you see….About my responsibilities to the people, it depends, they want you to deal with kero, this is very important. They expect you to be close, accessible. They come even with personal problems. You have to do something […] (Councillor).

Other LGA stakeholders, such as Parliament’s LAAC, used LGAs’ performance information that was not linked with the LGDG system to legitimize themselves in the eyes of Donors, NGOs, and Citizens. There is a growing interest in the measurement of performance in the public sector in order to demonstrate that value for taxpayers’ money is being delivered (Micheli and Neely 2010). Performance information was used by Parliament’s LAAC to protect the interests of other stakeholders, such as Donors, NGOs, and Citizens.

The members of the parliamentary committee were of the view that they had to safeguard the interests of different groups of participants, particularly those participants not directly involved. Their commitment to the use of LGAs’ performance information was rewarded by different groups, as reflected in the following quotes:

[…] I am obliged to extend my regards to the Chairperson and Honourable members of the Local Authorities Accounts Committee (LAAC) for their commitment to deliberate on the contents of this report and the separate individual reports issued to the Councils […] (The CAG’s report on the Financial Statements of LGAs for FY ended 30th June 2006).

[...] The responsibilities of our committee are to examine the use of public funds by local government, if expenditures are in accordance with financial regulations. We also take seriously issues of national policies and compliance to government directives. We invite different stakeholders to our sessions, the important ones, like the mayor, ministries and commission [Local Government Services Commission] to hear the discussion of reports from local government [executives]. The committee pay particular attention to funds from central government and other donors, to safeguard their interests; we know these are likely to be misused because the providers are not involved in day-to- day activities [...] (Member, Parliamentary Committee).

This type of performance information was prone to only low levels of manipulation and gaming. It was not perceived to be important by the LGA officials/managers unless it was linked to the LGDG system, as reflected in the following quote:

[…] These other reports are for internal assessment… they are not so much important. You know for LGDG system, you have a Ministry responsible, you have citizens, media… if you do not qualify for grants everybody knows what happens […] (HoD, TMC).

4.2. Performance Information Collected/Used for Achievement of Efficiency

The collection and use of performance information for the achievement of efficiency was observed for both types of performance information, and by the Councillors, Parliament‘s LAAC and the LGAs’ officials/managers, but not by the Central government’s PMO-RALG. The Councillors and Parliament’s LAAC were the only two key stakeholders using the LGAs’ performance information not related to the LGDG.

Councillors were involved in revising the LGAs’ performance targets. Individuals’ practices which affected organizational performance were of interest to them. They made resolutions about appropriate action to be taken against individuals who impeded organizational performance. Therefore, individuals tailored their behaviour to the standards by which performance was evaluated (Schacter 2009). Councillors were in a position to assess the progress of project and budget implementation, as reflected in the following quote:

[…] budget is like our mirror … direction. It helps us to see how much our administrators are doing what we planned, agreed in the council. When we see things are different we shout unless there are changes approved by the council. Budget helps us to see a trend of expenditures; you know if the administrators are spending more on their allowances […] (Councillor, Finance Committee, District Council).

Similarly, the Parliamentary LAAC was involved in revising LGAs’ performance targets and gave instructions on appropriate action to be taken against individuals. In fulfilling its oversight responsibility, the members of the parliamentary committee needed LGAs’ performance information to evaluate the performance of the LGA officials, and to take appropriate action, particularly when serious breaches of financial regulations occurred:

[…] the LAAC of the National Assembly has done a tremendous job in taking to task, Accounting Officers of LGAs where they had not performed to the expectation of the Committee and in emphasizing on the concept of value for money in the use of LGA’s resources […] (The CAG’s report on the Financial Statements of LGAs for FY ended 30th June 2006).

The LGAs’ officials/managers and Councillors perceived that there were some elements of efficiency in utilizing performance information under the LGDG system. They believed that there were some improvements in officials’ day-to-day operations. Some of the LGAs’ officials/managers used performance information in the preparation of their plans and budgets (Melkers and Willoughby 2005, Van Dooren 2005, Julnes 2006). As proposed by NIS, the LGAs were motivated to interact with their environment, in ways perceived as appropriate by the assessors/PMO-RALG, for the sake of survival and the maintenance of legitimacy (Dillard et al. 2004). As the system was linked to grants’ qualification, no official/manager wanted to be blamed for a failure to achieve a certain performance measure and, consequently, for the LGA’s failure to qualify for grants. Officials’/managers’ behaviours and practices were shaped by coercive processes related to the LGDG (DiMaggio and Powell 1983). They were therefore struggling to improve their operations to make sure that they met performance measures, as reflected in the following quotes:

[…] What I can see now… there are some improvements. Transparency… previously it was only an issue of LGA, but now, everybody knows if you have not qualified for grants… there are some improvements… you need to make sure everything is well for the sake of grants assessment […] (HoD, Municipal).

[…] This assessment make us… it make us fear… what can you tell to your Director? If you are the one who caused it… if it is your department which has not performed […] (HoD, Municipal).

It was also perceived by the Councillors that the LGDG performance information assisted them to conduct informed decision-making about various issues in the LGAs, thus increasing efficiency in the council. Performance information enabled Councillors to evaluate the performance of individual officials, which had an impact on organizational performance. Some of the LGDG performance measures, such as external audit opinion, were of significance and interest to Councillors because, when the measure was not met, then councillors investigated what happened and took appropriate actions, as reflected in the following quote:

[…] Because of this system (LGDG), we can now understand if the Council has obtained unqualified audit opinion or not. If the Council obtains qualified opinion we have to look at the reasons and take appropriate actions […] (Councillor, BDC).

4.3. Performance Information Collected for its Intrinsic Value

The “Intrinsic value” category includes performance information which was not related to the LGDG. The intrinsic value of something is said to be the value that that thing has “in itself,” or “for its own sake,” or “as such,” or “in its own right” (Zimmerman 2010), rather than because of its associations or consequences.

Some performance reports were produced for their intrinsic value. For example, some of the LGAs produced annual performance reports which were of no significance to either organizational actors or stakeholders. Similarly, quarterly progress reports were produced which were of no interest to stakeholders (PMO-RALG, LAAC and Councillors) or organizational actors, except in the LGDG assessment, when they were used as evidence that the information had been submitted to the appropriate authorities. Furthermore, organizational actors perceived that Councillors overreacted to the quarterly progress reports, discussing issues which were not required to be included in the submitted reports:

[…] even if we write the report … sometimes they (Councillors) are discussing on irrelevant issues that are not included in the reports… you know politics. You can see a Councillor bringing an issue which is out of the point […] (HoD, SMC).

[...] you may write something in your report. They will be asking you different issues...not those written in your report. Your report does not have any power. They have not read it […] (HoD, TMC).

Quarterly progress reports submitted by LGAs were perceived not to be read or used by the appropriate authorities. The information therein was perceived to be significant only when problems emerged and when there were issues to be resolved. At that point, authorities would request the reports again as if they had not been submitted, as reflected in the following quote:

[...] We are required to submit quarterly reports to TAMISEMI (PMO-RALG), I don’t think those people read the reports, because whenever there misunderstandings between the councillors and us, they request for reports that were already submitted to them [...] (Municipal HoD).

4.4. Uncollected/Unused Performance Information

This category includes performance information not related to the LGDG; all performance information related to the LGDG was collected by organizational actors. Some of the performance information advocated in the regulations which governed the LGAs was not collected by any of the LGAs, and some was not collected by some of the LGAs. For example, the budgeting guideline issued by the Tanzanian Ministry of Finance advocates a 3/5-year performance report, which none of the LGAs prepared:

[…] The outcome report should be prepared at the end of the Strategic Planning cycle. It should focus on assessing the degree to which the institution is meeting its planned objectives or outcomes documented in the Strategic Plan. The report should summarize the findings of the main evaluations, analytical studies, and reviews undertaken during the period. For each objective the report should describe resulting impact from interventions undertaken. These assessments should be linked to all national frameworks including FYDP I, MDGs, MKUKUTA II and Ruling party Manifesto […] (Guidelines for the preparation of annual plan and budget for 2012/13 in the implementation of the five-year development plan 2011/12-2015/16).

The non-preparation of the 3/5-year performance report advocated in the guidelines was a form of decoupling, but not one essential for resolving conflicts between legitimacy and efficiency (Meyer and Rowan 1977, Oliver 1991). Most of the organizational actors and other stakeholders did not know about the 3/5-year performance report. It was not perceived as significant and, therefore, was not prepared, as reflected in the following quote:

[…] We have not yet produced.... er... er... outcome performance report... we have not yet started preparing it […] (HoD, Municipal Council).

In some of the LGAs, other reports, such as annual performance reports, were not prepared at all. This type of report was perceived to be initiated by donors when it was linked to grants’ provision. When grants’ provision through this report ceased, it was perceived by the LGAs that there were no authorities who emphasized its preparation and its uses:

[…] you know, at the beginning when this report was introduced … it was UNICEF pressure. After a certain period of time, nobody asks about it… no one is interested on the annual performance report […] (HoD, Municipal Council).

4.5. Stakeholders’ Power and the Collection/Use of Performance Information

The use of performance measurement entails both elements of legitimacy and efficiency (Torres et al. 2010). Performance measurement gives out signals of rationality and efficiency to external stakeholders concerned with legitimacy and the value of services (Guthrie et al. 1999). An institutional theory perspective emphasizes the importance of trans-organizational processes in organizational functioning and enables a consideration of relations of power as they affect the organization. Any discussion of the interaction between the institutional environment and organizations needs to be located amidst relations of power (Collier 2001).

The organizational structure of the Tanzanian LGAs locates the Council at the top, reflecting the supreme power possessed by councillors. Councillors’ power was based on the managerial structure, with its internal power base, and was supported by a range of rewards and sanctions (Fincham 1992). Councillors were perceived to be very powerful, as reflected in the following quotes:

[…] If they do not want a Director… or the Municipal Treasurer… or anyone in the Council… the person will be fired or transferred… They are very powerful […] (HoD, TMC).

[…] If they want something… when they are determined… “Let us develop our country”, people are very responsive and they contribute. However, if they say “Do not contribute” no one will contribute. When they want and determined to collect revenue… you collect a lot […] (MT, SMC).

Councillors were concerned about the accuracy of performance information/reports provided by the local government officials. They believed that sometimes the information was manipulated to cover up misuse of resources by the local officials. Hence, to deal with this problem, they decided to inspect projects implemented by the councils and sometimes, in an extreme situation, they called for a special audit or special assistance, as reflected in the following quote:

[…] It happened that the performance report sent to the DAS was different from the performance report submitted to us. For DAS, the report showed that fund was already received and utilized in the implementation of the project. In our report, we were told that fund was not yet released by the Treasury. We requested for an auditor to investigate if fund was received and where the fund was utilized […] (HoD, BDC).

The PMO-RALG carefully administered the LGDG exercise. Before an official assessment, the PMO-RALG was involved in a mock exercise performed by the LGAs. During the assessment, some of the members of the PMO-RALG took part in the assessment exercise. Afterwards, the PMO-RALG was involved in announcing the assessment results, which were the basis for grants’ qualification and allocation. Assessors involved in the LGDG were perceived to have more power than the PMO-RALG. In the perceptions of the LGAs, the PMO-RALG acted in an oversight capacity on the one hand, but mostly as “an LGA partner”, as reflected in the following quote:

[…] TAMISEMI (PMO-RALG) helps us a lot. We are conducting the assessment exercise on our own. Then we do that with TAMISEMI. We are making sure that all staffs are in place before assessors come [….] (Municipal HoD).

Parliament’s LAAC was the overseeing body of the LGAs and the PMO-RALG. Its power was supported by a range of rewards and sanctions (Fincham 1992). It had more power than the PMO-RALG and the Councillors. Its instructions were to be addressed in a specific section of the LAAC report, and LGAs had to prepare a report for the LAAC on how they had addressed audit queries. The size of the LAAC reports increased year after year due to Members of Parliament’s (MPs’) demand for more detailed information. Parliament’s LAAC had the ability to get LGAs to do things that they would not otherwise do (Pfeffer 1981). This is in line with Torres et al.’s (2010) conclusion that motivation for improving efficiency in organizations results in the demand for, and increase of, performance information. Other performance information, such as revenue and expenditure and project implementation reports, was required by the LAAC. In addition, the members of the committee believed that external audit reports were inadequate, as they failed to provide sufficient information about the performance of the LGAs, or any indication of the value for money of various expenditures:

[…] We know the auditors have examined the accounts…we respect their work. Lots of funds are now going into local government and we have experiences of theft and misuse of funds in many councils. Adequate details allow us to satisfy ourselves on how public funds have been used […] (Member of the Parliamentary Committee).

Furthermore, the LGAs had to explain how they had addressed the LAAC instructions and identify the people involved. The LAAC had the ability to make additional inspections, apart from the CAG investigation, and to give instructions about the demotion of individuals who caused poor performance in their LGAs. The LGDG served as an elegant way of shaping accountability and formed the basis for discharging the accountability of both individuals and organizations (Eden and Hyndman 1999, Economic Commissions for Africa 2003, Wisniewski and Stewart 2004, Flynn 2007, Torres et al. 2010). The power and roles of the LAAC are reflected in the following quotes:

[…] The committee is so powerful. I know a number of fellows who have been sacked or demoted as a result of their recommendations. When they (members of the committee) give orders you can’t say no. I don’t agree with the way they set targets for revenues collection, it is not realistic […] (District Executive Director).

[…] Our committee reports to the parliament, you know the parliament needs to ensure that the local government performs, address the problems facing our people […] (Member of the Parliamentary Committee).

The LAAC report was perceived as significant by other authorities and was used in conjunction with other reports. For example, the CAG required LGAs to submit their financial statements for auditing purposes together with LAAC reports for comparative purposes, as reflected in the following quote:

[…] The Project’s Implementation reports prepared in accordance with the LAAC format should be submitted to my office on or before 30th September each year together with financial statements for the year. This will enable the auditors to compare the financial performance for the period under review against other financial reports and to conduct site visits to verify physical implementation of planned activities as well as assessing the progress made in an effort to establish the existence of value for money in such projects undertaken by the Councils and report on the outcomes of assessment […] (The CAG’s report on the Financial Statements of LGAs for the FY ended 30th June 2011).

5. Conclusion

This study examines the use of performance information by LGAs’ stakeholders in Tanzania. Performance information was categorized into two categories: performance measurement under the LGDG system, and performance measurement outside the LGDG system.

The study has revealed that performance information was produced and used for both efficiency and legitimacy purposes. This finding is consistent with the results of previous studies, such as Torres et al. (2010). It has also revealed that the power and interests of stakeholders influence the collection and/or use of performance information for either legitimacy or efficiency purposes. The study has further indicated that some performance reports were generated for their intrinsic value, while some reports prescribed by regulations were not produced in practice. In addition, multiple performance reports are produced by the LGAs, hence the need for measurement frameworks that consider the entirety of a municipality’s operations (Linna et al. 2010).

In the view of this study, although legitimacy and efficiency motivations exist concurrently, further investigation is needed into the extent of these motives and how each of them manifests. From the analysis, it can be concluded that performance information use is more likely to be driven by altruism than by self-interest amongst government officials (Moynihan and Pandey 2010). Performance information may have little direct impact on decision-making and still be of value in ‘enlightening’ various stakeholders (Julnes 2006). The power relations amongst various stakeholders may explain the extent of the efficiency and legitimacy motivations which organizational actors hold. However, power alone may not adequately explain efficiency and/or legitimacy motivations; the nature of the performance measurement system imposed may also affect the extent of these motivations. From a policy perspective, the findings of this study can provide additional insights for reformers, academics and governments in their exploration of NPM reforms and the implications for efficiency and legitimacy. Further studies should consider diverse performance measurement systems, in order to assess the significance of power and of other factors which may emerge as influencing the extent of organizational actors’ motivations regarding efficiency and legitimacy.

References

[1]  Abernethy, M.A. and Chua, W.F. (1996). "A Field Study of Control System 'Redesign': The Impact of Institutional Processes on Strategic Choice." Contemporary Accounting Research 13 (2): 569-606.
[2]  Adcroft, A. and Willis, R. (2005). "The (Un)Intended Outcome of Public Sector Performance Measurement." International Journal of Public Sector Management 18 (5): 386-400.
[3]  Ahmed, M.N. and Scapens, R.W. (2003). "The Evolution of Cost-Based Pricing Rules in Britain: An Institutionalist Perspective." Review of Political Economy 15 (2): 173-191.
[4]  Ammons, D.N. and Rivenbark, W.C. (2008). "Factors Influencing the Use of Performance Data to Improve Municipal Services: Evidence from the North Carolina Benchmarking Project." Public Administration Review 68 (2): 304-318.
[5]  Arnaboldi, M. and Azzone, G. (2010). "Constructing Performance Measurement in the Public Sector." Critical Perspectives on Accounting 21 (4): 266-282.
[6]  Askim, J. (2007). "How Do Politicians Use Performance Information? An Analysis of the Norwegian Local Government Experience." International Review of Administrative Sciences 73 (3): 453-472.
[7]  Askim, J. (2008). Determinants of Performance Information Utilization in Political Decision Making, in W. Van Dooren and S. Van De Walle (Eds) Performance Information in the Public Sector: How It Is Used. Houndmills: Palgrave Macmillan, pp. 125-139.
[8]  Askim, J. (2009). "The Demand Side of Performance Measurement: Explaining Councillors' Utilization of Performance Information in Policymaking." International Public Management Journal 12 (1): 24-47.
[9]  Aziz, N., Hui, W.S. and Othman, R. (2012). The Use of Performance Measurement System (PMS) in Transforming Public Sector Organization. 2012 International Conference on Innovation Management and Technology Research (ICIMTR), IEEE.
[10]  Basu, O.N., Dirsmith, M.W. and Gupta, P.P. (1999). "The Coupling of the Symbolic and the Technical in an Institutionalized Context: The Negotiated Order of the GAO's Audit Reporting Process." American Sociological Review: 506-526.
[11]  Bealing Jr, W.E., Dirsmith, M.W. and Fogarty, T. (1996). "Early Regulatory Actions by the SEC: An Institutional Theory Perspective on the Dramaturgy of Political Exchanges." Accounting, Organizations and Society 21 (4): 317-338.
[12]  Brignall, S. and Modell, S. (2000). "An Institutional Perspective on Performance Measurement and Management in the 'New Public Sector'." Management Accounting Research 11 (3): 281-306.
[13]  Broadbent, J., Jacobs, K. and Laughlin, R. (2001). "Organisational Resistance Strategies to Unwanted Accounting and Finance Changes: The Case of General Medical Practice in the UK." Accounting, Auditing & Accountability Journal 14 (5): 565-586.
[14]  Brunsson, N. (1990). "Deciding for Responsibility and Legitimation: Alternative Interpretations of Organizational Decision-Making." Accounting, Organizations and Society 15 (1): 47-59.
[15]  Brusca, I. and Montesinos, V. (2006). "Are Citizens Significant Users of Government Financial Information?" Public Money and Management 26 (4): 205-209.
[16]  Burns, J. (2000). "The Dynamics of Accounting Change: Inter-Play between New Practices, Routines, Institutions, Power and Politics." Accounting, Auditing & Accountability Journal 13 (5): 566-596.
[17]  Burns, J. and Baldvinsdottir, G. (2005). "An Institutional Perspective of Accountants' New Roles: The Interplay of Contradictions and Praxis." European Accounting Review 14 (4): 725-757.
[18]  Burns, J. and Scapens, R.W. (2000). "Conceptualizing Management Accounting Change: An Institutional Framework." Management Accounting Research 11 (1): 3-25.
[19]  Carmona, S. and Donoso, R. (2004). "Cost Accounting in Early Regulated Markets: The Case of the Royal Soap Factory of Seville (1525-1692)." Journal of Accounting and Public Policy 23 (2): 129-157.
[20]  Carmona, S. and Macías, M. (2001). "Institutional Pressures, Monopolistic Conditions and the Implementation of Early Cost Management Practices: The Case of the Royal Tobacco Factory of Seville (1820-1887)." Abacus 37 (2): 139-165.
[21]  Carpenter, V.L. and Feroz, E.H. (2001). "Institutional Theory and Accounting Rule Choice: An Analysis of Four US State Governments' Decisions to Adopt Generally Accepted Accounting Principles." Accounting, Organizations and Society 26 (7): 565-596.
[22]  Charbonneau, E. (2010). Use and Sensemaking of Performance Measurement Information by Local Government Managers: The Case of Quebec's Municipal Benchmarking System. Rutgers University-Graduate School-Newark.
[23]  Cohn Berman, B.J. (2008). "Involving the Public in Measuring and Reporting Local Government Performance." National Civic Review 97 (1): 3-10.
[24]  Collier, P.M. (2001). "The Power of Accounting: A Field Study of Local Financial Management in a Police Force." Management Accounting Research 12 (4): 465-486.
[25]  Corbin, J.M. and Strauss, A. (1990). "Grounded Theory Research: Procedures, Canons, and Evaluative Criteria." Qualitative Sociology 13 (1): 3-21.
[26]  Corbin, J.M. and Strauss, A.L. (2008). Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory. Sage Publications, Inc.
[27]  Covaleski, M.A. and Dirsmith, M.W. (1988). "An Institutional Perspective on the Rise, Social Transformation, and Fall of a University Budget Category." Administrative Science Quarterly 33 (4): 562-587.
[28]  de Bruijn, H. (2002). "Performance Measurement in the Public Sector: Strategies to Cope with the Risks of Performance Measurement." International Journal of Public Sector Management 15 (7): 578-594.
[29]  de Bruijn, H. (2007). Managing Performance in the Public Sector. London: Routledge.
[30]  Dillard, J.F., Rigsby, J.T. and Goodman, C. (2004). "The Making and Remaking of Organization Context: Duality and the Institutionalization Process." Accounting, Auditing & Accountability Journal 17 (4): 506-542.
[31]  DiMaggio, P.J. and Powell, W.W. (1983). "The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields." American Sociological Review: 147-160.
[32]  Economic Commissions for Africa (2003). "Public Sector Management Reforms in Africa: Lessons Learned." Addis Ababa: Development Policy Management Division.
[33]  Eden, L., Dacin, M.T. and Wan, W.P. (2001). "Standards across Borders: Cross-Border Diffusion of the Arm's Length Standard in North America." Accounting, Organizations and Society 26 (1): 1-23.
[34]  Eden, R. and Hyndman, N. (1999). "Performance Measurement in the UK Public Sector: Poisoned Chalice or Holy Grail?" Optimum, The Journal of Public Sector Management 29 (1): 9-15.
[35]  Edwards, P., Ezzamel, M., McLean, C. and Robson, K. (2000). "Budgeting and Strategy in Schools: The Elusive Link." Financial Accountability & Management 16 (4): 309-334.
[36]  Figlio, D.N. and Kenny, L.W. (2009). "Public Sector Performance Measurement and Stakeholder Support." Journal of Public Economics 93 (9): 1069-1077.
[37]  Fincham, R. (1992). "Perspectives on Power: Processual, Institutional and 'Internal' Forms of Organizational Power." Journal of Management Studies 29 (6): 741-760.
[38]  Flynn, N. (2007). Public Sector Management. London: Sage Publications Ltd.
[39]  Fogarty, T.J. (1996). "The Imagery and Reality of Peer Review in the US: Insights from Institutional Theory." Accounting, Organizations and Society 21 (2): 243-267.
[40]  Fogarty, T.J. and Rogers, R.K. (2005). "Financial Analysts' Reports: An Extended Institutional Theory Evaluation." Accounting, Organizations and Society 30 (4): 331-356.
[41]  Fogarty, T.J., Zucca, L.J., Meonske, N. and Kirch, D.P. (1997). "Proactive Practice Review: A Critical Case Study of Accounting Regulation That Never Was." Critical Perspectives on Accounting 8 (3): 167-187.
[42]  Glaser, B.G. and Strauss, A.L. (1967). The Discovery of Grounded Theory: Strategies for Qualitative Research. London: Weidenfeld and Nicolson.
[43]  Granlund, M. (2001). "Towards Explaining Stability in and around Management Accounting Systems." Management Accounting Research 12 (2): 141-166.
[44]  Guthrie, J. and English, L. (1997). "Performance Information and Programme Evaluation in the Australian Public Sector." International Journal of Public Sector Management 10 (3): 154-164.
[45]  Guthrie, J., Olson, O. and Humphrey, C. (1999). "Debating Developments in New Public Financial Management: The Limits of Global Theorising and Some New Ways Forward." Financial Accountability and Management 15 (3 & 4): 209-228.
[46]  Halachmi, A. (2012). "Mandated Performance Measurement: A Help or a Hindrance?" National Productivity Review 18 (2): 59-67.
[47]  Harrison, J., Rouse, P. and De Villiers, C. (2012). "Accountability and Performance Measurement: A Stakeholder Perspective." Journal of CENTRUM Cathedra 5 (2): 243-258.
[48]  Hawke, L. (2012). "Australian Public Sector Performance Management: Success or Stagnation?" International Journal of Productivity and Performance Management 61 (3): 310-328.
[49]  Hines, T., McBride, K., Fearnley, S. and Brandt, R. (2001). "We're Off to See the Wizard: An Evaluation of Directors' and Auditors' Experiences with the Financial Reporting Review Panel." Accounting, Auditing & Accountability Journal 14 (1): 53-84.
[50]  Hussain, M.M. and Hoque, Z. (2002). "Understanding Non-Financial Performance Measurement Practices in Japanese Banks: A New Institutional Sociology Perspective." Accounting, Auditing & Accountability Journal 15 (2): 162-183.
[51]  Johnsen, A. (1999). "Implementation Mode and Local Government Performance Measurement: A Norwegian Experience." Financial Accountability & Management 15 (1): 41-66.
[52]  Julnes, P.D.L. (2006). "Performance Measurement: An Effective Tool for Government Accountability? The Debate Goes On." SAGE Publications (London, Thousand Oaks and New Delhi).
[53]  Kanter, R.M. and Summers, D.V. (1987). Doing Well, While Doing Good: Dilemmas of Performance Measurement in Non-Profit Organizations and the Need for a Multi-Constituency Approach, in W.W. Powell (Ed.) The Non-Profit Sector: A Research Handbook. New Haven: Yale University Press.
[54]  Kloot, L. (1999). "Performance Measurement and Accountability in Victorian Local Government." International Journal of Public Sector Management 12 (7): 565-584.
[55]  Kluvers, R. (2003). "Accountability for Performance in Local Government." Australian Journal of Public Administration 62 (1): 57-69.
[56]  Kurunmaki, L., Lapsley, I. and Melia, K. (2003). "Accountingization v. Legitimation: A Comparative Study of the Use of Accounting Information in Intensive Care." Management Accounting Research 14 (2): 112-139.
[57]  Lapsley, I. and Pallot, J. (2000). "Accounting, Management and Organizational Change: A Comparative Study of Local Government." Management Accounting Research 11 (2): 213-229.
[58]  Lawton, A., McKevitt, D. and Millar, M. (2000). "Developments: Coping with Ambiguity: Reconciling External Legitimacy and Organizational Implementation in Performance Measurement." Public Money and Management 20 (3): 13-20.
[59]  Lee Jr, R.D. and Burns, R.C. (2002). "Performance Measurement in State Budgeting: Advancement and Backsliding from 1990 to 1995." Public Budgeting & Finance 20 (1): 38-54.
[60]  Linna, P., Pekkola, S., Ukko, J. and Melkas, H. (2010). "Defining and Measuring Productivity in the Public Sector: Managerial Perceptions." International Journal of Public Sector Management 23 (5): 479-499.
[61]  McDavid, J.C. and Huse, I. (2012). "Legislator Uses of Public Performance Reports: Findings from a Five-Year Study." American Journal of Evaluation 33 (1): 7-25.
[62]  Melkers, J. and Willoughby, K. (2005). "Models of Performance-Measurement Use in Local Governments: Understanding Budgeting, Communication, and Lasting Effects." Public Administration Review 65 (2): 180-190.
[63]  Meyer, J.W. and Rowan, B. (1977). "Institutionalized Organizations: Formal Structure as Myth and Ceremony." The American Journal of Sociology 83 (2): 340-363.
[64]  Micheli, P. and Neely, A. (2010). "Performance Measurement in the Public Sector in England: Searching for the Golden Thread." Public Administration Review 70 (4): 591-600.
[65]  Mimba, N.P.S.H., van Helden, G.J. and Tillema, S. (2007). "Public Sector Performance Measurement in Developing Countries: A Literature Review and Research Agenda." Journal of Accounting & Organizational Change 3 (3): 192-208.
[66]  Modell, S. (2001). "Performance Measurement and Institutional Processes: A Study of Managerial Responses to Public Sector Reform." Management Accounting Research 12 (4): 437-464.
[67]  Modell, S. (2003). "Goals Versus Institutions: The Development of Performance Measurement in the Swedish University Sector." Management Accounting Research 14 (4): 333-359.
[68]  Modell, S. (2005). "Students as Consumers?: An Institutional Field-Level Analysis of the Construction of Performance Measurement Practices." Accounting, Auditing & Accountability Journal 18 (4): 537-563.
[69]  Moynihan, D.P. and Ingraham, P.W. (2004). "Integrative Leadership in the Public Sector: A Model of Performance-Information Use." Administration & Society 36 (4): 427-453.
[70]  Moynihan, D.P. and Pandey, S.K. (2010). "The Big Question for Performance Management: Why Do Managers Use Performance Information?" Journal of Public Administration Research and Theory 20 (4): 849-866.
[71]  Oliver, C. (1991). "Strategic Responses to Institutional Processes." Academy of Management Review 16 (1): 145-179.
[72]  Orton, J.D. and Weick, K.E. (1990). "Loosely Coupled Systems: A Reconceptualization." Academy of Management Review 15 (2): 203-223.
[73]  Pfeffer, J. (1981). Power in Organizations. Marshfield, MA: Pitman.
[74]  Powell, W.W. (1985). "The Institutionalization of Rational Organizations." Contemporary Sociology 14 (5): 564-566.
[75]  Propper, C. and Wilson, D. (2003). "The Use and Usefulness of Performance Measures in the Public Sector." Oxford Review of Economic Policy 19 (2): 250.
[76]  Rantanen, H., Kulmala, H.I., Lönnqvist, A. and Kujansivu, P. (2007). "Performance Measurement Systems in the Finnish Public Sector." International Journal of Public Sector Management 20 (5): 415-433.
[77]  Raudla, R. (2012). "The Use of Performance Information in Budgetary Decision-Making by Legislators: Is Estonia Any Different?" Public Administration.
[78]  Schacter, M. (2009). "Means... Ends... Indicators: Performance Measurement in the Public Sector."
[79]  Seal, W. (1999). "Accounting and Competitive Tendering in UK Local Government: An Institutionalist Interpretation of the New Public Management." Financial Accountability & Management 15 (3-4): 309-327.
[80]  Seal, W. (2003). "Modernity, Modernization and the Deinstitutionalization of Incremental Budgeting in Local Government." Financial Accountability & Management 19 (2): 93-116.
[81]  Siti-Nabiha, A. and Scapens, R.W. (2005). "Stability and Change: An Institutionalist Study of Management Accounting Change." Accounting, Auditing & Accountability Journal 18 (1): 44-73.
[82]  Soin, K., Seal, W. and Cullen, J. (2002). "ABC and Organizational Change: An Institutional Perspective." Management Accounting Research 13 (2): 249-271.
[83]  Steinberg, H.I. (2009). State and Local Governments' Use of Performance Measures to Improve Service Delivery. AGA CPAG Research Series Report No. 23.
[84]  Torres, L., Pina, V. and Martí, C. (2010). Performance Measurement in Spanish Local Governments: An Empirical Study About the State of the Art. 32nd EGPA Annual Conference, 8-10 September 2010, Toulouse, France.
[85]  Tsamenyi, M., Cullen, J. and González, J. (2006). "Changes in Accounting and Financial Information System in a Spanish Electricity Company: A New Institutional Theory Analysis." Management Accounting Research 17 (4): 409-432.
[86]  Vakkuri, J. (2012). "Interpretive Schemes in Public Sector Performance: Measurement Problems Generating Managerial Action in Finnish Local Government." International Journal of Public Sector Performance Management.
[87]  Vakkuri, J. and Meklin, P. (2006). "Ambiguity in Performance Measurement: A Theoretical Approach to Organisational Uses of Performance Measurement." Financial Accountability & Management 22 (3): 235-250.
[88]  Van Dooren, W. (2005). "What Makes Organisations Measure? Hypotheses on the Causes and Conditions for Performance Measurement." Financial Accountability & Management 21 (3): 363-383.
[89]  Van Thiel, S. and Leeuw, F.L. (2002). "The Performance Paradox in the Public Sector." Public Performance & Management Review: 267-281.
[90]  Wisniewski, M. and Stewart, D. (2004). "Performance Measurement for Stakeholders: The Case of Scottish Local Authorities." International Journal of Public Sector Management 17 (3): 222-233.
[91]  Zimmerman, M.J. (2010). "Intrinsic vs. Extrinsic Value." Stanford Encyclopedia of Philosophy.
 