Complexities in Performance Measurement and the Reaction of Actors: The Case of Tanzania

Mkasiwa T.A.1, Gasper A.F.1

1The Institute of Finance Management, Tanzania

Abstract

The purpose of this paper was to explore the challenges and complexities of performance measures which caused less than desired behavior in the Tanzanian Local Government Authorities (LGAs). Specifically, the paper focuses on performance measures under the Local Government Development Grant (LGDG) system, a performance-based grant system established to link LGAs’ performance to their financing. The research methodology incorporated a review of LGDG documents, field-based interviews, and observation. The results are interpreted using the law of decreasing effectiveness (de Bruijn and Van Helden, 2007) and Oliver’s (1991) strategic responses to institutional pressure. The study provides a more detailed analysis of the law of decreasing effectiveness and contributes by adding complexity as a component among the causes of perverse effects. The paper also explains how acquiescence and manipulation strategies (Oliver, 1991) can occur concurrently.


1. Introduction

[…] There is also the old issue of whether it is justifiable, realistic or even ethical for the so-called experts in rich countries with well-established administrations to ask their counterparts in poorer countries who work in an environment of extreme constraint to undertake theoretical reforms that even the former, with all their advantages, find extremely difficult or impossible […] (Caiden, 1998 p.42).

New Public Management (NPM) is promoted all over the world as a tool for improving efficiency and effectiveness, with explicit and measurable standards being one of its key components (Hood, 1995, Halachmi, 2012). Following this paradigm of managing for results, performance measures have been advocated in various countries at different levels of government operations: local, regional, national, and even supra-national (Van Thiel and Leeuw, 2002). In the United States, for example, in April 1992, the American Society for Public Administration adopted a resolution that endorsed efforts by governments at all levels to develop, experiment with, and adopt performance measures (Nyhan and Marlowe Jr, 1995). Similarly, in 1998, in the Policy Paper for Local Government Reform, the Government of Tanzania expressed its commitment to achieving efficient use of resources for improved service delivery at all levels of government. Consequently, a performance-based Local Government Development Grant (LGDG) was conceptualized and approved by the Tanzanian government in 2004.

Under the NPM-based LGDG system, the extent to which local governments access transfers from central government is conditional upon their overall performance (Frølich, 2008, UNCDF, 2010). The primary objectives of performance-based funding are improved efficiency, accountability and quality (Frølich, 2008). Local governments in many developing nations rely heavily on intergovernmental transfers to obtain the resources necessary to carry out their business (Schroeder, 2001). They are situated at the local level, where people live and where the challenges of development are most keenly felt, thus necessitating devolution from central government (UNCDF, 2010). Devolution of many responsibilities to lower levels of government has generated increased demands to hold government agencies accountable in terms of what they have spent and the results they have generated (Poister and Streib, 1999). UNCDF, the UN’s capital investment agency for the world’s least developed countries, has been at the forefront of the development of innovative practices in intergovernmental fiscal transfers and the capacity development of local governments, through its support for the introduction of performance-based grants in many countries since the early 1990s (UNCDF, 2010).

Performance measures have been introduced in local governments to create incentives for enhanced local government capacity and performance (UNCDF, 2010). They are helpful in achieving specific managerial purposes, such as evaluation, control, budgeting, motivation, promotion, celebration, learning, and improvement (Nyhan and Marlowe Jr, 1995, Behn, 2003). They can also be used as an incentive for output, and to improve transparency, accountability, credibility and legitimacy (Eden and Hyndman, 1999, de Bruijn, 2002, Johnsen, 2005).

Despite their positive impact, results indicate that performance measures remain at an embryonic stage (Nyhan and Marlowe Jr, 1995). In the United States, performance measurement rhetoric has far outdistanced practice; its promise and potential greatly exceed its actual usefulness. It is still the exception rather than the norm, and it has not taken hold in local governments in a meaningful way (Nyhan and Marlowe Jr, 1995, Poister and Streib, 1999). For example, the link between performance measurement and resource allocation decisions is argued not to be straightforward (Caiden, 1998). This experience of rhetoric promoting NPM and performance measurement in the United States is also evident in European countries such as England, France, Denmark and Finland (Frølich, 2008).

The NPM approach has often been criticized for assuming that performance can be quantitatively measured in the same way as is considered possible in the private sector (Nyhan and Marlowe Jr, 1995). It tells “the score” of performance, but not why (Hatry, 2013). Over-reliance on performance measures is argued to stifle innovation, bring negative unintended consequences, and cause gaming of performance measures, sub-optimization, tunnel vision, myopia, measure fixation, a ratchet effect, ossification and the discrediting of performance indicators (Leeuw, 1996, Kloot and Martin, 2000, Propper and Wilson, 2003, Johnsen, 2005, Powell et al., 2012). Furthermore, performance measures are challenged by complexities such as the relationship between input and output, quantity and quality, the multiple values of products in the public sector, and a dynamic environment (Lapsley, 1999, Olson et al., 2001, De Bruijn, 2007). In the public sector there is no comparative information with which to evaluate performance, and no immediate substitute agents to which the government can turn if it is dissatisfied with an agent's performance, except at a very high cost (Van Thiel and Leeuw, 2002).

When more coercive pressures for improving performance are exerted by funding bodies, management of the focal organization can be expected to pursue the implementation of performance measures throughout the organizational hierarchy relatively forcefully (Brignall and Modell, 2000). As funding bodies try to obtain greater effort and better public services by implementing performance measurement, the responses may be better services, but may also be other, less desirable, behavior (Propper and Wilson, 2003). In the Tanzanian LGAs, Gasper and Mkasiwa (2013) found LGAs’ practitioners producing and manipulating evidence in order to meet performance measures. Authors either ignore how performance indicators affect organizational behavior or implicitly assume that organizations are rational (Frølich, 2008). Drawing on the law of decreasing effectiveness (de Bruijn and Van Helden, 2007), this research explores the challenges and complexities of performance measures which could cause less than desired behavior in the Tanzanian LGAs. The key research question addressed was:

How do the LGAs’ actors respond to the challenges and complexities of performance measurement in the Tanzanian LGAs?

The following section outlines the prior research into performance measures and the law of decreasing effectiveness. The remaining sections present the methodology of the research, findings and conclusions.

2. Prior Research

Performance measures in the public sector, as one of the areas of NPM, have been an important topic in accounting research since the 1990s (Adcroft and Willis, 2005, Lapsley, 2008). Performance measures may be derived from an in-depth evaluation of an organization’s processes and outcomes, typically involving a site visit and large amounts of documentation, or derived from administrative data (Propper and Wilson, 2003). The past 25 years have witnessed a proliferation of performance measures in public management and growth in the accompanying performance measurement industry (Johnsen, 2005). Performance measures are significant in public services because of the considerable effort expended on their development by governments, audit and oversight bodies, and researchers (Lapsley, 2008). Explicit, formal, measurable standards and measures of performance and success are currently promoted all over the world as an important tool to improve organizational efficiency and effectiveness (Hood, 1995, Halachmi, 2012). They are an incentive for productivity, contribute to the legitimacy of an organization, stimulate learning processes, and generate information that may enhance an organization’s intelligence (de Bruijn and Van Helden, 2007).

Some of the studies into performance measures have shed light on the characteristics of good performance measures. For example, Nyhan and Marlowe Jr (1995), and Leeuw (1996) argue that when applying performance measures, good performance indicators: should be acceptable to those being assessed and those undertaking assessment; should be feasible in the context of validity, reliability, and consistency in data collection; and, should be reliable in the context of minimal measurement error or the extent to which findings are reproducible should they be collected again by another organization. Melkers and Willoughby (2005) argued for consistent, active and integrated measures. Summary measures are preferred to detailed measures (Propper and Wilson, 2003) and a balance of financial and non-financial measures, internal and external measures and an expansion of the number of performance measures on the one hand, with a reduction of the measure pressure on the other hand, is advocated (Brignall and Modell, 2000, Van Thiel and Leeuw, 2002). Multiple measures should be established for multiple users, with no “one size that fits all” (Eden and Hyndman, 1999, Propper and Wilson, 2003), and only measures with “little or no chance of inducing unintended adverse consequences” should be used for accountability purposes (Powell et al., 2012). Performance measures should be comprehensive, correct and clear to minimize the dysfunctional effects and maximize the functional effects (Joyce, 1993, Leeuw, 1996, Van Thiel and Leeuw, 2002, Allen et al., 2004). Performance measures are also advised to be tightly coupled to organizational strategy and should involve users in their establishment (Hood, 1995, Lawton et al., 2000, Lapsley, 2008). The performance dilemma has not been resolved, despite considerable effort to develop good performance measures (Lapsley, 2008).

For example, Therkildsen (2000) found questionable assumptions about NPM-inspired measures in Tanzania, and Mserembo and Hopper (2004) argued for simpler measures in developing countries. Similar experiences have been observed in the USA. Lee Jr and Burns (2002) discovered backsliding in the use of performance measures in a number of states; population size was the only state characteristic related to the use of performance measures. Melkers and Willoughby (2005) found pervasive use of performance measures in US local government, although survey respondents were less enthusiastic about measurement effectiveness. Groot (1999) argues that cruder, oversimplified measures were most effective for quickly and drastically economizing on costs, while output measures stimulated output-maximizing behaviour, which outweighed short-term attempts to economize; some measures had an impact on organizational conduct while others did not, the difference being related to the degree to which measures coincided with professionals’ opinions about good practice. Nyhan and Marlowe Jr (1995) found performance measures unrelated to program objectives or agency missions. Based on six agencies, Joyce (1993) observed that it was extremely difficult for agencies to link their performance measures and budget processes in a meaningful way: none of the agencies used performance measures to make decisions about the level of resources that a programme obtained in the budget process, and performance measures were used more extensively in budget execution than in budget preparation. Caiden (1998) discussed the nature of performance measures, their purpose, difficulties in their implementation, and issues of institutional design and feasibility. Based on a US review, Caiden (1998) argued that the link between performance measures and resource allocation decisions was not straightforward and that human services are not adapted to quantifiable measures, emphasizing the need for every country to determine its own uses of performance measures within its own political and administrative institutions. Performance measures reflecting a more pronounced citizen or user perspective, such as customer satisfaction indicators, have mainly been used for external reporting and have had little impact on internal control practices within state agencies (Modell and Wiesel, 2008).

Several of the challenges and complexities of performance measures have been addressed in the literature, such as: difficulties over the availability of appropriate, robust and objective measures (Lapsley, 2008), difficulties of developing good performance measures (Leeuw, 1996), the problem of accurately measuring performance (Frølich, 2008), problems concerning the content, position, and amount of measures (Van Thiel and Leeuw, 2002), and the proliferation and non-correlation of performance measures (Leeuw, 1996). Other challenges and complexities relate to the public sector environment, such as multiple principals, goals, tasks, and vague goals (Propper and Wilson, 2003). They also include divergent perspectives (different audiences require different information), unclear mission and objectives (a fact of governmental life), multiple and contradictory organizational, program, system goals, monitoring vs. evaluation informational needs, lack of consideration of the full range of outputs and outcomes, and measuring customer satisfaction in a regulatory environment (Kravchuk and Schack, 1996).

Other challenges have been identified by the steering organs of the LGDG. These include: a focus on “process” and “intermediate output” indicators, which cannot directly measure service-delivery outcomes (such as poverty reduction); a range of external factors that can dilute the impact of the LGDG and impede its implementation; weak management capacities at the central level, which result in delays and uncertainties; a lack of political will to implement the consequences of poor LGA performance; and pitfalls and inconsistencies, such as selecting the wrong indicators, which can be unfair (when they measure actions beyond the control of LGAs) or lead to perverse outcomes (when they encourage LGAs to focus on certain things but not others) (UNCDF, 2010).

Because of the complexity of performance measurement, individuals will respond to it in the way that maximizes their own utility or benefit (Propper and Wilson, 2003). Performance measures have resulted in misinterpretation and misrepresentation (Propper and Wilson, 2003). In health care in the United States, performance measures have caused inappropriate care and have decreased provider focus on patient concerns and services (Powell et al., 2012). In the UK, performance measures have caused massaging of truancy rates and waiting lists (Propper and Wilson, 2003). In the Tanzanian LGAs, performance measures have caused manipulation of evidence (Gasper and Mkasiwa, 2013).

Performance measurement, particularly performance measures, has been extensively investigated in the public sector in developed countries. Authors have addressed various issues of interest, such as the characteristics of good performance measures and the positive/negative/counter-productive consequences of performance measures. From another perspective, authors have addressed the challenges/complexities of performance measures.

Drawing on the law of decreasing effectiveness and Oliver’s (1991) strategic responses to institutional pressure, this research contributes to these areas by exploring the challenges/complexities of performance measures, and how these result in negative responses/counter-productive impacts on LGAs (Oliver, 1991, Norreklit, 2000, de Bruijn and Van Helden, 2007).

The law of decreasing effectiveness states that perverse effects manifest themselves when severe consequences for the actors involved (e.g. managers/LGA officials) are attached to quantified performance indicators (de Bruijn and Van Helden, 2007). They argue:

[…] the more severe the consequences of performance indicators, the higher the desired impact of the system, but also the higher the risk of perverse effects. When the latter outweigh the former, the effectiveness of the PM (Performance Measurement) system will be challenged. Consequently, the stronger the steering intentions of a PM system – when appraisal and rewarding rather than informing and learning are at stake – the less effective it might be, which refers to the so called Law of Decreasing Effectiveness […] (p. 407).

de Bruijn and Van Helden (2007) argue that these severe consequences include: naming and shaming when figures for productivity and quality are published and the organizations involved are ranked, which makes it visible to everybody which organizations perform best and which perform worst; financial consequences when major financial consequences are attached to these figures, i.e. when additional funds are allocated to those with good scores, whereas those with bad scores lose funds; and managerial attention when a poorly performing organization or organizational unit has to give up autonomy, receives more attention and faces interventions by the managerial echelon, while a well performing organization or organizational unit is allowed greater degrees of freedom.
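Read schematically, the trade-off the law describes can be pictured as follows. This is our illustrative sketch, not a formulation given by de Bruijn and Van Helden (2007); the severity variable s and the functions D and P are assumptions introduced here for exposition only.

```latex
% Illustrative sketch only: s, D(s) and P(s) are our assumed notation,
% not quantities defined by de Bruijn and Van Helden (2007).
% s    : severity of the consequences attached to the indicators
% D(s) : desired impact of the PM system, increasing in s
% P(s) : perverse effects, also increasing in s
\[
E(s) = D(s) - P(s), \qquad D'(s) > 0, \quad P'(s) > 0.
\]
```

On this reading, the effectiveness E(s) of the PM system is challenged once P(s) exceeds D(s); if perverse effects grow faster than the desired impact as consequences become more severe, effectiveness eventually declines in s, which is the “decreasing effectiveness” the law refers to.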

Oliver’s (1991) framework of the range of responses available to actors and organizations exposed to institutional pressures, and the circumstances under which such responses are likely to occur, is particularly relevant when interpreting our empirical findings. Two strategic responses were of particular relevance to this study: acquiescence and manipulation. Acquiescence involves conscious or unconscious compliance with institutional pressures. Manipulation refers to strategies of co-opting, influencing or controlling the institutional pressures themselves. Oliver (1991) also proposes predictive factors that influence the nature of a strategic response, three of which are of particular relevance to this study: cause, content and context. ‘Cause’ refers to the rationale or intended objectives that form the basis of institutional pressure. ‘Content’ means the nature of institutional pressures, which may be either consistent with or constraining of the organization’s existing goals and policies. ‘Context’ refers to the environmental situation within which institutional pressures are exerted on organizations.

The table below provides a summary of acquiescence and manipulation strategies and their predictive factors.

Table. Institutional antecedents and predicted strategic responses

3. Methodology

The empirical research was carried out in two Tanzanian LGAs, namely LGA-A and LGA-B. LGA-A was among the best performers, while LGA-B was the poorest performer in the context of the LGDG assessment. The collection of field data mainly took place in 2012. Data collection primarily comprised semi-structured interviews, observation, and document analysis. A total of 15 interviews, lasting between 45 minutes and 3 hours, were conducted.

Interviewees were selected based on their involvement in the LGDG assessment exercise. Each LGA had a team of selected individuals who would work with the assessors; these individuals were not appointed on a permanent basis. The team mostly consisted of heads of departments and other experts/experienced staff from different areas, such as financial management, project implementation, monitoring and evaluation, and procurement. Two treasurers, two internal auditors, two council clerks, and six heads of departments/other staff involved in the LGDG were interviewed. As described below, PMO-RALG employs independent consultants (assessors) to assess LGAs; the study included three assessors to shed further light on the challenges/complexities of performance measures and their adverse impacts on LGAs.

The authors had opportunities to observe the LGDG assessment exercise, and to observe LGAs’ officials’ practices when visiting the LGAs for interviews and document collection. This allowed the authors to complement the interviews and to compare observations with what was obtained during them. Moreover, the initial findings were later presented to some of the interviewees for confirmation.

The LGDG assessment manual and LGDG reports provided some insights into the views of the reformers and assessors underpinning the LGDG system. To enhance understanding, the presentation of the findings is preceded by an overview of the LGDG system and structure below.

3.1. LGDG System

The LGDG system was introduced in Tanzania by international steering bodies, namely the United Nations Capital Development Fund (UNCDF) and the World Bank. Coercive pressure is exerted on the Tanzanian LGAs by these steering bodies, upon which they are financially dependent (DiMaggio and Powell, 1983). Steering intention and exhortation are reflected in the following quotes:

[…] UNCDF has piloted local government performance-based grant systems (PBGSs) that are now being adopted in a variety of countries. This is the subject of a forthcoming UNCDF publication that shares the considerable knowledge and extensive experience that has now been accumulated by UNCDF in designing, piloting, scaling up and implementing PBGSs […] (Jesper Steffensen, UNCDF report).

[…] “Performance-Based Grant Systems – Concept and International Experience” is the result of experiences from design and implementation of these new innovative grant systems by UNCDF, often in collaboration with the World Bank, the Asian Development Bank, other development partners and governments. It is the fruit of over a decade of experience and I trust it will prove useful to both governments and development practitioners engaged in the challenge of meeting the Millennium Development Goals […] (David Morrison, UNCDF Executive Secretary, UNCDF report).

The LGDG system has been implemented by the Government of Tanzania through the Prime Minister’s Office, Regional Administration and Local Government (PMO-RALG). It provides discretionary and sector-specific development funds to the Tanzanian LGAs. It consists of Council Development Grants (CDG), previously known as Capital Development Grants, Capacity Building Grants (CBG), and sector-specific grants.

The system is financed by the Government of the United Republic of Tanzania, together with its Development Partners and the World Bank. The funding bodies wish to establish a link between the financing of LGAs and their performance in key areas of financial management, participatory planning, pro-poor budgeting, budget execution and the broader areas of local governance, including gender, transparency and accountability, Council functional processes and the involvement of Lower Local Governments (LLGs) and communities at large.

In the Policy Paper for Local Government Reform (1998), the Government of Tanzania expressed its commitment to the reform of the intergovernmental transfer system, and in 2004 the LGDG system was approved by the government. The LGDG system has been implemented in two phases: the first covered 2004-2008 and the second 2008-2013. There are indicators for performance measures and minimum conditions in nine areas. The performance measure scores for the functional areas are: financial management (15), local revenue mobilization (10), development planning and budgeting (10), transparency and accountability (20), interaction between Higher Local Government and Lower Local Government (10), human resource development (10), procurement (10), project implementation (10) and council functional processes (5), giving a total possible score of 100.
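This scoring structure lends itself to a compact summary. The following minimal sketch is illustrative only: the dictionary name, the abbreviated area keys and the total check are ours, while the weights come from the text above.

```python
# 2011/12 LGDG functional areas and their maximum scores, as listed above.
# Dictionary name, abbreviated keys and the total check are illustrative.
LGDG_WEIGHTS = {
    "financial management": 15,
    "local revenue mobilization": 10,
    "development planning and budgeting": 10,
    "transparency and accountability": 20,
    "HLG-LLG interaction": 10,
    "human resource development": 10,
    "procurement": 10,
    "project implementation": 10,
    "council functional processes": 5,
}

assert sum(LGDG_WEIGHTS.values()) == 100  # total possible score
```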

The scores for each functional area have been subject to change over the years. For example, the score for financial management was increased from 10% in 2006/07 to 15% in 2011/12, while the total score for council functional processes was decreased from 10% in 2006/07 to 5% in 2011/12. Some functional areas have also been renamed; for example, the “fiscal capacity” functional area was changed to “local revenue mobilization”.

Each LGA receives a share of the CDG allocation, ranging from 25% to 100%, depending on its assessment performance. LGAs classified as “Very Good” performers receive 100% of the allocation, those classified as “Good” receive 80%, while those classified as “Poor” receive 50% of the allocation. LGAs which fail to meet the minimum conditions receive 25% of the LGDG allocation, subject to strict oversight from PMO-RALG.
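The allocation rule described above can be sketched as a simple lookup. The function below is a hypothetical illustration (the name cdg_share and its signature are ours); only the category labels and percentages are taken from the text.

```python
def cdg_share(category: str, meets_minimum_conditions: bool) -> float:
    """Fraction of the CDG allocation an LGA receives, per the rule above."""
    if not meets_minimum_conditions:
        return 0.25  # paid subject to strict oversight from PMO-RALG
    shares = {"Very Good": 1.00, "Good": 0.80, "Poor": 0.50}
    return shares[category]

# Example: a "Good" performer that meets the minimum conditions receives 80%.
assert cdg_share("Good", True) == 0.80
```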

4. Findings

4.1. Challenges/Complexities in LGDG Performance Measures

Some of the performance measures were perceived to be unfair and outside the control of the LGAs. For example, in the human resources functional area, it was not within an LGA’s control to substantively fill the vacancies of key staff positions; LGAs depended on the central government to fill these vacancies. Punishment for not filling the vacancies was therefore not perceived to be appropriate, as reflected in the following quote:

[…] When they come and ask about vacancies… I don’t understand them. It was for them to give us internal auditor, Treasurer, and others. If they don’t give us marks because of this… then I don’t understand them […] (HoD, LGA-B).

Performance measures used under the LGDG were not perceived as comprehensive by LGAs’ officials. For example, timely submission of reports was not perceived as comprehensive enough to measure an LGA’s performance in preparing financial reports; the contents of the reports were perceived as equally important. Similarly, the measure “% of own local revenue collected against planned, excluding compensation” was argued to be unsatisfactory, given the probability of under-budgeting, as reflected in the following quotes:

[…] I think these performance measures are useful….however, assessors are not assessing the performance measures in detail. For example, they can argue that they want to assess whether the reports are submitted timely. But what are the contents inside those reports? They are not looking that far. It is enough for them that you have submitted the reports. They should look into that far to know what is inside those reports. Or, they can assess whether you have meet your revenue collection targets. But they cannot investigate in detail for example, was there under budgeting? They have to look into that far. I think there is a need to look that far. To be submitted timely is not sufficient for assessing the reports. Look…, what is inside those reports? […] (HoD, LGA-A)

[…] LGAs allocate funds to O&M (Operation and Maintenance) but it cannot be ascertained if the budget allocations were executed … future assessments should test the execution of Operation and Maintenance i.e. beyond allocation of funds in the budget […] (LGDG Report 2011/12, p. 36).

[…] They come this year, and they are asking about what happened in the year 2007/08? Why can’t they look things of recent? Or revenue has been dropped from a certain percentage to another. They do not look why revenue has dropped. Are there donors who have withdrawn funds? They do not look into that far […] (HoD, LGA-B).

[…] a good number of LGAs registered abnormal budget outturns over 100%. There is need to establish whether it was a result of poor budgeting or seasonal bumper collections. This was the same case with increases in local revenue from FY 2007/08 […] (LGDG Report 2010/11, p. 29).

[…] Realization of more than 100% of the budget indicating unrealistic budgeting […] (LGDG Report 2010/11, p. 30).

Some of the performance measures did not make sense to LGAs’ officials. According to them, the measures did not reflect their actual practices. Some of the document-based evidence was perceived as illogical: it was perceived to be easily manipulated and not to reflect the extent of actual performance, as reflected in the following quote:

[…] eeeer… let me come to my first point which I told you that I am not going far than that. You know, when you are doing an assessment you have to look if the particular activity has been performed or not. But if you assess whether a particular thing exist or not…you know… there are two different things. In Tanzania we are assessing if something exist and not something has been performed […] (HoD, LGA-B).

For some areas, performance measures were perceived to be overloaded: the content of a measure did not correspond to the number of marks/scores attached to it, and therefore to its weight in decision making. Establishing too many indicators was argued to be caused by diverging policy opinions about what was important to measure, which may give rise to the problem of interpreting a vast amount of information while each performance indicator makes only a limited contribution to decision-making (Frølich, 2008), as reflected in the following quotes:

[…] The assessment results disclosed that majority (61%) of the LGAs did not undertake this critical analysis and only 52(39%) made a fair attempt. In most LGAs there were simply statements in a paragraph about these critical issues. Socio-economic profiles were also not analyzed and included in the respective MTEF Plans/budgets. This is an indication of inadequate capacity on the part of CMTs in carrying out these laborious analytical processes. On the side of the Assessment Manual, this indicator was too much loaded and congested yet awarded only 1 mark. It should be noted that compliance to MKUKUTA priorities is also assessed under this same indicator […] (LGDG Report 2011/12, p. 22).

[…] the manual provides for only 1 mark for analysis of cross cutting issues and provisions for MKUKUTA yet the indicators are too many […] (LGDG Report 2011/12, p. 36).

The measures of the LGDG were not perceived to be objective: it was considered difficult for two different assessors to come up with the same results/marks/conclusions. Performance measures were therefore perceived as unreliable (Nyhan and Marlowe Jr, 1995, Leeuw, 1996), as reflected in the following quote:

[…] You know when there are two people doing it, there must be differences. Even if I give you an examination and that examination is not one plus one… if there are two different markers the results won’t be the same. Sometimes we afraid when we are assessing ourselves… However, when they come…you obtain huge marks […] (HoD, LGA-A).

Some LGAs’ functional processes, such as those of the education sector, were not included in the LGDG system. For officials in these areas, the LGDG system was neither of interest nor important, as reflected in the following quotes:

[…] I cannot see performance measures related to education sector. I mean, there is no any performance measure which measure the performance of the education sector […] (HoD, LGA-B).

[…] No, it is for those who are involved in the LGDG. There are no measures which can reflect the extent of my performance. It is not for all of us […] (HoD, LGA-B)

[…] I think they should look at my work plan. When they come here they should look at what I have planned to do…They should look to assess whether or not I have actually done as per my work plan […] (HoD, LGA-A).

However, for officials whose functional processes were included in the LGDG performance structure, such as procurement, performance measures were perceived as ordinary, measuring the routine activities of their functional areas, as reflected in the following quote:

[…] All the performance measures relate to my job. Activities are part of my job description. If I am being assessed or not, I have to work on them. Because I have to arrange board meetings, I had to take minutes, I have to prepare quarterly procurement reports and submit them to relevant authorities… Then these are issues which I have to work on them whether there is an assessment or not […] (HoD, LGA-A).

The LGDG system as a whole was perceived to be based on “paper work”. Confidence in the results of the LGDG system was low, and the system was perceived as unable to accurately reflect its desired impact, as reflected in the following quotes:

[…] It is also important to note that the results are based more on paper work hence the need to further track the impact of this performance on realizing the objectives of decentralization […] (LGDG Report 2011/12, p. 34).

[…] It is also important to note that the results are based more on paper work hence the need to further track the impact of this performance on realizing the objectives of decentralization by conducting an independent value for money audit to complement the assessment findings […] (LGDG Report 2010/11, p.29).

4.2. Challenges/Complexities Facing LGAs during the Implementation of the LGDG

LGAs’ stakeholders, such as councilors, were important actors in enhancing LGAs’ performance. However, LGAs’ officials perceived councilors to be ignorant of important aspects of performance, especially those related to the technical aspects of accounting. For example, councilors did not know the meaning and causes of an adverse audit opinion, which made them raise arguments over issues unrelated to the subject matter, as reflected in the following quote:

[…] They don’t understand the meaning of assessment. I think if councilors could have known the meaning of assessment it could have help. When they hear that we have failed to acquire grants they think it is Director’s fault. That is what they know. That is the problem. You cannot believe this but they even don’t know the meaning of adverse reports. That is why when we are in a meeting with them; you will hear “these things cause adverse reports…” […] (HoD-B).

Funding uncertainty was a main challenge facing LGAs’ officials in meeting performance measures. LGAs were supposed to have sufficient funds to operate at a level that met the performance indicators. In practice, funding from the central government was released late, or was inadequate relative to the budget. Moreover, when an LGA failed to obtain a grant in one year, it was difficult for it to obtain the grant in the following year, because the LGA would not have had sufficient funds for its operations, as reflected in the following quote:

[…] Under the LGCDG system, late release of funds from the Centre, coupled with unclear instructions on the utilization of funds to LGAs delayed fund utilization and slowed down implementation rates of projects. Both CDG and CBG should be released on time to enable the LGA to implement their plans on schedule. Funds should only be released after LGAs have met the requirements instead of the other way round […] (LGDG Report 2006/07, p. 28).

Performance measures should be objective. However, LGAs’ officials perceived the assessment exercise itself to be subjective, because the assessors were human beings, as reflected in the following quote:

[…] Assessment exercise is a human being exercise. For example, I don’t understand your today’s mood, how you have slept last night and how I have slept last night. I might be in the mood, coming here and respond well to your questions, but I don’t know how you have slept last night. At the end of the day, even if all things are ok… I do not say that assessors do not follow all the guidelines they are supposed to have, but there is humanity in it […] (HoD, LGA-A).

The consequences of performance measures were perceived to be unfair. When an LGA failed to meet a certain performance measure, the consequence was that it would not obtain a grant from the central government. Grants were not used by the LGA for its own sake only: when the LGA failed to obtain a grant, it was the citizens who suffered, because the construction of roads, schools and other development projects would cease for lack of funds. The punishment was therefore perceived as inappropriate, as reflected in the following quote:

[…] This system is not good to be honest. Citizens are the ones who suffer. I would suggest another alternative punishment for the one who failed to meet performance measures. Grants help LGA a lot. Those who caused the failure are not the whole community. It is few people in the LGA. There should be an alternative punishment […] (HoD, LGA-B).

There was poor record keeping in the LGAs. It was difficult to retrieve documents which were needed as evidence for the performance measures, as reflected in the following quote:

[…] Records keeping in most LGAs were generally weak. Retrieval of documents was in most cases cumbersome with some documents not being found at all. LGAs should improve on records keeping for ease of accessibility of records […] (LGDG Report 2006/07, p. 28).

The performance measurement exercise was interfered with by preparation for the LAAC (Local Authorities Accounts Committee), which caused stress to LGA officials. The exercise was also carried out outside the planning and budgeting cycles of the LGAs, as reflected in the following quotes:

[…] There is need to harmonize the assessment period with the LAAC period so that LGA officials are not stressed as they are being assessed. LAAC time table should take into consideration the assessment time table first […] (LGDG Report 2007/08, p. 33).

[…] The assessment exercise is carried outside the planning and budgeting cycles of LGAs. The assessment exercise should be concluded before December so as to fit within the planning and budgeting cycle of LGAs […] (LGDG Report 2010/11, p. 31).

5. Responses to Challenges/Complexities and Discussion

The main response by LGAs’ actors was mocking/imitating the performance measurement exercise. In addition, in order to cope with the complexities of performance measures, strategies such as data manipulation and game playing have emerged in public sector organizations (de Bruijn, 2002, Chang, 2006, Lapsley, 2008).

Mocking (imitating) the performance measurement exercise is one of the strategies of complying with institutional processes (Oliver, 1991). Imitation in this study took the form of mocking the performance measurement exercise through internal assessment: the LGAs’ internal assessments were initiated as mock runs of the LGDG exercise:

[…] PMO-RALG and respective LGAs should initiate the process of internal assessment for the LGAs in order to prepare adequately for the national assessment and copies of the results circulated […] (LGDG Report 2006/07, p. 27).

Deconstructive efforts may elicit mocking (Amernic, 1996). In order to adhere to the performance measurement guidelines, some of the LGAs’ information was supposed to be posted on public and council notice boards. Information such as tender awards, annual budgets, plans, audited accounts, performance measures and minimum conditions, and indicative planning figures (IPFs) was supposed to be posted on Higher Local Government and Lower Local Government notice boards. This was impracticable because the size of the notice boards could not accommodate the amount of information to be posted, and it was not practicable for a notice board to hold that information for a long period, as reflected in the following quotes:

[…] they are saying that the documents should be posted on notice board. It is not easy because… the documents are usually destroyed by citizens. How can a paper stay on a notice board for 8 months without being destroyed? […] (HoD, LGA-A).

[…] Posting of Tender awards, IPFs, Annual Approved projects was not done across most of the LGAs as required. LGAs should increase their transparency and accountability. All weather notice boards should be strategically placed […] (LGDG Report 2006/07, p. 28).

[…] Internal assessment as a way of advance preparation for the annual assessment by each council should be made part of the annual assessment system. The quality of internal assessment reports requires improvement through access and internalization of the Assessment Manual […] (LGDG Report 2007/08, p. 32).

In practice, actors would print the documents and post them on the notice board only one or two days before the arrival of the assessors, as reflected in the following quote:

[…] Posting of relevant information was done late targeting the assessment exercise as some papers were still new. Posting and presentation of information should be done in time and in a user-friendly manner for public scrutiny. […] (LGDG Report 2011/12, p. 37).

Imitation is more likely in contexts of uncertainty (DiMaggio and Powell, 1983, Oliver, 1991, Vakkuri and Meklin, 2006). Mocking the performance measurement exercise was the process through which LGAs’ actors prepared themselves in the face of uncertainties, such as funding, as reflected in the following quote:

[…] The main problem facing us is for the Treasurer to meet expenditures which we are talking about. That is the main problem which causes us not meeting performance measures. Like when we were talking about grants. You have not given us grants in the previous year. What do you expect will happen this year? That is the main problem facing us. […] (HoD, LGA-A).

Information which provided evidence of meeting performance measures was manipulated. This was especially the case for council functional processes relating to evidence that meetings had been conducted. Council clerks had to make sure that they constructed the evidence to reflect the existence of the meeting. Minutes were constructed even if the meetings were not conducted, as reflected in the following quote:

[…] When you delay to conduct meetings, you can backdate… It is just as I have told you before. We had genuine reasons for the delay. But Assessors won’t understand. We have backdating in order to meet performance measures. But what is the date by the way? I know that date does not carry the value for money. Even if I meet a date... so what? That is why we are mixing lies and truth and we give them. We as Council clerks have a lot to do. We have to do record keeping and construct discussions. They do not meet, then you have to construct discussion […] (Council Clerk-LGA-A).

Manipulation is a purposeful and opportunistic attempt to co-opt, influence, or control institutional pressures and evaluations (Oliver, 1991). It occurs when the managers of an organization intentionally misstate information to represent the organization’s performance favorably (Trussel, 2003). Organizational actors engage in manipulation when its benefits exceed its costs (Dye, 2002). It can be conducted by non-governmental organizations for the purpose of receiving funds from donors (Trussel, 2003), or by local governments for the purpose of receiving funds from central government (Gasper and Mkasiwa, 2013). Because perverse learning is a prerequisite condition of manipulation (Van Thiel and Leeuw, 2002, Mkasiwa, 2011), organizational actors must learn which aspects to manipulate.

Manipulation was important because organizational actors who departed substantially from prior practice often had to intervene pre-emptively in the cultural environment in order to develop bases of support specifically tailored to their distinctive needs (Suchman, 1995). In addition, organizations that develop their own indicators have more opportunities to manipulate information to their benefit (Van Thiel and Leeuw, 2002). Management may also consciously manipulate the information provided to certain groups of stakeholders, particularly if these exert more limited institutional pressures on the organization (Brignall and Modell, 2000). It was difficult or impossible, however, to manipulate evidence which depended on external parties, such as the “clean audit report” performance measure: assessors were supposed to witness the Controller and Auditor General’s report, which could not be manipulated. Similarly, another external financial management performance measure, “evidence that queries raised & recommendations made in External Audit reports have been fully acted upon”, could not easily be manipulated, as reflected in the following quote:

[…] They can’t manipulate audit opinion. The evidence is there, you can see it. It is not the same as minutes. They can cook minutes but not the audit opinion. They can come with documents which are very clean and you can guess that they were printed just in a short while […] (Assessor).

All these manipulation strategies emerged because of the consequences of the performance measurement exercise. When a council failed to meet the performance measures and minimum conditions, a penalty was applied to its grant allocation: a severe financial consequence of not meeting the LGDG performance measures (de Bruijn and Van Helden, 2007). LGAs’ officials responded with a “whatever it takes” approach, adopting strategies such as manipulating assessors and evidence in order to meet performance measures, as reflected in the following quote:

[…] There is no way. We have to make sure that the evidence is there… we have to do anything to make sure that we have met the performance measures […] (HoD, LGA-B).

The severe financial consequences of failing to meet performance measures would be expected to achieve a high desired impact for the LGDG system (de Bruijn and Van Helden, 2007). It would therefore be expected that, by linking the motivations of achieving legitimacy and high financial rewards, there would be increased efficiency and effectiveness in organizations. However, this was not accompanied by managerial attention. In explaining managerial attention, de Bruijn and Van Helden (2007) argue that a poorly performing organization or organizational unit has to give up autonomy and will receive more attention and face interventions by the managerial echelon, while a well performing organization or organizational unit will be allowed greater degrees of freedom. This was possible in some sectors, such as health; in other areas, the level of managerial attention in the Tanzanian LGAs was low, as reflected in the following quote:

[…] Specific sanctions should be put in place on Accounting Officers of LGAs which do not comply with sharing guidelines while PMO-RALG should be vigilant in enforcing the sanctions […] (LGDG Report 2011/12, p. 37).

Moreover, because of the challenges and complexities of performance measures, perverse effects outweighed the desired impact and therefore challenged the effectiveness of the LGDG performance measures (de Bruijn and Van Helden, 2007), as reflected in the following quotes:

[…] It is also important to note that the results are based more on paper work hence the need to further track the impact of this performance on realizing the objectives of decentralization […] (LGDG Report 2011/12, p. 34).

[…] It is also important to note that the results are based more on paper work hence the need to further track the impact of this performance on realizing the objectives of decentralization by conducting an independent value for money audit to complement the assessment findings […] (LGDG Report 2010/11, p. 29).

Consequently, the steering bodies’ (UNCDF, World Bank) intentions for the LGDG performance measures – appraisal and rewarding rather than informing and learning – were at risk: the stronger the steering intentions, the less effective the LGDG system became, in line with the so-called Law of Decreasing Effectiveness (de Bruijn and Van Helden, 2007). Steering intention and exhortation are reflected in the following quote:

[…] “Performance-Based Grant Systems – Concept and International Experience” is the result of experiences from design and implementation of these new innovative grant systems by UNCDF, often in collaboration with the World Bank, the Asian Development Bank, other development partners and governments. It is the fruit of over a decade of experience and I trust it will prove useful to both governments and development practitioners engaged in the challenge of meeting the Millennium Development Goals […] (David Morrison, UNCDF Executive Secretary, UNCDF report).

6. Conclusion

The paper explores the challenges and complexities of performance measures under the LGDG system and how these have resulted in counter-productive responses in the Tanzanian LGAs. The challenges and complexities of performance measures included measures that were unfair and outside LGAs’ control, too many performance measures, measures which were not comprehensive, measures which did not make sense to officials, subjective measures, subjectivity of the assessors, funding uncertainties, and unfair consequences of the assessment results. These challenges and complexities resulted in responses in the form of mocking the performance measurement exercise, manipulation and gaming strategies. This phenomenon is explained using de Bruijn and Van Helden’s (2007) law of decreasing effectiveness and Oliver’s (1991) strategic responses to institutional pressure. Using the Tanzanian LGAs, the study provides an illustration of how the law of decreasing effectiveness occurred and how the acquiescence and manipulation strategies (Oliver, 1991) occurred together. It thus contributes to the law of decreasing effectiveness by adding complexity as a component among the causes of perverse effects.

Moreover, this article has proposed that the rules of the institutional environment may be neither completely resisted nor completely complied with: organizations may comply with some elements of those rules and resist others, depending on their contextual circumstances.

References

[1]  Adcroft, A. and Willis, R. (2005). "The (Un)Intended Outcome of Public Sector Performance Measurement." International Journal of Public Sector Management 18 (5): 386-400.
[2]  Allen, R., Schiavo-Campo, S. and Garrity, T.C. (2004). Assessing and Reforming Public Financial Management: A New Approach. World Bank Publications.
[3]  Amernic, J.H. (1996). "The Rhetoric Versus the Reality, or Is the Reality 'Mere' Rhetoric? A Case Study of Public Accounting Firms' Responses to a Company's Invitation for Alternative Opinions on an Accounting Matter." Critical Perspectives on Accounting 7 (1): 57-75.
[4]  Behn, R.D. (2003). "Why Measure Performance? Different Purposes Require Different Measures." Public Administration Review 63 (5): 586-606.
[5]  Brignall, S. and Modell, S. (2000). "An Institutional Perspective on Performance Measurement and Management in the 'New Public Sector'." Management Accounting Research 11 (3): 281-306.
[6]  Caiden, N. (1998). "A New Generation of Budget Reform." In B.G. Peters and D.J. Savoie (eds.), Taking Stock: Assessing Public Sector Reforms. Canada: Canadian Centre for Management Development and McGill-Queen's University Press: 252-284.
[7]  Chang, L. (2006). "Managerial Responses to Externally Imposed Performance Measurement in the NHS: An Institutional Theory Perspective." Financial Accountability & Management 22 (1): 63-85.
[8]  de Bruijn, H. (2002). "Performance Measurement in the Public Sector: Strategies to Cope with the Risks of Performance Measurement." International Journal of Public Sector Management 15 (7): 578-594.
[9]  De Bruijn, H. (2007). Managing Performance in the Public Sector. London: Routledge.
[10]  de Bruijn, H. and Van Helden, G.J. (2007). "A Plea for Dialogue Driven Performance-Based Management Systems: Evidence from the Dutch Public Sector." Financial Accountability & Management 22 (4): 405-423.
[11]  DiMaggio, P.J. and Powell, W.W. (1983). "The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields." American Sociological Review 48 (2): 147-160.
[12]  Dye, R.A. (2002). "Classifications Manipulation and Nash Accounting Standards." Journal of Accounting Research 40 (4): 1125-1162.
[13]  Eden, R. and Hyndman, N. (1999). "Performance Measurement in the UK Public Sector: Poisoned Chalice or Holy Grail?" Optimum, The Journal of Public Sector Management 29 (1): 9-15.
[14]  Frølich, N. (2008). The Politics of Steering by Numbers: Debating Performance-Based Funding in Europe. RAPPORT 3/2008.
[15]  Gasper, F. and Mkasiwa, T. (2013). "Managing Performance or Legitimacy? The Case of Tanzanian Local Government Authorities." Journal of Accounting in Emerging Economies 6 (2).
[16]  Groot, T. (1999). "Budgetary Reforms in the Non-Profit Sector: A Comparative Analysis of Experiences in Health Care and Higher Education in the Netherlands." Financial Accountability & Management 15 (3&4): 353-376.
[17]  Halachmi, A. (2012). "Mandated Performance Measurement: A Help or a Hindrance?" National Productivity Review 18 (2): 59-67.
[18]  Hatry, H.P. (2013). "Sorting the Relationships among Performance Measurement, Program Evaluation, and Performance Management." New Directions for Evaluation 2013 (137): 19-32.
[19]  Hood, C. (1995). "The 'New Public Management' in the 1980s: Variations on a Theme." Accounting, Organizations and Society 20 (2): 93-109.
[20]  Johnsen, Å. (2005). "What Does 25 Years of Experience Tell Us About the State of Performance Measurement in Public Policy and Management?" Public Money and Management 25 (1): 9-17.
[21]  Joyce, P.G. (1993). "Using Performance Measures for Federal Budgeting: Proposals and Prospects." Public Budgeting & Finance 13 (4): 3-17.
[22]  Kloot, L. and Martin, J. (2000). "Strategic Performance Management: A Balanced Approach to Performance Management Issues in Local Government." Management Accounting Research 11 (2): 231-251.
[23]  Kravchuk, R.S. and Schack, R.W. (1996). "Designing Effective Performance-Measurement Systems under the Government Performance and Results Act of 1993." Public Administration Review: 348-358.
[24]  Lapsley, I. (1999). "Accounting and the New Public Management: Instruments of Substantive Efficiency or a Rationalising Modernity?" Financial Accountability & Management 15 (3&4): 201-207.
[25]  Lapsley, I. (2008). "The NPM Agenda: Back to the Future." Financial Accountability & Management 24 (1): 77-96.
[26]  Lawton, A., McKevitt, D. and Millar, M. (2000). "Developments: Coping with Ambiguity: Reconciling External Legitimacy and Organizational Implementation in Performance Measurement." Public Money and Management 20 (3): 13-20.
[27]  Lee Jr, R.D. and Burns, R.C. (2002). "Performance Measurement in State Budgeting: Advancement and Backsliding from 1990 to 1995." Public Budgeting & Finance 20 (1): 38-54.
[28]  Leeuw, F.L. (1996). "Performance Auditing, New Public Management and Performance Improvement: Questions and Answers." Accounting, Auditing & Accountability Journal 9 (2): 92-102.
[29]  Melkers, J. and Willoughby, K. (2005). "Models of Performance-Measurement Use in Local Governments: Understanding Budgeting, Communication, and Lasting Effects." Public Administration Review 65 (2): 180-190.
[30]  Mkasiwa, T. (2011). Accounting Changes and Budgeting Practices in the Tanzanian Central Government: A Theory of Struggling for Conformance. University of Southampton.
[31]  Modell, S. and Wiesel, F. (2008). "Marketization and Performance Measurement in Swedish Central Government: A Comparative Institutionalist Study." Abacus 44 (3): 251-283.
[32]  Mserembo, P.K. and Hopper, T. (2004). "Public Sector Financial Reform in Malawi: PPBS in a Poor Country." In T. Hopper and Z. Hoque (eds.), Research on Accounting in Emerging Economies, Supplement 2: Accounting and Accountability in Emerging and Transition Economies. Oxford: Elsevier: 559-583.
[33]  Norreklit, H. (2000). "The Balance on the Balanced Scorecard: A Critical Analysis of Some of Its Assumptions." Management Accounting Research 11 (1): 65-88.
[34]  Nyhan, R.C. and Marlowe Jr, H.A. (1995). "Performance Measurement in the Public Sector: Challenges and Opportunities." Public Productivity & Management Review: 333-348.
[35]  Oliver, C. (1991). "Strategic Responses to Institutional Processes." Academy of Management Review: 145-179.
[36]  Olson, O., Humphrey, C. and Guthrie, J. (2001). "Caught in an Evaluatory Trap: A Dilemma for Public Services under NPFM." European Accounting Review 10 (3): 505-522.
[37]  Poister, T.H. and Streib, G. (1999). "Performance Measurement in Municipal Government: Assessing the State of the Practice." Public Administration Review: 325-335.
[38]  Powell, A.A., White, K.M., Partin, M.R., Halek, K., Christianson, J.B., Neil, B., Hysong, S.J., Zarling, E.J. and Bloomfield, H.E. (2012). "Unintended Consequences of Implementing a National Performance Measurement System into Local Practice." Journal of General Internal Medicine: 1-8.
[39]  Propper, C. and Wilson, D. (2003). "The Use and Usefulness of Performance Measures in the Public Sector." Oxford Review of Economic Policy 19 (2): 250-267.
[40]  Schroeder, L. (2001). "Social Funds and Local Government: The Case of Malawi." Public Administration and Development 20 (5): 423-438.
[41]  Suchman, M.C. (1995). "Managing Legitimacy: Strategic and Institutional Approaches." Academy of Management Review: 571-610.
[42]  Therkildsen, O. (2000). "Public Sector Reform in a Poor, Aid-Dependent Country, Tanzania." Public Administration and Development 20 (1): 61-71.
[43]  Trussel, J. (2003). "Assessing Potential Accounting Manipulation: The Financial Characteristics of Charitable Organizations with Higher Than Expected Program-Spending Ratios." Nonprofit and Voluntary Sector Quarterly 32 (4): 616-634.
[44]  UNCDF (2010). Performance-Based Grant Systems: Concept and International Experience. UNCDF.
[45]  Vakkuri, J. and Meklin, P. (2006). "Ambiguity in Performance Measurement: A Theoretical Approach to Organisational Uses of Performance Measurement." Financial Accountability & Management 22 (3): 235-250.
[46]  Van Thiel, S. and Leeuw, F.L. (2002). "The Performance Paradox in the Public Sector." Public Performance & Management Review: 267-281.
 