Peer-Reviewed Article

Changing Bureaucracies: Adapting to Uncertainty, and How Evaluation Can Help, by Burt Perrin and Tony Tyrrell (Eds.) (New York: Routledge, 2021). ISBN: 978-1-003-10058-4 (ebk)

2022; Wiley; Volume: 83; Issue: 1. Language: English

DOI

10.1111/puar.13579

ISSN

1540-6210

Authors

Patria de Lancer Julnes

Topic(s)

Public Policy and Administration Research

Abstract

As one can surmise from the title of this edited book, Burt Perrin, Tony Tyrrell, and their contributors offer an optimistic view of bureaucracies. Instead of joining the chorus of critics of bureaucracy, the authors recognize the complexity and turbulence of current times and the challenges that public bureaucracies face. They do not expound in depth on the values and virtues of bureaucracy, as one can find plenty of literature on the subject; chief among them are the multiple versions of Charles T. Goodsell's The Case for Bureaucracy and its latest rendition (2015), The New Case for Bureaucracy. Rather, Perrin and his colleagues acknowledge that bureaucracy, as a form of organization with all its strengths and inherent limitations, is here to stay but needs to change. And for the authors, systems of knowledge creation such as program evaluation and performance measurement can be valuable in helping bureaucracies confront complex challenges, adapt to emerging situations, and succeed in addressing them. Although this volume was written before the COVID-19 pandemic, in hindsight, the optimism of this book, both about bureaucracies changing and about the role that evidence can play, was and remains well-founded.

The common criticism is that government institutions are incapable of adaptation due to rigidity, fragility, and government action geared to maintaining the status quo (Pelling & Manuel-Navarrete, 2011). The COVID-19 pandemic, however, showed that government institutions, and society in general, have an enormous capacity for flexibility, adaptation, and quick response, at least in the face of imminent harm to humans. From universities to hospitals, city councils, federal agencies, and local communities, organizations responded with various forms of adaptation and innovation. Perhaps some of the most impressive examples of adaptation, innovation, and flexibility included moving functions of government and society to the virtual world with lightning speed and the development and manufacturing of vaccines in record time. These involved quick reassessment, decision-making, and collaboration among many governance networks, including government bureaucracies. Klasche (2021) has argued that successful COVID-19 governance responses were the result of constant learning and adjustment “to the new realities” (p. 3).

Program evaluation and performance measurement help with this needed learning and adjustment because they can provide the evidence to strengthen bureaucracy's capacity to prepare for contingencies, respond when the unpredictable happens, and, as argued by Stack (2018), develop new evidence to understand the conditions under which different programs and interventions work. Lest we think that embedding program evaluation and systems of performance assessment in bureaucracies is an easy task, there are well-documented challenges. Examples include the tensions between program evaluation and accountability, between the political nature of decision-making and the demand for evidence-based decision-making, and between the scope of the evaluation and the appropriateness of the policy subject to evaluation (Bray et al., 2020). Moreover, as Kathy Newcomer states in the foreword of the book, it is hard to change organizational culture. The professional experiences and research of Perrin, Tyrrell, and the contributing authors demonstrate that these tensions continue to exist. However, the authors provide suggestions for overcoming these challenges.
While the book primarily covers experiences in the European and international development contexts, the lessons and guidance apply to other bureaucratic contexts as well. As Perrin states in the concluding chapter, though there are differences from one context to another, bureaucratic systems tend to be very similar.

This edited volume consists of 12 chapters. In the first chapter, Tyrrell presents the rationale for the book. At its core, the book seeks to help improve government by making government institutions more responsive to the rapidly changing needs of the people they serve. According to Tyrrell, the book was born out of frustration with bureaucratic behavior: public organizations' failure to use available evidence, or their strict adherence to preconceived notions about what a program should achieve or what should be measured while disregarding emerging needs and opportunities. Instead, Tyrrell sees the rapid changes and complexity enveloping governments across the globe as arguments for public bureaucracies to be nimble. Hence the central questions of the book: “how (or, perhaps even whether) bureaucracy can continue to adapt…and, more specifically, what is the role that evaluation can play in supporting adaptation to change on the part of bureaucracies” (p. 2). At the same time, the author concedes that evaluators, and the practice of evaluation itself, have a responsibility to improve so that their interaction with bureaucratic practice can be more productive.

Following the introductory chapter, Part I, Working with Bureaucratic Constraints, includes four chapters that focus on organizational learning, bureaucratic agility, adaptive programming, and radical innovation. In this part, the authors use empirical evidence from interviews, international development case studies, and literature analysis to assess the role of evaluation in decision-making in increasingly complex and rapidly changing bureaucratic contexts. As much as we would like to think that evaluation information is sought out and often used in decision-making, the findings presented here, consistent with extant research, show that this is not the case. Deeply ingrained beliefs inside and outside bureaucracies prevent evaluation and performance measurement systems from playing a meaningful role in helping bureaucracies learn, adapt, and improve. These include the preferred management paradigm of the agency (e.g., new public management, with a focus on results, or traditional bureaucratic, with a focus on compliance); the conflicting and constraining demands on bureaucracies and evaluators; and the narrow conventional evaluation approach. In international development, in particular, evaluation approaches are viewed as overly standardized, often in the form of mandated logical frameworks (logframes), in which the context is assumed to be predictable and unchanging during the intervention in question. Recognizing these challenges, the authors offer a way forward, suggesting how evaluators can be more flexible in the methods, length of evaluation studies, and skillsets they bring to the evaluation. This flexibility can help bridge the knowledge use gap and support bureaucracies in building capacity for acting in unpredictable environments and being agile and adaptive as the context and needs change.
Evaluation Support to Bureaucracies, Part II of the book, comprises three chapters that provide valuable insights from the vantage point of individuals with extensive experience as evaluation managers and in senior evaluation positions. They have worked with various international organizations, such as the European Commission, the Organisation for Economic Co-operation and Development (OECD), the United Nations, and the World Bank Group. Based on this experience, they reflect on several themes, including how evaluation has evolved, what it looks like in different bureaucratic contexts, and the danger of bureaucratic capture of evaluation, defined as “the use of evaluation just to document that something has been evaluated” (p. 131). Across the chapters, a consensus emerges that while there has been progress in developing extensive evaluation expertise, overall, “the impact of evaluations, their quality, their use, and ownership by senior officials of their findings has still considerable room for improvement” (p. 106). Also, with lower levels of management often driving the evaluation process, there appears to be a lack of interest in evaluation among senior managers, resulting in evaluations not asking the critical questions that could lead to policy and program improvement. Moreover, in places where evaluation is not mandated or part of the culture, it is seen as a “non-essential function and has to compete against other functions for resources” (p. 110). A challenge for evaluators, therefore, is to demonstrate the utility of evaluation.

In the final part of the book, Part III, Challenges to a Meaningful Role for Evaluation, the authors tackle three issues head-on: “bureaucratic capture” of evaluation, the perennial problems with evaluation quality, and the continuing difficulty in operationalizing outcomes or a results orientation. One of the chapters provides practical information, such as warning signs of evaluation failure, which occurs when evaluation “yield[s] to bureaucratic capture and fuel[s], instead of remedying, bureaucratic failure” (p. 143). In another, the authors adapt Hirschman's theory of “exit, voice and loyalty” to analyze how bureaucracies address the low quality of the evaluations commissioned by development agencies. Insights from the application of this framework suggest alternative ways different stakeholders could exercise voice, exit, and loyalty to improve quality. A third chapter, using the example of the Structural Funds program of the European Cohesion Policy, shows that a lack of conceptual clarity about the purpose of monitoring systems has led to an apparent failure in implementing a results-oriented culture. One of the reasons presented is the separation of monitoring and evaluation. Other reasons include the perception that monitoring is merely a regulatory requirement rather than a learning tool that can contribute to program design and implementation, and the rigidity of standardized evaluation procedures, which inhibits innovation in decision-making because of the potential “decrease in efficiency in the Weberian sense” (p. 191). More collaboration between monitoring and evaluation stakeholders, and less standardization of indicators and evaluation approaches, would go a long way toward encouraging a results orientation.
The concluding chapter by Perrin brings home the ideas presented throughout the book and reminds us that bureaucracy is not evil. Instead, bureaucracy has what Perrin calls a “problematique” (French for difficulties, issues, and challenges) that can hinder evaluation's potential contributions to improving government performance. For example, bureaucracies are meant to be stable institutions with built-in continuity, yet, by definition, stability often precludes innovation and responsiveness. Bureaucracies implement policies created by elected officials but are blamed for the negative results of those policies and rarely celebrated for the positive ones. They are also expected to balance competing values and contradictory objectives.

Perrin also reminds us that the book's point is not to call into question bureaucracy as a form of organization, which has many strengths. Instead, the book focuses on bureaucratic practices, as it is specific practices that hinder performance. To that end, he suggests ways to overcome the problematique. These include that bureaucracies should stop focusing on processes and be more concerned with outcomes and impacts. Relatedly, bureaucracies should adopt a culture that allows examining the purpose and utility of bureaucratic practices. Such self-examination will help determine whether these practices serve the objectives the organization is trying to achieve or are leading to unintended consequences. Moreover, Perrin encourages bureaucracies to examine all initiatives, including those that are precious to political leaders. This last suggestion can prove particularly difficult for bureaucracies to heed, since political leaders rarely want their policies questioned.

While acknowledging the value of monitoring to evaluation and vice versa, Perrin also argues that all involved in program evaluation should gain a better understanding of the differences between the two to avoid “distorted and misleading findings” and “faulty decision[s]” (p. 217). It is worth remembering that monitoring tells managers what is going on; it does not tell them how or why an issue occurs. Without this information, it is not possible to decide how to address the issue. It is with the “how” and “why” that evaluation can help. Thus, in combination, quality monitoring and evaluation can lead to improvement and change. They can support learning, agility, and adaptive management in bureaucracies. The lessons of this book offer pathways for bureaucrats and evaluators to become more reflective in employing evaluation and performance measurement in support of improving government.

Patria de Lancer Julnes is the inaugural Rosenthal Endowed Professor of Public Administration and Director of the School of Public Administration at the University of New Mexico. Her research interests include the utilization of program performance measurement information, performance management, government accountability, government innovation, and community resilience. Email: [email protected]
