“An A Is An A”: The New Bottom Line For Valuing Academic Research
2020; Academy of Management Perspectives; Volume: 34; Issue: 1; Language: English
DOI: 10.5465/amp.2017.0193
ISSN: 1943-4529
Authors: Herman Aguinis (The George Washington University), Chailin Cummings (California State University–Long Beach), Ravi S. Ramani (Purdue University Northwest), Thomas G. Cummings (University of Southern California)
Published online: 27 Feb 2020
Topic(s): scientometrics and bibliometrics research
Abstract

In sports, the phrase "a win is a win" refers to the bottom line in those competitions: winning a game. How the game was won is not as important as the fact that it was won. In many ways, we have reached a similar point in the management field. The increased pressure to publish in "A" journals means the new bottom line for valuing academic research is "an A is an A." Faculty recruiting committees and promotion and tenure panels readily discuss how many A's a candidate has published and how many A's are needed for a favorable decision, while conversations about the distinctive intellectual value of a publication are often secondary to its categorical membership in journals. We describe reasons why this new bottom line has taken hold and delineate its positive and negative consequences. Also, we offer insights for a variety of stakeholders, including (a) nonspecialist academics in all management domains, including scholars from universities worldwide because the new bottom line for valuing academic research is a global phenomenon, (b) university administrators and funding agencies interested in evaluating research quality and impact, and (c) individuals interested in responsible scholarship and in addressing the current credibility crisis in management. Finally, we offer a forward-looking analysis and policy implications of how to address challenges associated with the new bottom line for valuing academic research.

Following a centuries-old tradition, modern research universities ground their legitimacy and authority in the value of published knowledge, which provides an objective and measurable standard for institutional performance and control (Wellmon & Piper, 2017). This publication ethos has gradually become embedded in universities' growing managerialism and economic rationality (Callahan, 2018; Lorenz, 2012; Roberts & Donahue, 2000), or what some critics have referred to as the "McDonaldization" of academe (Hays & Wynyard, 2002; Parker & Jary, 1995), the "market" university (Berman, 2012), and the "managerial" university (Anderson, 2008). University performance management and resource allocation systems, for example, are increasingly driven by a corporate audit culture where resources and rewards are contingent on quantifiable measures of research value (Lorenz, 2012; Parker & Jary, 1995; Walsh, 2011).

An increasingly common method for measuring the value of research derives from the quality of the academic journals in which the research is published (Garfield, 2005). In other words, the higher the judged quality of the journal, the higher the attributed quality and hence value of its published articles (Bedeian, 1996). The same procedure is used to measure the total value of research produced by a particular individual, which is done by simply adding all articles published in journals deemed to be of high quality.
This journal-proxy method provides a relatively objective and generalizable measure of research value that can apply across individual researchers, research disciplines, and academic organizations.

The growing use of journal-proxy measures of research value has led to ever-increasing pressure on academics to publish in elite journals to gain professional rewards and status (e.g., De Rond & Miller, 2005; Edwards & Roy, 2017; Hogler & Gross, 2009; Pettigrew & Starkey, 2016; Shapiro, 2017). In business schools around the world these elite journals are identified by different labels, including "A," "top," "premiere," and other designations such as "A+," "A*," or even "A++" and "A**" that indicate their high status. We will refer to them simply as "A journals."

The need to identify which journals are A's and which are not has led to a myriad of journal ranking lists that vary by disciplinary orientation and the metric used to rank journals (Adler & Harzing, 2009; Ryazanova, McNamara, & Aguinis, 2017; Van Noorden, 2010). These lists serve as an indicator of the meritorious quality of the journals and, by extension, of the scholarly publications included therein and the researchers who authored those publications. The use of such lists in assessing the bottom line for valuing academic research (i.e., how many A's) has spread across universities in Asia, Europe, North America, and South America (Ryazanova et al., 2017), and across academic disciplines (e.g., Deegan, 2016; Polonsky & Ringer, 2009; Tadajewski, 2016; Treviño, Mixon, Funk, & Inkpen, 2010; Xu, Poon, & Chan, 2014).

Clearly, the distinction between A and other journals emerged some time ago (Garfield, 1972; Van Fleet, McWilliams, & Siegel, 2000). What is ominously different today, however, is the excessive attention to journal lists that signal which journal articles count in promotion, tenure, and reward decisions and which ones do not (Connelly & Gallagher, 2010; Gomez-Mejia & Balkin, 1992; Honig et al., 2018; Shapiro, 2017). This "an A is an A" dictum serves as an expressive addendum to the more general call to publish or perish.

The institutional logic of universities has changed in the last two or three decades, forcing them to change the way they operate and function (Edwards & Roy, 2017; ter Bogt & Scapens, 2012). Business schools have gone through many transformations, and these have made the issue of faculty evaluation and rewards far more salient (e.g., Certo, Sirmon, & Brymer, 2010; Khurana, 2007; Starkey & Tiratsoo, 2007). Indeed, the "an A is an A" phenomenon has reached a point where, in many cases, faculty recruiting committees and promotion and tenure panels readily discuss how many A's a candidate has published and how many A's are needed for a favorable decision, while conversations about the distinctive intellectual value of a publication are often secondary to its categorical membership in journals (Davis, 2015; Edwards & Roy, 2017; Macilwain, 2013). For management researchers, this categorization can translate into a stark dichotomy and an imposed choice between scholarship that counts (i.e., published in A journals) and scholarship that does not count (i.e., published anywhere else) (Aguinis, Shapiro, Antonacopoulou, & Cummings, 2014).
This phenomenon has daunting consequences for management researchers, the scientific validity and usefulness of the knowledge they produce, and the sustainability of business schools.

THE PRESENT ARTICLE

Our focus is on the practice of counting A-journal publications as the new bottom line for valuing academic research in the management field. In the following sections, we draw attention to this practice and its attendant simplification of "an A is an A," and call for collective action and policies to address its negative consequences.

The remainder of our article is organized as follows. First, we describe the use of A-journal lists in the management field. Second, we address reasons why the "an A is an A" phenomenon has taken hold by focusing on two primary drivers: performance management systems and research accountability. Third, we provide a discussion and critique of the effects of A-journal counting practices. On the positive side, there are administrative and perceived-equity benefits of replacing subjective measures of research value with a common, verifiable, and objective measure that can be compared across researchers and academic disciplines (Kula, 1986). Disconcertingly, however, there are mounting concerns about unintended negative effects of using A-journal lists to assess research value. Among these deleterious outcomes are questionable research practices; narrowing of research topics, theories, and methods; and lessening of researcher care and intrinsic motivation for doing research, to name but a few (Davis, 2015; Edwards & Roy, 2017; Schwarz, Cummings, & Cummings, 2017). Finally, we offer recommendations and policy implications for how the management field might address these negative effects while preserving the positive outcomes in the future.

THE USE OF A-JOURNAL LISTS IN THE MANAGEMENT FIELD

Management scholars have typically addressed the use of A-journal lists informally among themselves, in the literature addressing broader assessments of the field (e.g., Bennis & O'Toole, 2005; Tsui, 2013), and directly in professional presentations and publications devoted to the subject (e.g., Adler & Harzing, 2009; Macdonald & Kam, 2007). Despite considerable literature on A-journal lists, we lack systematic studies assessing the extent of their use and the attendant effects on management scholars. Thus, our appraisal of whether the use of A-journal lists is excessive in the management field relies on the current literature, regular reports by journal editors at editorial board meetings aimed at providing evidence that their journals should be included on the A-list, informal conversations with colleagues, prevalent institutional practices at research-driven universities, and our own firsthand experience in leadership roles at several universities as well as professional organizations such as the Academy of Management. As additional evidence, consider the numerous sessions offered at Academy of Management annual meetings that address how to improve the odds of publication in an A journal, with titles such as "Publishing in Top-Tier U.S. Journals for Non-U.S. Scholars." We encourage readers to take a moment to reflect and to judge for themselves whether the excessive use of A-journal lists rings true to their own perceptions and experiences in the management field.

Our own experience suggests that A-journal counting has become routine and has taken on some of the trappings of a sports competition.
For instance, publishing in an A journal is often referred to as getting a "hit," to use a baseball analogy, or a "goal," to use a soccer (football outside the United States) one. As experienced by Harley (2019, p. 294) after attending academic management conferences, "People spoke in awe of 'big hits.' Those whose work had made it into 'top five' journals were paid homage by junior colleagues. If anything, this kind of language has become more prevalent." As an example, a recent job posting in management states clearly what counts as a win on the job market: "Applicants for this position must have a Ph.D. in a related discipline and a strong record/potential for publication in the A journals in Management with an emphasis on the Academy of Management Journal."[1]

Many other academic disciplines have apparently reached a similar point (Abbott et al., 2010; Carpenter, Cone, & Sarli, 2014). Consistent with the principles of tournament theory (Connelly, Tihanyi, Crook, & Gangloff, 2014), faculty compete against each other for the finite number of pages available in the few A journals. Just as individual faculty within a department compete with each other, departments within a college are also engaged in competition. At an even higher level, different business schools are locked in cutthroat competition, as are the universities that house them. These "victories" are increasingly crucial to academic rewards, such as intellectual status, job placement, tenure and promotion, salary, and research funds (Aguinis et al., 2014; Butler, Delaney, & Spoelstra, 2017; Honig et al., 2018; Shapiro & Kirkman, 2018). In an eloquent summary statement, Honig et al. (2018, p. 413) argued that "today's challenge to the integrity of management scholarship does not come from external demands for ideological conformity, rather from escalating competition for publication space in leading journals that is changing the internal dynamics of our community."

Together, all of this suggests that A-journal counting practices are sufficiently prevalent and troublesome in the management field to warrant analyzing their causes and effects and exploring possible solutions to their unintended negative outcomes. We recognize the instrumental value of A-journal counting in assessing research value and in producing institutional and researcher hierarchies in academe. After all, research institutions' rankings and prestige are determined at least to some extent by their members' A-journal publications (Adler & Harzing, 2009; Edwards & Roy, 2017; Gioia & Corley, 2002; Trieschmann, Dennis, Northcraft, & Niemi, 2000). And rankings are becoming the bottom line for many business schools (Morgeson & Nahrgang, 2008; Ryazanova et al., 2017). Also, for many universities and schools that are trying to encourage more and higher-quality research, establishing lists of journals that should be targeted, albeit far from perfect, may be beneficial compared to having no target at all. However, our concern is that using this singular measure in the context of a results-only, bottom-line approach reduces prized scholarship to a simple count of the number of A's.
Furthermore, measuring research value exclusively by counting A-journal publications can perilously neglect how management researchers cope with A-journal competition and progressively cultivate scholarship while remaining true to the meaning and value of their intellectual pursuit.

REASONS FOR THE USE OF A-JOURNAL LISTS IN THE MANAGEMENT FIELD

Explanations for the rise of A-journal counting practices include larger cultural, political, and economic forces shaping higher education across the globe, particularly universities' institutional arrangements for acquiring and allocating resources and for controlling and rewarding performance (Edwards & Roy, 2017; Lynch, 2014; Schrecker, 2010; ter Bogt & Scapens, 2012). Because these institutional practices can differentially affect the use and outcomes of A-journal counting across academic units and disciplines, recent research has investigated patterns of publications and their impact across settings and management subfields (Aguinis, Suarez-González, Lannelongue, & Joo, 2012). Such context-specific understanding is essential to developing appropriate solutions for the negative effects of A-journal counting on a discipline's research practices, knowledge base, and member motivations and careers.

Our analysis focuses on two powerful mechanisms that drive the "an A is an A" phenomenon: performance management systems and research accountability. These mechanisms derive from the business schools that house and support most management researchers, and from their increasing need to measure the value of research products (Connelly & Gallagher, 2010; De Rond & Miller, 2005; Hogler & Gross, 2009; Moschieri & Santalo, 2018; O'Brien, Drnevich, Crook, & Armstrong, 2010; Vermeir, 2013).

Performance Management Systems

Business schools, like many organizations, struggle with the need for performance management systems that distribute rewards in a systematic, standardized, and fair manner while not relying heavily on self-reported performance measures (Aguinis, 2019; DeNisi & Murphy, 2017; DeNisi & Smith, 2014). Consequently, journal lists have gradually become the arbiter for determining the value of management research. As Gomez-Mejia and Balkin (1992) documented, the use of journal ranking lists to evaluate researcher productivity and quality of research began as an attempt by university administrators overseeing diverse departments to create a common measure of the value of research performance across those units. By instituting a journal ranking system, administrators sought to replace subjective evaluations of research quality with "common, intersubjective, verifiable standards, independent of human individuality" (Kula, 1986, p. 120).

Because management researchers have considerable freedom in defining their research agenda and how to pursue it, many business schools and their functional departments developed their own lists of A journals as a proxy for evaluating the quality of research output (Van Fleet et al., 2000). These A-journal lists enabled business schools and departments to establish "quanta," that is, a basis for measurement (Power, 2004) that was intended to be equitable and to provide performance-measurement guidelines for administrators (Van Fleet et al., 2000). These lists were intended to supplement, not replace, the more traditional qualitative assessment of research based on internal and external peer review of the research itself.
Like many other quanta, however, journal ranking lists, which were initially a loosely structured framework to aid administrators, have become reified and are now a taken-for-granted measure of the value of management research within the academic community (Adler & Harzing, 2009; Nkomo, 2009).

As we mentioned earlier, this phenomenon is not restricted to the management field. The compilation and analysis of journal ranking lists is now ubiquitous in many academic disciplines (e.g., Pontille & Torny, 2010; Singleton, 1976) and other business fields, including marketing (Tadajewski, 2016), finance (Guo, Wang, Qiao, & Liu, 2016), accounting (Deegan, 2016), and international business (Tüselmann, Sinkovics, & Pishchulov, 2016). Interestingly, crystallization around the use of journal lists to gauge the value of management research has occurred despite growing evidence that so-called "A journals" are not necessarily better at publishing insightful and influential articles than non-A journals or other outlets for academic contribution such as books or chapters in edited volumes (Pfeffer, 2007; Singh, Haddad, & Chow, 2007; Starbuck, 2005; Wang, Veugelers, & Stephan, 2016). Moreover, a recent bibliometric study of more than 85,000 papers published in 168 management and business journals found that top-rated journals strongly favor empirical studies that use quantitative methods applied to large datasets (Vogel, Hattke, & Petersen, 2017). Thus, counting publications in A journals means that "data that cannot be readily quantified are marginalized and rendered invisible, and proxy measures end up representing the thing itself" (Power, 2004, p. 775), thereby contributing to the new bottom line for valuing academic research.

Research Accountability

In addition to performance management systems, a second mechanism contributing to the "an A is an A" phenomenon is the growing pressure on business schools to be accountable for the costs and benefits of their research. The issue of accountability is relevant not only to the field of management but also across many other fields in both the humanities and the sciences (Lorenz, 2012; Schrecker, 2010).

Beginning in the late 1950s, business schools began the long, arduous transition from vocational- or practitioner-oriented trade schools to research-focused institutions (Bennis & O'Toole, 2005; Gordon & Howell, 1959; McLaren, 2019). Fueled by the demand for more professionally educated managers as well as by stinging rebukes of the quality of their faculty's research and teaching, business schools adopted the scholarly paradigm of the social sciences as their path to legitimacy (Bailey & Ford, 1996; Pfeffer & Fong, 2002). This approach entailed defining and measuring the value or quality of their research production (Bennis & O'Toole, 2005).

The need to quantify the value of research has become even more pressing today given the growing competitive pressures business schools face because of less government funding, greater emphasis on rankings, mounting faculty shortages, and universities' entrenched research values (Cummings, 2011). The dominance of the new bottom line for valuing academic research, then, is an inevitable outcome of this need to measure the value of scholarly knowledge and to link it to financial outcomes (Hogler & Gross, 2009; O'Brien et al., 2010; Radder, 2010).
The practice of measuring and rewarding A-journal publications is starkly visible as business schools use this metric to implement pay-for-article compensation systems (Honig et al., 2018; Shao & Shen, 2011), provide faculty with summer financial support (e.g., from one-ninth to three-ninths of additional salary at many U.S. universities), reduce teaching loads, and determine faculty base salary (Gomez-Mejia & Balkin, 1992). By enabling the measurement of what was once the abstract concept of desirable research productivity, business schools can use A-journal hit counts to determine whether the price they pay (in terms of faculty salary and research funding) is commensurate with the value of the research output they receive, and then share this information with external stakeholders, including current and potential students, donors, alumni, and funding agencies.

In addition to being used to make comparisons among individual researchers, journal lists can also be used to compare departments or specific research domains across universities (e.g., Trieschmann et al., 2000). For example, business school deans can compute the total number of A-journal articles published by their schools' organizational behavior (OB) faculty and compare it to the total number of A-journal publications by OB faculty at peer, competing, and aspiring institutions. This information can be useful for accreditation, fundraising, and other purposes (e.g., Ryazanova et al., 2017).

In sum, the new bottom line for measuring the value of research follows naturally from the practices used by business schools and their university domiciles to attempt to make the process of evaluating research more standardized, transparent, and fair. It is also the consequence of increasing pressures on business schools and universities to become more accountable and to provide evidence regarding the costs and benefits of the research they produce.

POSITIVE AND NEGATIVE CONSEQUENCES OF THE NEW BOTTOM LINE FOR VALUING ACADEMIC RESEARCH

The new bottom line for valuing academic research based on the "an A is an A" dictum has a significant impact, both positive and negative, on researchers, the knowledge they produce, and the business schools that employ them. We discuss these consequences next.

Positive Consequences

The ostensible appeal of using A-journal counting to measure research value is inherent in its features. It is fast and easy to use and defend; enables evaluators to readily compare scholars' research performance to one another and to standard benchmarks; and provides a straightforward, relatively conflict-free approach for making decisions about whom to hire, promote, and reward. In fact, it speeds up the process of conducting faculty performance evaluations because the role of department chairs and other administrators responsible for this task is largely reduced to simply counting the number of A's.

Our own experience with A-journal counting underscores its ready attraction for assessing research value, especially when the assessment task is voluminous or involves comparisons among scholars. For example, when faced with a plethora of candidates for a beginning faculty position, a first cut may include a quick scan of CVs and elimination of those candidates without an A-journal publication. For those remaining, the higher the number of A-journal articles, the higher the candidate is likely to be placed on the campus-visit list.
Similarly, for senior positions, experienced candidates are unlikely to be considered unless they have a strong if not stellar record of A-journal publications, usually averaging one or more a year since starting their academic careers. At the doctoral level, even students who are near completion of their dissertations may be advised to stay another year to get A-journal publications, or at least revise-and-resubmits, on their CVs.

Or consider the standard cohort analysis used in faculty promotion assessments. It includes information about the research performance of faculty who have recently been promoted to the rank in question at comparable schools. The number of A-journal publications, other journal articles, and total citations at the time of promotion for each faculty member are typically reported in a table, along with the mean and median number of publications and citations for the cohort. Lengthy and in-depth discussion is generally reserved for promotion candidates whose research record is considered promising yet questionable, generally slightly below the cohort's A-publication means/medians. For those candidates falling significantly below or above the cohort measures, decisions to deny or recommend promotion can be relatively short and perfunctory.

So one of the most important seemingly positive outcomes of A-journal counting is the development of clear standards for judging the value of research independent of personal opinions (Kula, 1986). Like the use of other types of rankings (e.g., Rindova, Martins, Srinivas, & Chandler, 2018), the use of journal ranking lists as the arbiter of research quality enables business schools to avoid having to translate subjective opinions about the quality of research into quantifiable ratings (Van Fleet et al., 2000). Adopting this process increases the transparency of schools' performance management systems as well as the actual and perceived fairness of the procedures used to make decisions about the allocation of rewards, key factors in ensuring perceptions of trust and organizational justice (Colquitt et al., 2013). Consistent with our previous discussion of the tournament model, as in sports, faculty know the "rules of the game" even before the game starts. As long as the rules are followed, even the losers accept the inevitability that the winners will walk away with the trophy and the losers will get nothing. Sometimes the difference between victory and defeat is a technicality, but everyone is fine with it, as illustrated by these quotes from promotion and tenure meetings we have attended: "If she had received that acceptance a week earlier she could have gotten tenure, but points scored after the buzzer do not count"; "He was a few inches short and now has to punt."

Another positive consequence is that journal ranking lists help management faculty effectively counter biased criticism of their research by scholars in other fields, criticism that may adversely affect reward allocations. For example, because of standards based on a certain number of A's, a management department can provide clear and compelling evidence that a faculty member should be granted tenure, be promoted, receive an endowed chair, or attain other scholarly rewards.
When such a standard is present, it is more difficult for evaluation committees, including members outside of management (e.g., finance, accounting), to discount the research produced by a management researcher because it uses theories, samples, and measures that may seem inappropriate from the perspective of other fields. Similarly, junior faculty members may be protected from biased decisions on the part of their own department chairs (and other administrators), who in many cases are senior faculty members who are no longer active researchers and no longer have the necessary skills to evaluate the rigor, quality, and relevance of any given study.

The use of A-journal lists also provides clear objectives and guidelines for training doctoral students and helping junior scholars establish and manage their careers (Greenberg, 2006; Mitchell, 2007). Because formulating clearly delineated goals can enhance performance (Locke & Latham, 2002), knowing the kind of research performance that is valued enables schools to train future scholars in the knowledge and skills needed to compete for jobs and to obtain valued rewards such as tenure and promotion.

Delineating the value of A-journal publications can also serve as a self-selection mechanism. Specifically, doctoral students and faculty who do not wish to compete under a performance management system based on a particular journal list can purposefully opt out of applying to or working for a particular business school. Instead, they can pursue opportunities at schools that consider more than the number of A-journal publications when allocating rewards (Mitchell, 2007; Tushman & O'Reilly, 2007).

Finally, careful examination of A journals can provide information and exemplars about the type of theorizing, methodology, and reporting required to publish successfully in them (Ashkanasy, 2010; Bartunek, Rynes, & Ireland, 2006; Bergh, 2006; Kilduff, 2007). This signaling function is evident in the popularity of the how-to articles that often appear in A journals (e.g., the Academy of Management Journal series titled "Publishing in AMJ"). By making clear the expectations regarding what constitutes acceptable research rigor, journal lists can enhance the quality of the research that is published, thereby benefiting the management field.

Negative Consequences

Much of the writing and conversation about the application of A-journal lists to assess the value of management research is critical of this practice and suggests that its use is rising along with negative effects on the field's research methods, knowledge generation, and social dynamics. Critics have bemoaned journal list fetishism (Cluley, 2014; Hussain, 2015; Willmott, 2011), warned of the seductive power of journal lists (Nkomo, 2009), complained about the "escalating competition for publication space in leading journals that is changing the dynamics of our community" (Honig et al., 2018, p. 413), and concluded that "the pressure to publish in only A journals affects what scholars write, what scholars cite in their papers, what outlets scholars seek for their papers, what scholars …"