From the Editors—Improving the Transparency of Empirical Research Published in AMJ
2021; Academy of Management; Volume 64, Issue 4; Language: English
DOI: 10.5465/amj.2021.4004
ISSN: 1948-0989
Authors: Katherine A. DeCelles, Jennifer Howard-Grenville, László Tihanyi
Topic(s): Scientometrics and bibliometrics research
Katherine A. DeCelles (University of Toronto), Jennifer Howard-Grenville (University of Cambridge), and Laszlo Tihanyi (Rice University)
Academy of Management Journal, Vol. 64, No. 4, Thematic Issue: Improving Transparency of Empirical Research Published in AMJ. Published online: 13 Sep 2021.

It is unusual for a theory-oriented journal such as Academy of Management Journal (AMJ) to highlight papers for anything other than their contribution. Nonetheless, contribution can only be meaningful when the reported empirical results are reliable. Establishing the reliability of results in science can be a long process: new questions may emerge long after the publication of a paper, new analytical tools enable researchers to reveal hidden evidence in previous studies, and replications often demonstrate different results. This process can only work when authors are transparent about their research process—being open about how they completed their studies and how the research progressed and changed, and explaining their analyses and results accurately. We would argue, therefore, that contribution and transparency go hand in hand. The significant contribution we seek in manuscripts submitted to AMJ cannot be achieved without providing opportunities for review teams to understand the research fully, coupled with authors' willingness to answer questions about the work.

In this thematic issue[1] on transparency, our goal is to illustrate recent and wide-ranging practices in papers accepted at our journal as examples of how authors are making research more transparent across methods and content areas.[2] While we are proud to share these papers with our readers, we do not hold them up as "perfect." Rather, we point to them as some of the ways in which authors are currently improving the transparency of their work in the AMJ review process.

RESEARCH TRANSPARENCY AT AMJ

The importance and impact of our research contributions rest on two assumptions: (1) that the public can trust management scholarship, and (2) that the scholarship we produce is conducted with the highest level of scientific integrity, so as to earn that trust and influence. While many others have written about the replicability crisis as it applies to science generally and to the field of management specifically, the purpose of this editorial essay is to highlight our responsibility at AMJ—as authors, reviewers, and editors—to work collectively on these issues, not limited to replicability, but extending to making our scholarship increasingly robust and trustworthy. We encourage authors to improve the transparency of their work in light of rapidly evolving best practices, both to better establish the assumptions underlying their research and to increase the reliability of evidence-based understanding of management.

We believe that more transparency enables authors to make more impactful contributions in at least two ways. First, more transparent manuscripts increase readers' confidence that the results are trustworthy, as readers can better examine the evidence and help make corrections to the scientific record.
Second, transparent research can also generate livelier scholarly conversations and more citations, as other scholars can use similar paradigms or even shared data to answer related questions, such as theorizing and testing boundary conditions, mechanisms, and alternative dependent variables. More transparent scholarship can also help the field correct false positive findings and misspecified theories before generations of scholars (and management practitioners) are misguided into years of wasted effort.

We value scholarship that rigorously develops theoretically valuable and practically important findings over the presentation of perfect results. We endorse the use of open science and other practices that can improve management scholarship's trustworthiness and reproducibility. Online spaces, including the Open Science Framework (OSF), the Social Science Research Network (SSRN), Dropbox, and Google Drive, now allow authors to share virtually unlimited and, if needed, anonymized information with interested readers, meaning that authors are no longer bound by page constraints when providing supplementary in-depth description and analyses.

The accumulation of scientific knowledge in the field, growing opportunities to share information about research, improvements in analytical tools, and enhanced understanding of good practices have all increased reviewers' and readers' demands for greater transparency. Recently published articles using quantitative methods look quite different from papers using similar methodological approaches published only a few years ago. For example, in micro-quantitative papers, we increasingly see the use of multiple, often highly powered studies that build on one another. Since no research is perfect—and hindsight is often 20/20—the use of multiple studies and methods can help navigate the potential conflict between presenting supported hypotheses and truly understanding one's data. As an authorship team collects field data, for example, they may come to realize that their theorizing rested on an inherent assumption, and then deductively test that assumption in an experimental follow-up. By actively showing more of this process of learning across studies, and by using online supplements, authors allow reviewers and readers to see more transparently the evolving and often iterative process of scientific discovery. Many recent macro-quantitative papers likewise use multiple studies that together provide opportunities for deeper empirical insight. In parallel, the transparency of macro-quantitative studies regarding sample selection and measurement has increased substantially in recent years. The once-standard statement that "results of supplementary analyses can be obtained from the authors" has been replaced with the presentation of detailed post hoc tests and statistical validations in the articles themselves or in their online appendices.

Authors of micro-quantitative papers are also increasingly providing detailed preregistrations of hypotheses and analysis plans, and supplying time-stamped, anonymized open data. These practices increase readers' confidence that certain hypotheses and procedures were indeed specified prior to data collection, while other elements were exploratory. Although many of these practices emerged in micro research, given its proximity to psychology, they are equally applicable to macro-quantitative research and have been expanding there as well.
Open materials, such as experimental paradigms, full survey items, output, and code, are also being presented more broadly in both micro- and macro-quantitative papers.

Nonetheless, we acknowledge that data sets from organizations may be proprietary, subject to non-disclosure agreements (NDAs), or part of several studies (see Tihanyi & DeCelles, 2021). It is not the point of this editorial to say that all transparency practices, like open data, should be used all the time, or that papers using more of these practices should blindly be accepted as better. By virtue of their characteristics, different studies call for different transparency practices, and not all practices are appropriate for all papers. Rather, we encourage scholars to continue to learn and update their practices to improve their work's transparency according to what is appropriate for their area. For example, this might involve presenting extensive supplementary analyses, descriptive statistics, code, output, and lists of variables in lieu of open data protected by an NDA; providing the data to reviewers only; or giving details on how scholars can obtain the data themselves.

Qualitative research benefits from transparency practices for somewhat different reasons than quantitative research. Although a key reason for valuing transparency in quantitative research derives from a desire to enable replication (i.e., would other scholars, using the same data, arrive at the same results? Could an independent researcher collect a similar data set and reach the same findings?), a replication logic has been argued to be inappropriate for inductive, theory-building qualitative research (Pratt, Kaplan, & Whittington, 2020). This is in part because inductive qualitative research findings result from unique perspectives and relations between researchers and their selected fields. In addition, with rich qualitative data (from interviews, ethnographic observations, archival records, etc.), there are often many possible questions to ask, puzzles to pursue, and hence pathways through the data. These pathways will be informed by the researcher's theoretical intent, interests, and what becomes salient, including surprises and tensions (Jarzabkowski, Langley, & Nigam, 2021).[3] Disclosing those relations and viewpoints, and the process researchers took to arrive at their theory, is nonetheless critical to understanding the theoretical journeys taken (Anteby, 2013)—and this is where transparency for qualitative research comes into play.

For the review process and, ultimately, a paper's readers, it is critical to explain the data themselves (why, where, and how were they collected?), the researchers' stance in relation to the data (how were they involved, and what did this mean?), their ontological perspective (what did the data represent to them?), and their reasons for and discoveries from the analytical moves they made (what was asked of the data, what was found, and then what?). This is because the quality of qualitative research ultimately rests on its trustworthiness: have the researchers established that the review team, editor, and readers can trust that their process was rigorous (Pratt et al., 2020)?
The good news is that all the major management journals, including AMJ, have qualitative reviewers and editors who are not only dedicated to the responsibility of ensuring trustworthiness in the research they evaluate, but also accustomed to navigating the tensions between structure and creativity in how trustworthiness is conveyed.

Transparency in qualitative work also helps authors themselves to trace their data collection and analytical processes carefully, enabling reflexivity and ultimately better theory building. Additionally, the coding process and other analytical moves involve multiple choices that should be actively made and eventually described clearly for readers, enabling other scholars and students to learn and understand the process behind the research. Authors who are fully engaged in understanding and explaining how concepts and mechanisms arise can build theory more generatively than they might by passively applying a formulaic coding strategy or conforming to a template (Grodal, Anteby, & Holm, 2020).

Finally, a high degree of transparency in qualitative research can make it more impactful. This impact is felt not only when authors can speak with confidence about results to those outside the field, knowing the results rest on a thorough and robust process; the work also has impact within our field by demonstrating and extending the craft of good scholarship. Particularly for qualitative work, in which methodological pluralism is to be celebrated rather than constrained (Eisenhardt, Graebner, & Sonenshein, 2016; Gehman, Glaser, Eisenhardt, Gioia, Langley, & Corley, 2018), it is extremely important for authors to actively engage with and carefully convey their method.

As editors and reviewers, we also have a responsibility to increase the transparency of the research published in AMJ. We must take on the additional labor of examining supplementary evidence provided by authors, and not assume that the use of certain practices makes some research necessarily better or more reliable than other research. We should embrace the rich epistemological and methodological diversity in our field, which informs research practices and assumptions and likely results in different areas of the field adopting transparency practices that are not appropriate for others. We also should not expect perfection from research, nor shy away from asking questions when perfect results are presented. We encourage evaluating the quality of the work and its implications alongside its level of transparency,[4] rather than expecting perfect results and perfect transparency. We should reject suggestions to change hypotheses to match results, and encourage deeper discussion sections about what was learned, with honest reflection about what is still left to learn. Lastly, we aim to continue to make AMJ's review process as transparent as possible while maintaining its double-blind format. We continue to require the disclosure of conflicts of interest, and we disclose operational metrics at professional meetings and paper development workshops, including impact factors, acceptance rates across the micro, macro, and qualitative areas, and review turnaround times.

PAPERS IN THIS ISSUE

Next, we briefly mention some of the transparency features of the papers selected for this issue and encourage readers to take full advantage of the additional posted resources for each paper.
The first paper, by Sitzmann and Campbell (2021), on the relationship between religiosity and the gender wage gap, includes detailed preregistrations for the authors' experiments, along with an online supplement on SSRN detailing the precise study manipulations. The paper also includes helpful footnotes on missing data and on why certain alternative measures could not be used, as well as discussion and details on marginal effects.

Ruebottom and Toubiana (2020) explain in detail the setting of their qualitative study, establishing why they chose the sex industry to study how stigma affects entrepreneurship. Given the sensitive nature of the topic and their awareness of their social distance from participants, the authors describe their entry into the field in some detail. They divulge that they first established their own social media accounts to gain a non-intrusive presence in the industry and build trust with potential informants for later interviews. Sharing not just the what but also the how and why of their data collection techniques gives the reader a sense of trust in the authors' knowledge of the field, their sensitivity to how they gained this knowledge and access, and the viewpoint they brought into the research.

Bettinazzi and Feldman (2020) consider divestitures as a possible means by which firms resolve conflicts among their stakeholders. These authors provide a detailed report on how they selected their large sample of U.S. public firms from the Thomson Reuters ASSET4, SDC Platinum, and Compustat databases. They describe in detail the steps of their empirical analyses, and they aid future replication efforts by discussing alternative measures and tests with different statistical analyses. The authors also contribute to governance research by sharing the 44 items of their stakeholder orientation measure in an appendix.

In their paper on the effects of self-presentation in job applications, He and Kang (2020) include all data, complete materials, variable coding, analysis scripts, and helpful "read me" documents for each study in a well-organized OSF page. This will help interested parties better understand how the data are organized and analyzed, conduct analyses themselves, and build on the work. The paper also has an extensive online appendix with results of pilot studies, additional analyses, and supplementary figures, and it features well-powered experiments based on a priori power analyses reported in the paper.

Livne-Tarandach and Jazaieri (2020) offer a rich portrayal of their qualitative data collection at Camp Magic. Multiple forms of data—from primary participant observations of daily camp life to staff and camper interviews, pre-camp surveys, and audio recordings of daily camp staff meetings—were used in a complementary fashion to build the evidence for the findings on the emergence of a swift sense of community. The authors also thoroughly describe the various stages of their analysis, and not in generic terms: they are transparent about how they considered different theoretical lenses and what led them to select one, how they asked specific questions of the data at different stages, and what emerged. Further, they reflect on questions raised by reviewers and how these questions triggered further analysis.
The reader is guided carefully through the authors' journey and gains a textured understanding of their process of discovery.

Soda, Mannucci, and Burt (2021) analyze the dynamic relationship between networks and creativity using data from 273 episodes of Doctor Who, a British science-fiction television show that first aired in 1963. The authors take extra care to provide sufficient detail on the television show and their coding process for readers who are not familiar with the Doctor Who series. The thorough description of their analytical steps makes the paper accessible to those who are less familiar with social network analysis. Supplementary information on the sample and additional analyses are presented in a 13-page online appendix.

Berg and Yu (2021) provide a well-powered archival study on creative work in the U.S. film industry (with over 5,600 observations). They show a baseline model without covariates in their paper and include an impressive and extensive online appendix detailing coding procedures, examples, additional figures, and a host of supplementary analyses. These authors also conducted a highly powered experiment (120 participants per cell of the design) and include open data on the OSF page for their paper. This will allow interested readers to examine the data and run their own analyses, and to use similar coding procedures as they build on the field-based work.

The next paper, by Chin, Zhang, Jahanshahi, and Nadkarni (2021), shows how CEOs' social and economic ideologies influence strategic decision-making and corporate entrepreneurship. The authors collected survey responses from the CEOs and three (or more) top management team members of 192 small and medium-sized enterprises in Iran. They present the survey items in an appendix and explain how the survey was administered in six Iranian provinces. Another strength of this paper is its transparency regarding how the social and economic ideology scales were developed and validated for the study. These extra details are necessary, given the variance in the meaning of these ideologies across countries; the authors illustrate this methodological challenge by comparing the political party systems of the United States and Iran in a table.

Luciano, Fenters, Park, Bartels, and Tannenbaum (2021) use audio and video recordings that capture paramedics interacting in mass-casualty incident simulation scenarios. The qualitative analytical methods developed and pursued by this author team include some familiar moves, such as temporal bracketing and following forward (Langley, 1999) of task transitions and breakdowns, as well as other moves demanded by the nature of the data (e.g., consulting medical professionals to help establish criteria for measuring the pressure paramedics were under in the scenarios) and the affordances of the technology (e.g., watching multiple videos simultaneously to capture what other paramedics were doing, to test the emergent understanding of multiteam system functioning).
As with the other qualitative papers in this issue, this one conveys an appropriate level of detail and nuance to give the reader a strong understanding of the analytical choices made and how those choices suited the data, research questions, and theory.

Hill, Matta, and Mitchell (2020) examine relationships between employees' optimistic and pessimistic states and outcomes and, in doing so, provide an example of multiple methods, with an experience sampling study and a series of well-powered experiments replicating the field findings. While the majority of their hypotheses were supported, the authors openly report which were not. They also report the complete measures used in both studies, additional validity evidence for measures, supplemental analyses, experimental manipulations, and other appendices.

Finally, the paper by Bain, Kreps, Meikle, and Tenney (2021) examines how amplifying others' voice influences perceptions of employee status. The authors include a series of extensive supplements on OSF with all data, analysis scripts, full materials, and preregistrations/analysis plans for the two experiments. They are also transparent about not knowing whether their field study would be sufficiently powered to detect effects, and they detail which analyses were exploratory versus tested in a deductive manner in the experiments. They clearly indicate which tests are intended to be confirmatory and provide detailed a priori rules for exclusions and scale construction.

MOVING FORWARD

As with all efforts to advance scientific rigor, AMJ's increasing emphasis on research transparency is intended to lead to improvements in the field. These improvements will only be realized with continuous efforts to engage input from readers, authors, and reviewers, and to implement changes together with other journals in the field. Thus, we continue to learn best practices from one another while making further improvements. At the same time, we need to be realistic in our expectations regarding increased research transparency. For example, transparency cannot compensate for a lack of original research contribution (see Tihanyi & DeCelles, 2021, for a discussion of what is considered original at AMJ). Papers with hundreds of pages of online appendices are not necessarily better than papers that succinctly provide the required details on the reported empirical studies.[5] Authors should resist the temptation to bury inconvenient details or weaknesses of their work in large appendices. At the same time, we encourage lively engagement with published papers and their supplements by the larger scholarly audience.

Research transparency also works differently across fields. Management researchers using data based on responses from human subjects must protect those subjects, maintain confidentiality, and follow the ethical guidelines set by institutional review boards and similar organizations in many countries. However, laws and ethical standards governing the treatment of human subjects differ greatly around the world, and thus professional organizations and journals need to develop guidelines that allow the adoption of universal transparency practices.
These practices should exceed local guidelines in countries with limited human rights, or where laws insufficiently protect the well-being of human subjects.

While writing research transparency policies might be easier for specialist management journals focusing on narrower sets of research foci or methods, AMJ and other "big tent" journals can advance this process by providing platforms where micro and macro, as well as quantitative and qualitative, researchers can become aware of and learn from each other's practices. For example, publishing mixed-methods papers in which authors follow the transparency practices appropriate to each method could provide new opportunities to accelerate the creation of management knowledge.

We encourage preregistration, but we remind readers and reviewers to continue to evaluate preregistrations against the manuscripts they accompany and to continue to rigorously review the empirical details of studies even when they are preregistered. While preregistering studies has been a change in the right direction, "peeking into the data" and running preliminary tests may remain valid concerns in the absence of time-stamped open data. Until preregistration and open data become more widespread, they can also signal status (e.g., school affiliation or publishing experience) and unintentionally disclose the location of authors, since preregistration services are not available in many countries and the use of particular online services can reveal authors' home countries.

Another valid concern has been the integrity of the double-blind review process. We can only increase the transparency of empirical research if our double-blind review process is preserved. Additional documents on OSF, Dropbox, and other websites must be shared anonymously, with institution and author information removed from all documents that will be shared with reviewers. We check each submission carefully for anonymity, but this becomes increasingly challenging as the number and types of files grow. Visitor tracking at these online services represents another set of challenges that needs to be addressed.

As we continue our journey toward more research transparency, the many available options for posting non-peer-reviewed papers, together with authors' desire to make their results available, have led to a growing practice of sharing full papers online before peer review. The public's urgent need to learn from scientists in order to address societal problems is more apparent than ever, considering the unprecedented worldwide crisis created by the COVID-19 pandemic. However, many results from unvetted studies have led to the premature adoption of treatments and solutions that sometimes proved ineffective or even ill advised. Science is unlikely to progress without careful peer evaluation of the merits of research, regardless of who produced the research and how widely the authors broadcast their findings. Blind peer review therefore remains important, since we rely on objective expert reviewers to ultimately help practitioners gain a deeper understanding of important and complex phenomena that demand specialized expertise.

We raise these questions for the ongoing discussions on transparency in management research, even though we do not have answers to all of them.
Our intention with this editorial and the papers presented in this issue is to aspire, optimistically, to becoming a stronger scientific community by adopting more transparent practices.[6] By relentlessly pursuing trustworthiness in the design, execution, reviewing, and publishing of our research, rather than pursuing perfect results, we will produce more valid and impactful scholarship.

Acknowledgments

We are grateful to Andrew Carton, Luis Diestre, Lindred Greer, Denis Grégoire, Ivona Hideg, Cindy Muir (Zapata), Floor Rink, Matthew Semadeni, Elizabeth Umphress, and Tammar Zilber for their comments on an earlier version of this editorial.

[1] This is a collection of articles accepted at AMJ through the regular review process, not a special issue that sought specific submissions.

[2] Our intent with this editorial is to encourage prospective AMJ authors to increase the transparency of their papers, and reviewers to seek it, rather than to outline new submission guidelines.

[3] This type of work, importantly, does not deductively test hypotheses, making such inductive processes part of the craft of qualitative work. Such exploration and theorizing from results, by contrast, would be problematic for quantitative scholars if presented as a priori theorizing.

[4] Note that transparency is now an item included in the formal reviewing process, along with papers' theoretical and empirical contributions and practical relevance.

[5] For example, we are concerned that (volunteer) reviewers who are overwhelmed by the large number of documents presented to them may overlook flaws, as they lack the time and infrastructure customary in other professions that process large amounts of information, such as law and auditing.

[6] We acknowledge that there are many widely discussed issues in the field and profession that we cannot address here and that might discourage full transparency—for example, incentives, tenure pressures, field sites' constraints, the validity of NHST, and results-blind reviewing.

REFERENCES

Anteby, M. 2013. Relaxing the taboo on telling our own stories: Upholding professional distance and personal involvement. Organization Science, 24: 1277–1290.

Bain, K., Kreps, T. A., Meikle, N. R., & Tenney, E. R. 2021. Amplifying voice in organizations. Academy of Management Journal, 64: 1288–1312.

Berg, J. M., & Yu, A. 2021. Getting the picture too late: Handoffs and the effectiveness of idea implementation in creative work. Academy of Management Journal, 64: 1191–1212.

Bettinazzi, E. L. M., & Feldman, E. R. 2020. Stakeholder orientation and divestiture activity. Academy of Management Journal, 64: 1078–1096.

Chin, M. K., Zhang, S. X., Jahanshahi, A. A., & Nadkarni, S. 2021. Unpacking political ideology: CEO social and economic ideologies, strategic decision-making processes, and corporate entrepreneurship. Academy of Management Journal, 64: 1213–1235.

Eisenhardt, K. M., Graebner, M. E., & Sonenshein, S. 2016. Grand challenges and inductive methods: Rigor without rigor mortis. Academy of Management Journal, 59: 1113–1123.

Gehman, J., Glaser, V. L., Eisenhardt, K. M., Gioia, D., Langley, A., & Corley, K. G. 2018. Finding theory–method fit: A comparison of three qualitative approaches to theory building. Journal of Management Inquiry, 27: 284–300.
Grodal, S., Anteby, M., & Holm, A. L. 2020. Achieving rigor in qualitative analysis: The role of active categorization in theory building. Academy of Management Review. Published online in advance. doi: 10.5465/amr.2018.0482

He, J. C., & Kang, S. K. 2020. Covering in cover letters: Gender and self-presentation in job applications. Academy of Management Journal, 64. Published online in advance. doi: 10.5465/amj.2018.1280

Hill, E., Matta, F. K., & Mitchell, M. 2020. Seeing the glass as half full or empty: The role of affect-induced optimistic and pessimistic states on justice perceptions and outcomes. Academy of Management Journal, 64: 1265–1287.

Jarzabkowski, P., Langley, A., & Nigam, A. 2021. Navigating the tensions of quality in qualitative research. Strategic Organization, 19: 70–80.

Langley, A. 1999. Strategies for theorizing from process data. Academy of Management Review, 24: 691–710.

Livne-Tarandach, R., & Jazaieri, H. 2020. Swift sense of community: Resourcing artifacts for rapid community emergence in a temporary organization. Academy of Management Journal, 64: 1127–1163.

Luciano, M. M., Fenters, V., Park, S., Bartels, A. L., & Tannenbaum, S. I. 2021. The double-edged sword of leadership task transitions in emergency response multiteam systems. Academy of Management Journal, 64: 1236–1264.

Pratt, M. G., Kaplan, S., & Whittington, R. 2020. Editorial essay: The tumult over transparency: Decoupling transparency from replication in establishing trustworthy qualitative research. Administrative Science Quarterly, 65: 1–19.

Ruebottom, T., & Toubiana, M. 2020. Constraints and opportunities of stigma: Entrepreneurial emancipation in the sex industry. Academy of Management Journal, 64: 1049–1077.

Sitzmann, T., & Campbell, E. M. 2021. The hidden cost of prayer: Religiosity and the gender wage gap. Academy of Management Journal, 64: 1016–1048.

Soda, G., Mannucci, P. V., & Burt, R. 2021. Networks, creativity, and time: Staying creative through brokerage and network rejuvenation. Academy of Management Journal, 64: 1164–1190.

Tihanyi, L., & DeCelles, K. A. 2021. Publishing original research in AMJ: Advice to prospective authors. Academy of Management Journal, 64: 679–683.