Editorial | Open access | Peer reviewed

Editorial: New developments at the Journal of Operations Management

2018; Wiley; Volume: 64; Issue: 1; Language: English

DOI

10.1016/j.jom.2018.12.005

ISSN

1873-1317

Authors

Tyson R. Browning, Suzanne de Treville

Topic(s)

Supply Chain Resilience and Risk Management

Abstract

As we draw near to a year in office as co-Editors-in-Chief (EICs) of JOM, we would like to bring the JOM community up to date on what is happening at the journal and share new developments that we have been discussing with the Department Editors (DEs) and presenting in various venues. We will also use this occasion to share our goals and aspirations for the journal and the field.

We would like to begin by expressing our gratitude to Dan Guide and Mikko Ketokivi for their leadership of JOM over the past several years. They instituted many important advancements, not the least of which is the department structure from which we all now benefit. They also formalized the editorial process for evaluating submissions, established clearer expectations for Editorial Review Board (ERB) members and Associate Editors (AEs) in terms of accepting invitations to review, and raised the methodological standards for acceptable research. These difficult transitions took the journal to a new level. It is an honor to build on their pioneering work. We look forward to working together with the JOM community not only to maintain the excellent reputation of the journal but also to broaden and enhance it.

As EICs, our top goals include building the community of empirical scholars in operations management (OM), increasing the number of high-quality submissions both to JOM and to the field in general (whether or not those manuscripts end up published at the journal), maintaining reasonable lead times, developing the reviewer community, and bolstering JOM's scholarly impact and practical relevance. We have added new departments to encourage submissions in underutilized regions of the journal's scope. Building on the work done by the previous EICs, we are working to clarify expectations with respect to methods such as survey research. Developing the peer-review process and encouraging manuscripts that target underutilized regions of the journal's scope provide a wealth of opportunities that we intend to mine. The remainder of this article provides further details on these points.

Let us first take a moment to reflect on the meaning of peer review. An article is published when a group of peers agrees that it is sufficiently interesting, well written, and doing what it claims to do. Authors sometimes complain that the review team did not understand their submission and suggest that a "better" team would have figured it out. This reflects a misunderstanding of the objective of peer review: If the review team did not understand, then that in itself provides useful feedback to the authors about where clarification is needed. Review team members are not perfect, and their assessment of a manuscript is not a declaration from an oracle—but many members of the research community expect oracle-like outcomes. When we take the peer-review process for what it is, useful insights emerge into how to improve its functioning both for reviewers and for review teams.

Making reviews developmental seems to be a common priority across journals, and we would like to recognize the emphasis developmental reviewing has received at JOM. We are delighted when reviewer reports give authors and editors a clear understanding of what it will take to improve a manuscript. A gatekeeper reviewer who concludes that a manuscript's contribution does not warrant publication in JOM may feel that their job is finished and shift focus to assembling arguments for rejecting the paper.
A developmental reviewer, in contrast, will seek to produce a review that aids the authors in moving forward with their research. Although that developmental reviewer identifies flaws and missing pieces in a manuscript, they do so while maintaining a "glass half full" approach by answering the question, "What would it take to make this manuscript publishable in JOM?" Even when the specific manuscript does not make a JOM-level contribution, the guidance provided by the review team should be of service to the authors. The reviewer then makes a recommendation to the AE as to whether the points raised are addressable within a 90-day revision cycle.

This new definition of responsibilities makes it easier to bring inexperienced reviewers up to speed. When the reviewer's job is to vote on whether the manuscript is accepted or rejected, recommending acceptance can be quite frightening, especially when other reviewers find fatal flaws and recommend rejection. We have observed cases where an inexperienced reviewer submits a positive review and is then shamed when other members of the review team find fatal flaws. New reviewers quickly develop the skill of finding fault, resulting in a review process built around rejection. Defining the review task as making recommendations for improvement not only helps authors, but also provides a safer environment for reviewers. This role redefinition also creates space for reviewers to indicate parts of the manuscript that they do not understand. A reviewer who is oracle and gatekeeper is expected to demonstrate full understanding of the work; yet a key benefit of the peer-review process is to identify opportunities to make the submission easier to understand.

Emphasizing that the process is about peer review rather than gatekeeping also leads to a redefinition of the role of the AE. The AE's task becomes to synthesize the reports from reviewers together with their own review into (1) a recommendation to the DE, (2) an action plan for the authors that clearly lays out the path for improving the manuscript (whether it moves forward at JOM or not), and (3) a formal evaluation of the reviewers.

Under the recommendations-for-improvement model of reviewing, the quality of feedback to reviewers also improves. Under the gatekeeping model, the main feedback that a reviewer receives is whether their recommendation matches what was decided at the editorial level. We have already noted the negative impact of a reviewer's recommending further consideration followed by a rejection decision by the editors. There are also negative consequences when a rejection recommendation by the reviewer is followed by the editors' inviting the authors to revise the manuscript: The reviewer may well feel that their review was not appreciated. Under the developmental definition, the reviewer sees from the AE and DE reports which recommendations were carried forward. A reviewer who raised ten points may find that three are identified by the AE and DE as essential, five as worth considering, and two as taking the manuscript in a wrong direction. This feedback, combined with the chance to see the points raised by other members of the review team and to observe how responding to them transforms a manuscript into something publishable, creates valuable learning opportunities.

Fig. 1 shows our updated version of the standard editorial process that was introduced a couple of years ago (Guide and Ketokivi, 2015b).
While this process provides thorough and effective evaluations of manuscripts, it has more steps than most journals' review processes, so the challenge is to improve its efficiency while maintaining its high effectiveness. Many journals have set 90-day goals for each round of review, and this is a common anchor point in many authors' minds. Currently, our throughput time (average time from submission to first decision) for manuscripts that go out for first-round review is over our 90-day goal. (The grand average for all submissions is less than 90 days, but this includes desk rejections, so the average gives a misleading impression of the highly bimodal distribution: a mix of quick desk rejections and much longer full reviews can average under 90 days even though neither population is near that figure.) To help with timeliness, we need to improve the flow of reviewed papers. This is something that we as operations managers should be well positioned to do!

Fig. 1. JOM's editorial process.

Fig. 2 provides a linear view of one (first-round) review cycle, which, like any process, is prone to non-value-adding holdups. Any delay typically carries forward through the process, making the 90-day goal very difficult to achieve. Yet short-cutting the process could compromise its effectiveness. We face an efficiency-effectiveness tradeoff here: Our multi-stage review process, while providing excellent feedback to authors, challenges throughput time. We have found that the redefined AE and reviewer roles appear to make the workload for review-team members more predictable and manageable, making it possible to work with shorter assignment horizons.

Fig. 2. A linear view of one review cycle, with standard deadlines and cumulative duration.

To keep lead times reasonable, we ask our DEs to:

- Assemble review teams as quickly as possible, ideally within a couple of days, but within a week if at all possible.
- Invite one ERB reviewer, one ad hoc reviewer, and an AE. Optionally invite a third reviewer who is new to JOM.
- Not invite an AE to fill the role of a regular reviewer.
- Encourage reviewers and AEs to respond quickly to invitations to review.
- Complete their DE report within two weeks of receipt of the AE report (even sooner if the manuscript is already running late).
- Document their assessment of the AE and reviewer reports. Currently this is done in the "Confidential comments to the editor" section.

We ask our AEs to:

- Respond immediately to AE invitations, ideally the same day, but within a couple of days at most.
- Complete their own review of the paper while awaiting the reviewer reports due 28 days hence.
- Submit their AE report within 14 days of receiving all reviewer reports.
- Comment to the DE on the quality of the reviewers' reports.
- Look for the final decision letter from the EIC, which contains the entire review team's and editors' comments, and make a point of using the complete set of review-team reports to calibrate their work and improve as a reviewer and AE.

We ask our reviewers to:

- Respond immediately to reviewer invitations, ideally the same day, but within a couple of days at most.
- Offer constructive and developmental comments about what it would take to make the manuscript acceptable for JOM.
- Submit their review within 28 days of accepting a review invitation. (Extensions may be granted by the DE in extenuating circumstances, and such extensions should be arranged when accepting the review invitation.)
- Use the final decision letter from the EIC, which contains the full set of review-team reports, to calibrate and improve as a reviewer.
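To see how tight the 90-day budget is, here is a rough sketch in Python that sums one unhindered pass through the deadlines above. The step names and durations come from the requests just listed; the figures are illustrative, not official policy.

```python
# A rough, illustrative sketch (not journal policy): summing the nominal
# deadlines above for one unhindered first-round review cycle.
nominal_steps = {
    "assemble review team":          7,   # "within a week if at all possible"
    "reviewers accept invitations":  2,   # "within a couple of days at most"
    "reviews due after acceptance": 28,
    "AE report after last review":  14,
    "DE report after AE report":    14,   # "within two weeks"
}

elapsed = 0
for step, days in nominal_steps.items():
    elapsed += days
    print(f"{step:<30} +{days:>2}d -> day {elapsed}")

print(f"nominal total: {elapsed} days "
      f"({90 - elapsed} days of slack before the 90-day goal)")
```

The nominal path already consumes roughly 65 of the 90 days, leaving little slack for declined invitations, late reports, or the final EIC decision, which is why a single delay propagates through the whole cycle.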
Depending on the needs of an individual manuscript, we may deviate from the standard process. For example, a DE might want to confer with an AE about whether to send a paper out for review and/or to recommend reviewers. The EICs or DEs may request an immediate revision from authors to improve a manuscript's chances in the review process. The EICs also reserve the right to "fast-track" a manuscript. Also, JOM Forum submissions (see section 5.6) do not follow this standard process.

A recent addition to the process flow is the option to send a manuscript for a pre-review methods check. This allows us to send manuscripts with complex methodological issues to the new Empirical Research Methods in OM department (see section 6.3) for an initial assessment of whether the methods are correctly executed. It is often possible to address methodological concerns in a revision before sending the paper out for more general review by the primary department. This option also allows a determination about questionable methods to occur before involving a full review team. If the manuscript then goes out for review, the review team is informed of the results of the methods check, which allows them to focus more on substance. Although we do not have much data yet, our initial observations suggest that fewer papers go out for review and that those that do fare better in the review process.

The general expectation is that a manuscript sent out for initial review will be evaluated by at least two reviewers, an AE, and a DE. In some journals the standard practice is to invite more reviewers than needed in case some decline the invitation. The ERB policy implemented by the previous editors allows JOM largely to avoid this issue: ERB members have agreed to review up to four manuscripts per year, and thankfully they seldom decline invitations to review. AEs are expected to submit up to 3–4 AE reports per year. (AEs are nominally associated with a department, but any DE may invite any AE to a review team.)

At the DE's discretion, a revision may be evaluated by any or all of the first-round review team. Consistent with the move away from the gatekeeper role, it is not essential that all of the reviewers "sign off" on the manuscript. The expectation is that DEs and AEs are able to evaluate whether a revision has sufficiently addressed the points raised by the review team. Nevertheless, it is the DE's prerogative (perhaps in consultation with the AE) to decide whether a revised manuscript would benefit from one or more of the reviewers evaluating the responses. The more extensive the revision, the more likely it is that a revised submission will go back to one or more reviewers; authors may be advised of this likelihood when they receive the invitation to revise. We encourage DEs to use the minimal resources necessary, usually just the AE, to evaluate a revision, and to make extra rounds with the full review team the exception rather than the norm.

JOM uses a double-blind review process for authors, reviewers, and AEs. Author, reviewer, and AE identities are available only at the DE level and above. EIC and DE names are visible to authors.

Finally, we want to emphasize that we do not take late reviews lightly. Reviewing is an important responsibility, and we expect everyone involved with the review process to keep their commitments to meet deadlines. We regularly review the status of manuscripts. Late reviewers and AEs will be approached first by the Managing Editor and then by the DE (or even the EICs) if the situation does not resolve.
We also recognize excellence in reviewing through our annual awards to the best AE and reviewer (often with several honorable mentions as well). Best reviewer awards come with promotion to AE.

Improving editorial efficiency without compromising effectiveness requires that we utilize editorial and reviewer resources judiciously and sustainably. One important tool is desk rejection. The primary causes of desk rejection are the following:

- The manuscript is insufficiently empirical. Despite the clear statement on the journal's home page that JOM publishes empirical research, we nevertheless receive many analytical modelling and optimization submissions, which we redirect to analytical OM, operations research, or industrial engineering outlets.
- The manuscript is insufficiently about OM. JOM is very open to cross-functional research (see section 6.1) as long as OM is the focus. In case of doubt, two key indicators are whether (1) the main variables in a study are ones that a practicing operations manager could directly affect or (2) the list of references includes substantial OM literature.
- The manuscript demonstrates a general lack of quality. It is the responsibility of the authors, not the review team, to craft a top-notch manuscript for JOM. Manuscripts that fall short in terms of structure, writing, or readability do not deserve the benefit of evaluation by a review team. We encourage authors to use professional copy-editing services to increase the readability of manuscripts.
- The research employs inappropriate or insufficiently rigorous methodology. We expect authors to read the editorial on methodologies (Guide and Ketokivi, 2015a) as well as papers providing methodological guidance for case and field studies (Handfield and Melnyk, 1998; Meredith, 1998; Stuart et al., 2002; Ketokivi and Choi, 2014), partial least squares (PLS) path modelling (Rönkkö et al., 2016), and endogeneity (Ketokivi and McIntosh, 2017; Lonati et al., 2018).
- The manuscript has plagiarism problems (often self-plagiarism). We run all submissions through plagiarism-detection software. If authors paste sentences verbatim from other works (including their own, especially where they no longer own the copyright), they must use quotation marks and cite the source. All too often, we find that authors have built their literature-review section largely by pasting in sentences from other works that they cite without quoting. It is acceptable for authors to submit a paper that contains text taken from their unpublished works (for which they retain the copyright), such as dissertations, conference papers, or working papers, as long as the source is acknowledged. For more in-depth information on this topic and on the journal's policies on ethics in publishing, we refer readers to the editorial on ethics (Guide and Ketokivi, 2016) and the journal's home page.
- The manuscript makes an insufficient contribution or just is not interesting. We often ask a DE who is closer to the subject area for a second opinion before drawing this conclusion. Some manuscripts do a poor job of motivating their research. Some add nothing consequential to theory or to practicing managers. The fact that no one else has yet studied something is not necessarily a good reason to study it. The fact that some relationships among variables might be statistically significant does not necessarily make them interesting.

When one or more of these primary causes of desk rejection is present, it is clear to us that the manuscript will fare poorly in JOM's review process.
Desk rejection spares reviewer time so that it is available for manuscripts with greater potential to move toward publication, improving outcomes across the board.

We are deeply committed to developing and sustaining JOM's community of reviewers and editors. Working with a highly motivated team of dedicated ERB members (who have each committed to review up to four manuscripts per year) as reviewers has helped greatly with reliability and quality, but it comes with an important caveat: We must prioritize the continual expansion and refreshment of the ERB. This requires that non-ERB members be given the opportunity to build a track record of providing quality reviews. Therefore, we would like DEs to use one ERB reviewer and one ad hoc reviewer per manuscript. Ad hoc reviewers who demonstrate an ability to provide quality (and timely) reviews will eventually earn promotion to the ERB. Also, we encourage DEs to invite a promising individual—e.g., a junior faculty member working in the area of the manuscript—as an optional third reviewer. The DE invites this person to set up an account in the online editorial system and advises them (when sending the invitation) that this should be considered a trial run, and that the DE and AE might not share the review with the authors. The AE and DE are expected to comment on the quality of all reviewers. We assess these comments when considering promotions to ERB, AE, and DE positions.

We are also taking steps to improve our keywords list and the system for matching reviewers and AEs with manuscripts on that basis. We expect to realize a significant improvement of this capability in 2019, which should help us distribute reviews more evenly across the AE, ERB, and ad hoc reviewer pools. Currently, JOM has 94 ERB members (including 15 added so far in 2018), 96 AEs (including 20 added so far in 2018), and 17 DEs.

We want to encourage research that lies in regions that fit the scope of JOM yet have been underrepresented in its recent publications. This requires fresh thinking about what a JOM paper should be. Let us briefly describe some of these regions and how they challenge the conventional wisdom about what an interesting contribution might look like.

Scientific knowledge is created through learning from observation. Many interesting observations serve primarily to bring research questions to the attention of researchers: Science is advanced along the entire continuum from initial, interesting, or surprising observation to full generalization. Instead of restricting scientific knowledge creation to generalizing, we seek to encourage research that contributes at other parts of the continuum. The contribution of an empirically grounded research question is to inform the choice of which studies to run next. Generalization, in this view, results gradually from configurations of articles. This incremental approach to generalization encourages investment in choosing impactful research questions, creates space for sense making, and encourages rigorous construction of the theoretical foundation. Knowledge creation in the OM field has suffered from the expectation that each individual article should cover the entire continuum: It is commonly assumed that each individual article will both propose and test a theory or model. One outcome has been cases in which research questions are selected because data are available rather than because solid arguments support their capturing what we need to learn.
Knowledge creation around lean production and Six Sigma exemplifies a second concern: Decades after the arrival of these concepts in the OM field, there is still disagreement as to their definition, and many possibilities remain to translate these interesting, practical observations into foundational knowledge. More generally, we hope to streamline the process of translating interesting observations at the practitioner level into basic theoretical building blocks. By encouraging knowledge creation across the entire spectrum, from observation to generalizability, we hope to create space for dialogue.

Empirically grounded research questions may take the form of an extended case study or even a conceptual paper that is motivated by an anecdote or general observation. The use of an extended case study to generate research questions is exemplified by Browning and Heath (2009), who identified important moderators of the relationship between implementation of lean production and cost reduction. Gray et al. (2017) and Yin et al. (2017) illustrated the use of simpler cases whose primary role is to provide evidence that a phenomenon has been observed. The observations made in Gray et al. (2017) led to a system dynamics model, with loops in the model grounded in the cases. Those made in Yin et al. (2017) resulted in a conceptual paper that extended the theory of swift, even flow (Schmenner and Swink, 1998). As researcher participation in the empirical setting increases, learning from the interaction between the researcher and the practitioner becomes an increasing part of the contribution of the work. When learning from this interaction is the primary focus of the paper, the contribution type moves toward design science, which we address in the following subsection.

Design science is a methodology whose primary objective is the creation of knowledge from the academic-practitioner interaction. Appreciation of the scientific contribution of this kind of knowledge—historically considered insufficient for publication in a top journal—has increased dramatically in recent times, leading the previous EICs to create the Design Science department to incubate use of this methodology. Understanding of the use of design science in the operations context remains a work in progress: We are working together with the DEs of this department to provide guidance to authors and review teams about how to create and evaluate a design science contribution in the OM field. Meanwhile, authors should read the article by van Aken et al. (2016) and representative papers such as Groop et al. (2017). We expect this type of knowledge creation not only to grow in importance in the field over the coming years, but also to facilitate other knowledge-creation types.

We would like to highlight a type of paper that falls under the design science designation: empirically grounded analytics (EGA). Analytical models such as the economic order quantity (EOQ) or newsvendor models predate the field of OM by a wide margin. Once an analytical model is developed, the tradition has been to conclude that the work is done and the full contribution made. But, just as organizational silos—where designs are thrown over the wall to production—do not work in industry, the creation of an analytical model is just a step toward improved decision making. Moreover, analytical models tend to make many simplifying assumptions, and many do not receive widespread adoption in practice.
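To ground the discussion, recall the classic newsvendor quantity q* = F^{-1}(c_u / (c_u + c_o)). The following minimal sketch in Python (with hypothetical unit economics and demand figures of our own choosing, not drawn from any of the papers cited here) shows how one such simplifying assumption matters: holding the mean and standard deviation of demand fixed, a right-skewed lognormal distribution yields a different recommended order quantity than a normal one.

```python
# Minimal illustration (hypothetical numbers): the newsvendor critical
# fractile under two demand distributions with the same mean and std. dev.
import numpy as np
from scipy import stats

price, cost, salvage = 22.0, 6.0, 2.0      # hypothetical unit economics
cu, co = price - cost, cost - salvage      # underage and overage costs
cr = cu / (cu + co)                        # critical fractile = 0.80

mean, sd = 100.0, 30.0                     # hypothetical demand moments

# Optimal order quantity assuming normally distributed demand
q_normal = stats.norm(loc=mean, scale=sd).ppf(cr)

# Same first two moments, but a right-skewed lognormal demand distribution
sigma = np.sqrt(np.log(1 + (sd / mean) ** 2))
mu = np.log(mean) - sigma**2 / 2
q_lognormal = stats.lognorm(s=sigma, scale=np.exp(mu)).ppf(cr)

print(f"critical fractile: {cr:.2f}")
print(f"order quantity, normal demand:    {q_normal:6.1f}")
print(f"order quantity, lognormal demand: {q_lognormal:6.1f}")
```

The two recommendations differ even though the first two moments match, and the gap changes as the critical fractile moves into the tails; this is exactly the kind of distributional sensitivity that EGA work can follow into the field.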
We would like to encourage papers that follow an analytical model into its implementation in an organization and capture what needs to be worked out in order for the model to contribute to actual decision making. For example, Schweitzer and Cachon (2000) demonstrated that merely training decision makers in how to use the newsvendor model to calculate the profit-maximizing order quantity did not equate to them using it in practice. Such a paper might document a progression like the following:

- The model facilitates incorporation of nonlinear or counterintuitive relationships into decision making.
- When the model is applied to actual data, the results suggest that the decisions being made should be adjusted.
- Decision makers agree that the model results are logical and make an effort to implement them.
- Implementation is not trivial: Factors arise during implementation that require action that was not considered when building the model or planning its implementation.
- Implementation identifies additional, salient factors or conditions that the model should incorporate.
- The implementation may fail.

For example, EGA research might consider how decision making changes with assumptions about an underlying distribution, such as tail heaviness or symmetry: de Treville et al. (2014) modeled the weight of the right tail of a distribution of venture-capital deals as a decision variable, illustrating how the value of spare capacity increased with tail weight. OM research (and practice) often tends to assume a normal distribution to simplify analysis. This simplifying assumption sometimes works well, but it may result in bias.

Another type of EGA paper explores whether the extra work and complexity involved in improving the accuracy of a model is warranted by the improvement in decision making. For example, Biçer et al. (2018) investigated whether a model that differentiates between usual demand volatility and shifts in median demand provides enough extra guidance to warrant the extra complexity, or whether it is sufficient to approximate demand uncertainties with a model that assumes usual demand volatility. They showed that the answer depends on the expected direction of the shift in median demand: For a shift that is expected to be positive, the implications for the value of responsiveness suffice to justify use of the more complex model. When, however, median demand is expected to shift down, assuming that all demand uncertainty comes from usual demand volatility provides a reasonable approximation.

We would also like to encourage EGA research that evaluates the expected impact of new data-science tools, combined with increased data availability, on the field of OM. In contrast to the general expectation that "big data" will dramatically change how operations are managed, we hope to encourage work that relates new data tools and types to foundational knowledge in the field.

Let us close by considering an example of what does not qualify as EGA: a model that is developed and fit to company data without providing information as to how the results influenced decision making, or whether anything else was learned during implementation. Papers whose goal is to present a model, with empirical testing being ancillary at best, are not considered for publication at JOM and are better suited for an engineering or operations-research journal.

Yes, JOM is open to literature reviews, although the bar is high. Quality literature reviews are challenging to write.
We are not sanguine about "systematic literature reviews" that count papers and citations so as to identify trends and "hot topics" captured using graphs, clusters, etc. Rather, we see potential in literature reviews that contribute to theory by proposing taxonomies or frameworks that identify important research questions and managerial insights, or that compare and contrast alternative theories or conflicting results. Part of the challenge with building theory from reviewing the literature is getting from the huge population (sometimes thousands of papers) to the level of detailed exploration of what each of these papers does and does not do. A high-level analysis makes it harder to delve into the details of the methods and theories within each paper. It is thus useful to distinguish between literature surveys (which stay at a high level) and literature reviews (which go deeper). We are not ruling out literature surveys completely, but we encourage researchers considering a literature survey to be aware of how challenging it is to achieve the breadth of search required for rigor while still delving into key research questions and managerial insights (which requires depth of search)—all within a single manuscript. In deciding whether to consider a literature review, we ask whether the submission advances the field by contributing new insights that elevate future research endeavors and guide practitioners. Some examples of OM literature reviews that typify what we would like to encourage include those by Bozarth and McDermott (1998), Ramdas (2003), Bendoly et al. (2006), and Browning and Ramasesh (2007).

JOM has historically published conceptual papers, sometimes qualifying them as "Conceptual Notes." Some may employ limited empirical data for motivation or demonstration without this being the focus of the paper. We remain open to top-notch conceptual papers (such as can be found in journals like the Academy of Management Review) that contribute to OM theory and direct future empirical research. The criteria of merit thus have much in common with those for literature reviews—and, indeed, good conceptual papers are well grounded in the literature. Some examples of largely conceptual papers in JOM include those by Choi et al. (2001), Choi and Krause (2006), de Treville et al. (2004), Ramasesh and Browning (2014), and Spring et al. (2017). Such papers often use a vignette to motivate the conceptual analysis, approaching the empirical grounding of a research question.

Forum articles allow senior scholars to present their viewpoint on a strategic topic. These provide interesting insights and provoke stimulating discussions and research in the OM community. They prompt reflection on and vision for the OM field. That a senior scholar has taken a stand on a topic provides warrant for other researchers to pursue that topic, making it easier to convince reviewers that there is an issue to be addressed that represents a substantial contribution to the field. Because Forum articles present one viewpoint, we may sometimes invite a response to elicit further discussion. Forum articles are submitted by invitation only through a special category in the online editorial system.

We also seek to stimulate submissions to underutilized regions of JOM's scope through new and updated departments. See the journal's home page for the latest list of departments, DEs, and statements of each department's mission and scope.

Department Editor: Mike Galbreth, University of Tennessee.
We expanded the Marketing and Retail department into the Operations Interfaces department. This department is for articles that cross the boundary between OM and other functional areas such as accounting, human resources/organizational behavior, finance, and marketing. Importantly, articles must still have OM as their primary focus. We see this department as creating an environment that encourages interdisciplinary research.

Department Editor: Susan Helper, Case Western Reserve University.

This new department solicits two kinds of articles. First, OM research often has policy implications: Outcomes as measured by private profit and social welfare might diverge. For example, understanding whether effective management of the innovation process requires production to be done in close geographic proximity to where innovation occurs has implications for whether government policies should promote domestic manufacturing. Second, effective implementation of public policy often benefits from a sound understanding of operations principles: Many public programs would benefit from incorporating OM insights about the effects of non-linearities and discontinuities that result from the bottlenecks, uncertainties, and information problems that characterize real-world processes. Papers submitted to this department must have an OM focus that connects to policy implications.

Department Editor: Mikko Rönkkö, University of Jyväskylä.

This new department has two missions. First, it will handle manuscripts addressing methods and tools for OM research. In the past, JOM has published many papers on topics such as case-study research methodology and various analytical approaches and techniques. The number of OM research methods continues to increase, so it is important to explore their appropriateness and rigor for various research designs. Second, this department will organize a pool of method-expert AEs and reviewers who may be invited to a review team by any DE. As noted in section 3, we also send a number of new submissions to this department for a pre-review methods check before deciding whether to put the manuscript into the regular review process.

It was terrific to assemble a majority of the DEs at the first official JOM DE meeting in Chicago on August 9, 2018. We increased our shared understanding of JOM's aims, policies, procedures, and opportunities. Thanks to the Association for Supply Chain Management (ASCM—formerly APICS—which owns JOM) for hosting the event at their headquarters and to Elsevier for providing travel support for the DEs and a helpful presence.

In closing, we are delighted to find such interest, excitement, and positive momentum throughout the OM community. We appreciate all of the authors, reviewers, and editors who contribute to JOM's success. Our goal is for JOM to continue to improve as a flagship journal in a vibrant and growing OM field.
