Peer-Reviewed Article

In This Issue

2012; Institute for Operations Research and the Management Sciences; Volume: 60; Issue: 3; Language: English

DOI

10.1287/opre.1120.1080

ISSN

1526-5463

Topic(s)

Optimization and Search Problems

Abstract

In this issue of the journal, we are introducing the newly redesigned OR Forum. In this OR Forum paper, R. Sioshansi analyzes different tariff strategies for charging electric vehicles on the power grid in his article, “Modeling the Impacts of Electricity Tariffs on Plug-in Hybrid Electric Vehicle Charging, Costs, and Emissions.” As in the past, online commentary about this OR Forum paper and its contributions, with responses by the author, is posted at the OR Forum blog at http://www.informs.org/Blogs/Operations-Research-Forum. To better integrate the online commentary with the print copy, we are introducing a summary of the commentary, prepared by the OR Forum Area Editor, Ed Pinker. We hope you will enjoy the summary and contribute to the blog discussion.

Open-pit mine planning formally consists of scheduling the life-of-mine production of a mineral deposit. Although integer programming models have been proposed for this problem since the 1960s, the large size of real instances has made such models impractical in practice. In “A New Algorithm for the Open-Pit Mine Production Scheduling Problem,” R. Chicoisne, D. Espinoza, M. Goycoolea, E. Moreno, and E. Rubio propose new optimization techniques for tackling this important problem. First, a new decomposition algorithm is proposed for solving the linear relaxation of the problem. Second, heuristics are proposed for obtaining feasible integer solutions from these relaxations. Computational tests on real mine planning problems show that the approach can be used to obtain near-optimal solutions within hours.

Anyone who has ever driven in traffic knows that sharing resources with others can be burdensome. At the same time, however, sharing construction and maintenance costs with others is often useful and may be inevitable. What happens when these two conflicting effects operate simultaneously? In “Conflicting Congestion Effects in Resource Allocation Games,” M. Feldman and T. Tamir put forward a model that combines both the negative and the positive effects of congestion in resource-sharing settings, and they study its implications for the social welfare obtained in equilibrium. They find that the quality of the equilibria reached in such settings differs significantly from the quality obtained in previous models that considered either the negative or the positive effects, but not both.

Market makers provide liquidity to financial markets and in return benefit from the fixed bid-ask spread. Because the amounts of assets the market makers buy from and sell to their clients may not be balanced, market makers may end up with an undesirable inventory that is subject to market price uncertainty. A common practice to control unwanted inventory in the foreign exchange market is for market makers to actively trade with other market makers, which also results in the loss of potential spread gain, because these trades are executed at the bid or ask price quoted by the other market makers. In “Optimal Market-Making with Risk Aversion,” K. Huang, D. Simchi-Levi, and M. Song identify the quantity of these active trades that optimizes the trade-off between the risk associated with the unwanted inventory position and the spread loss of the active trades. They prove that if the inventory position is too high, the market maker should actively sell the asset until the inventory reaches a certain upper limit. Similarly, if the inventory position is too low, the market maker should buy the asset from other market makers up to a certain lower limit. If the inventory is between these two limits, active trades are not needed. The authors also provide insights about the values and the monotonicity of these limits under different market conditions.
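The structure of this optimal policy is a simple two-threshold rule, which the sketch below illustrates; the function name, the threshold values, and the sign convention are illustrative placeholders rather than the authors' notation or results.

```python
def rebalance(inventory, lower, upper):
    """Two-threshold active-trading rule for a market maker's inventory.

    Returns the active trade quantity: negative means actively sell to other
    market makers, positive means actively buy, zero means no active trade.
    The thresholds would come from the underlying optimization; the values
    used in the example below are placeholders.
    """
    if inventory > upper:      # inventory too high: sell down to the upper limit
        return upper - inventory
    if inventory < lower:      # inventory too low: buy up to the lower limit
        return lower - inventory
    return 0                   # inside the no-trade band: hold and earn the spread

# Example: with a no-trade band of [-100, 100] units, an inventory of 140
# triggers an active sale of 40 units; an inventory of 50 triggers no trade.
print(rebalance(140, lower=-100, upper=100))   # -> -40
print(rebalance(50, lower=-100, upper=100))    # -> 0
```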
Because of capital constraints, trade credits are widely used in practice, and they are viewed as the largest source of short-term financing of retail inventories. Although trade credit has been studied intensively in economics and finance, little of the literature investigates trade credits and trade credit contracts from a supply chain perspective. An upstream supply chain party, say a manufacturer, might want to know how to optimally structure a trade credit contract offered to downstream parties, and whether a credit limit should be placed on the available trade credit. On the other hand, the manager of a downstream party, say a retailer, might want to know whether to use trade credit or bank credit for procurement purposes. In “Financing the Newsvendor: Supplier vs. Bank, and the Structure of Optimal Trade Credit Contracts,” P. Kouvelis and W. Zhao shed light on these issues by studying the strategic interaction of a supplier and a retailer in the framework of a Stackelberg game in a two-party linear supply chain. The authors establish that a profit-maximizing upstream supplier should always provide cheap trade credit, without a limit, to downstream parties, and that cheap trade credit improves supply chain performance compared with a wholesale price contract. Optimally priced trade credit is cheaper than bank credit and improves supply chain efficiency by inducing larger retailer order quantities. However, it does not perfectly coordinate the chain. It always increases the supplier's profitability, and it also benefits most retailers (all but the “very poor” ones in terms of wealth).

Advances in information technologies have allowed many industries, including retailing and manufacturing, to adopt dynamic pricing strategies. There is growing research interest in analyzing joint pricing and inventory control policies in a variety of inventory systems. Most existing models, however, require the assumption of zero replenishment leadtime, whereas a more realistic inventory system typically has a positive leadtime. What the optimal policy looks like in the presence of a positive leadtime has remained an open question. In “A Note on the Structure of Joint Inventory-Pricing Control with Leadtimes,” Z. Pang, F. Y. Chen, and Y. Feng characterize the optimal policy structure using the notion of L-natural concavity.

Recent years have seen a proliferation in the use of combinatorial auctions in industrial markets and government allocation processes, in which bidders submit bids on sets of items rather than on individual items for sale. In “Quadratic Core-Selecting Payment Rules for Combinatorial Auctions,” R. W. Day and P. Cramton describe their technique for assigning payments to winning bidders, as used in recent spectrum auctions in the United Kingdom and other European spectrum markets. Rather than paying what is bid, each winning bidder is assigned a reduced payment, such that no set of payments could be challenged as not large enough to beat a competing set of bids in the auction, a stability property called “core-selecting.” This characterization is not unique, however, and thus various selection criteria are introduced and explored. The authors' main results compare the relative strengths of two payment selection paradigms: minimizing the Euclidean distance to the truth-preserving Vickrey-Clarke-Groves payments, or to a seller-designated fixed point such as the zero vector or reserve prices. Other topics, such as the decomposition of payments into economically meaningful components and the impact of seller-reserve price regimes, are also discussed.
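In schematic terms, a quadratic core-selecting rule chooses the core payment vector closest, in Euclidean distance, to a reference point such as the Vickrey-Clarke-Groves payments. The display below is a sketch of that idea with generic symbols introduced only for illustration; it is not the authors' exact formulation.

```latex
\min_{p} \; \sum_{i \in W} \bigl(p_i - p_i^{\mathrm{ref}}\bigr)^2
\qquad \text{subject to} \qquad p \in \mathrm{Core}(b),
```

where $W$ is the set of winning bidders, $p^{\mathrm{ref}}$ is the reference point (the VCG payments, or a seller-designated point such as the zero vector or reserve prices), and $\mathrm{Core}(b)$ is the set of payment vectors that no competing set of bids $b$ could challenge as too low.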
When designing telecommunication networks, demand uncertainty is an inherent characteristic that should be considered explicitly, because traffic demand must be forecast over a long planning horizon. Demand uncertainty plays an especially important role in network design for emerging services, where service demand is difficult to estimate and the service level is a key success factor. To handle uncertain demand data efficiently, C. Lee, K. Lee, K. Park, and S. Park propose a decomposition approach based on Dantzig-Wolfe decomposition that introduces so-called edge patterns. Their approach, described in the article “Branch-and-Price-and-Cut Approach to the Robust Network Design Problem Without Flow Bifurcations,” benefits not only from an improved linear relaxation bound but also from the encapsulation of the uncertainty in the column generation problem. Numerical experiments on randomly generated and real-world networks show that their approach outperforms previous methods in solution times and in the robustness of the solutions.

“Survival of the fittest” should not be interpreted as “survival of the toughest.” Education counts, as does evolving within a diversified environment. In “A Hybrid Genetic Algorithm for Multidepot and Periodic Vehicle Routing Problems,” T. Vidal, T. G. Crainic, M. Gendreau, N. Lahrichi, and W. Rei apply this idea and propose a hybrid genetic algorithm combining the exploration breadth of population-based evolutionary search, the aggressive-improvement capabilities of neighborhood-based metaheuristics (the “education”), and an advanced population-diversity management mechanism. Diversity thus becomes a defining characteristic of each individual, complementing and competing with solution quality in establishing its fitness within the population and weighing on its chances to be selected for mating and to survive. Empirical studies show that this general mechanism not only avoids premature population convergence but also leads to higher-quality solutions in reduced computation time compared with traditional approaches. The proposed metaheuristic thus becomes the current champion for three important classes of vehicle routing problems—the capacitated VRP, the periodic VRP, and the multidepot VRP—and their combinations.
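As a rough illustration of how diversity can enter fitness alongside solution quality, the sketch below ranks each individual by cost and by diversity contribution and combines the two ranks; the representation, the weighting, and the function name are illustrative and are not taken from the authors' algorithm.

```python
def biased_fitness(population, diversity_weight=1.0):
    """Combine a cost rank and a diversity rank into one fitness value.

    `population` is a list of (cost, diversity_contribution) pairs, where a
    larger diversity contribution means the individual is more distinct from
    the rest of the population. Lower returned values mean fitter individuals
    for mating and survivor selection. All names and weights are placeholders.
    """
    by_cost = sorted(range(len(population)), key=lambda i: population[i][0])
    by_div = sorted(range(len(population)), key=lambda i: -population[i][1])
    cost_rank = {i: r for r, i in enumerate(by_cost)}
    div_rank = {i: r for r, i in enumerate(by_div)}
    # A good cost rank can be offset by a poor contribution to diversity.
    return [cost_rank[i] + diversity_weight * div_rank[i]
            for i in range(len(population))]

# Example: the cheapest individual (cost 100) is not the fittest here because
# it contributes little diversity to the population.
print(biased_fitness([(100, 0.1), (105, 0.9), (120, 0.5)]))   # -> [2.0, 1.0, 3.0]
```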
The question of how much to consume and how much to invest in risky assets is a fundamental challenge individuals face during their lifetimes. Existing studies on this topic typically employ preference models that capture either habit formation or correlation aversion, but not both. The former is attractive because people tend to follow habits when making consumption decisions. The latter is attractive because it captures the fact that an individual may dislike positive serial correlation in a consumption stream (much as many people exhibit risk aversion when faced with a static lottery). In “Habit Formation from Correlation Aversion,” K. C. Lichtendahl Jr., R. O. Chao, and S. E. Bodily introduce two preference models that capture habit formation and correlation aversion simultaneously. With these preferences, the authors solve a classic consumption and portfolio planning problem and show that a correlation-averse individual may optimally follow a consumption habit.

In the classical fixed-charge transportation problem (FCTP), when the fixed costs are small relative to the variable transportation costs, generic solvers such as CPLEX perform quite well because the optimality gap—the percentage difference between the optimal value of the FCTP and that of its LP relaxation—is small. However, the optimality gap increases sharply as the fixed costs grow relative to the transportation costs, making the problem harder to solve. In an attempt to reduce this gap, the paper “Fixed Charge Transportation Problem: Facets of the Projection Polyhedron” by Y. Agarwal and Y. Aneja studies the polyhedral structure of the projection polytope of the FCTP that involves only the binary variables associated with the fixed costs. Several classes of facet inequalities are identified, and heuristics are developed to detect violated inequalities. Using these inequalities reduces the optimality gap, as demonstrated in the computational experiments. As a result, optimal solutions of problems with 15 source nodes and 15 destination nodes have been obtained within a few minutes of CPU time.

In principle, dynamic programming can be applied to solve problems involving sequential decision making under uncertainty in a broad range of important applications. In practice, however, this approach is frequently untenable because of the “curse of dimensionality.” Approximate dynamic programming (ADP) attempts to address this difficulty by computationally constructing good policies in a tractable manner. In “Approximate Dynamic Programming via a Smoothed Linear Program,” V. V. Desai, V. F. Farias, and C. C. Moallemi develop a new formulation for ADP called the smoothed approximate linear program (SALP). Based on the linear programming approach to ADP, SALP offers several advantages. First, strong bounds are available on the quality of the approximations to the optimal cost-to-go function afforded by SALP. Second, experiments with SALP on a pair of challenging problems (the game of Tetris and a queueing network control problem) show that the approach outperforms a number of existing methods by a substantial margin.
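To fix ideas, the exact approximate linear program (ALP) fits a weighted combination of basis functions to the cost-to-go function by enforcing the Bellman inequality at every state, and a smoothed variant relaxes those constraints with budgeted slacks. The display below is a generic sketch in a standard cost-minimization convention, with symbols introduced only for illustration; it is not necessarily the authors' exact formulation.

```latex
% ALP: the Bellman inequality must hold at every state x.
\max_{r}\; c^{\top}\Phi r
\quad \text{s.t.}\quad (\Phi r)(x) \;\le\; (T\Phi r)(x) \quad \forall x.

% Smoothed variant: violations s(x) are allowed but charged against a budget.
\max_{r,\,s\ge 0}\; c^{\top}\Phi r
\quad \text{s.t.}\quad (\Phi r)(x) \;\le\; (T\Phi r)(x) + s(x) \quad \forall x,
\qquad \sum_{x}\pi(x)\,s(x) \;\le\; \theta.
```

Here $\Phi$ is the matrix of basis functions, $r$ the weight vector, $T$ the Bellman operator, $c$ and $\pi$ state-weighting distributions, and $\theta$ a violation budget; all symbols are generic ADP notation used only for this sketch.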
Appointment scheduling with random durations has been an active research topic for the last 60 years. This challenging problem has broad real-world applications in healthcare (surgery scheduling, physician exam bookings, and diagnostic test planning), transportation (container vessel scheduling and airport gate and runway scheduling), and other industries (e.g., shop scheduling in manufacturing, project management, and staff scheduling in service organizations). Given a sequence of appointments (surgeries, jobs, tasks), the goal is to determine the planned starting time of each appointment so as to minimize the expected total underage costs (idle time of resources) and overage costs (waiting time and/or overtime) caused by the mismatch between planned and realized durations. The problem is trivial if all job durations are deterministic and known in advance; it is also solvable when the job durations follow certain known distributions. Appointment scheduling, however, is particularly challenging in real-life situations where the duration distributions are unknown. Indeed, in many practical situations one may only rely on historical observations (which we call samples) of realized job durations. In “A Sampling-Based Approach to Appointment Scheduling,” M. A. Begen, R. Levi, and M. Queyranne develop a nonparametric sampling-based approach to the appointment scheduling problem when the duration distributions are unknown. Their approach requires only historical data, without any assumption on the underlying duration distributions; it is thus applicable in many practical situations. Moreover, the authors obtain bounds on the number of independent samples required to obtain, with high probability, a provably near-optimal schedule, i.e., to obtain, with probability at least a specified value, a schedule whose expected total cost is within a given small multiple of what could be achieved if the duration distributions were fully known.
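The flavor of a nonparametric, sampling-based treatment can be conveyed by evaluating a candidate schedule directly on historical duration samples, as in the sketch below; the cost weights, the data, and the function are illustrative placeholders rather than the authors' algorithm.

```python
def sample_average_cost(planned_starts, duration_samples,
                        idle_cost=1.0, wait_cost=1.0):
    """Average cost of an appointment schedule over historical duration samples.

    planned_starts: planned start time of each job, in sequence order.
    duration_samples: scenarios of realized durations, one list per scenario.
    Underage (resource idle time) and overage (job waiting time) arise from the
    mismatch between planned and realized durations; the unit costs are
    placeholders.
    """
    total = 0.0
    for durations in duration_samples:
        finish = 0.0  # completion time of the previous job in this scenario
        for planned, duration in zip(planned_starts, durations):
            idle = max(0.0, planned - finish)   # resource sits idle until the planned start
            wait = max(0.0, finish - planned)   # job waits for the resource to free up
            start = max(planned, finish)
            finish = start + duration
            total += idle_cost * idle + wait_cost * wait
    return total / len(duration_samples)

# Example: two historical scenarios of three job durations; a tighter schedule
# trades idle time for waiting time.
samples = [[30, 45, 25], [40, 35, 30]]
print(sample_average_cost([0, 35, 75], samples))   # -> 7.5
print(sample_average_cost([0, 30, 60], samples))   # -> 20.0
```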
Optimization under chance constraints is a standard approach to ensuring that bad events, such as portfolio losses, are unlikely to happen. Chance constraints do nothing, however, to protect against terrible events (e.g., deep portfolio losses or bankruptcy). In “Optimization Under Probabilistic Envelope Constraints,” H. Xu, C. Caramanis, and S. Mannor extend the notion of chance constraints to a formulation that provides different probabilistic guarantees at each level of constraint violation. The authors thus develop a notion of guarantee across the spectrum of disasters, or rare events, ensuring that these levels of protection hold across the entire curve. They show that the corresponding optimization problem can be reformulated as a semi-infinite optimization problem, and they provide conditions that guarantee its tractability. Interestingly, the resulting formulation is what is known in the literature as comprehensive robust optimization. This work thus provides a new fundamental link between two main methodologies in optimization under uncertainty: stochastic optimization and robust optimization.

A growing number of technology markets are characterized by a “winner-takes-all” outcome, in which the first firm to launch a new product gains all the returns while its competitors gain nothing. This happens as a result of patent protection or short product life cycles. The wide access to information now available to all enables competitors to estimate fairly accurately the investment decisions made by others. The problem faced by a firm competing in such a race is how much to invest in developing the product to maximize its utility while accounting for the effect its decision will have on its competitors. In “A Stochastic Competitive R&D Race Where ‘Winner-Takes-All,’ ” P. G. Canbolat, B. Golany, I. Mund, and U. G. Rothblum establish the existence and uniqueness of a Nash equilibrium for this problem and present an efficient way to compute it. They also provide several insights about the equilibrium solution and compare it with a centralized variant in which a single decision maker decides how much each firm will invest so as to maximize the overall utility of all firms.

The proportional fair allocation scheme allocates service capacity among job classes dynamically in stochastic networks, accounting for both the congestion levels of the job classes and the loads on the resources. How do we evaluate key performance measures such as workload and queue lengths? How good are these system-wide performance measures in comparison with those of other allocation schemes? In “A Stochastic Network Under Proportional Fair Resource Control—Diffusion Limit with Multiple Bottlenecks,” H.-Q. Ye and D. D. Yao study these questions by applying fluid and diffusion scalings to the network and investigating the corresponding limiting regime. They establish the diffusion limit of the network with multiple bottlenecks, and they identify the class of allocation schemes among which the proportional fair allocation minimizes a quadratic cost objective. This research aims to provide tools for engineering design and business decision making in dynamic systems with networked resources.
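For readers less familiar with the scheme, proportional fair allocation in a bandwidth-sharing network is commonly formalized as the capacity allocation that maximizes a sum of queue-length-weighted logarithmic utilities subject to resource capacity constraints; the display below is a standard textbook formulation with generic symbols, not notation taken from the paper.

```latex
\Lambda(n) \;=\; \arg\max_{\lambda \ge 0} \;\sum_{i} n_i \log \lambda_i
\qquad \text{s.t.} \qquad \sum_{i} A_{ji}\,\lambda_i \;\le\; C_j \quad \text{for every resource } j,
```

where $n_i$ is the number of class-$i$ jobs present, $\lambda_i$ the service capacity allocated to class $i$, $A_{ji}$ indicates whether class $i$ uses resource $j$, and $C_j$ is the capacity of resource $j$.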
