Risk Topography
University of Chicago Press; Volume 26, Issue 1; 2012. Language: English
DOI: 10.1086/663991
ISSN: 1537-2642
Authors: Markus K. Brunnermeier, Gary Gorton, Arvind Krishnamurthy
Topic(s): Complex Systems and Time Series Analysis
I belong to those theoreticians who know by direct observation what it means to make a measurement. Methinks it were better if there were more of them.
—Erwin Schrödinger (quoted in Walter Moore, Schrödinger: Life and Thought, 1989, 58–59)

I. Introduction

The financial crisis of 2007–2008 dramatically revealed that it is time to rethink the measurement of economic activity. In particular, because of derivative securities, off-balance sheet vehicles, and other financial innovations, traditional measures of aggregate risk, such as leverage, are inadequate. It is imperative that we build an economy-wide risk topography, along with submaps of the different financial sectors of the economy. Measuring only cash instruments and income and balance sheet items is not sufficient for understanding the economy; instead we should measure risks, and think in terms of risks, in addition to quantities.

The situation today, and during the crisis, is not so different from the 1930s, when Simon Kuznets, Arthur Burns, Wesley Mitchell, and their colleagues developed the first official measures of economic activity for the overall US economy, the National Income and Product Accounts (NIPA), and the business cycle chronology. This occurred in the midst of and just after the Great Depression. Referring to the Great Depression, Richard Froyen (2009) put it this way:

One reads with dismay of Presidents Hoover and then Roosevelt designing policies to combat the Great Depression of the 1930s on the basis of such sketchy data as stock prices indices, freight car loadings, and incomplete indices of industrial production. The fact was that comprehensive measures of national income and output did not exist at the time. The Depression, and with it the growing role of government in the economy, emphasized the need for such measures and led to the development of a comprehensive set of national income accounts. (Froyen 2009, 13)[1]

During the financial crisis of 2007–2008, policymakers faced a similar problem. Relevant information about the financial sector and its linkages to the real economy was missing. Very basic measures were inadequate. For example, a measure such as "leverage" has little meaning in a world with derivatives and off-balance sheet vehicles. "Liquidity" was not clearly defined, let alone appropriately measured. Existing measures did not account for the shadow banking system, the size of the repo market, or the extent of different financial institutions' exposure to residential mortgages and credit derivatives.

Measurement is the root of science, and it is also the basis of macroprudential regulation and of firms' risk management systems. Recognizing these measurement problems, the Dodd-Frank Wall Street Reform and Consumer Protection Act (Pub. L. 111–203, H.R. 4173) includes a provision for the establishment of the Office of Financial Research (OFR), a new division within the Treasury.
The OFR is tasked with providing research and information to the newly created Financial Stability Oversight Council, and it has subpoena power to require financial institutions to produce the data it requests. One possible role for the OFR would be to implement new measurement systems. Similarly, in Europe, the European Systemic Risk Board (ESRB) was established to oversee the build-up of systemic risk.

In this paper we outline a system for measuring risks and liquidity in the financial sector and producing a risk topography for the economy. We see two tangible benefits to implementing these ideas.

First, such a measurement system would improve significantly on standard accounting paradigms in capturing the risks that are most relevant for systemic risk assessment by regulators and financial market participants. The basic idea behind the measurement metrics is to elicit from financial firms, on a regular basis, their sensitivity to a number of prespecified factors and scenarios. Essentially, firms report their "deltas" with respect to the specified factors; that is, the dollar gain or loss that occurs when a specified factor changes by a specified amount. In addition, they report their liquidity deltas: the increase or decrease in their liquidity as defined by a liquidity index, the Liquidity Mismatch Index (LMI).[2] For example, we ask what the capital gain or loss to your firm is if house prices fall by 5%, 10%, 15%, and 20%, and what it is if they rise by the same increments. By deviating from standard accounting paradigms for measurement and moving closer to risk-management scenarios, these metrics reflect derivatives, liquidity, and other important features of a modern financial system. For example, an important point we develop in Section III is that the liquidity / capital delta measures are more informative than accounting measures of leverage. Standard measures of leverage may be meaningless in a world of derivatives, while the liquidity / capital deltas better measure the "fragility" of the financial sector.
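To make the elicitation concrete, the following sketch shows how a firm-wide dollar delta for a single factor could be assembled from position-level revaluations. The positions, sensitivities, and linear revaluation rules are hypothetical illustrations of the idea, not the paper's specification; a real report would draw on the firm's own internal risk models.

```python
# Sketch of the scenario-delta report described above. Position names
# and revaluation rules are hypothetical.

def gain_whole_loans(dh):
    # $200mm of residential whole loans, assumed to move one-for-one
    # with house prices
    return 200.0 * dh

def gain_junior_mbs(dh):
    # $100mm of junior MBS tranches, assumed 1.5x levered to house prices
    return 150.0 * dh

def gain_cds_protection(dh):
    # protection bought on $50mm notional: roughly zero market value
    # today, yet it gains as house prices fall (opposite-signed delta),
    # which is exactly what a balance sheet snapshot misses
    return -40.0 * dh

positions = [gain_whole_loans, gain_junior_mbs, gain_cds_protection]

# the prespecified factor moves: house prices +/- 5, 10, 15, 20 percent
for dh in [-0.20, -0.15, -0.10, -0.05, 0.05, 0.10, 0.15, 0.20]:
    delta = sum(p(dh) for p in positions)
    print(f"house prices {dh:+.0%}: firm-wide gain/loss = {delta:+.1f} $mm")
```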
The data can reveal risk and liquidity pockets in the economy. Currently, the absence of information about the risk exposures of the financial system means that firms can be in "crowded trades" without knowing it. That is, a firm's risk exposure may look small on its own, but may be large if all other firms have a similar exposure. Data on risk pockets can trigger better private risk management as well as enhance regulatory risk assessment. The data can also detect trends in liquidity or risk imbalances in the economy. For example, the data may show that the financial sector's reliance on the repo market grew over the 2000 to 2007 period and resulted in a significant liquidity imbalance for dealer banks. We discuss these uses of the collected data in Section V.

Second, the data would give macroeconomic modeling, which for the most part does not yet incorporate a financial sector, the essential empirical foundation for such an endeavor. Theorists do not need data, but their thinking and their models are strongly influenced by what is measurable. Solow (1970) is explicit that the stylized facts produced by the work of Kuznets and others were at the root of conceptualizing the neoclassical growth model, which remains the workhorse macro model. Burns and Mitchell's (1946) measurement of business cycles and Kuznets' work allowed Kaldor (1961) to state six "stylized facts" about the macroeconomy, which were instrumental in subsequent business cycle and growth research. The intellectual history is recounted by Lucas (1977) and Kydland and Prescott (1990). It seems clear that systematically collecting the relevant financial sector data will have an impact on the set of macro models that are developed.

Such macro modeling is essential for understanding systemic risk and financial crises. While the triggers of crises are varied, the amplification mechanisms that play out in crises exhibit common patterns. These may be direct, due to contractual links, or indirect, through equilibrium feedbacks on asset prices and liquidity. The important question in assessing systemic risk is, for example: if the commercial banking sector takes a $500 billion loss next year, how will this spill over to other asset markets and players, and what will be the resulting system-wide or aggregate general equilibrium dislocations? A data set on the actions and exposures of different parts of the financial sector under varying economic conditions would allow researchers to develop quantitative models of these common amplification mechanisms. For example, a key indicator of spillover effects is the Liquidity Mismatch Index (LMI). We expect that firms with a very negative LMI will be forced to fire-sell assets, amplifying the crisis and generating excessive spillovers, while firms with positive (or moderately negative) LMI can ride out adverse shocks without imposing such externalities. We discuss the use of the data in modeling systemic risk in Section VI.

Like the construction of the National Income and Product Accounts, building this risk topography will take significant effort and time, although financial firms already produce much of the data that we suggest gathering.[3] Our approach takes advantage of the data and knowledge embedded in the private sector's internal risk models. Truth-telling can be ensured by cross-checking the various internal models across all market participants.

There would be substantial benefits to making such measurements publicly available, just as with other government-collected data (e.g., the National Accounts, bank Call Reports, the Federal Reserve Flow of Funds, etc.). The responses can be aggregated, suitably anonymized, and then made public. An important principle is that the data be made publicly available to all, in a form that protects proprietary responses.

Related Literature. Three strands of literature are related to the ideas in this paper: the first concerns measurement, the second stress testing, and the third macroeconomic and banking theory. On measurement, the current US systems include the bank Call Reports of Condition and Income and the Federal Reserve Flow of Funds data. Both data sets were explicitly developed to help regulators monitor banks. The Call Reports were mandated by the National Bank Act (1863) (Sec. 5211) and have continued (and been expanded) to this day. In essence, these reports contain fairly detailed balance sheet and income statement information for regulated banks, but they fail to capture other financial institutions and contain no risk sensitivity measures. Like the Call Reports, we emphasize eliciting the same scenarios repeatedly and regularly to develop a risk map. Over time, such data will become a large library of information that can be used to build and fine-tune models.
Second, we emphasize that the elicited data (suitably anonymized) be made public so that academics, regulators, and industry participants are all in a position to build their own models of systemic risk.

The Flow of Funds data were designed by Morris Copeland (1947, 1952) to characterize money flows in the economy. Notably, at first economists did not see how to use the Flow of Funds; see, for example, Dawson (1958) and Taylor (1958).

Central banks now recognize that existing measurement systems are not up to the task and have begun to consider revisions and additions. See, for example, Eichner, Kohn, and Palumbo (2010) and European Central Bank (2010). Compared with these proposals and ideas, we suggest fundamentally changing the nature of the information that is collected, deviating from the accounting paradigm and operating under a measurement paradigm closer to risk-management scenarios. We want to collect data that will, over time, be useful for developing macroeconomic models of crises. We argue that this requires data on risk. In addition, we emphasize that measuring "liquidity" is central to understanding crises.

The second related literature concerns bank stress testing. A stress test evaluates the impact of a particular scenario or event on a firm, the scenario usually being a movement in financial variables. Stress testing is an adjunct to statistical models, such as value-at-risk models. Many papers provide a general introduction to stress testing; examples include Blaschke et al. (2001); Jones, Hilbers, and Slack (2004); Cihák (2007); and Drehmann (2008). Collections of articles that discuss stress testing include Quagliariello (2009). International organizations have developed stress testing procedures: the Bank for International Settlements (BIS 2009), the Committee on the Global Financial System (2005), and the International Monetary Fund, which started the Financial Sector Assessment Program in May 1999. Other articles include Haldane, Hall, and Pezzini (2007) and Hoggarth and Whitley (2003). Hirtle, Schuermann, and Stiroh (2009) discuss the US Supervisory Capital Assessment Program (SCAP), the stress tests applied to the largest US bank holding companies from February to May 2009 (see Board of Governors of the Federal Reserve System 2009a, 2009b). The data that we would like to collect are akin to those collected in these stress tests.

The third strand of literature is macroeconomic and banking theory, which guides our thinking about what data to collect. This strand is discussed in Section II.

After Section II, the paper proceeds as follows. In Section III we present simple examples to illustrate the data issues that arise in practice and to motivate our approach to measurement. In Section IV we present more formally our approach of eliciting risk and liquidity deltas. In Section V we first discuss certain simple risk indicators for fundamental risks and liquidity risk. Section VI discusses the use of the data for macro modeling of amplification effects within the financial sector and the economy as a whole. Section VII concludes.

II. Guidance from Existing Theoretical Research

What data should be collected in order to better understand the vulnerability of the economy to systemic risk? Existing research in macroeconomics and finance can guide us in answering this question.
Macro models with financial frictions focus on leverage and the dynamics of net worth / capital, which limit the leverage ratio, while models in finance highlight, in addition, the important role of liquidity.

The most influential macroeconomic models of financial market frictions are those of Bernanke, Gertler, and Gilchrist (BGG) (1996) and Kiyotaki and Moore (KM) (1997). Technically, these models only feature a corporate sector that is subject to financial frictions rather than a financial sector subject to such frictions, but as Brunnermeier and Sannikov (2010) show, it is possible to rework them so that the results are driven by frictions in the financial sector. We henceforth discuss these models in such terms and dispense with this qualification.

The BGG model emphasizes that the "net worth" of the financial sector is an important state variable driving macroeconomic phenomena. Net worth is commonly thought of as the equity capital of the financial sector. Thus, in this model, when banks take losses that deplete their equity, they increase the rates charged on loans and / or cut back on lending, causing a credit crunch. The Kiyotaki-Moore model adds an important ingredient to this analysis. Agents in the model pledge collateral to raise funds from lenders. Since the market value of agents' collateral partly depends on their financial health, it affects leverage in the system, which in turn affects the value of capital. With high leverage, losses deplete capital more dramatically and feed back into further reductions in the market value of collateral, and so on.

The roles of net worth and leverage in these models are suggestive, but the challenge is to determine what they correspond to in reality. Most notably, as we show with some simple examples in the next section, reliance on cash measures to capture net worth or leverage misses the effects of derivatives.

Work in the finance tradition additionally emphasizes the importance of "liquidity" for understanding financial crises. Diamond and Dybvig (1983) is the canonical model in this literature. In this model, it is not just the borrowing or leverage of the financial sector that is salient, but rather the proportion of debt comprised of short-term demandable deposits. More broadly, the literature describes how, when the financial sector holds illiquid assets financed by short-term debt, the possibility of "counterparty run" behavior emerges that can precipitate a crisis. This literature also describes a feedback mechanism between capital problems and liquidity problems; see, for example, Allen and Gale (2004). When the financial sector runs into liquidity problems, triggered by runs by lenders, the sector sells assets whose prices then reflect an illiquidity discount. The lower asset prices lead to losses that deplete capital, further compromising liquidity. Brunnermeier and Pedersen (2009) model the interaction between funding liquidity and market liquidity in modern collateralized (wholesale) funding markets. Importantly, they model liquidity spirals and "collateral runs": an adverse shock heightens volatility, leading to higher margins / haircuts.
This lowers funding liquidity and forces institutions to fire-sell their assets, depressing the market liquidity of assets and increasing volatility further.

In sum, the existing micro-founded literature points to the net worth / leverage of the financial sector, and to liquidity exposure, often expressed as maturity mismatch, as the key state variables that drive systemic crises.

III. Measurement Challenges—Four Examples

In this section we present some extremely simple examples to illustrate the measurement issues and to emphasize the weaknesses of traditional measures of leverage and maturity mismatch.

Even though leverage is well defined in simple stylized models, it is an ill-defined measure in today's financial markets. Given derivatives and off-balance sheet vehicles, the standard leverage measure (on-balance sheet debt / equity) is at best noisy, and more likely useless, as a measure of the fragility of the financial sector.

Liquidity refers to several related concepts. Following the banking literature, a liquidity mismatch emerges when the market liquidity of a bank's assets is less than the funding liquidity on the liability side of its balance sheet. However, the insurance of demandable deposits since 1934 makes the textbook Diamond-Dybvig bank run unlikely. On the other hand, it is widely understood that "collateral run" phenomena were important in the asset-backed markets and the shadow banking sector in the 2007–2009 crisis (see Gorton and Metrick 2010). As another example, when a major financial institution, AIG being a good example here, is downgraded, its derivative counterparties will require that the institution post a large amount of collateral. This liquidity drain is conceptually similar to a run by a number of short-term lenders.

The measurement issues that arise in practice are best presented in a series of very simple examples. The examples are simplified in the extreme, so they are clearly not realistic, nor are they intended to be. All values should be thought of as market values.

Benchmark:

Consider a firm with $20 of equity and $80 of five-year debt with a coupon rate of 4.5%. The firm makes loans to two different firms, each for $50 for one year at an interest rate of 5%.

This example is a benchmark; it is a plain vanilla firm that resembles a traditional bank, though it does not take deposits. Call Report-type data would record the income and balance sheet items of this bank, and in this example that might suffice. The debt-to-assets ratio for this firm is 80%.

There are, however, measurement issues even in this case. The loans are one-year loans, but the debt is five-year debt, so the bank faces a potentially large loss if the term structure of interest rates changes at the end of the year, resulting in a lower competitive rate for loans. For example, if the loans can only be made at 3% in one year's time, this bank is facing a loss. Simple concepts like duration would capture this, but nothing that is currently reported measures this interest rate sensitivity. Our measurement ideas involve asking what happens to firm value if, for example, the one-year loan rate in one year's time moves up by 100 basis points (bps) or by 500 bps, or down by 100 bps or by 500 bps.
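A back-of-the-envelope sketch of this reinvestment risk follows. The simplifications are ours (the loans reprice once, and the changed carry accrues undiscounted over the debt's four remaining years); the paper does not prescribe a valuation model.

```python
# Reinvestment-risk sketch for the benchmark bank (our simplification).

LOANS = 100.0                # two $50 one-year loans, repricing in one year
DEBT, COUPON = 80.0, 0.045   # five-year debt, fixed 4.5% coupon

def annual_carry(loan_rate):
    # net interest income once the loans have repriced
    return LOANS * loan_rate - DEBT * COUPON

base = annual_carry(0.05)
for bps in (+100, +500, -100, -500):
    shocked = annual_carry(0.05 + bps / 10_000)
    # four remaining years of changed carry, ignoring discounting
    print(f"loan rate {bps:+d} bps: ~{4 * (shocked - base):+.1f} value change")
```

A 500 bps drop in the loan rate costs roughly $20 over the remaining life of the debt, the firm's entire equity, even though nothing on the balance sheet changes today.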
Liquidity Mismatch:

Consider a firm with $20 of equity and $80 of debt as above, but now half the debt is overnight repo financing at 1% and the other half is five-year debt at 4.5%. The firm buys one Agency mortgage-backed security (MBS) for $50 (financed via repo at a zero haircut) and lends $50 to a firm for one year at an interest rate of 5%.

This example complicates the benchmark case by making the bank sensitive to funding risk, in addition to interest rate risk. What if the firm cannot renew the repo financing and is forced to liquidate some of its assets? Standard measures, such as leverage, will not pick up this funding risk; they treat the overnight debt and the five-year debt symmetrically. One could construct a leverage measure focused on the maturity mismatch in this example, such as a short-term leverage measure, but this too may prove inadequate. For example, suppose that instead of the Agency MBS the bank owned $50 of private-label MBS, which is less liquid than the Agency MBS. Now the bank has more of a liquidity mismatch, stemming from the asset side. It is thus clear that a liquidity measure needs to incorporate information from both the asset side of the balance sheet (market liquidity) and the liability side (funding liquidity).

For this firm, the Liquidity Mismatch Index (LMI) we construct, which roughly reflects the market liquidity on the asset side (the price impact if assets are sold at fire-sale prices) minus the funding liquidity on the liability side (the effective maturity structure), would be negative. Because the repo is overnight, the firm is exposed to funding risk. In the next section we discuss a liquidity index that measures funding and market liquidity risk. In this specific example, there is an MBS worth $50 with a liquidity weight of, say, λABS = 0.9, so the asset-side liquidity is $45. (Cash or Treasuries have a liquidity weight of one.) On the liability side, the MBS is funded by repo of $50 with λRepo = 1, so the liability index is −$50, which gives a net liquidity index of −$5. What happens if repo haircuts suddenly increase to 20%? In renewing the repo financing, the firm can raise less money against the MBS. The asset is less liquid in that borrowing against it raises less cash, say λABS = 0.8, so the net liquidity index falls to −$10.

There is currently no measurement system (accounting or regulatory) that detects the sensitivity of a firm to changes in market and funding liquidity conditions. The Liquidity Mismatch Index is designed to capture such potential stresses. In this example, one could further ask what would happen if the securitization secondary market were to become less liquid; that is, we could ask the firm to report its LMI if the liquidity weight on the MBS were λABS = 0.5, for example.
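The arithmetic of this example can be summarized in a short sketch. The MBS and repo weights are the illustrative values above; treating the one-year loan, the five-year debt, and equity as weight-zero is our assumption for the illustration.

```python
# Minimal sketch of the LMI arithmetic in the example above.

def lmi(assets, liabilities):
    # assets, liabilities: lists of (market value, liquidity weight).
    # Asset weights near 1 mean close to cash; liability weights near 1
    # mean funding that can disappear overnight.
    return (sum(v * w for v, w in assets)
            - sum(v * w for v, w in liabilities))

assets = [(50, 0.9), (50, 0.0)]                  # agency MBS; one-year loan
liabilities = [(50, 1.0), (40, 0.0), (20, 0.0)]  # o/n repo; 5y debt; equity
print(lmi(assets, liabilities))                  # -> -5.0

# haircuts jump to 20%: only $40 can now be raised against the MBS
assets_stressed = [(50, 0.8), (50, 0.0)]
print(lmi(assets_stressed, liabilities))         # -> -10.0
```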
Rehypothecation:

The bank lends $100 to a hedge fund for three days and receives a bond with a market value of $100 as collateral (a reverse repo). The bank then uses the bond as collateral to borrow $100 in the overnight repo market. (Whatever else the bank is doing we ignore for purposes of the example.)

The bank has a liquidity mismatch because the repo is overnight but the reverse repo is for three days. If the repo does not roll over, the bank must sell the bond or find some other funding. This sensitivity would be captured by our Liquidity Mismatch Index: the liquidity weight on the three-day reverse repo loan is lower than that on the overnight repo, so the position enters negatively in the firm's liquidity index. The LMI is designed to capture sensitivities to these kinds of issues, which were particularly important in the recent crisis but which, again, are not captured by any current reporting system.

Synthetic Leverage:

Consider a firm with $20 of equity and $80 of debt; half the debt is overnight repo financing at 1% and the other half is five-year debt at 4.5%. The firm buys $100 of US Treasury securities and writes protection (using credit default swaps [CDS]) on a diversified portfolio of 100 investment-grade US corporates, each with a notional amount of $10, for a total notional of $1,000. The weighted-average premium received on the CDS is 5%.

This firm is sensitive to movements in the term structure of interest rates, and also to funding risk, as in the previous examples. But now it is also quite significantly exposed to a macro risk that could cause failures of investment-grade firms. The risk is not idiosyncratic, because the CDS portfolio is diversified, but if there were a recession in which three or four firms failed, there would be losses on this portfolio. If we ask what would happen if four investment-grade US firms in the portfolio failed, with 50% recovery, the answer is that the firm would be bankrupt: the loss of (50%)(4)($10) = $20 equals the firm's equity. The CDS thus creates "leverage" in this firm that any standard measure will miss.
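A minimal sketch of this scenario query follows, using the portfolio parameters from the example; the code structure itself is our illustration, not the paper's.

```python
# Default-scenario query for the synthetic-leverage example.

EQUITY = 20.0
NOTIONAL_PER_NAME = 10.0   # 100 investment-grade names, $10 notional each

def protection_payout(n_defaults, recovery):
    # a protection seller pays (1 - recovery) x notional per default
    return n_defaults * NOTIONAL_PER_NAME * (1.0 - recovery)

for n in (1, 2, 3, 4):
    loss = protection_payout(n, recovery=0.50)
    left = EQUITY - loss
    status = "insolvent" if left <= 0 else f"equity remaining ${left:.0f}"
    print(f"{n} defaults: loss ${loss:.0f} -> {status}")
# 4 defaults: a $20 loss wipes out the equity, even though the balance
# sheet still shows an unremarkable 80% debt-to-assets ratio.
```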
Note that the CDS position would be marked-to-market for accounting purposes, so the marks would contain expectations about future defaults of the firms in the portfolio (and risk premia). However, we want to detect what would happen in specific events (e.g., four firms fail), rather than the probability-weighted market price.

Also note that this firm has another complication. Derivatives trade under the International Swaps and Derivatives Association master agreement. This agreement usually has a Credit Support Annex (CSA), a legal document that sets forth the conditions under which each party must post collateral. Suppose that in this example the CSA requires collateral posting based on the market value of the CDS position. If the marks widen, that is, when it becomes more likely that a firm or firms in the portfolio will default, this firm will have to post collateral to the counterparty. It has a Treasury bond, which could be posted. The LMI calculation takes the CSA provisions into account. To see the issue, note that if half the Treasury holdings are posted, they are encumbered and drop out of the LMI. In the extreme, imagine that the entire Treasury holding is posted; then the only remaining asset the firm has is the CDS portfolio. Measuring the liquidity index of this firm in the event that four firms fail will capture its liquidity risk.

As another example of a liquidity event triggered by derivatives, consider the effect of a ratings downgrade. The CSA typically prescribes that if the bank is downgraded during the term of the derivative contract, it will have to post more collateral, which again uses liquidity. Moreover, if the firm has written many derivative contracts, that is, the CDS in the example plus, say, interest rate derivatives, the need for liquidity applies to all of them. Thus, a downgrade is potentially a significant liquidity risk that arises when firms use derivatives.

Cross Scenarios:

Consider a firm with $20 of equity and $80 of debt; half the debt is overnight repo financing at 1% and the other half is five-year debt at 4.5%. The firm buys a Spanish residential mortgage-backed security (MBS), denominated in euros, for the equivalent of $50 and lends the other $50 to a US firm. The firm does not hedge its euro exposure.

This firm is sensitive to house prices in Spain and to the dollar / euro exchange rate (as well as other risks). The bank may be fine if (1) Spanish house prices go down but the exchange rate stays the same, or (2) the euro weakens against the dollar but house prices do not decline. But if Spanish house prices decline and the euro weakens against the dollar, then the bank may be in trouble.

This possible stress scenario would not be revealed by anything the firm reports to regulators or in Securities and Exchange Commission (SEC) filings under the current system. The example would be a bit more complicated if the firm did hedge the exchange rate risk. Since the firm receives euros on the MBS but pays dollars on its debt (and equity dividends), it enters into a swap to receive dollars in exchange for euros. But this transaction is with a counterparty, which might weaken in some states of the world, introducing counterparty risk. Also, as seen earlier, the firm might have to post collateral.

These examples are intended to illustrate the difficulties of measuring risks with accounting measures. Many more such examples, increasingly complicated, can be produced. The point is that measurement systems based solely on accounting-type measures are inadequate.

IV. Measurement Metrics

In this section we explain our ideas using simple notation. We then introduce the Liquidity Mismatch Index and discuss reporting. Finally, we say a bit more about what the exact scenarios could be.

A. Basic Set-up

There are two dates. Date 0 is the ex ante date at which each firm makes risk and liquidity decisions by choosing cash assets and cash liabilities, as well as derivative positions and off-balance sheet positions. Derivative positions may have a market value of zero at date 0, but they are sensitive to the risk factors. At date 1 a state ω ∈ Ω is realized, some states of which may be a systemic crisis, depending on what decisions firms have made. Firm i chooses assets Ai and liabilities Li. The assets are a mix of cash, repo lending to other firms, derivative exposures, and outright asset purchases. Liabilities include short-term debt, long-term debt, secured debt, equity, and so forth.

The equity value of firm i is given by Ei(ω) = Ai(ω) − Li(ω), where Ai(ω) is the asset value in state ω and Li(ω) is the value of the total liabilities in state ω.
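A minimal sketch of this set-up, with a hypothetical state space and hypothetical state-contingent asset and liability values, shows how the reported deltas derive from the equity value:

```python
# Two-date set-up: equity in state w is E_i(w) = A_i(w) - L_i(w).
# States and values below are hypothetical illustrations.

A = {"baseline": 120.0, "house_prices_-10%": 108.0, "4_IG_defaults": 100.0}
L = {"baseline": 100.0, "house_prices_-10%": 100.0, "4_IG_defaults": 100.0}

def equity(state):
    return A[state] - L[state]

# the reported "delta" for each scenario is the change in equity value
# relative to the baseline state
for state in ("house_prices_-10%", "4_IG_defaults"):
    print(f"{state}: delta = {equity(state) - equity('baseline'):+.1f}")
```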