Is the Current Credit Structure Conducive to Financially Stable Recovery?

Remarks of Richard Berner, Director, Office of Financial Research, to the 25th Annual Hyman P. Minsky Conference on the State of the U.S. and World Economies, “Is the Current Credit Structure Conducive to Financially Stable Recovery?” April 12, 2016, Bard College, Annandale-on-Hudson, N.Y.

Thank you for that kind introduction, Jesse. Thanks especially to Dimitri, Jan, and others at the Levy Institute for inviting me here. It’s good to be back with you.

I’m honored and deeply grateful to be on a panel with Henry Kaufman, Marty Leibowitz, and Al Wojnilower. All are mentors and dear friends.

Henry was my boss at Salomon Brothers, starting 31 years ago. He taught me that our financial system is prone to instability, and that appropriate regulation is essential for its stable functioning.

Later, I worked for and with Marty at Salomon and Morgan Stanley. Marty taught me investment management and what happens to portfolios under stress.

At regular lunches with Al, Henry, and Marty, Al taught us all about financial instability.

Hopefully, I can add a bit to this group’s wisdom from what I’ve learned in pursuit of a stronger financial system.

To answer the question posed to the group, I’ll start by defining financial stability.

Financial stability occurs when the financial system can provide its basic functions even under stress. Financial stability is not about constraining market volatility. Nor can we predict or prevent financial shocks. Rather, financial stability is about resilience. We want to be sure that when shocks hit, the financial system will continue to provide its basic functions to facilitate economic activity.

In the OFR’s first annual report, we identified six such basic functions: (1) credit allocation and leverage, (2) maturity transformation, (3) risk transfer, (4) price discovery, (5) liquidity provision, and (6) facilitation of payments.

Threats to financial stability arise from vulnerabilities in the financial system — failures in these functions that are exposed by shocks. Resilience has two aspects:

  1. Does the system have enough shock-absorbing capacity so it can still function?
  2. Are incentives, such as market discipline or transparent pricing of risk, aligned to limit excessive risk taking?

Both aspects matter. Shock absorbers are needed to buffer hits, while what I call guard rails — or incentives that affect behavior — are needed to increase the cost of — and thereby constrain — the risk-taking that can create financial vulnerabilities.

Resilience and its converse, threats to financial stability, are systemwide concepts. To measure, assess, and monitor them, we must examine institutions and markets across the financial system to appreciate how threats propagate from one institution or market to others. And we need to evaluate ways to mitigate those risks.

The financial crisis exposed critical gaps in our analysis and understanding of the financial system, in the data used to measure and monitor financial activities, and in the policy tools available to mitigate potential threats to financial stability. These gaps — in analysis, data, and policy tools — contributed to the crisis and hampered efforts to contain it. Filling those gaps is crucial to assessing and monitoring threats, and to developing what we call the macroprudential toolkit to make the financial system resilient.

We at the OFR help promote financial stability by developing tools to assess and monitor threats to financial stability; by improving the scope, availability, and quality of financial data to measure threats; and by evaluating policies designed to mitigate risks.

In the years since the crisis, federal financial regulators have taken important steps to make the financial system more resilient. They have put in place new capital requirements for banks and agreed on key components of liquidity regulation, including minimum requirements for firms’ holdings of liquid assets. In addition, stress testing and a new regime to resolve large, complex, and troubled financial institutions in an orderly way have dramatically changed the approach to increasing resilience.

In my view, regular stress testing is one of the best tools for assessing potential sources of vulnerabilities and for calibrating microprudential requirements, such as for capital based on firms’ idiosyncratic risks. Stress tests might also be used to calibrate macroprudential tools, including those aimed at building resilience across the system.

As a result of these efforts and others globally, we know more now than before the crisis. But the fundamental uncertainty about the source of future threats requires that we be modest about our ability to judge them.

To monitor activity across the financial system, we know we need to improve the quality and scope of financial data. We also need to continue to improve our toolkit to assess the fundamental sources of instability in the financial system; to become more forward-looking; and to test the resilience of the system to a wide range of events and incentives. In addition, we must continue to promote parallel improvements in financial risk management.

So, can we conclude that the current credit structure is conducive to financially stable recovery?

A Minsky Theme: Volatility, Leverage, and the Credit Structure

Despite the progress, risks and vulnerabilities that are neither immediately evident nor easily monitored in markets make me less than sanguine.

By and large, signals from financial markets today are relatively benign, despite rumblings related to macroeconomic risks. It’s of course legitimate to think that periods of low market volatility and recovering risk appetite like this one may simply reflect recovery. More broadly, low volatility, narrow interest rate and credit default swap spreads, and low repo haircuts are all traditionally viewed as signs of low financial market risk.

However, just the opposite may be true. As Henry taught me and Minsky taught us all, these developments often signal rising market vulnerabilities, because they give investors and risk managers incentives and wherewithal to take on leverage.

Although analysts traditionally view such indicators as exogenous barometers of risk, they more likely are endogenous indicators of risk appetite and investor sentiment. They are co-dependent: The capacity of intermediaries to take on risk exposures depends on the volatility of asset returns. In turn, volatility depends on the ability of intermediaries to take on risky exposures.

You might say that anyone who has spent a week on a trading desk could have told you that. But recognition of that dynamic in either academic or policy analysis is only starting to appear. A recent paper by Danielsson, Shin, and Zigrand argues that leverage and volatility are endogenously co-determined, and that low volatility promotes increased leverage and risk.1

Similarly, Federal Reserve Governor Jeremy Stein observed in 2013 that low volatility gives market participants incentives to write deep out-of-the-money puts to enhance returns, and in ways that hide risk.2 That’s because one can, and I quote, “beat the benchmark simply by holding [it] and stealthily writing puts against it, since this put-writing both raises the mean and lowers the measured variance of the portfolio.” By stealthily, Governor Stein meant that our measurement systems generally don’t adequately capture the low-probability future risks that such strategies introduce. Those gaps in measurement at the firm level are multiplied many times across the financial system.
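
To make that mechanism concrete, here is a minimal, stylized simulation of the strategy Governor Stein describes. All of the return, strike, and premium figures are hypothetical, chosen only to show how a calm sample can display a higher mean and a lower measured variance while the loss the puts would produce in a crash never appears in the data.

```python
import numpy as np

# Stylized illustration of the put-writing effect described above (hypothetical
# parameters): hold slightly less of the benchmark, collect premium for writing
# deep out-of-the-money puts, and a calm sample shows a higher mean and lower
# measured variance, while the tail loss stays out of the data.
rng = np.random.default_rng(42)

n_months = 120
benchmark = rng.normal(0.005, 0.04, n_months)   # calm sample of monthly benchmark returns
strike_ret = -0.15                              # puts pay off only if the benchmark falls more than 15%
premium = 0.004                                 # monthly premium collected for writing the puts
weight = 0.95                                   # hold 95% of the benchmark

put_loss = np.minimum(benchmark - strike_ret, 0.0)   # writer's loss when the strike is breached
strategy = weight * benchmark + premium + put_loss

print("benchmark: mean %.4f  std %.4f" % (benchmark.mean(), benchmark.std()))
print("strategy : mean %.4f  std %.4f" % (strategy.mean(), strategy.std()))

# The hidden risk appears only in a stress month the calm sample never contains.
crash = -0.25
print("crash month, benchmark: %.3f" % crash)
print("crash month, strategy : %.3f" % (weight * crash + premium + min(crash - strike_ret, 0.0)))
```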

This volatility paradox should change our thinking about early warning indicators, asset allocation, and our macroprudential toolkit. It should also change our thinking about risk management. As my former colleague Rick Bookstaber puts it, “[Treating such indicators as exogenous means that] higher leverage and risk taking in general will be apparently justified by the lower volatility of the market and by the greater ability to diversify as indicated by the lower correlations.”3

Let me touch on three implications of the volatility paradox.

First, leverage and volatility risk are procyclical. That’s because risk is often managed by looking to metrics like value at risk, or VAR. In other words, a risk manager gives a portfolio a risk budget by imposing a VAR constraint. When VAR rises, the manager must sell securities to reduce risk. But the security sales further depress prices, which amplifies incentives to sell.4 This dynamic holds not only for banks, but also for asset managers and other portfolio managers.
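
A minimal sketch of that mechanism, with hypothetical figures and not any particular firm’s model, shows how a VAR-based risk budget turns higher measured volatility directly into forced sales:

```python
# Stylized VAR risk budget (hypothetical figures, not any particular firm's model):
# permitted exposure is the budget divided by the VAR per dollar of exposure,
# so a jump in measured volatility mechanically forces sales.
Z_99 = 2.33            # one-tailed 99% quantile of the normal distribution used in a parametric VAR
risk_budget = 10.0     # maximum one-day VAR allowed, in $ millions

def max_exposure(daily_vol: float) -> float:
    """Largest exposure (in $ millions) consistent with VAR = Z_99 * daily_vol * exposure <= budget."""
    return risk_budget / (Z_99 * daily_vol)

calm_vol, stressed_vol = 0.010, 0.025   # 1.0% vs. 2.5% estimated daily volatility

before = max_exposure(calm_vol)
after = max_exposure(stressed_vol)
print(f"permitted exposure at 1.0% daily vol: ${before:,.0f} million")
print(f"permitted exposure at 2.5% daily vol: ${after:,.0f} million")
print(f"sales forced by the volatility jump : ${before - after:,.0f} million")
```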

VAR has a variety of well-known shortcomings. It depends on contemporaneous volatility. It underestimates worst-case loss because it looks at historical — not forward-looking — correlations. It may not capture correlations across a portfolio. Low VAR, like low volatility, creates incentives for more leverage. Even stress tests may look deceptively good if the scenarios are selected from a low volatility regime. Nonetheless, VAR is widely used.

Second, Marty taught us that this leverage effect is stronger for indexes (the market) than for individual securities. That’s because the benefits of diversification vanish under stress. Portfolios are typically constructed using correlations estimated in normal periods over short-term risk horizons. Under stress, correlations with equities and among the other asset classes rise, increasing the volatility of the portfolio and its beta-sensitivity. This “de-diversification” usually occurs when investors have taken on more risk and leverage, and investors find themselves “selling what they can” rather than selling illiquid assets.
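
A simple two-asset calculation, with hypothetical weights, volatilities, and correlations, illustrates the point: nothing about the individual positions changes, yet portfolio volatility jumps when the correlation between them rises under stress.

```python
import numpy as np

# De-diversification in miniature (hypothetical numbers): same weights, same
# individual volatilities, but portfolio volatility rises sharply when the
# correlation between the two assets moves from a normal to a stressed level.
w = np.array([0.6, 0.4])          # portfolio weights
vols = np.array([0.15, 0.20])     # annualized volatilities of the two assets

def portfolio_vol(corr: float) -> float:
    """Portfolio volatility for a given correlation between the two assets."""
    cov = np.outer(vols, vols) * np.array([[1.0, corr], [corr, 1.0]])
    return float(np.sqrt(w @ cov @ w))

print("portfolio volatility, normal correlation (0.2):   %.3f" % portfolio_vol(0.2))
print("portfolio volatility, stressed correlation (0.9): %.3f" % portfolio_vol(0.9))
```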

Finally, optionality means that the distribution of outcomes is asymmetric. The pricing of all securities with embedded options will be affected by volatility. That’s a natural consequence of option pricing. The inherent asymmetry in options means that put writing — selling insurance — may have limited upside but a far larger downside.

Tools to Monitor Risks

Consistent with our financial stability mandate, we at the OFR developed a tool for measuring and summarizing risks systemwide. Our Financial Stability Monitor depicts a framework with five categories of risk: macroeconomic, market, credit, funding and liquidity, and contagion. This risk-based approach aligns with the financial system’s basic functions and enables us to look across the financial system rather than focusing piecemeal on institutions or market segments. The monitor enables us to track and measure risks in banks, shadow banks, other nonbanks, and markets. We update it and its supporting data semiannually on our website.

This monitor is part of a larger suite of OFR monitors and risk assessment tools we are creating for each of the five risk categories. Taken together, these tools help us examine the interplay among risks and analyze related developments across asset classes.

We supplement our financial stability analysis at the OFR with market intelligence. Last year, we launched a Financial Markets Monitor that summarizes major developments and emerging trends in global capital markets. Like many of you, we derive enormous benefit from our conversations with market participants.

In our judgment, overall threats to financial stability remain at a moderate, or medium, level. Our Financial Stability Monitor shows that macroeconomic, market, credit, funding and liquidity, and contagion risks are not excessive, though a number of risks within those categories have increased in the last six months.

For example, indicators of corporate credit risk in our monitoring tools have been flashing warning signs for some time. Corporate bond spreads were exceptionally narrow for much of 2014, promoting increased issuance and risk-taking. Since then, widening spreads have reduced those incentives and risks somewhat.

Risk-taking in general has diminished, but only somewhat. Fueled by highly accommodative credit and underwriting standards, credit continues to grow at a rapid pace. The interplay of credit risks with other risks, such as macroeconomic risks, is also important. The combination of higher corporate leverage, the sharp drop in prices of commodities and energy, and the slowdown in global growth — especially in emerging markets — has exposed corporate credit risks and the diminished capacity of corporations to service their debts at home and abroad.

Tools to Improve Resilience

Although the OFR does not make policy, we are required by statute to evaluate stress tests and similar tools, to conduct policy studies, and to provide advice on the impact of policies related to financial stability. We are working to obtain access to the data used to conduct stress tests and we are suggesting ways to improve them, including for nonbank institutions and systemwide risk assessment. Key areas of our research include finding better ways to gauge risk propagation or contagion in stress testing.

Policy changes since the crisis have made the banking system stronger, but vulnerabilities remain outside the banking perimeter. Analyzing and measuring these emerging vulnerabilities is essential to developing tools to address them. That becomes increasingly important as financial activity migrates to more opaque and potentially less resilient parts of the financial system.

To identify risks in nonbanks, we focus on activities that can cause vulnerabilities. Use of derivatives, secured funding, illiquid asset concentrations, counterparty credit concentrations, and obligations of membership in central counterparty — or CCP — clearinghouses may all contribute to the interconnectedness — and potential vulnerability — of these firms.

Improving the Quality, Scope, and Accessibility of Financial Data

Good policymaking depends on good analysis and good data. Solid, reliable, granular, timely, and comprehensive data are the foundation for success in our work and for effective risk management in financial companies.

Three aspects of financial data are important. First, data must be high quality to underpin the integrity of our work. Second, they must be comprehensive for a broad view across the financial system, as well as granular to help us identify tail risks during periods of stress. Third, they must be accessible to those who need to look at risk systemwide.

To achieve our shared data goals, we must take an approach that is collaborative, cross-border, and global.

Data Quality

Quality is critical for making financial data usable and transparent. When Lehman Brothers failed in 2008, its counterparties could not assess their total exposures to Lehman. Financial regulators were also in the dark because there were few industry-wide standards for identifying and linking financial data representing entities or instruments.

Standards are needed to produce high-quality data, and identifiers are building blocks for data standardization. For the OFR, any discussion about data identifiers begins with the legal entity identifier, or LEI, the standard for identifying parties to financial transactions.

The OFR has led the global, public-private collaboration that got the LEI system up, running, and growing in just a few short years. It is a model for other standards-setting initiatives. The OFR’s Chief Counsel has chaired the LEI Regulatory Oversight Committee.

Like any network, the LEI system has benefits that will grow as the system grows. To accelerate adoption, the OFR has been calling for regulators to require broader use of the LEI in regulatory reporting, and regulators have begun to respond.

The OFR is also working with partners both domestic and global on a spectrum of other identifiers and catalogs.

A good example of the systemwide approach to data quality improvement is in derivatives markets. Financial reform sought to improve transparency in derivatives markets by requiring that data related to transactions in swaps be reported to swap data repositories. Swap data are critical for understanding exposures and connections across the financial system, and the repositories are designed to be high-quality, low-cost data collection points.

Eighteen months ago, we began a joint project with the Commodity Futures Trading Commission to enhance the quality, types, and formats of data collected from registered swap data repositories. Together, we are aggressively moving forward.

We are also collaborating globally to achieve progress on related identifiers, such as the unique product identifier, or UPI, and the unique transaction identifier, or UTI.

Over the past year, we have begun to develop plans for a reference database for financial instruments, a catalog of data definitions and their identifiers that we are required by law to produce and publish.

Data Scope, Sharing, and Accessibility

Filling data gaps and sharing data are essential — and closely related — elements of our data agenda. None of us — no one regulator or company alone — possesses or has access to all of the data needed to paint a complete picture of threats to financial stability. Filling out the full picture requires judicious collection and secure data sharing and collaboration on a global basis, so we can identify and fill the gaps, reduce or eliminate duplication and overlap, and make data appropriately accessible.

Data gaps persist in securities financing transactions, including repo and securities lending. The markets for these critical short-term funding instruments remain vulnerable to runs and asset fire sales. Yet comprehensive data on so-called bilateral repo and securities lending transactions are scant. We have mapped the sources and uses of such funds to better understand these markets, assess risks, and identify gaps in available data. We have also launched pilot projects to fill the gaps in data for bilateral repo and securities lending transactions.

We think Minsky would agree on this data priority. Data that directly measure systemwide leverage, such as the number of re-hypothecations or the rapidity of re-hypothecation, can help assess vulnerabilities better than what markets reveal through traditional channels, such as market prices.
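
As a rough, hypothetical sketch of why such data matter, consider a chain in which the same collateral is re-pledged several times; the gross funding it supports, and hence the systemwide leverage it masks, grows with each link. The haircut and chain length below are illustrative assumptions, not measured values.

```python
# Stylized re-hypothecation chain (hypothetical haircut and chain length):
# the same collateral, re-pledged link by link, supports gross funding that is
# a multiple of the collateral outstanding, the kind of systemwide leverage
# that market prices alone do not reveal.
initial_collateral = 100.0      # $ millions of securities pledged by the first borrower
haircut = 0.05                  # 5% haircut applied at each link in the chain
links = 4                       # number of times the collateral is re-used

supported = 0.0
pledge = initial_collateral
for _ in range(links):
    pledge *= (1.0 - haircut)   # each intermediary can re-pledge only the post-haircut amount
    supported += pledge

print(f"collateral outstanding      : ${initial_collateral:.0f} million")
print(f"gross funding it supports   : ${supported:.0f} million")
print(f"effective re-use multiplier : {supported / initial_collateral:.2f}x")
```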

Data inventories are useful tools to facilitate sharing and identify gaps. These inventories catalog existing data and define how they represent the activities or concepts that collaborators want to access. We are working with our domestic and global counterparts to link metadata catalogs across official institutions and across jurisdictions to provide a global picture of available data.

Summing up

The challenge of financial reform reflects a fundamental obligation of government: to provide and enforce the system of incentives and constraints in our financial system so that it helps people save for retirement and borrow to buy a house, buy a car, or pay for college; allows businesses to finance productive investments; protects people from predation and abuse; and does not leave taxpayers responsible for paying for the mistakes of financial companies.

Since the financial crisis, we have improved our understanding of how the financial system functions, and our ability to measure financial activity and spot vulnerabilities. But we need to do more to understand how the financial system fails to function under stress, to spot vulnerabilities in the shadows, and to gather and standardize the data needed for analysis and policymakers’ responses to identified threats.

We know that financial innovation and the migration of financial activity create a moving target. Although we will never fully eliminate gaps in data and analysis, we will continue to fill the most important ones.

We have also made the financial system substantially more resilient since the crisis — but we still have more work to do. That work requires engagement and collaboration, and I welcome yours as we move forward.

Thank you for your attention. I look forward to the panel’s discussions.


  1. Jon Danielsson, Hyun Song Shin, and Jean-Pierre Zigrand, “Procyclical Leverage and Endogenous Risk,” October 2012.

  2. Jeremy C. Stein, “Overheating in Credit Markets: Origins, Measurement, and Policy Responses,” remarks at the “Restoring Household Financial Stability after the Great Recession: Why Household Balance Sheets Matter” research symposium sponsored by the Federal Reserve Bank of St. Louis, St. Louis, Missouri, February 7, 2013.

  3. Rick Bookstaber, “The Volatility Paradox,” December 12, 2011.

  4. See Tobias Adrian and Hyun Song Shin, “Procyclical Leverage and Value-at-Risk,” Federal Reserve Bank of New York Staff Reports, no. 338, July 2008 (revised February 2013).