
Absorption Ratio


Why?

It’s well documented that correlation spikes during financial crises. We can get a sense of this by comparing correlations of assets during bull and bear markets. It’s challenging to take in all the information of a correlation matrix changing over time because we have to consider \(\frac{n(n-1)}{2}\) different correlations. A list of just 10 assets results in 45 correlations to examine. Color coding the correlation matrix can help us with this task, but this data visualization technique only goes so far. What if we wanted to quantify how correlations are changing over time with one metric?

Enter the absorption ratio. It was first proposed by Kritzman, Li, Page, and Rigobon in their fantastic 2010 paper, Principal Components as a Measure of Systemic Risk. A high absorption ratio indicates assets are trading closely together and we shouldn’t expect much diversification benefit. An extremely high ratio doesn’t necessarily mean the markets will crash, but if they do, diversification won’t save our portfolio. Let’s continue by looking at a correlation matrix of assets during a bear and a bull market cycle. We’ll use equity sector ETFs as our building blocks or assets.


Required packages

library(quantmod)  # to download historical price time-series
library(plotly)    # for interactive visualization
library(ggplot2)   # for plotting (also attached by plotly)
library(xts)       # for time-series manipulation
library(reshape2)  # for data manipulation


Get sector price time-series

sector_id <- c("XLY", "XLP", "XLE", "XLF", "XLV", "XLI", "XLB", "XLK", "XLU")
sector_names <- c("Consumer Disretionary", "Consumer Staples", "Energy", 
                  "Financials", "Health Care", "Industrials", "Materials", 
                  "Technology",
                  "Utilities")
sector_dat <- lapply(sector_id, "getSymbols", from = "1970-01-01",
                       to = "2018-04-28", src = "yahoo", auto.assign = FALSE)
sector_mat <- do.call("cbind", sector_dat)
sector_price <- sector_mat[, seq(from = 6, to = ncol(sector_mat), by = 6)]
sector_ret <- sector_price / lag.xts(sector_price, k = 1) - 1
sector_ret <- sector_ret[2:NROW(sector_ret), ]
colnames(sector_ret) <- sector_names


Correlation comparison

corr_bull <- cor(sector_ret["2002-11-01/2007-09-01"]) %>% round(digits = 2)
corr_bear <- cor(sector_ret["2007-11-01/2009-03-01"]) %>% round(digits = 2)
dat <- melt(corr_bull)
g <- ggplot(dat, aes(x = Var1, y = Var2, fill = value)) + 
  geom_tile() + scale_fill_gradient2(low = "blue", mid = "white", high = "red", 
                                     midpoint = 0.5, lim = c(0, 1)) +
  xlab("") + ylab("") + labs(fill = "Correlation") + 
  ggtitle("Bull Market Correlation - Nov 2002 to Sep 2007") + 
  theme_minimal() + theme(axis.text.x = element_text(angle = 90), 
                          plot.margin = margin(l = 20, b = 20))
ggplotly(g)
dat <- melt(corr_bear)
g <- ggplot(dat, aes(x = Var1, y = Var2, fill = value)) + 
  geom_tile() + scale_fill_gradient2(low = "blue", mid = "white", high = "red", 
                                     midpoint = 0.5, lim = c(0, 1)) +
  xlab("") + ylab("") + labs(fill = "Correlation") + 
  ggtitle("Bear Market Correlation - Nov 2007 to Mar 2009") + 
  theme_minimal() + theme(axis.text.x = element_text(angle = 90), 
                          plot.margin = margin(l = 20, b = 20))
ggplotly(g)

Notice the higher correlations highlighted by the redder hues during the bear market. Looking at a rolling correlation matrix over time (perhaps on a risk dashboard) is useful, but we can take it a step further using principal component analysis. One of the properties of PCA that makes it useful for investment portfolios is that it identifies the latent (or hidden) components that explain as much variance as possible. PCA explains the variance of a system iteratively: the first component tries to explain the most variance in the system of assets, the second component tries to explain the most of the leftover variance the first component couldn’t explain, and so on. When there’s a large potential for diversification in a system of assets (in our example, when the correlation matrix has more shades of blue than red) we’ll need more components to meaningfully explain it. Conversely, when assets are trading together and correlations are spiking, we only need a few components to explain the majority of the variance.
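
To make this concrete, here’s a quick sketch using the sector returns built above. prcomp reports each component’s share of variance, so we can compare how quickly the leading components soak up variance in each regime (the date windows mirror the correlation plots):

# PCA on each regime; summary() reports the proportion of variance
# explained by each component and the running cumulative proportion
pca_bull <- prcomp(sector_ret["2002-11-01/2007-09-01"], scale. = TRUE)
pca_bear <- prcomp(sector_ret["2007-11-01/2009-03-01"], scale. = TRUE)
summary(pca_bull)$importance["Cumulative Proportion", ]
summary(pca_bear)$importance["Cumulative Proportion", ]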

The absorption ratio fixes the number of components we use to explain our system of assets to a constant number, for example, \(2\). The ratio gets its name from treating the cumulative amount of variance explained by the first two components as the portion of variance absorbed by them. The absorption ratio was proposed as a monitor for systemic risk (or perhaps as a trading signal), but it can also be used to shape portfolio construction. Suppose we have a simple two-asset portfolio of stocks and bonds, and we’re considering adding a third asset, real estate, for diversification. If the first two components of a PCA on the three-asset portfolio explain 90% of our variance, then from a quantitative perspective we’d consider real estate a poor diversifier. Conversely, if the first two components only explained 60% of the variance, we’d posit that real estate has great diversification benefits and look to add it to our asset mix.

Let’s take a look at the risk monitoring use of the absorption ratio. We’ll need to build a function to calculate the absorption ratio and track its standardized change over time. The paper defines the absorption ratio as:

\[AR = \frac{\sum_{i=1}^{n}\sigma^2_{E_i}}{\sum_{j=1}^{N}\sigma^2_{A_j}}\] where \(N =\) number of assets, \(n =\) number of eigenvalues used to calculate the \(AR\), \(\sigma^2_{E_i} =\) variance explained by the \(i\)-th eigenvector (its eigenvalue), and \(\sigma^2_{A_j} =\) variance of the \(j\)-th asset (together, the total variance of the system)
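
Before wrapping this in a full function, note that the formula translates almost directly into R. Here’s a minimal sketch, assuming ret is a matrix or xts of asset returns (the fragility function below works off the correlation matrix instead of the covariance matrix, which amounts to standardizing each asset’s variance):

# Absorption ratio: the share of total variance soaked up by the
# first n_eigen principal components
absorption_ratio <- function(ret, n_eigen = 2) {
  eig <- eigen(cov(ret), symmetric = TRUE)$values  # eigenvalues, descending
  sum(eig[1:n_eigen]) / sum(eig)                   # absorbed / total variance
}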

Furthermore, the paper suggests that we should track the standardized change of the \(AR\) over time.

\[\Delta AR = \frac{AR_{15\,day} - AR_{1\,year}}{\sigma}\] where \(AR_{15\,day} =\) the 15-day moving average of the \(AR\), \(AR_{1\,year} =\) the 1-year moving average of the \(AR\), and \(\sigma =\) the standard deviation of the \(AR\) over 1 year.

Finally, the last thing we need to discuss before setting up our function is the input into the PCA: the covariance matrix. Feel free to plug in your preferred way of estimating covariance (or correlation) here; I’ll be following the paper’s example of a 500-day exponential weighting framework with a half-life decay of 250 days.
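
To see what that weighting scheme looks like, here’s a small sketch (exp_weights is an illustrative copy of the helper used inside the function below): the most recent observation gets full weight and the weight halves every 250 days going back.

# exponential weights for a 500-day window with a 250-day half-life
exp_weights <- function(x) exp(-(x - 1:x) * log(2) / (x / 2))
w <- exp_weights(500)
w[c(500, 250, 1)]  # most recent = 1, 250 days back = 0.5, oldest ~ 0.25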

Alright, let’s set up a function to calculate the AR and the standardized change in AR, \(\Delta AR\).

fragility <- function(ret_mat, roll_window = 500, n_eigen = 2, 
                      method = c("delta", "raw"), 
                      delta_window = c(15, 250), half_life = TRUE)
{
  # FRAGILITY calculates the absorption ratio of a system of assets 
  #
  # ARGUMENTS:
  #   ret_mat = xts of asset returns, N x T
  #   roll_window = rolling window to calculate the covariance matrix, default
  #       is 500 days
  #   n_eigen = number of critical eigenvalues for AR calculation
  #   method = 'delta' for standardized change or 'raw' for the raw AR
  #   delta_window = short and long windows for the standardized change
  #   half_life = set to TRUE for exponential weighting in covariance estimation
  #
  # OUTPUT:
  #   xts of the AR
  
  if(n_eigen >= NCOL(ret_mat)) { 
    stop("n_eigen needs to be less than the number of asset columns")
  }
  
  date_array <- index(ret_mat)
  
  n <- NROW(ret_mat)
  raw_ar <- rep(NA, n - roll_window + 1)  # one AR value per rolling window
  
  # exponential weights with a half-life of roll_window / 2 days
  halfLife <- function(x) exp( -(x - 1:x) * log(2) / (x / 2))
  
  for (i in roll_window:n)
  {
    roll_ret <- ret_mat[(i - (roll_window - 1)):i, ]
    if (half_life == TRUE) roll_ret <- roll_ret * halfLife(roll_window)
    xcor <- cor(roll_ret)
    s <- svd(xcor)
    eigen_vec <- s$d  # for a correlation matrix, singular values = eigenvalues
    raw_ar[i - (roll_window - 1)] <- sum(eigen_vec[1:n_eigen]) / 
      sum(eigen_vec)
  }
  
  raw_ar <- xts(raw_ar, date_array[roll_window:n])
  
  if (method[1] == "raw") {  
    return(raw_ar)
  } else {
    delta_ar <- array(dim = NROW(raw_ar) - delta_window[2] + 1)
    j <- 1
    for (i in delta_window[2]:NROW(raw_ar)) {
      raw_ar_long <- raw_ar[(i - (delta_window[2] - 1)):i, 1]
      raw_ar_short <- raw_ar[(i - (delta_window[1] - 1)):i, 1]
      delta_ar[j] <- (mean(raw_ar_short) - mean(raw_ar_long)) / sd(raw_ar_long)
      j <- j + 1
    }
    delta_ar <- xts(delta_ar, index(raw_ar)[delta_window[2]:NROW(raw_ar)])
    return(delta_ar)
  }
}


Let’s take our function for a spin.

delta_ar <- fragility(sector_ret) %>% round(2)
dat <- data.frame(delta_ar)
dat$t <- index(delta_ar)
g <- ggplot(dat, aes(x = t, y = delta_ar)) + 
  geom_line() + ylab("") + xlab("") + 
  geom_hline(yintercept = 2, col = "red") + 
  geom_hline(yintercept = -2, col = "green") + 
  ggtitle("Standardized Change of Absorption Ratio of US Equity") 
ggplotly(g)

Interesting. We’ve got our first major spike in August 2002, followed by several breaches in 2005 and 2007 before the October 2008 meltdown. We have another breach in June 2010 and then the debt ceiling crisis spike of August 2011. The September 2015 drawdown created the largest \(\Delta AR\) reading, which signifies that markets were trading with peacefully low correlations in mid-to-late August before everything rapidly started trading together.

The bull signals came in early and late 2004, March 2011, October 2013, and most recently post-election November 2016. There are clearly some false positives here; not all spikes in the \(\Delta AR\) are followed by severe drawdowns. Kritzman, et al. recommend using the AR in combination with another risk measure they created called turbulence, which I’ll discuss in another experiment.

Now for the portfolio construction use of the \(AR\). Let’s grab some more returns.

asset_ticker_array <- c("SPY", "ACWI", "EDV", "BND", "VNQ", "AMLP", "GSG", 
                        "QAI", "EMB", "MBB", "JNK", "WIP", "TIP" )
asset_dat <- lapply(asset_ticker_array, "getSymbols", from = "1970-01-01",
                    to = "2018-04-28", src = "yahoo", auto.assign = FALSE)
asset_mat <- do.call("cbind", asset_dat)
asset_price <- asset_mat[, seq(from = 6, to = NCOL(asset_mat), by = 6)]
asset_price <- na.omit(asset_price)
asset_ret <- asset_price / lag.xts(asset_price, k = 1) - 1
asset_ret <- asset_ret[2:NROW(asset_ret), ]


Let’s form 3 pools of assets that we could make portfolios out of. Portfolio A will be a global mix of traditional equities and fixed income, Portfolio B will add real estate, and Portfolio C will expand into mid-stream energy, commodities, liquid alternatives, emerging market bonds, mortgage-backed securities, high yield bonds, and global inflation-linked bonds. The raw absorption ratio will tell us how much diversification benefit we get from adding just real estate to our mix versus the aforementioned melange of Portfolio C.

port_a <- asset_ret[, 1:4] # stocks and bonds only
port_b <- asset_ret[, 1:5] # stocks, bonds, and real estate
port_c <- asset_ret        # all assets

a <- fragility(port_a, method = "raw") %>% round(3)
b <- fragility(port_b, method = "raw") %>% round(3)
c <- fragility(port_c, method = "raw") %>% round(3)

dat <- data.frame(index(a), a, b, c)
colnames(dat) <- c("t", "Port A", "Port B", "Port C")
plotdat <- melt(dat, id = "t")

g <- ggplot(plotdat, aes(x = t, y = value, color = variable)) + 
  geom_line() + labs(color = "") + xlab("") + ylab("Absorption Ratio") + 
  ggtitle("Absorption Ratio Comparison") + 
  theme_bw()
ggplotly(g)

Portfolio A, traditional fixed income and equity, has an absorption ratio near one, which makes sense: we only need two components to explain nearly all of our equity and fixed income portfolio. These two components would likely resemble equity and term risk. Adding real estate improves things, but not by much. We still have over 90% of our variance eaten up by the first two components. One possible interpretation is that real estate is largely a linear combination of equity and fixed income risks. Portfolio C, which adds eight assets, starts to get diversification benefits. Its absorption ratio flares up during the drawdown of Jan 2016 but still stays well below those of the less diversified Portfolios A and B. The \(AR\) can help us focus on assets that will make our portfolios more resilient to systemic shocks. Adding small cap, value, and growth equity asset classes to the portfolio mix would likely result in an \(AR\) line close to Portfolio A’s, while sourcing less correlated asset classes such as liquid alternatives would result in \(AR\) lines closer to Portfolio C’s.
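
As a rough check on that interpretation, we can peek at the eigenvector loadings of Portfolio A’s assets. If the equity/term risk story holds, one of the first two components should load mainly on the equity ETFs (SPY, ACWI) and the other mainly on the bond ETFs (EDV, BND). A quick sketch:

# eigenvector loadings of the first two principal components for Portfolio A
eig <- eigen(cor(port_a), symmetric = TRUE)
loadings <- eig$vectors[, 1:2]
rownames(loadings) <- colnames(port_a)
round(loadings, 2)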

The next step of weighting the asset mixes is a really fun challenge and gets to the heart of portfolio management. Unfortunately it’s beyond the scope of this experiment. I hope you found the \(AR\) useful and will consider it as a tool to diagnose the diversification benefits of a group of assets and to keep an eye on the systemic market risk lurking out there.