© Copyright Quantopian Inc.
© Modifications Copyright QuantRocket LLC
Licensed under the Creative Commons Attribution 4.0.
By Dr. Michele Goe
In this lecture we seek to clarify transaction costs and how they impact algorithm performance. By the end of this lecture you should be able to:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import time
Transaction costs fall into two categories: direct costs, such as commissions and fees, and indirect costs, such as slippage and market impact.
Slippage is when the price 'slips' before the trade is fully executed, leading to the fill price being different from the price at the time of the order. The attributes of a trade that our research shows have the most influence on slippage are:
Let’s consider a hypothetical mid-frequency statistical arbitrage portfolio. (A mid-frequency strategy refers roughly to a daily turnover between $0.05$ and $0.67$. This represents a holding period between a day and a week. Statistical arbitrage refers to the use of computational algorithms to simultaneously buy and sell stocks according to a statistical model.)
Algo Attribute | Qty |
---|---|
Holding Period (weeks) | 1 |
Leverage | 2 |
AUM (million) | 100 |
Trading Days per year | 252 |
Fraction of AUM traded per day | 0.4 |
This means we trade in and out of a new portfolio roughly 50 times a year. At 2 times leverage, on 100 million in AUM, we trade 20 billion dollars per year.
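As a quick back-of-the-envelope check on those figures (the variable names below are ours, chosen for illustration):
aum = 100e6                  # assets under management, $100M
leverage = 2                 # gross leverage
frac_traded_per_day = 0.4    # fraction of AUM traded per day
trading_days = 252
annual_notional = leverage * frac_traded_per_day * trading_days * aum
round_trips_per_year = frac_traded_per_day * trading_days / 2  # trading in and out counts as two sides
print('Annual notional traded: $%.1f billion' % (annual_notional / 1e9))           # ~20.2 billion
print('Approximate portfolio round trips per year: %.0f' % round_trips_per_year)   # ~50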
Q: For this level of churn, what is the impact of 1 bp of execution cost on the fund's returns?
This means that for every basis point ($0.01\%$) of transaction cost, we lose about $2\%$ of algo performance.
def perf_impact(leverage, turnover , trading_days, txn_cost_bps):
p = leverage * turnover * trading_days * txn_cost_bps/10000.
return p
print(perf_impact(leverage=2, turnover=0.4, trading_days=252, txn_cost_bps=1))
0.02016
Quantitative institutional trading teams typically utilize execution tactics that aim to complete parent orders fully while minimizing the cost of execution. To achieve this goal, parent orders are often split into a number of child orders, which are routed to different execution venues with the goal of capturing all available liquidity and minimizing the bid-ask spread cost. The parent-level execution price can be expressed as the volume-weighted average price of all child orders.
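To make the last point concrete, the sketch below computes a parent-level execution price from a set of hypothetical child fills (the prices and share quantities are made up for illustration):
import numpy as np
child_fill_prices = np.array([100.02, 100.05, 100.04, 100.08])  # hypothetical child fill prices
child_fill_shares = np.array([300, 500, 200, 1000])             # hypothetical child fill sizes
# parent-level execution price = volume-weighted average price of all child fills
parent_exec_price = np.average(child_fill_prices, weights=child_fill_shares)
print('Parent-level execution price: %.4f' % parent_exec_price)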
Q: What benchmark(s) should we compare our execution price to?
Example Benchmarks:
Key Ideas
The reversion metrics give us an indication of our temporary impact after the order has been executed. Generally, we'd expect the stock price to revert a bit, upon our order completion, as our contribution to the buy-sell imbalance is reflected in the market. The momentum metrics give us an indication of the direction of price drift prior to execution. Often, trading with significant momentum can affect our ability to minimize the bid-ask spread costs.
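As a rough illustration of these metrics, the functions below express pre-trade momentum and post-trade reversion in basis points from prices observed around the execution window (a sketch under our own conventions; the reference prices, windows, and sign conventions are assumptions, not a standard definition):
def momentum_bps(arrival_price, pre_trade_price, side):
    # price drift into the order: positive means the price drifted against us
    # (up before a buy, down before a sell) before we started trading
    return side * (arrival_price / pre_trade_price - 1) * 10000

def reversion_bps(last_fill_price, post_trade_price, side):
    # post-completion reversion: positive means the price moved back after we finished
    # (fell after a buy, rose after a sell), suggesting temporary impact from our own trading
    return side * (last_fill_price / post_trade_price - 1) * 10000

# hypothetical buy order (side = +1), with prices observed a few minutes before and after
print(momentum_bps(arrival_price=100.10, pre_trade_price=100.00, side=1))      # ~10 bps of adverse drift
print(reversion_bps(last_fill_price=100.25, post_trade_price=100.18, side=1))  # ~7 bps of reversion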
When executing an order, one of the primary tradeoffs to consider is timing risk vs. market impact: the longer we take to trade, the more we are exposed to adverse price moves and information leakage (timing risk), while the faster we trade, the more our own trading pushes the price against us (market impact).
Within this framework, neutral urgency of execution occurs at the intersection of market risk and market impact - in this case, each contributes the same to execution costs.
x = np.linspace(0,1,101)
risk = np.cos(x*np.pi)
impact = np.cos(x* np.pi+ np.pi)
fig,ax = plt.subplots(1)
# Make your plot, set your axes labels
ax.plot(x,risk)
ax.plot(x,impact)
ax.set_ylabel('Transaction Cost in bps', fontsize=15)
ax.set_xlabel('Order Interval', fontsize=15)
ax.set_yticklabels([])
ax.set_xticklabels([])
ax.grid(False)
ax.text(0.09, -0.6, 'Timing Risk', fontsize=15, fontname="serif")
ax.text(0.08, 0.6, 'Market Impact', fontsize=15, fontname="serif")
plt.title('Timing Risk vs Market Impact Effect on Transaction Cost', fontsize=15)
plt.show()
Liquidity can be viewed through several lenses. Within the context of execution management, we can think of it as activity, measured in shares and USD traded, as well as frequency and size of trades executed in the market. "Good" liquidity is also achieved through a diverse number of market participants on both sides of the market.
Assess Liquidity by:
In general, liquidity is highest as we approach the close and second highest at the open, with the lowest liquidity at midday. Liquidity should also be viewed relative to your order size and to other securities in the same sector and asset class.
from quantrocket.master import get_securities
from quantrocket import get_prices
securities = get_securities(symbols='AAPL', vendors='usstock')
AAPL = securities.index[0]
data = get_prices('usstock-free-1min', sids=AAPL, data_frequency='minute', start_date='2016-01-01', end_date='2016-07-01', fields='Volume')
dat = data.loc['Volume'][AAPL]
# Combine separate Date and Time in index into datetime
dat.index = pd.to_datetime(dat.index.get_level_values('Date').astype(str) + ' ' + dat.index.get_level_values('Time'))
plt.subplot(211)
dat['2016-04-14'].plot(title='Intraday Volume Profile') # intraday volume profile plot
plt.subplot(212)
(dat['2016-04-14'].resample('10t', closed='right').sum()/\
dat['2016-04-14'].sum()).plot(); # percent volume plot
plt.title('Intraday Volume Profile, % Total Day');
df = pd.DataFrame(dat) # Apple minutely volume data
df.columns = ['interval_vlm']
df_daysum = df.resample('d').sum() # take sum of each day
df_daysum.columns = ['day_vlm']
df_daysum['day'] = df_daysum.index.date # add date index as column
df['min_of_day']=(df.index.hour-9)*60 + (df.index.minute-30) # calculate minutes from open
df['time']=df.index.time # add time index as column
conversion = {'interval_vlm':'sum', 'min_of_day':'last', 'time':'last'}
df = df.resample('10t', closed='right').apply(conversion) # apply conversions to columns at 10 min intervals
df['day'] = df.index.date
df = df.merge(df_daysum, how='left', on='day') # merge df and df_daysum dataframes
df['interval_pct'] = df['interval_vlm'] / df['day_vlm'] # calculate percent of days volume for each row
df.head()
interval_vlm | min_of_day | time | day | day_vlm | interval_pct | |
---|---|---|---|---|---|---|
0 | 1904677.0 | 0.0 | 09:30:00 | 2016-01-04 | 60971396.0 | 0.031239 |
1 | 3890513.0 | 10.0 | 09:40:00 | 2016-01-04 | 60971396.0 | 0.063809 |
2 | 2708620.0 | 20.0 | 09:50:00 | 2016-01-04 | 60971396.0 | 0.044424 |
3 | 1730624.0 | 30.0 | 10:00:00 | 2016-01-04 | 60971396.0 | 0.028384 |
4 | 1696133.0 | 40.0 | 10:10:00 | 2016-01-04 | 60971396.0 | 0.027819 |
plt.scatter(df.min_of_day, df.interval_pct)
plt.xlim(0,400)
plt.xlabel('Time from the Open (minutes)')
plt.ylabel('Percent Days Volume')
grouped = df.groupby(df.min_of_day)
grouped = df.groupby(df.time) # group by 10 minute interval times
m = grouped.median() # get median values of groupby
x = m.index
y = m['interval_pct']
ax1 = (100*y).plot(kind='bar', alpha=0.75) # plot percent daily volume grouped by 10 minute interval times
ax1.set_ylim(0,10);
plt.title('Intraday Volume Profile');
ax1.set_ylabel('% of Day\'s Volume in Bucket');
As we increase relative order size at a specified participation rate, the time to complete the order increases. Let's assume we execute an order using VWAP, a scheduling strategy that executes orders over a pre-specified time window according to projections of the volume distribution throughout that window. At a 3% participation rate, a VWAP execution requires the entire day to trade if our order represents 3% of average daily volume.
If we expect our algo to have large relative order sizes, we may want to switch to a liquidity management execution strategy to ensure order completion by the end of the day. Liquidity management execution strategies have specific constraints on the urgency of execution, the choice of execution venues, and spread capture, with the objective of order completion. Going back to our risk curves, we expect higher transaction costs the longer we trade. Therefore, the higher the percent of ADV an order represents, the more expensive it is to trade.
data = get_prices('usstock-free-1min', sids=AAPL, data_frequency='minute', start_date='2016-01-01', end_date='2018-01-02', fields='Volume')
dat = data.loc['Volume'][AAPL]
# Combine separate Date and Time in index into datetime
dat.index = pd.to_datetime(dat.index.get_level_values('Date').astype(str) + ' ' + dat.index.get_level_values('Time'))
def relative_order_size(participation_rate, pct_ADV):
    fill_start = dat['2017-10-02'].index[0] # start order at 9:31
    ADV20 = int(dat.resample("1d").sum()[-20:].mean()) # calculate 20 day ADV
    order_size = int(pct_ADV * ADV20)
    try:
        # first minute at which cumulative volume covers the order at this participation rate
        ftime = dat['2017-10-02'][(order_size * 1.0 / participation_rate)<=dat['2017-10-02'].cumsum().values].index[0]
    except IndexError:
        ftime = dat['2017-10-02'].index[-1] # order not completed; set fill time to 4pm
    fill_time = max(1,int((ftime - fill_start).total_seconds()/60.0))
    return fill_time
def create_plots(participation_rate, ax):
df_pr = pd.DataFrame(data=np.linspace(0.0,0.1,100), columns = ['adv'] ) # create dataframe with intervals of ADV
df_pr['pr'] = participation_rate # add participation rate column
df_pr['fill_time'] = df_pr.apply(lambda row: relative_order_size(row['pr'],row['adv']), axis = 1) # get fill time
ax.plot(df_pr['adv'],df_pr['fill_time'], label=participation_rate) # generate plot line with ADV and fill time
fig, ax = plt.subplots()
for i in [0.01,0.02,0.03,0.04,0.05,0.06,0.07]: # for participation rate values
create_plots(i,ax) # generate plot line
plt.ylabel('Time from Open (minutes)')
plt.xlabel('Percent Average Daily Volume')
plt.title('Trade Completion Time as Function of Relative Order Size and Participation Rate')
plt.xlim(0.,0.04)
ax.legend()
Volatility is a statistical measure of the dispersion of returns for a security, calculated as the standard deviation of returns. The volatility of any given stock typically peaks at the open and thereafter decreases until midday. The higher the volatility, the more uncertainty in the returns. This uncertainty is an artifact of larger bid-ask spreads during the price discovery process at the start of the trading day. In contrast to liquidity, where we would prefer to trade at the open to take advantage of high volume, to take advantage of low volatility we would trade at the close.
We use two methods to calculate volatility for demonstration purposes: OHLC and the more common close-to-close. The OHLC method uses the Garman-Klass Yang-Zhang volatility estimate, which employs open, high, low, and close data.
OHLC VOLATILITY ESTIMATION METHOD
$$\sigma^2 = \frac{Z}{n} \sum \left[\left(\ln \frac{O_i}{C_{i-1}} \right)^2 + \frac{1}{2} \left( \ln \frac{H_i}{L_i} \right)^2 - (2 \ln 2 -1) \left( \ln \frac{C_i}{O_i} \right)^2 \right]$$

CLOSE TO CLOSE HISTORICAL VOLATILITY ESTIMATION METHOD
Volatility is calculated as the annualised standard deviation of log returns as detailed in the equation below.
$$\text{Log return} = x_i = \ln \left(\frac{c_i + d_i}{c_{i-1}} \right)$$

where $d_i$ is the ordinary (not adjusted) dividend and $c_i$ is the close price.

$$\text{Volatility} = \sigma_x = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} (x_i - \bar{x})^2 }$$
See end of notebook for references
data = get_prices('usstock-free-1min', sids=AAPL, data_frequency='minute', start_date='2016-01-01', end_date='2016-07-01')
df = data[AAPL].unstack(level='Field')
# Combine separate Date and Time in index into datetime
df.index = pd.to_datetime(df.index.get_level_values('Date').astype(str) + ' ' + df.index.get_level_values('Time'))
df.head()
Field | Close | High | Low | Open | Volume |
---|---|---|---|---|---|
2016-01-04 09:30:00 | 100.875 | 101.439 | 100.836 | 101.439 | 1904677.0 |
2016-01-04 09:31:00 | 101.173 | 101.251 | 100.845 | 100.895 | 340493.0 |
2016-01-04 09:32:00 | 101.547 | 101.646 | 101.142 | 101.162 | 410725.0 |
2016-01-04 09:33:00 | 101.221 | 101.607 | 101.201 | 101.557 | 302131.0 |
2016-01-04 09:34:00 | 101.073 | 101.231 | 100.925 | 101.201 | 340270.0 |
def gkyz_var(open, high, low, close, close_tm1): # Garman Klass Yang Zhang extension OHLC volatility estimate
return np.log(open/close_tm1)**2 + 0.5*(np.log(high/low)**2) \
- (2*np.log(2)-1)*(np.log(close/open)**2)
def historical_vol(close_ret, mean_ret): # close to close volatility estimate
return np.sqrt(np.sum((close_ret-mean_ret)**2)/390)
df['min_of_day'] = (df.index.hour-9)*60 + (df.index.minute-30) # calculate minute from the open
df['time'] = df.index.time # add column time index
df['day'] = df.index.date # add column date index
df.head()
Field | Close | High | Low | Open | Volume | min_of_day | time | day |
---|---|---|---|---|---|---|---|---|
2016-01-04 09:30:00 | 100.875 | 101.439 | 100.836 | 101.439 | 1904677.0 | 0 | 09:30:00 | 2016-01-04 |
2016-01-04 09:31:00 | 101.173 | 101.251 | 100.845 | 100.895 | 340493.0 | 1 | 09:31:00 | 2016-01-04 |
2016-01-04 09:32:00 | 101.547 | 101.646 | 101.142 | 101.162 | 410725.0 | 2 | 09:32:00 | 2016-01-04 |
2016-01-04 09:33:00 | 101.221 | 101.607 | 101.201 | 101.557 | 302131.0 | 3 | 09:33:00 | 2016-01-04 |
2016-01-04 09:34:00 | 101.073 | 101.231 | 100.925 | 101.201 | 340270.0 | 4 | 09:34:00 | 2016-01-04 |
df['close_tm1'] = df.groupby('day')['Close'].shift(1) # shift close value down one row
df.close_tm1 = df.close_tm1.fillna(df.Open)
df['min_close_ret'] = np.log( df['Close'] /df['close_tm1']) # log of close to close
close_returns = df.groupby('day')['min_close_ret'].mean() # daily mean of log of close to close
new_df = df.merge(pd.DataFrame(close_returns), left_on ='day', right_index = True)
# handle when index goes from 16:00 to 9:31:
new_df['variance'] = new_df.apply(
lambda row: historical_vol(row.min_close_ret_x, row.min_close_ret_y),
axis=1)
new_df.head()
Close | High | Low | Open | Volume | min_of_day | time | day | close_tm1 | min_close_ret_x | min_close_ret_y | variance | |
---|---|---|---|---|---|---|---|---|---|---|---|---|
2016-01-04 09:30:00 | 100.875 | 101.439 | 100.836 | 101.439 | 1904677.0 | 0 | 09:30:00 | 2016-01-04 | 101.439 | -0.005576 | 0.000067 | 0.000286 |
2016-01-04 09:31:00 | 101.173 | 101.251 | 100.845 | 100.895 | 340493.0 | 1 | 09:31:00 | 2016-01-04 | 100.875 | 0.002950 | 0.000067 | 0.000146 |
2016-01-04 09:32:00 | 101.547 | 101.646 | 101.142 | 101.162 | 410725.0 | 2 | 09:32:00 | 2016-01-04 | 101.173 | 0.003690 | 0.000067 | 0.000183 |
2016-01-04 09:33:00 | 101.221 | 101.607 | 101.201 | 101.557 | 302131.0 | 3 | 09:33:00 | 2016-01-04 | 101.547 | -0.003216 | 0.000067 | 0.000166 |
2016-01-04 09:34:00 | 101.073 | 101.231 | 100.925 | 101.201 | 340270.0 | 4 | 09:34:00 | 2016-01-04 | 101.221 | -0.001463 | 0.000067 | 0.000077 |
df_daysum = pd.DataFrame(new_df['variance'].resample('d').sum()) # get sum of intraday variances daily
df_daysum.columns = ['day_variance']
df_daysum['day'] = df_daysum.index.date
df_daysum.head()
day_variance | day | |
---|---|---|
2016-01-04 | 0.013315 | 2016-01-04 |
2016-01-05 | 0.013274 | 2016-01-05 |
2016-01-06 | 0.013744 | 2016-01-06 |
2016-01-07 | 0.015753 | 2016-01-07 |
2016-01-08 | 0.015149 | 2016-01-08 |
conversion = {'variance':'sum', 'min_of_day':'last', 'time':'last'}
df = new_df.resample('10t', closed='right').apply(conversion)
df['day'] = df.index.date
df['time'] = df.index.time
df.head()
variance | min_of_day | time | day | |
---|---|---|---|---|
2016-01-04 09:20:00 | 0.000286 | 0.0 | 09:20:00 | 2016-01-04 |
2016-01-04 09:30:00 | 0.001258 | 10.0 | 09:30:00 | 2016-01-04 |
2016-01-04 09:40:00 | 0.000690 | 20.0 | 09:40:00 | 2016-01-04 |
2016-01-04 09:50:00 | 0.000316 | 30.0 | 09:50:00 | 2016-01-04 |
2016-01-04 10:00:00 | 0.000348 | 40.0 | 10:00:00 | 2016-01-04 |
df = df.merge(df_daysum, how='left', on='day') # merge daily and intraday volatilty dataframes
df['interval_pct'] = df['variance'] / df['day_variance'] # calculate percent of days volatility for each row
df.head()
variance | min_of_day | time | day | day_variance | interval_pct | |
---|---|---|---|---|---|---|
0 | 0.000286 | 0.0 | 09:20:00 | 2016-01-04 | 0.013315 | 0.021459 |
1 | 0.001258 | 10.0 | 09:30:00 | 2016-01-04 | 0.013315 | 0.094461 |
2 | 0.000690 | 20.0 | 09:40:00 | 2016-01-04 | 0.013315 | 0.051851 |
3 | 0.000316 | 30.0 | 09:50:00 | 2016-01-04 | 0.013315 | 0.023710 |
4 | 0.000348 | 40.0 | 10:00:00 | 2016-01-04 | 0.013315 | 0.026151 |
plt.scatter(df.min_of_day, df.interval_pct)
plt.xlim(0,400)
plt.ylim(0,)
plt.xlabel('Time from Open (minutes)')
plt.ylabel('Interval Contribution of Daily Volatility')
plt.title('Probability Distribution of Daily Volatility')
import datetime
grouped = df.groupby(df.min_of_day)
grouped = df.groupby(df.time) # groupby time
m = grouped.median() # get median
x = m.index
y = m['interval_pct'][datetime.time(9,30):datetime.time(15,59)]
(100*y).plot(kind='bar', alpha=0.75);# plot interval percent of median daily volatility
plt.title('Intraday Volatility Profile');
plt.ylabel('% of Day\'s Variance in Bucket');  # label the current axes (ax1 belongs to the earlier volume figure)
The following relationships between bid-ask spread and order attributes are seen in our live trading data:
As market cap increases we expect spreads to decrease. Larger companies tend to exhibit lower bid-ask spreads.
As volatility increases we expect spreads to increase. Greater price uncertainty results in wider bid-ask spreads.
As average daily dollar volume increases, we expect spreads to decrease. Liquidity tends to be inversely proportional to spreads, due to a larger number of participants and more frequent updates to quotes.
As price increases, we expect spreads to decrease (similar to market cap), although this relationship is not as strong.
As time of day progresses, we expect spreads to decrease. During the early stages of the trading day, price discovery takes place. In contrast, at the market close, order completion is the priority of most participants, and activity is driven by liquidity management rather than price discovery.
The Trading Team developed a log-linear model, fit to our live trading data, that predicts the spread for a security with the attributes listed above.
def model_spread(time, vol, mcap = 1.67 * 10 ** 10, adv = 84.5, px = 91.0159):
time_bins = np.array([0.0, 960.0, 2760.0, 5460.0, 21660.0]) #seconds from market open
time_coefs = pd.Series([0.0, -0.289, -0.487, -0.685, -0.952])
vol_bins = np.array([0.0, .1, .15, .2, .3, .4])
vol_coefs = pd.Series([0.0, 0.251, 0.426, 0.542, 0.642, 0.812])
mcap_bins = np.array([0.0, 2.0, 5.0, 10.0, 25.0, 50.0]) * 10 ** 9
mcap_coefs = pd.Series([0.291, 0.305, 0.0, -0.161, -0.287, -0.499])
adv_bins = np.array([0.0, 50.0, 100.0, 150.0, 250.0, 500.0]) * 10 ** 6
adv_coefs = pd.Series([0.303, 0.0, -0.054, -0.109, -0.242, -0.454])
px_bins = np.array([0.0, 28.0, 45.0, 62.0, 82.0, 132.0])
px_coefs = pd.Series([-0.077, -0.187, -0.272, -0.186, 0.0, 0.380])
return np.exp(1.736 +\
time_coefs[np.digitize(time, time_bins) - 1] +\
vol_coefs[np.digitize(vol, vol_bins) - 1] +\
mcap_coefs[np.digitize(mcap, mcap_bins) - 1] +\
adv_coefs[np.digitize(adv, adv_bins) - 1] +\
px_coefs[np.digitize(px, px_bins) - 1])
t = 10 * 60
vlty = 0.188
mcap = 1.67 * 10 ** 10
adv = 84.5 *10
price = 91.0159
print(model_spread(t, vlty, mcap, adv, price), 'bps')
10.014159084591371 bps
x = np.linspace(0,390*60) # seconds from open shape (50,)
y = np.linspace(.01,.7) # volatility shape(50,)
mcap = 1.67 * 10 ** 10
adv = 84.5
px = 91.0159
vlty_coefs = pd.Series([0.0, 0.251, 0.426, 0.542, 0.642, 0.812])
vlty_bins = np.array([0.0, .1, .15, .2, .3, .4])
time_bins = np.array([0.0, 960.0, 2760.0, 5460.0, 21660.0]) #seconds from market open
time_coefs = pd.Series([0.0, -0.289, -0.487, -0.685, -0.952])
mcap_bins = np.array([0.0, 2.0, 5.0, 10.0, 25.0, 50.0]) * 10 ** 9
mcap_coefs = pd.Series([0.291, 0.305, 0.0, -0.161, -0.287, -0.499])
adv_bins = np.array([0.0, 50.0, 100.0, 150.0, 250.0, 500.0]) * 10 ** 6
adv_coefs = pd.Series([0.303, 0.0, -0.054, -0.109, -0.242, -0.454])
px_bins = np.array([0.0, 28.0, 45.0, 62.0, 82.0, 132.0])
px_coefs = pd.Series([-0.077, -0.187, -0.272, -0.186, 0.0, 0.380])
# shape (1, 50)
time_contrib = np.take(time_coefs, np.digitize(x, time_bins) - 1).values.reshape((1, len(x)))
# shape (50, 1)
vlty_contrib = np.take(vlty_coefs, np.digitize(y, vlty_bins) - 1).values.reshape((len(y), 1))
# scalar
mcap_contrib = mcap_coefs[np.digitize((mcap,), mcap_bins)[0] - 1]
# scalar
adv_contrib = adv_coefs[np.digitize((adv,), adv_bins)[0] - 1]
# scalar
px_contrib = px_coefs[np.digitize((px,), px_bins)[0] - 1]
z_scalar_contrib = 1.736 + mcap_contrib + adv_contrib + px_contrib
Z = np.exp(z_scalar_contrib + time_contrib + vlty_contrib)
cmap=plt.get_cmap('jet')
X, Y = np.meshgrid(x,y)
CS = plt.contour(X/60,Y,Z, linewidths=3, cmap=cmap, alpha=0.8);
plt.clabel(CS)
plt.xlabel('Time from the Open (Minutes)')
plt.ylabel('Volatility')
plt.title('Spreads for varying Volatility and Trading Times (mcap = 16.7B, px = 91, adv = 84.5M)')
plt.show()
Theoretical market impact models attempt to estimate the transaction costs of trading by utilizing order attributes. There are many published market impact models. Here are some examples:
The models have a few commonalities, such as the inclusion of relative order size and volatility, as well as custom parameters calibrated from observed trades. There are also notable differences: (1) Almgren considers the fraction of outstanding shares traded daily, (2) the Q Slippage Model does not consider volatility, and (3) Kissell has an explicit parameter for apportioning temporary and permanent impact, to name a few.
The academic models have notions of temporary and permanent impact. Temporary impact captures the component of transaction costs due to the urgency or aggressiveness of the trade, while permanent impact estimates the component associated with the information or short-term alpha expressed in the trade.
This model assumes the initial order, X, is completed at a uniform rate of trading over a volume time interval T. That is, the trade rate in volume units is v = X/T, and is held constant until the trade is completed. Constant rate in these units is equivalent to VWAP execution during the time of execution.
Almgren et al. model these two terms as
$$\text{tcost} = 0.5 \overbrace{\gamma \sigma \frac{X}{V}\left(\frac{\Theta}{V}\right)^{1/4}}^{\text{permanent}} + \overbrace{\eta \sigma \left| \frac{X}{VT} \right|^{3/5}}^{\text{temporary}} $$

where $\gamma$ and $\eta$ are the "universal coefficients of market impact", estimated by the authors using a large sample of institutional trades; $\sigma$ is the daily volatility of the stock; $\Theta$ is the total shares outstanding of the stock; $X$ is the number of shares you would like to trade (unsigned); $T$ is the time width in % of trading time over which you slice the trade; and $V$ is the average daily volume ("ADV") in shares of the stock. The interpretation of $\frac{\Theta}{V}$ is the inverse of daily "turnover", the fraction of the company's value traded each day.
For reference, FB has 2.3B shares outstanding and its average daily volume over 20 days is 18.8M shares, so its inverse turnover is approximately 122; put another way, it trades less than 1% of its outstanding shares daily.
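As a quick check, here is the arithmetic behind that statement, using the numbers quoted above:
fb_shares_outstanding = 2.3e9   # FB shares outstanding
fb_adv_shares = 18.8e6          # 20-day average daily volume in shares
inv_turnover = fb_shares_outstanding / fb_adv_shares
print('Inverse turnover (Theta/V): %.0f' % inv_turnover)                  # ~122
print('Fraction of shares traded daily: %.2f%%' % (100 / inv_turnover))   # ~0.82%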
Note that the Almgren et al (2005) and Kissell, Glantz and Malamut (2004) papers were released prior to the adoption and phased implementation of Reg NMS, prior to the "quant meltdown" of August 2007, prior to the financial crisis hitting markets in Q4 2008, and other numerous developments in market microstructure.
def perm_impact(pct_adv, annual_vol_pct = 0.25, inv_turnover = 200):
    gamma = 0.314 # permanent-impact coefficient estimated by Almgren et al. (2005)
    # annual_vol_pct / 16 converts annual volatility to daily (sqrt(252) is roughly 16);
    # the factor of 10000 expresses the cost in basis points
    return 10000 * gamma * (annual_vol_pct / 16) * pct_adv * (inv_turnover)**0.25

def temp_impact(pct_adv, minutes, annual_vol_pct = 0.25, minutes_in_day = 60*6.5):
    eta = 0.142 # temporary-impact coefficient estimated by Almgren et al. (2005)
    day_frac = minutes / minutes_in_day # trade duration as a fraction of the trading day
    return 10000 * eta * (annual_vol_pct / 16) * abs(pct_adv/day_frac)**0.6

def tc_bps(pct_adv, minutes, annual_vol_pct = 0.25, inv_turnover = 200, minutes_in_day = 60*6.5):
    perm = perm_impact(pct_adv, annual_vol_pct=annual_vol_pct, inv_turnover=inv_turnover)
    temp = temp_impact(pct_adv, minutes, annual_vol_pct=annual_vol_pct, minutes_in_day=minutes_in_day)
    return 0.5 * perm + temp
So if we are trading 10% of ADV of a stock with a daily vol of 1.57% and we plan to do this over half the day, we would expect 8bps of TC (which is the Almgren estimate of temporary impact cost in this scenario). From the paper, this is a sliver of the output at various trading speeds:
Variable | IBM |
---|---|
Inverse turnover ($\Theta/V$) | 263 |
Daily vol ($\sigma$) | 1.57% |
Trade % ADV (X/V) | 10% |
Item | Fast | Medium | Slow |
---|---|---|---|
Permanent Impact (bps) | 20 | 20 | 20 |
Trade duration (day fraction %) | 10% | 20% | 50% |
Temporary Impact (bps) | 22 | 15 | 8 |
Total Impact (bps) | 32 | 25 | 18 |
print('Cost to trade Fast (First 40 mins):', round(tc_bps(pct_adv=0.1, annual_vol_pct=16*0.0157, inv_turnover=263, minutes=0.1*60*6.5),2), 'bps')
print('Cost to trade Medium (First 90 mins):', round(tc_bps(pct_adv=0.1, annual_vol_pct=16*0.0157, inv_turnover=263, minutes=0.2*60*6.5),2), 'bps' )
print('Cost to trade Slow by Noon:', round(tc_bps(pct_adv=0.1, annual_vol_pct=16*0.0157, inv_turnover=263, minutes=0.5*60*6.5),2), 'bps')
Cost to trade Fast (First 40 mins): 32.22 bps Cost to trade Medium (First 90 mins): 24.63 bps Cost to trade Slow by Noon: 18.41 bps
If we trade 0.50% of ADV of a stock with a daily vol of 1.57% and plan to do this over 30 minutes...
print(round(tc_bps(pct_adv=0.005, minutes=30, annual_vol_pct=16*0.0157),2))
4.79
Let's say we wanted to trade \$2M notional of Facebook, and we are going to send the trade to an execution algo (e.g., VWAP) to be sliced over 15 minutes.
trade_notional = 2000000 # 2M notional
stock_price = 110.89 # dollars per share
shares_to_trade = trade_notional/stock_price
stock_adv_shares = 30e6 # 30 M
stock_shares_outstanding = 275e9/110.89
expected_tc = tc_bps(shares_to_trade/stock_adv_shares, minutes=15, annual_vol_pct=0.22)
print("Expected tc in bps: %0.2f" % expected_tc)
print("Expected tc in $ per share: %0.2f" % (expected_tc*stock_price / 10000))
Expected tc in bps: 1.66 Expected tc in $ per share: 0.02
To build some intuition, let's see how the total expected cost varies as a function of how much % ADV we want to trade in 30 minutes.
x = np.linspace(0.0001,0.03)
plt.plot(x*100,tc_bps(x,30,0.25), label=r"$\sigma$ = 25%");
plt.plot(x*100,tc_bps(x,30,0.40), label=r"$\sigma$ = 40%");
plt.ylabel('tcost in bps')
plt.xlabel('Trade as % of ADV')
plt.title(r'tcost in Basis Points of Trade Value; $\sigma$ = 25% and 40%; time = 30 minutes');
plt.legend();
And let's look at tcost as a function of trading time and % ADV.
x = np.linspace(0.001,0.03)
y = np.linspace(5,30)
X, Y = np.meshgrid(x,y)
Z = tc_bps(X,Y,0.20)
levels = np.linspace(0.0, 60, 30)
cmap=plt.get_cmap('Reds')
cmap=plt.get_cmap('hot')
cmap=plt.get_cmap('jet')
plt.subplot(1,2,1);
CS = plt.contour(X*100, Y, Z, levels, linewidths=3, cmap=cmap, alpha=0.55);
plt.clabel(CS);
plt.ylabel('Trading Time in Minutes');
plt.xlabel('Trade as % of ADV');
plt.title(r'tcost in Basis Points of Trade Value; $\sigma$ = 20%');
plt.subplot(1,2,2);
Z = tc_bps(X,Y,0.40)
CS = plt.contour(X*100, Y, Z, levels, linewidths=3, cmap=cmap, alpha=0.55);
plt.clabel(CS);
plt.ylabel('Trading Time in Minutes');
plt.xlabel('Trade as % of ADV');
plt.title(r'tcost in Basis Points of Trade Value; $\sigma$ = 40%');
Alternatively, we might want some intuition for the reverse question: if we want to limit our cost, how does the trading time vary with % of ADV?
x = np.linspace(0.001,0.03) # % ADV
y = np.linspace(1,60*6.5) # time to trade
X, Y = np.meshgrid(x, y)
levels = np.linspace(0.0, 390, 20)
cmap=plt.get_cmap('Reds')
cmap=plt.get_cmap('hot')
cmap=plt.get_cmap('jet')
plt.subplot(1,2,1);
Z = tc_bps(X,Y,0.20)
plt.contourf(X*100, Z, Y, levels, cmap=cmap, alpha=0.55);
plt.title(r'Trading Time in Minutes; $\sigma$ = 20%');
plt.xlabel('Trade as % of ADV');
plt.ylabel('tcost in Basis Points of Trade Value');
plt.ylim(5,20)
plt.colorbar();
plt.subplot(1,2,2);
Z = tc_bps(X,Y,0.40)
plt.contourf(X*100, Z, Y, levels, cmap=cmap, alpha=0.55);
plt.title(r'Trading Time in Minutes; $\sigma$ = 40%');
plt.xlabel('Trade as % of ADV');
plt.ylabel('tcost in Basis Points of Trade Value');
plt.ylim(5,20);
plt.colorbar();
For a typical stock, let's see how the tcost is broken down into permanent and temporary.
minutes = 30
x = np.linspace(0.0001,0.03)
f, (ax1, ax2) = plt.subplots(ncols=2, sharex=True, sharey=True)
f.subplots_adjust(hspace=0.15)
p = 0.5*perm_impact(x,0.20)
t = tc_bps(x,minutes,0.20)
ax1.fill_between(x*100, p, t, color='b', alpha=0.33);
ax1.fill_between(x*100, 0, p, color='k', alpha=0.66);
ax1.set_ylabel('tcost in bps')
ax1.set_xlabel('Trade as % of ADV')
ax1.set_title(r'tcost in bps of Trade Value; $\sigma$ = 20%; time = 30 minutes');
p = 0.5*perm_impact(x, 0.40)
t = tc_bps(x,minutes, 0.40)
ax2.fill_between(x*100, p, t, color='b', alpha=0.33);
ax2.fill_between(x*100, 0, p, color='k', alpha=0.66);
plt.xlabel('Trade as % of ADV')
plt.title(r'tcost in bps of Trade Value; $\sigma$ = 40%; time = 30 minutes');
This model assumes there is a theoretical instantaneous impact cost $I^*$ incurred by the investor if all shares $Q$ were released to the market.
$$ MI_{bp} = b_1 I^* POV^{a_4} + (1-b_1)I^*$$
$$ I^* = a_1 \left(\frac{Q}{ADV}\right)^{a_2} \sigma^{a_3}$$
$$ POV = \frac{Q}{Q+V}$$

Parameter | Fitted Values |
---|---|
$b_1$ | 0.80 |
$a_1$ | 750 |
$a_2$ | 0.50 |
$a_3$ | 0.75 |
$a_4$ | 0.50 |
def kissell(adv, annual_vol, interval_vol, order_size):
b1, a1, a2, a3, a4 = 0.9, 750., 0.2, 0.9, 0.5
i_star = a1 * ((order_size/adv)**a2) * annual_vol**a3
PoV = order_size/(order_size + adv)
return b1 * i_star * PoV**a4 + (1 - b1) * i_star
print(kissell(adv=5*10**6, annual_vol=0.2, interval_vol=adv * 0.06, order_size=0.01 * adv ), 'bps')
0.781932862914937 bps
x = np.linspace(0.0001,0.1)
plt.plot(x,kissell(5*10**6,0.1, 2000*10**3, x*2000*10**3), label=r"$\sigma$ = 10%");
plt.plot(x,kissell(5*10**6,0.25, 2000*10**3, x*2000*10**3), label=r"$\sigma$ = 25%");
plt.ylabel('tcost in bps')
plt.xlabel('Trade as % of ADV')
plt.title(r'tcost in Basis Points of Trade Value; $\sigma$ = 10% and 25%');
plt.legend();
The Zipline VolumeShareSlippage model (API Reference) can be expressed in the style of the equation below:
$$\text{tcost} = 0.1 \left| \frac{X}{VT} \right|^{2}$$
where $X$ is the number of shares you would like to trade; $T$ is the time width of the bar in % of a day; and $V$ is the ADV of the stock.
def tc_Zipline_vss_bps(pct_adv, minutes=1.0, minutes_in_day=60*6.5):
day_frac = minutes / minutes_in_day
tc_pct = 0.1 * abs(pct_adv/day_frac)**2
return tc_pct*10000
To reproduce the given examples, we trade over a single one-minute bar:
print(tc_Zipline_vss_bps(pct_adv=0.1/390, minutes=1))
print(tc_Zipline_vss_bps(pct_adv=0.25/390, minutes=1))
10.000000000000002 62.5
As this model is convex, it gives very high estimates for large trades.
print(tc_Zipline_vss_bps(pct_adv=0.1, minutes=0.1*60*6.5))
print(tc_Zipline_vss_bps(pct_adv=0.1, minutes=0.2*60*6.5))
print(tc_Zipline_vss_bps(pct_adv=0.1, minutes=0.5*60*6.5))
1000.0 250.0 40.00000000000001
Though for small trades, the results are comparable.
print(tc_bps(pct_adv=0.005, minutes=30, annual_vol_pct=0.2))
print(tc_Zipline_vss_bps(pct_adv=0.005, minutes=30))
3.81208329371434 4.2250000000000005
x = np.linspace(0.0001, 0.01)
plt.plot(x*100,tc_bps(x, 30, 0.20), label=r"Almgren $\sigma$ = 20%");
plt.plot(x*100,tc_bps(x, 30, 0.40), label=r"Almgren $\sigma$ = 40%");
plt.plot(x*100,tc_Zipline_vss_bps(x, minutes=30),label="Zipline VSS");
plt.plot(x*100,kissell(5*10**6,0.20, 2000*10**3, x*2000*10**3), label=r"Kissell $\sigma$ = 20%");
plt.plot(x*100,kissell(5*10**6,0.40, 2000*10**3, x*2000*10**3), label=r"Kissell $\sigma$ = 40%", color='black');
plt.ylabel('tcost in bps')
plt.xlabel('Trade as % of ADV')
plt.title('tcost in Basis Points of Trade Value; time = 30 minutes');
plt.legend();
Almgren, R., Thum, C., Hauptmann, E., & Li, H. (2005). Direct estimation of equity market impact. Risk, 18(7), 58-62.
Bennett, C., & Gil, M. A. (2012, February). Measuring historic volatility. Santander Equity Derivatives Europe. Retrieved from: http://www.todaysgroep.nl/media/236846/measuring_historic_volatility.pdf
Garman, M. B., & Klass, M. J. (1980). On the estimation of security price volatilities from historical data. Journal of business, 67-78.
Kissell, R., Glantz, M., & Malamut, R. (2004). A practical framework for estimating transaction costs and developing optimal trading strategies to achieve best execution. Finance Research Letters, 1(1), 35-46.
Zipline Slippage Model see: https://www.quantrocket.com/docs/api/#zipline.finance.slippage.VolumeShareSlippage
This presentation is for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation for any security; nor does it constitute an offer to provide investment advisory or other services by QuantRocket LLC ("QuantRocket"). Nothing contained herein constitutes investment advice or offers any opinion with respect to the suitability of any security, and any views expressed herein should not be taken as advice to buy, sell, or hold any security or as an endorsement of any security or company. In preparing the information contained herein, the authors have not taken into account the investment needs, objectives, and financial circumstances of any particular investor. Any views expressed and data illustrated herein were prepared based upon information believed to be reliable at the time of publication. QuantRocket makes no guarantees as to their accuracy or completeness. All information is subject to change and may quickly become unreliable for various reasons, including changes in market conditions or economic circumstances.