Replicating Fama and French Factors
In this chapter, we provide a replication of the famous Fama and French factor portfolios. The Fama and French three-factor model (see Fama and French 1993) is a cornerstone of asset pricing. On top of the market factor represented by the traditional CAPM beta, the model includes the size and value factors to explain the cross section of returns. We introduce both factors in Chapter 9, and their definition remains the same. Size is the SMB factor (small-minus-big) that is long small firms and short large firms. The value factor is HML (high-minus-low) and is long in high book-to-market firms and short in low book-to-market counterparts.
The current chapter relies on this set of packages.

import pandas as pd
import numpy as np
import sqlite3
import statsmodels.formula.api as smf
Data Preparation
We use CRSP and Compustat as data sources, as we need exactly the same variables to compute the size and value factors in the way Fama and French do it. Hence, there is nothing new below and we only load data from our database introduced in Chapters 2-4.
tidy_finance = sqlite3.connect("data/tidy_finance.sqlite")

data_ff = (pd.read_sql_query(
    sql="SELECT permno, gvkey, month, ret_excess, mktcap, mktcap_lag, exchange FROM crsp_monthly",
    con=tidy_finance,
    parse_dates={"month": {"unit": "D", "origin": "unix"}})
  .dropna()
)
book_equity = (pd.read_sql_query(
    sql="SELECT gvkey, datadate, be FROM compustat",
    con=tidy_finance,
    parse_dates={"datadate": {"unit": "D", "origin": "unix"}})
  .dropna()
)
factors_ff_monthly = (pd.read_sql_query(
    sql="SELECT month, smb, hml FROM factors_ff_monthly",
    con=tidy_finance,
    parse_dates={"month": {"unit": "D", "origin": "unix"}})
  .dropna()
)
Yet when we start merging our data set for computing the premiums, there are a few differences compared to Chapter 9. First, Fama and French form their portfolios in June of year \(t\), so that the return of July is the first monthly return for the respective portfolio. For firm size, they consequently use the market capitalization recorded for June, which is then held constant until June of year \(t+1\).
Second, Fama and French also have a different protocol for computing the book-to-market ratio. They use market equity as of the end of year \(t-1\) and the book equity reported in year \(t-1\), i.e., the datadate falls within the last calendar year. Hence, the book-to-market ratio can be based on accounting information that is up to 18 months old. Market equity also does not necessarily reflect the same point in time as book equity.
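To make this timing concrete, consider the following small sketch (the dates are purely hypothetical examples; the construction mirrors the sorting_date logic used below):

# Illustrative sketch (hypothetical dates): fiscal years ending anywhere in
# calendar year t-1 are matched to portfolios formed in July of year t
example_datadates = pd.Series(pd.to_datetime(["1999-01-31", "1999-12-31"]))
sorting_dates = pd.to_datetime(
  (example_datadates.dt.year + 1).astype(str) + "0701", format="%Y%m%d"
)
print(sorting_dates)
# Both fiscal year ends map to 2000-07-01: the January 1999 report is about
# 17 months old at formation, the December 1999 report about 6 months old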
To implement all these time lags, we again employ the temporary sorting_date column. Notice that when we combine the information, we want to have a single observation per year and stock, since we are only interested in computing the breakpoints held constant for the entire year. We ensure this with a call to drop_duplicates() at the end of the corresponding chunk below.
me_ff = (data_ff
  .query("month.dt.month == 6")
  .assign(sorting_date=lambda x: x["month"] + pd.DateOffset(months=1))
  .get(["permno", "sorting_date", "mktcap"])
  .rename(columns={"mktcap": "me_ff"})
)
me_ff_dec = (data_ff
  .query("month.dt.month == 12")
  .assign(sorting_date=lambda x: x["month"] + pd.DateOffset(months=7))
  .get(["permno", "gvkey", "sorting_date", "mktcap"])
  .rename(columns={"mktcap": "bm_me"})
)
bm_ff = (book_equity
  .assign(
    sorting_date=lambda x: pd.to_datetime(
      (x["datadate"].dt.year + 1).astype(str) + "0701", format="%Y%m%d"
    )
  )
  .rename(columns={"be": "bm_be"})
  .merge(me_ff_dec, how="inner", on=["gvkey", "sorting_date"])
  .assign(bm_ff=lambda x: x["bm_be"] / x["bm_me"])
  .get(["permno", "sorting_date", "bm_ff"])
  .drop_duplicates(subset=["permno", "sorting_date"])
)
variables_ff = (me_ff
  .merge(bm_ff, how="inner", on=["permno", "sorting_date"])
)
Portfolio Sorts
Next, we construct our portfolios with an adjusted assign_portfolio() function. Fama and French rely on NYSE-specific breakpoints: they form two portfolios in the size dimension at the median and three portfolios in the book-to-market dimension at the 30%- and 70%-percentiles, and they use independent sorts. The sorts for book-to-market require an adjustment to the function from Chapter 9 because the equally spaced breakpoints it produces are not the right ones here. Instead of n_portfolios, we now specify percentiles, which takes the sequence of breakpoints as an argument in the function's call. Specifically, we pass percentiles=[0, 0.3, 0.7, 1] to the function. Additionally, as a first step, we perform a join with our return data to ensure that we only use traded stocks when computing the breakpoints.
def assign_portfolio(data, sorting_variable, percentiles):
    breakpoints = (data
      .query("exchange == 'NYSE'")
      .get(sorting_variable)
      .quantile(percentiles, interpolation="linear")
    )
    breakpoints.iloc[0] = -np.Inf
    breakpoints.iloc[breakpoints.size - 1] = np.Inf

    assigned_portfolios = pd.cut(
      data[sorting_variable],
      bins=breakpoints,
      labels=pd.Series(range(1, breakpoints.size)),
      include_lowest=True
    )

    return assigned_portfolios
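To see the function in action, here is a toy example with made-up data (the DataFrame below is purely hypothetical):

# Toy example (hypothetical data): NYSE stocks define the median breakpoint,
# but all stocks are assigned to portfolios
toy = pd.DataFrame({
  "exchange": ["NYSE", "NYSE", "NYSE", "NASDAQ"],
  "me_ff": [100.0, 200.0, 300.0, 50.0]
})
print(assign_portfolio(toy, "me_ff", [0, 0.5, 1]))
# The NYSE median is 200, so the first two stocks and the NASDAQ stock land in
# portfolio 1, while only the largest stock lands in portfolio 2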
portfolios_ff = (variables_ff
  .merge(data_ff, how="inner",
    left_on=["permno", "sorting_date"], right_on=["permno", "month"])
  .groupby("sorting_date", group_keys=False)
  .apply(lambda x: x
    .assign(
      portfolio_me=assign_portfolio(x, 'me_ff', [0, 0.5, 1]),
      portfolio_bm=assign_portfolio(x, 'bm_ff', [0, 0.3, 0.7, 1])
    )
  )
  .reset_index(drop=True)
  .get(["permno", "sorting_date", "portfolio_me", "portfolio_bm"])
)
Next, we merge the portfolios to the return data for the rest of the year. To implement this step, we create a new column sorting_date in our return data: we set the date to sort on to July of year \(t-1\) if the month is June of year \(t\) or earlier, and to July of year \(t\) if the month is July or later.
portfolios_ff = (data_ff
  .assign(sorting_date=lambda x: pd.to_datetime(
    x["month"].apply(lambda x: str(x.year - 1) + "0701" if x.month <= 6
      else str(x.year) + "0701")))
  .merge(portfolios_ff, how="inner", on=["permno", "sorting_date"])
)
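A quick sanity check of this mapping, using two hypothetical months, confirms the convention:

# Sanity check (hypothetical months): June 2000 sorts on July 1999,
# while July 2000 sorts on July 2000
months = pd.Series(pd.to_datetime(["2000-06-01", "2000-07-01"]))
print(months.apply(
  lambda m: str(m.year - 1) + "0701" if m.month <= 6 else str(m.year) + "0701"
))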
Fama and French Factor Returns
Equipped with the return data and the assigned portfolios, we can now compute the value-weighted average return for each of the six portfolios. Then, we form the Fama and French factors. For the size factor (i.e., SMB), we go long in the three small portfolios and short the three large portfolios by taking an average across either group. For the value factor (i.e., HML), we go long in the two high book-to-market portfolios and short the two low book-to-market portfolios, again weighting them equally.
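Denoting the six value-weighted portfolios by size (\(S\) for small, \(B\) for big) and book-to-market (\(H\), \(N\), \(L\) for high, neutral, and low), the construction in the code below corresponds to the standard definitions

\[SMB = \frac{SH + SN + SL}{3} - \frac{BH + BN + BL}{3}, \qquad HML = \frac{SH + BH}{2} - \frac{SL + BL}{2}.\]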
factors_ff_monthly_replicated = (portfolios_ff
  .groupby(["portfolio_me", "portfolio_bm", "month"])
  .apply(lambda x: pd.Series({
    "ret": np.average(x["ret_excess"], weights=x["mktcap_lag"])
  }))
  .reset_index()
  .groupby("month")
  .apply(lambda x: pd.Series({
    "smb_replicated": (x["ret"][x["portfolio_me"] == 1].mean() -
      x["ret"][x["portfolio_me"] == 2].mean()),
    "hml_replicated": (x["ret"][x["portfolio_bm"] == 3].mean() -
      x["ret"][x["portfolio_bm"] == 1].mean())
  }))
  .round(decimals=4)
)
Replication Evaluation
In the previous section, we replicated the size and value premiums following the procedure outlined by Fama and French. However, we did not follow their procedure strictly. The final question is then: how close did we get? We answer this question by regressing the original factor series on our replicated series using smf.ols(). If we did a good job, we should see a statistically insignificant intercept (no evidence of systematic error), a coefficient close to 1 (indicating a high correlation), and an adjusted R-squared close to 1 (indicating a high proportion of explained variance).
test = (factors_ff_monthly
  .merge(factors_ff_monthly_replicated, how="inner", on="month")
)
The results for the SMB factor are quite convincing as all three criteria outlined above are met and the coefficient and R-squared are at 99%.
model_smb = (smf.ols(formula="smb ~ smb_replicated", data=test)
  .fit()
  .summary(slim=True)
)
for table in model_smb.tables:
    print(table)
OLS Regression Results
==============================================================================
Dep. Variable: smb R-squared: 0.986
Model: OLS Adj. R-squared: 0.986
No. Observations: 726 F-statistic: 5.223e+04
Covariance Type: nonrobust Prob (F-statistic): 0.00
==============================================================================
==================================================================================
coef std err t P>|t| [0.025 0.975]
----------------------------------------------------------------------------------
Intercept -0.0001 0.000 -0.993 0.321 -0.000 0.000
smb_replicated 0.9957 0.004 228.540 0.000 0.987 1.004
==================================================================================
The replication of the HML factor is also a success, although at a slightly lower level with coefficient and R-squared around 96%.
model_hml = (smf.ols(formula="hml ~ hml_replicated", data=test)
  .fit()
  .summary(slim=True)
)
for table in model_hml.tables:
    print(table)
OLS Regression Results
==============================================================================
Dep. Variable: hml R-squared: 0.958
Model: OLS Adj. R-squared: 0.958
No. Observations: 726 F-statistic: 1.671e+04
Covariance Type: nonrobust Prob (F-statistic): 0.00
==============================================================================
==================================================================================
coef std err t P>|t| [0.025 0.975]
----------------------------------------------------------------------------------
Intercept 0.0003 0.000 1.431 0.153 -0.000 0.001
hml_replicated 0.9656 0.007 129.258 0.000 0.951 0.980
==================================================================================
The evidence hence allows us to conclude that we did a relatively good job in replicating the original Fama-French premiums, although we cannot see their underlying code. From our perspective, a perfect match is only possible with additional information from the maintainers of the original data.
Exercises
- Fama and French (1993) claim that their sample excludes firms until they have appeared in Compustat for two years. Implement this additional filter and assess whether it improves your replication.
- On his homepage, Kenneth French provides instructions on how to construct the most common variables used for portfolio sorts. Pick one of them, e.g., OP (operating profitability), and try to replicate the portfolio sort return time series provided on his homepage.