Replicating Fama and French Factors
In this chapter, we provide a replication of the famous Fama and French factor portfolios. The Fama and French factor models are a cornerstone of empirical asset pricing (Fama and French 2015). On top of the market factor represented by the traditional CAPM beta, the three factor model includes the size and value factors to explain the cross section of returns. Its successor, the five factor model, additionally includes profitability and investment as explanatory factors.
We start with the three factor model. We already introduced the size and value factors in Value and Bivariate Sorts, and their definition remains the same: size is the SMB factor (small-minus-big) that is long small firms and short large firms. The value factor is HML (high-minus-low) and is long in high book-to-market firms and short in low book-to-market counterparts.
After the replication of the three factor model, we move to the five factors by constructing the profitability factor RMW (robust-minus-weak) as the difference between the returns of firms with high and low operating profitability and the investment factor CMA (conservative-minus-aggressive) as the difference between firms with high versus low investment rates.
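Formally, the five factor model describes the excess return of asset \(i\) in month \(t\) as
\[ R_{i,t} - R_{F,t} = a_i + b_i (R_{M,t} - R_{F,t}) + s_i \text{SMB}_t + h_i \text{HML}_t + r_i \text{RMW}_t + c_i \text{CMA}_t + e_{i,t}, \]
where \(R_{M,t} - R_{F,t}\) is the market excess return; the three factor model is the special case without the \(\text{RMW}_t\) and \(\text{CMA}_t\) terms.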
The current chapter relies on this set of Python packages.

import pandas as pd
import numpy as np
import sqlite3
import statsmodels.formula.api as smf
from regtabletotext import prettify_result
Data Preparation
We use CRSP and Compustat as data sources, as we need exactly the same variables to compute the size and value factors in the way Fama and French do it. Hence, there is nothing new below and we only load data from our database introduced in Accessing and Managing Financial Data and WRDS, CRSP, and Compustat.
tidy_finance = sqlite3.connect(
  database="data/tidy_finance_python.sqlite"
)
crsp_monthly = (pd.read_sql_query(
    sql=("SELECT permno, gvkey, month, ret_excess, mktcap, "
         "mktcap_lag, exchange FROM crsp_monthly"),
    con=tidy_finance,
    parse_dates={"month"})
  .dropna()
)
compustat = (pd.read_sql_query(
    sql="SELECT gvkey, datadate, be, op, inv FROM compustat",
    con=tidy_finance,
    parse_dates={"datadate"})
  .dropna()
)
factors_ff3_monthly = (pd.read_sql_query(
    sql="SELECT month, smb, hml FROM factors_ff3_monthly",
    con=tidy_finance,
    parse_dates={"month"})
  .dropna()
)
factors_ff5_monthly = (pd.read_sql_query(
    sql=("SELECT month, smb, hml, rmw, cma "
         "FROM factors_ff5_monthly"),
    con=tidy_finance,
    parse_dates={"month"})
  .dropna()
)
Yet when we start merging our data set for computing the premiums, there are a few differences to Value and Bivariate Sorts. First, Fama and French form their portfolios in June of year \(t\), whereby the returns of July are the first monthly return for the respective portfolio. For firm size, they consequently use the market capitalization recorded for June. It is then held constant until June of year \(t+1\).
Second, Fama and French also have a different protocol for computing the book-to-market ratio. They use market equity as of the end of year \(t - 1\) and the book equity reported in year \(t-1\), i.e., the datadate is within the last year. Hence, the book-to-market ratio can be based on accounting information that is up to 18 months old. Market equity also does not necessarily reflect the same point in time as book equity.
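To make this timing concrete, consider a toy example (the dates below are hypothetical and only illustrate the convention that the code in this chapter implements):

datadate = pd.Timestamp("2019-01-31")  # fiscal year ending in January 2019
# Book equity reported in year t-1 enters the sort in July of year t and
# is then held constant until June of year t+1
sorting_date = pd.to_datetime(str(datadate.year + 1) + "0701", format="%Y%m%d")
print(sorting_date)  # 2020-07-01, i.e., the accounting data is ~17 months old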
To implement all these time lags, we again employ the temporary sorting_date column. Notice that when we combine the information, we want to have a single observation per year and stock since we are only interested in computing the breakpoints held constant for the entire year. We ensure this by a call of drop_duplicates() at the end of the chunk below.
size = (crsp_monthly
  .query("month.dt.month == 6")
  .assign(
    sorting_date=lambda x: (x["month"] + pd.DateOffset(months=1))
  )
  .get(["permno", "exchange", "sorting_date", "mktcap"])
  .rename(columns={"mktcap": "size"})
)
market_equity = (crsp_monthly
  .query("month.dt.month == 12")
  .assign(
    sorting_date=lambda x: (x["month"] + pd.DateOffset(months=7))
  )
  .get(["permno", "gvkey", "sorting_date", "mktcap"])
  .rename(columns={"mktcap": "me"})
)
book_to_market = (compustat
  .assign(
    sorting_date=lambda x: (pd.to_datetime(
      (x["datadate"].dt.year + 1).astype(str) + "0701", format="%Y%m%d"))
  )
  .merge(market_equity, how="inner", on=["gvkey", "sorting_date"])
  .assign(bm=lambda x: x["be"] / x["me"])
  .get(["permno", "sorting_date", "me", "bm"])
)
sorting_variables = (size
  .merge(book_to_market, how="inner", on=["permno", "sorting_date"])
  .dropna()
  .drop_duplicates(subset=["permno", "sorting_date"])
)
Portfolio Sorts
Next, we construct our portfolios with an adjusted assign_portfolio() function. Fama and French rely on NYSE-specific breakpoints: they form two portfolios in the size dimension at the median and three portfolios in the dimension of book-to-market at the 30%- and 70%-percentiles, and they use independent sorts. The sorts for book-to-market require an adjustment to the function in Value and Bivariate Sorts because it would not produce the right breakpoints. Instead of n_portfolios, we now specify percentiles, which takes the breakpoint-sequence as an object specified in the function's call. Specifically, we give percentiles = [0, 0.3, 0.7, 1] to the function. Additionally, as a first step, we perform a join with our return data to ensure that we only use traded stocks when computing the breakpoints.
def assign_portfolio(data, sorting_variable, percentiles):
    # Compute breakpoints from NYSE stocks only
    breakpoints = (data
      .query("exchange == 'NYSE'")
      .get(sorting_variable)
      .quantile(percentiles, interpolation="linear")
    )
    # Open up the outer bins so that every stock is assigned
    breakpoints.iloc[0] = -np.Inf
    breakpoints.iloc[breakpoints.size-1] = np.Inf

    # Sort all stocks (not only NYSE) into the breakpoint bins
    assigned_portfolios = pd.cut(
      data[sorting_variable],
      bins=breakpoints,
      labels=pd.Series(range(1, breakpoints.size)),
      include_lowest=True
    )

    return assigned_portfolios
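Before we apply assign_portfolio() to the actual sample, a quick sanity check on synthetic data (the numbers are made up and not part of the replication) illustrates that the breakpoints come from NYSE stocks only, while stocks from all exchanges are assigned to portfolios:

demo = pd.DataFrame({
  "exchange": ["NYSE", "NYSE", "NYSE", "NYSE", "NASDAQ", "NASDAQ"],
  "size": [100.0, 200.0, 300.0, 400.0, 50.0, 500.0]
})
# The NYSE median is 250, so the NASDAQ stock with size 50 lands in
# portfolio 1 and the one with size 500 in portfolio 2
demo["portfolio_size"] = assign_portfolio(demo, "size", [0, 0.5, 1])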
portfolios = (sorting_variables
  .groupby("sorting_date", group_keys=False)
  .apply(lambda x: x
    .assign(
      portfolio_size=assign_portfolio(x, "size", [0, 0.5, 1]),
      portfolio_bm=assign_portfolio(x, "bm", [0, 0.3, 0.7, 1])
    )
  )
  .reset_index(drop=True)
  .get(["permno", "sorting_date", "portfolio_size", "portfolio_bm"])
)
Next, we merge the portfolios to the return data for the rest of the year. To implement this step, we create a new column sorting_date in our return data by setting the date to sort on to July of year \(t-1\) if the month is June (of year \(t\)) or earlier, or to July of year \(t\) if the month is July or later.
portfolios = (crsp_monthly
  .assign(
    sorting_date=lambda x: (pd.to_datetime(
      x["month"].apply(lambda x: str(x.year - 1) +
        "0701" if x.month <= 6 else str(x.year) + "0701")))
  )
  .merge(portfolios, how="inner", on=["permno", "sorting_date"])
)
Fama and French Three Factor Model
Equipped with the return data and the assigned portfolios, we can now compute the value-weighted average return for each of the six portfolios. Then, we form the Fama and French factors. For the size factor (i.e., SMB), we go long in the three small portfolios and short the three large portfolios by taking an average across either group. For the value factor (i.e., HML), we go long in the two high book-to-market portfolios and short the two low book-to-market portfolios, again weighting them equally.
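In the notation of Kenneth French's data library, with \(S\) and \(B\) denoting small and big stocks and \(H\), \(M\), and \(L\) denoting high, medium, and low book-to-market portfolios, the two factors read
\[ \text{SMB} = \frac{1}{3}(SH + SM + SL) - \frac{1}{3}(BH + BM + BL), \]
\[ \text{HML} = \frac{1}{2}(SH + BH) - \frac{1}{2}(SL + BL). \]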
factors_replicated = (portfolios
  .groupby(["portfolio_size", "portfolio_bm", "month"])
  .apply(lambda x: pd.Series({
    "ret": np.average(x["ret_excess"], weights=x["mktcap_lag"])})
  )
  .reset_index()
  .groupby("month")
  .apply(lambda x: pd.Series({
    "smb_replicated": (
      x["ret"][x["portfolio_size"] == 1].mean() -
      x["ret"][x["portfolio_size"] == 2].mean()),
    "hml_replicated": (
      x["ret"][x["portfolio_bm"] == 3].mean() -
      x["ret"][x["portfolio_bm"] == 1].mean())
  }))
)
Replication Evaluation
In the previous section, we replicated the size and value premiums following the procedure outlined by Fama and French. The final question is then: how close did we get? We answer this question by looking at the two time-series estimates in a regression analysis using smf.ols(). If we did a good job, then we should see a non-significant intercept (rejecting the notion of systematic error), a coefficient close to 1 (indicating a high correlation), and an adjusted R-squared close to 1 (indicating a high proportion of explained variance).
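For each factor, the regression takes the form (shown here for SMB)
\[ \text{SMB}_t = \alpha + \beta \cdot \text{SMB}^{\text{replicated}}_t + \varepsilon_t, \]
so the three criteria translate into \(\alpha \approx 0\), \(\beta \approx 1\), and an adjusted R-squared close to one.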
test = (factors_ff3_monthly
  .merge(factors_replicated, how="inner", on="month")
  .round(decimals=4)
)
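As a quick, informal check before running the regressions (this snippet is an addition to the original analysis), the pairwise correlations between the official and replicated factors already indicate how close we get:

# Correlations between official and replicated factors
test.get(["smb", "smb_replicated", "hml", "hml_replicated"]).corr()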
To test the success of the SMB factor, we hence run the following regression:
model_smb = (smf.ols(
    formula="smb ~ smb_replicated",
    data=test
  )
  .fit()
)
prettify_result(model_smb)
OLS Model:
smb ~ smb_replicated
Coefficients:
Estimate Std. Error Statistic p-Value
Intercept -0.000 0.000 -1.371 0.171
smb_replicated 0.994 0.004 229.549 0.000
Summary statistics:
- Number of observations: 726
- Multiple R-squared: 0.986, Adjusted R-squared: 0.986
- F-statistic: 52692.771 on 1 and 724 DF, p-value: 0.000
The results for the SMB factor are quite convincing, as all three criteria outlined above are met: the coefficient is 0.99 and the R-squared is at 99%.
model_hml = (smf.ols(
    formula="hml ~ hml_replicated",
    data=test
  )
  .fit()
)
prettify_result(model_hml)
OLS Model:
hml ~ hml_replicated
Coefficients:
Estimate Std. Error Statistic p-Value
Intercept 0.000 0.000 1.655 0.098
hml_replicated 0.963 0.007 133.449 0.000
Summary statistics:
- Number of observations: 726
- Multiple R-squared: 0.961, Adjusted R-squared: 0.961
- F-statistic: 17808.572 on 1 and 724 DF, p-value: 0.000
The replication of the HML factor is also a success, although at a slightly lower level, with a coefficient of 0.96 and an R-squared of around 96%.
The evidence hence allows us to conclude that we did a relatively good job in replicating the original Fama-French size and value premiums, although we do not know their underlying code. From our perspective, a perfect match is only possible with additional information from the maintainers of the original data.
Fama and French Five Factor Model
Now, let us move to the replication of the five factor model. We create the table other_sorting_variables, which extends the book-to-market table from above with the additional characteristics operating profitability op and investment inv. Note that the dropna() statement yields different sample sizes, as some firms with be values might not have op or inv values.
other_sorting_variables = (compustat
  .assign(
    sorting_date=lambda x: (pd.to_datetime(
      (x["datadate"].dt.year + 1).astype(str) + "0701", format="%Y%m%d"))
  )
  .merge(market_equity, how="inner", on=["gvkey", "sorting_date"])
  .assign(bm=lambda x: x["be"] / x["me"])
  .get(["permno", "sorting_date", "me", "bm", "op", "inv"])
)
sorting_variables = (size
  .merge(other_sorting_variables, how="inner", on=["permno", "sorting_date"])
  .dropna()
  .drop_duplicates(subset=["permno", "sorting_date"])
)
At each sorting date, we independently sort all stocks into the two size portfolios. The value, profitability, and investment portfolios, on the other hand, are the results of dependent sorts based on the size portfolios. We then merge the portfolios to the return data for the rest of the year just as above.
portfolios = (sorting_variables
  .groupby("sorting_date", as_index=False)
  .apply(lambda x: x
    .assign(
      portfolio_size=assign_portfolio(x, "size", [0, 0.5, 1]))
  )
  .groupby(["sorting_date", "portfolio_size"], as_index=False)
  .apply(lambda x: x
    .assign(
      portfolio_bm=assign_portfolio(x, "bm", [0, 0.3, 0.7, 1]),
      portfolio_op=assign_portfolio(x, "op", [0, 0.3, 0.7, 1]),
      portfolio_inv=assign_portfolio(x, "inv", [0, 0.3, 0.7, 1])
    )
  )
  .get(["permno", "sorting_date",
        "portfolio_size", "portfolio_bm",
        "portfolio_op", "portfolio_inv"])
)
portfolios = (crsp_monthly
  .assign(
    sorting_date=lambda x: (pd.to_datetime(
      x["month"].apply(lambda x: str(x.year - 1) +
        "0701" if x.month <= 6 else str(x.year) + "0701")))
  )
  .merge(portfolios, how="inner", on=["permno", "sorting_date"])
)
Now, we want to construct each of the factors, but this time the size factor actually comes last because it is the result of averaging across all other factor portfolios. This dependency is the reason why we keep the table with value-weighted portfolio returns as a separate object that we reuse later. We construct the value factor, HML, as above by going long the two portfolios with high book-to-market ratios and shorting the two portfolios with low book-to-market.
portfolios_value = (portfolios
  .groupby(["portfolio_size", "portfolio_bm", "month"], as_index=False)
  .apply(lambda x: pd.Series({
    "ret": np.average(x["ret_excess"], weights=x["mktcap_lag"])})
  )
)
factors_value = (portfolios_value
  .groupby("month", as_index=False)
  .apply(lambda x: pd.Series({
    "hml_replicated": (
      x["ret"][x["portfolio_bm"] == 3].mean() -
      x["ret"][x["portfolio_bm"] == 1].mean())})
  )
)
For the profitability factor, RMW, we take a long position in the two high profitability portfolios and a short position in the two low profitability portfolios.
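In the same notation as above, with \(R\) and \(W\) denoting robust and weak operating profitability, the factor is
\[ \text{RMW} = \frac{1}{2}(SR + BR) - \frac{1}{2}(SW + BW). \]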
portfolios_profitability = (portfolios
  .groupby(["portfolio_size", "portfolio_op", "month"], as_index=False)
  .apply(lambda x: pd.Series({
    "ret": np.average(x["ret_excess"], weights=x["mktcap_lag"])})
  )
)
factors_profitability = (portfolios_profitability
  .groupby("month", as_index=False)
  .apply(lambda x: pd.Series({
    "rmw_replicated": (
      x["ret"][x["portfolio_op"] == 3].mean() -
      x["ret"][x["portfolio_op"] == 1].mean())})
  )
)
For the investment factor, CMA, we go long the two low investment portfolios and short the two high investment portfolios.
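Analogously, with \(C\) and \(A\) denoting conservative and aggressive investment, we have
\[ \text{CMA} = \frac{1}{2}(SC + BC) - \frac{1}{2}(SA + BA). \]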
portfolios_investment = (portfolios
  .groupby(["portfolio_size", "portfolio_inv", "month"], as_index=False)
  .apply(lambda x: pd.Series({
    "ret": np.average(x["ret_excess"], weights=x["mktcap_lag"])})
  )
)
factors_investment = (portfolios_investment
  .groupby("month", as_index=False)
  .apply(lambda x: pd.Series({
    "cma_replicated": (
      x["ret"][x["portfolio_inv"] == 1].mean() -
      x["ret"][x["portfolio_inv"] == 3].mean())})
  )
)
Finally, the size factor, SMB, is constructed by going long the nine small portfolios and short the nine large portfolios.
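Equivalently, the five factor SMB is the average of the size factors implied by the three bivariate sorts,
\[ \text{SMB} = \frac{1}{3}\left(\text{SMB}_{B/M} + \text{SMB}_{OP} + \text{SMB}_{INV}\right), \]
which is exactly what averaging across the concatenated portfolio returns below implements.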
factors_size = (
  pd.concat([portfolios_value, portfolios_profitability,
             portfolios_investment], ignore_index=True)
  .groupby("month", as_index=False)
  .apply(lambda x: pd.Series({
    "smb_replicated": (
      x["ret"][x["portfolio_size"] == 1].mean() -
      x["ret"][x["portfolio_size"] == 2].mean())})
  )
)
We then join all factors together into one data frame and construct again a suitable table to run tests for evaluating our replication.
factors_replicated = (factors_size
  .merge(factors_value, how="outer", on="month")
  .merge(factors_profitability, how="outer", on="month")
  .merge(factors_investment, how="outer", on="month")
)
test = (factors_ff5_monthly
  .merge(factors_replicated, how="inner", on="month")
  .round(decimals=4)
)
Let us start the replication evaluation again with the size factor:
model_smb = (smf.ols(
    formula="smb ~ smb_replicated",
    data=test
  )
  .fit()
)
prettify_result(model_smb)
OLS Model:
smb ~ smb_replicated
Coefficients:
Estimate Std. Error Statistic p-Value
Intercept -0.00 0.000 -1.541 0.124
smb_replicated 0.97 0.004 222.007 0.000
Summary statistics:
- Number of observations: 714
- Multiple R-squared: 0.986, Adjusted R-squared: 0.986
- F-statistic: 49287.319 on 1 and 712 DF, p-value: 0.000
The results for the SMB factor are again quite convincing, as all three criteria outlined above are met: the coefficient is 0.97 and the R-squared is at 99%.
model_hml = (smf.ols(
    formula="hml ~ hml_replicated",
    data=test
  )
  .fit()
)
prettify_result(model_hml)
OLS Model:
hml ~ hml_replicated
Coefficients:
Estimate Std. Error Statistic p-Value
Intercept 0.000 0.00 1.653 0.099
hml_replicated 0.992 0.01 96.798 0.000
Summary statistics:
- Number of observations: 714
- Multiple R-squared: 0.929, Adjusted R-squared: 0.929
- F-statistic: 9369.905 on 1 and 712 DF, p-value: 0.000
The replication of the HML factor is also a success, although with a slightly higher coefficient of 0.99 and an R-squared of around 93%.
model_rmw = (smf.ols(
    formula="rmw ~ rmw_replicated",
    data=test
  )
  .fit()
)
prettify_result(model_rmw)
OLS Model:
rmw ~ rmw_replicated
Coefficients:
Estimate Std. Error Statistic p-Value
Intercept 0.000 0.000 0.327 0.744
rmw_replicated 0.954 0.009 107.444 0.000
Summary statistics:
- Number of observations: 714
- Multiple R-squared: 0.942, Adjusted R-squared: 0.942
- F-statistic: 11544.147 on 1 and 712 DF, p-value: 0.000
We are also able to replicate the RMW factor quite well with a coefficient of 0.95 and an R-squared around 94%.
model_cma = (smf.ols(
    formula="cma ~ cma_replicated",
    data=test
  )
  .fit()
)
prettify_result(model_cma)
OLS Model:
cma ~ cma_replicated
Coefficients:
Estimate Std. Error Statistic p-Value
Intercept 0.001 0.000 3.887 0.0
cma_replicated 0.965 0.008 118.368 0.0
Summary statistics:
- Number of observations: 714
- Multiple R-squared: 0.952, Adjusted R-squared: 0.952
- F-statistic: 14011.077 on 1 and 712 DF, p-value: 0.000
Finally, the CMA factor also replicates well with a coefficient of 0.97 and an R-squared around 95%.
Overall, our approach seems to replicate the Fama-French five factor premiums just as well as the three factor premiums.
Exercises
- Fama and French (1993) claim that their sample excludes firms until they have appeared in Compustat for two years. Implement this additional filter and compare the improvements of your replication effort.
- On his homepage, Kenneth French provides instructions on how to construct the most common variables used for portfolio sorts. Try to replicate the univariate portfolio sort return time series for E/P (earnings / price) provided on his homepage and evaluate your replication effort using regressions.