Replicating Fama and French Factors

In this chapter, we provide a replication of the famous Fama and French factor portfolios. The Fama and French three-factor model (see Fama and French 1993) is a cornerstone of asset pricing. On top of the market factor represented by the traditional CAPM beta, the model includes the size and value factors to explain the cross-section of returns. We introduce both factors in Chapter 9, and their definitions remain the same: size is the SMB factor (small-minus-big), which is long small firms and short large firms, while the value factor HML (high-minus-low) is long high book-to-market firms and short their low book-to-market counterparts.
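In its standard form, the model explains the excess return of stock \(i\) through its exposures to the three factors:

\[ r_{i,t} - r_{f,t} = \alpha_i + \beta_i \left(r_{m,t} - r_{f,t}\right) + s_i \, SMB_t + h_i \, HML_t + \varepsilon_{i,t}, \]

where \(r_{m,t} - r_{f,t}\) is the excess market return and \(s_i\) and \(h_i\) denote the loadings on the size and value factors.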

The current chapter relies on this set of packages.

import pandas as pd
import numpy as np
import sqlite3

import statsmodels.formula.api as smf

Data Preparation

We use CRSP and Compustat as data sources, since we need exactly the same variables that Fama and French use to compute the size and value factors. Hence, there is nothing new below, and we only load data from our database introduced in Chapters 2-4.

tidy_finance = sqlite3.connect("data/tidy_finance.sqlite")

data_ff = (pd.read_sql_query(
    sql="SELECT permno, gvkey, month, ret_excess, mktcap, mktcap_lag, exchange FROM crsp_monthly",
    con=tidy_finance,
    parse_dates={"month": {"unit": "D", "origin": "unix"}})
  .dropna()
)

book_equity = (pd.read_sql_query(
    sql="SELECT gvkey, datadate, be FROM compustat",
    con=tidy_finance,
    parse_dates={"datadate": {"unit": "D", "origin": "unix"}})
  .dropna()
)

factors_ff_monthly = (pd.read_sql_query(
    sql="SELECT month, smb, hml FROM factors_ff_monthly",
    con=tidy_finance,
    parse_dates={"month": {"unit": "D", "origin": "unix"}})
  .dropna()
)

Yet when we start merging our data set to compute the premiums, there are a few differences compared to Chapter 9. First, Fama and French form their portfolios in June of year \(t\), so that the July return is the first monthly return of the respective portfolio. For firm size, they consequently use the market capitalization recorded for June, which is then held constant until June of year \(t+1\).

Second, Fama and French follow a different protocol for computing the book-to-market ratio: they divide the book equity reported for year \(t-1\) (i.e., with a datadate within that calendar year) by the market equity as of the end of year \(t-1\). Hence, the book-to-market ratio can be based on accounting information that is up to 18 months old. Moreover, market equity does not necessarily reflect the same point in time as book equity.

To implement all these time lags, we again employ a temporary sorting_date column. Notice that when we combine the information, we want to have a single observation per year and stock, since we are only interested in computing the breakpoints that are held constant for the entire year. We ensure this with a call to drop_duplicates() at the end of the chunk below.

me_ff = (data_ff
  .query("month.dt.month == 6")
  .assign(sorting_date = lambda x: x["month"] + pd.DateOffset(months=1))
  .get(["permno", "sorting_date", "mktcap"])
  .rename(columns={"mktcap": "me_ff"})
)

me_ff_dec = (data_ff
  .query("month.dt.month == 12")
  .assign(sorting_date = lambda x: x["month"] + pd.DateOffset(months=7))
  .get(["permno", "gvkey", "sorting_date", "mktcap"])
  .rename(columns={"mktcap": "bm_me"})
)

bm_ff = (book_equity
  .assign(
    sorting_date = lambda x: pd.to_datetime((x["datadate"].dt.year + 1).astype(str) + "0701", format="%Y%m%d")
    )
  .rename(columns={"be": "bm_be"})
  .merge(me_ff_dec, 
         how="inner", 
         on=["gvkey", "sorting_date"])
  .assign(bm_ff = lambda x: x["bm_be"] / x["bm_me"])
  .get(["permno", "sorting_date", "bm_ff"])
)

variables_ff = (me_ff
  .merge(bm_ff, 
         how="inner", 
         on=["permno", "sorting_date"])
 )

Portfolio Sorts

Next, we construct our portfolios with an adjusted assign_portfolio() function. Fama and French rely on NYSE-specific breakpoints: they form two portfolios in the size dimension at the median and three portfolios in the book-to-market dimension at the 30%- and 70%-percentiles, and they use independent sorts. The sorts for book-to-market require an adjustment to the function from Chapter 9 because an equally spaced sequence of quantiles does not produce the right breakpoints. Instead of n_portfolios, we now specify percentiles, which takes the breakpoint sequence as an object specified in the function's call. Specifically, we pass percentiles = [0, 0.3, 0.7, 1] to the function. Additionally, we perform a join with our return data as a first step to ensure that we only use traded stocks when computing the breakpoints.

def assign_portfolio(data, sorting_variable, percentiles):
    # Compute the breakpoints from NYSE-listed stocks only
    breakpoints = (data
        .query("exchange == 'NYSE'")
        .get(sorting_variable)
        .quantile(percentiles, interpolation="linear")
        )
    # Open the outer bins so that every stock receives a portfolio
    breakpoints.iloc[0] = -np.inf
    breakpoints.iloc[breakpoints.size - 1] = np.inf
    # Assign all stocks (regardless of exchange) to the resulting bins
    assigned_portfolios = pd.cut(data[sorting_variable],
                                 bins=breakpoints,
                                 labels=pd.Series(range(1, breakpoints.size)),
                                 include_lowest=True)
    return assigned_portfolios
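To see the function in action, consider a small, purely hypothetical data frame (the values below are illustrative and not part of the original text). Only the NYSE stocks determine the 30%- and 70%-breakpoints, but all stocks are assigned to the resulting three book-to-market portfolios:

example = pd.DataFrame({
    "exchange": ["NYSE"] * 5 + ["NASDAQ"] * 2,
    "bm_ff": [0.1, 0.3, 0.5, 0.7, 0.9, 0.2, 0.8]
})
# Breakpoints come from the five NYSE stocks; the two NASDAQ
# stocks are nonetheless assigned to portfolios 1, 2, or 3
example["portfolio_bm"] = assign_portfolio(example, "bm_ff", [0, 0.3, 0.7, 1])
print(example)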

portfolios_ff = (variables_ff
  .merge(data_ff,
         how="inner",
         left_on=["permno", "sorting_date"], right_on=["permno", "month"])
  .groupby("sorting_date", group_keys=False)
  .apply(lambda x: x
         .assign(portfolio_me = assign_portfolio(x, 'me_ff', [0, 0.5, 1]),
                 portfolio_bm = assign_portfolio(x, 'bm_ff', [0, 0.3, 0.7, 1])))
  .reset_index(drop=True)
  .get(["permno", "sorting_date", "portfolio_me", "portfolio_bm"])
)

Next, we merge the portfolios to the return data for the rest of the year. To implement this step, we create a new column sorting_date in our return data: it is set to July of year \(t-1\) if the month is June of year \(t\) or earlier, and to July of year \(t\) if the month is July or later.

portfolios_ff = (data_ff
  .assign(sorting_date = lambda x: pd.to_datetime(
            x["month"].apply(lambda d: str(d.year - 1) + "0701" if d.month <= 6 else str(d.year) + "0701")))
  .merge(portfolios_ff, how="inner", on=["permno", "sorting_date"])
)
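As a side note, the apply()-based string construction above is easy to read but can be slow on a large panel. A vectorized sketch that produces the same sorting dates (our addition here, not part of the original code) is:

# Subtracting the boolean (month <= 6) shifts January-June
# observations back one year before building the July 1 date
sorting_year = data_ff["month"].dt.year - (data_ff["month"].dt.month <= 6)
sorting_date_alt = pd.to_datetime(sorting_year.astype(str) + "0701", format="%Y%m%d")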

Fama and French Factor Returns

Equipped with the return data and the assigned portfolios, we can now compute the value-weighted average return for each of the six portfolios. Then, we form the Fama and French factors. For the size factor (i.e., SMB), we go long in the three small portfolios and short the three large portfolios by taking an average across either group. For the value factor (i.e., HML), we go long in the two high book-to-market portfolios and short the two low book-to-market portfolios, again weighting them equally.
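Using S and B for the small and big size portfolios and L, N, and H for the low, neutral, and high book-to-market portfolios, the description above translates into

\[ SMB_t = \frac{1}{3}\left(SL_t + SN_t + SH_t\right) - \frac{1}{3}\left(BL_t + BN_t + BH_t\right), \]

\[ HML_t = \frac{1}{2}\left(SH_t + BH_t\right) - \frac{1}{2}\left(SL_t + BL_t\right), \]

where, e.g., \(SH_t\) denotes the value-weighted excess return in month \(t\) of small stocks with high book-to-market ratios.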

factors_ff_monthly_replicated = (portfolios_ff
  .groupby(["portfolio_me", "portfolio_bm", "month"])
  .apply(lambda x: pd.Series({"ret": np.average(x["ret_excess"], weights=x["mktcap_lag"])}))
  .reset_index()
  .groupby("month")
  .apply(lambda x: pd.Series({
    "smb_replicated": x["ret"][x["portfolio_me"] == 1].mean() - x["ret"][x["portfolio_me"] == 2].mean(),
    "hml_replicated": x["ret"][x["portfolio_bm"] == 3].mean() - x["ret"][x["portfolio_bm"] == 1].mean()
    }))
  .reset_index()
  .round(decimals=4)
)

Replication Evaluation

In the previous section, we replicated the size and value premiums following the procedure outlined by Fama and French. However, we did not follow their procedure strictly. The final question is then: how close did we get? We answer this question by regressing the original factor time series on our replicated series using smf.ols(). If we did a good job, we should see a non-significant intercept (rejecting the notion of a systematic error), a slope coefficient close to 1 (indicating a high correlation), and an adjusted R-squared close to 1 (indicating a high proportion of explained variance).

test = (factors_ff_monthly
  .merge(factors_ff_monthly_replicated, 
         how="inner", 
         on="month")
)
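Before turning to the formal regressions, a quick plausibility check is to look at the simple correlations between the original and replicated series; this check is our addition and not part of the procedure described above. Values close to one point toward a successful replication.

# Correlations between original and replicated factor series
test.get(["smb", "smb_replicated", "hml", "hml_replicated"]).corr().round(4)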

The results for the SMB factor are quite convincing as all three criteria outlined above are met and the coefficient and R-squared are at 99%.

model_smb = (smf.ols(formula="smb ~ smb_replicated", data=test)
  .fit()
  .summary(slim=True)
)
for table in model_smb.tables:
  print(table)
                            OLS Regression Results                            
==============================================================================
Dep. Variable:                    smb   R-squared:                       0.986
Model:                            OLS   Adj. R-squared:                  0.986
No. Observations:                 726   F-statistic:                 5.223e+04
Covariance Type:            nonrobust   Prob (F-statistic):               0.00
==============================================================================
==================================================================================
                     coef    std err          t      P>|t|      [0.025      0.975]
----------------------------------------------------------------------------------
Intercept         -0.0001      0.000     -0.993      0.321      -0.000       0.000
smb_replicated     0.9957      0.004    228.540      0.000       0.987       1.004
==================================================================================

The replication of the HML factor is also a success, although at a slightly lower level with coefficient and R-squared around 96%.

model_hml = (smf.ols(formula="hml ~ hml_replicated", data=test)
  .fit()
  .summary(slim=True)
)
for table in model_hml.tables:
  print(table)
                            OLS Regression Results                            
==============================================================================
Dep. Variable:                    hml   R-squared:                       0.958
Model:                            OLS   Adj. R-squared:                  0.958
No. Observations:                 726   F-statistic:                 1.671e+04
Covariance Type:            nonrobust   Prob (F-statistic):               0.00
==============================================================================
==================================================================================
                     coef    std err          t      P>|t|      [0.025      0.975]
----------------------------------------------------------------------------------
Intercept          0.0003      0.000      1.431      0.153      -0.000       0.001
hml_replicated     0.9656      0.007    129.258      0.000       0.951       0.980
==================================================================================

The evidence hence allows us to conclude that we did a relatively good job in replicating the original Fama-French premiums, although we cannot see their underlying code. From our perspective, a perfect match is only possible with additional information from the maintainers of the original data.

Exercises

  1. Fama and French (1993) claim that their sample excludes firms until they have appeared in Compustat for two years. Implement this additional filter and compare the improvements of your replication effort.
  2. On his homepage, Kenneth French provides instructions on how to construct the most common variables used for portfolio sorts. Pick one of them, e.g., OP (operating profitability), and try to replicate the time series of portfolio sort returns provided on his homepage.

References

Fama, Eugene F., and Kenneth R. French. 1993. “Common Risk Factors in the Returns on Stocks and Bonds.” Journal of Financial Economics 33 (1): 3–56. https://doi.org/10.1016/0304-405X(93)90023-5.