Beta Estimation
In this chapter, we introduce an important concept in financial economics: the exposure of an individual stock to changes in the market portfolio. According to the Capital Asset Pricing Model (CAPM) of Sharpe (1964), Lintner (1965), and Mossin (1966), cross-sectional variation in expected asset returns should be a function of the covariance between the excess return of the asset and the excess return on the market portfolio. The coefficient from a regression of a stock's excess returns on the market's excess returns is usually called the market beta. We show an estimation procedure for market betas. We do not go into detail about the foundations of market beta but simply refer to any treatment of the CAPM for further information. Instead, we provide details about all the functions that we use to compute the results. In particular, we leverage useful computational concepts: rolling-window estimation and parallelization.
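Before turning to estimation, recall that the market beta of asset \(i\) is the ratio of the covariance between the asset's excess return and the market's excess return to the variance of the market's excess return,
\[
\beta_i = \frac{\text{Cov}(r_{i,t} - r_{f,t},\ r_{m,t} - r_{f,t})}{\text{Var}(r_{m,t} - r_{f,t})},
\]
which is exactly the slope coefficient of the CAPM regression that we estimate below.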
We use the following Python packages throughout this chapter:

import pandas as pd
import numpy as np
import sqlite3
import statsmodels.formula.api as smf

from regtabletotext import prettify_result
from statsmodels.regression.rolling import RollingOLS
from plotnine import *
from mizani.breaks import date_breaks
from mizani.formatters import percent_format, date_format
from joblib import Parallel, delayed, cpu_count

Compared to previous chapters, we introduce statsmodels (Seabold and Perktold 2010) for regression analysis and for sliding-window regressions, and joblib (Team 2023) for parallelization.
Estimating Beta using Monthly Returns
The estimation procedure is based on a rolling-window estimation, where we may use either monthly or daily returns and different window lengths. Let us start by loading the monthly CRSP data from our SQLite database introduced in Accessing and Managing Financial Data and WRDS, CRSP, and Compustat.
tidy_finance = sqlite3.connect(
  database="data/tidy_finance_python.sqlite"
)

crsp_monthly = (pd.read_sql_query(
    sql=("SELECT permno, month, industry, ret_excess "
         "FROM crsp_monthly"),
    con=tidy_finance,
    parse_dates={"month"})
  .dropna()
)

factors_ff3_monthly = (pd.read_sql_query(
    sql="SELECT month, mkt_excess FROM factors_ff3_monthly",
    con=tidy_finance,
    parse_dates={"month"})
  .dropna()
)

crsp_monthly = (crsp_monthly
  .merge(factors_ff3_monthly, how="left", on="month")
)
To estimate the CAPM regression coefficients
\[
r_{i, t} - r_{f, t} = \alpha_i + \beta_i(r_{m, t}-r_{f,t})+\varepsilon_{i, t},
\]
we regress stock excess returns ret_excess on excess returns of the market portfolio mkt_excess. Python provides a simple solution to estimate (linear) models with the function smf.ols(). smf.ols() requires a formula as input that is specified in a compact symbolic form. An expression of the form y ~ model is interpreted as a specification that the response y is modeled by a linear predictor specified symbolically by model. Such a model consists of a series of terms separated by + operators. In addition to standard linear models, smf.ols() provides a lot of flexibility; you should check out the documentation for more information. To start, we restrict the data to the time series of observations in CRSP that correspond to Apple's stock (i.e., to permno 14593) and compute \(\hat\alpha_i\) as well as \(\hat\beta_i\).
model_beta = (smf.ols(
    formula="ret_excess ~ mkt_excess",
    data=crsp_monthly.query("permno == 14593"))
  .fit()
)
prettify_result(model_beta)
OLS Model:
ret_excess ~ mkt_excess
Coefficients:
Estimate Std. Error Statistic p-Value
Intercept 0.010 0.005 2.003 0.046
mkt_excess 1.389 0.111 12.467 0.000
Summary statistics:
- Number of observations: 504
- R-squared: 0.236, Adjusted R-squared: 0.235
- F-statistic: 155.422 on 1 and 502 DF, p-value: 0.000
smf.ols() returns an object of class RegressionModel; calling its fit() method, as we do above, yields a results object that contains all the information we usually care about with linear models. prettify_result() returns an overview of the estimated parameters. The output above indicates that Apple moves excessively with the market, as the estimated \(\hat\beta_i\) is above one (\(\hat\beta_i \approx 1.4\)).
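If you want to work with the estimates programmatically rather than through prettify_result(), the fitted object exposes the usual statsmodels accessors. The following lines are a minimal sketch relying only on standard attributes of statsmodels results objects:

model_beta.params      # estimated coefficients (Intercept and mkt_excess)
model_beta.bse         # standard errors of the coefficients
model_beta.rsquared    # R-squared of the regression
model_beta.conf_int()  # 95% confidence intervals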
Rolling-Window Estimation
After estimating the regression coefficients for one example, we scale the estimation of \(\beta_i\) to a whole different level and perform rolling-window estimations for the entire CRSP sample. The following function implements the CAPM regression for a data frame (or a part thereof) containing at least min_obs observations to avoid huge fluctuations if the time series is too short. If this condition is violated, that is, if the time series is too short, the function returns a missing value.
def roll_capm_estimation(data, window_size, min_obs):
    """Calculate rolling CAPM beta estimates for one stock."""

    data = data.sort_values("month")

    result = (RollingOLS.from_formula(
        formula="ret_excess ~ mkt_excess",
        data=data,
        window=window_size,
        min_nobs=min_obs
      )
      .fit()
      .params["mkt_excess"]
    )

    result.index = data.index

    return result
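Before applying the function to many stocks, a quick sanity check on a single time series can be helpful. This sketch is not part of the original workflow; it simply reuses Apple's permno from above together with the window settings we introduce below:

# Hypothetical sanity check: rolling betas for Apple only
apple_beta = roll_capm_estimation(
  data=crsp_monthly.query("permno == 14593"),
  window_size=60,  # five years of monthly data
  min_obs=48  # require at least 48 months of returns
)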
Before we attack the whole CRSP sample, let us focus on a couple of examples for well-known firms.
examples = pd.DataFrame({
  "permno": [14593, 10107, 93436, 17778],
  "company": ["Apple", "Microsoft", "Tesla", "Berkshire Hathaway"]
})

window_size = 60
min_obs = 48
We take a total of 5 years of data and require at least 48 months with return data to compute our betas. Check out the exercises if you want to compute beta for different time periods. It is actually quite simple to perform the rolling-window estimation for an arbitrary number of stocks, which we visualize in the following code chunk and the resulting Figure 1.
beta_example = (crsp_monthly
  .merge(examples, how="inner", on="permno")
  .groupby(["permno"], group_keys=False)
  .apply(
    lambda x: x.assign(
      beta=roll_capm_estimation(x, window_size, min_obs)
    )
  )
  .dropna()
)
plot_beta = (
  ggplot(beta_example,
         aes(x="month", y="beta", color="company", linetype="company")) +
  geom_line() +
  scale_x_datetime(breaks=date_breaks("5 year"),
                   labels=date_format("%Y")) +
  labs(x="", y="", color="", linetype="",
       title=("Monthly beta estimates for example stocks "
              "using 5 years of data"))
)
plot_beta.draw()

[Figure 1: Monthly beta estimates for example stocks using 5 years of data.]
Next, we perform the rolling-window estimation for the entire cross-section of stocks in the CRSP sample. For that purpose, we first identify the firm identifiers (permno) for which CRSP contains sufficiently many records.
valid_permnos = (crsp_monthly
  .groupby("permno")["permno"]
  .count()
  .reset_index(name="counts")
  .query("counts > @window_size + 1")
)
Next, we can apply the code snippet from the example above to compute rolling-window regression coefficients for all stocks. This is how to do it with the joblib package to use multiple cores. Note that we use cpu_count() to determine the number of cores available for parallelization, but keep one core free for other tasks, as some machines might freeze if all cores are busy with Python jobs.
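If you have not used joblib before, the following minimal sketch shows the Parallel/delayed pattern in isolation; the square function is just a toy stand-in for the estimation function that we define next.

# Toy example: distribute ten calls of square() across two workers
def square(x):
    return x ** 2

squares = Parallel(n_jobs=2)(delayed(square)(x) for x in range(10))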
def roll_capm_estimation_for_joblib(permno, group):
    """Calculate rolling CAPM beta estimates for one permno (for use with joblib)."""

    if "date" in group.columns:
        group = group.sort_values(by="date")
    else:
        group = group.sort_values(by="month")

    beta_values = (RollingOLS.from_formula(
        formula="ret_excess ~ mkt_excess",
        data=group,
        window=window_size,
        min_nobs=min_obs
      )
      .fit()
      .params["mkt_excess"]
    )

    result = pd.DataFrame(beta_values)
    result.columns = ["beta"]
    result["month"] = group["month"].values
    result["permno"] = permno

    try:
        # For daily data, keep only the last estimate within each month
        result["date"] = group["date"].values
        result = result[
          (result.groupby("month")["date"].transform("max")) == result["date"]
        ]
    except KeyError:
        pass

    return result
permno_groups = (crsp_monthly
  .merge(valid_permnos, how="inner", on="permno")
  .dropna()
  .groupby("permno", group_keys=False)
)

n_cores = cpu_count() - 1

beta_monthly = (
  pd.concat(
    Parallel(n_jobs=n_cores)
    (delayed(roll_capm_estimation_for_joblib)(name, group)
     for name, group in permno_groups)
  )
  .dropna()
  .rename(columns={"beta": "beta_monthly"})
)
Estimating Beta using Daily Returns
Before we provide some descriptive statistics of our beta estimates, we implement the estimation for the daily CRSP sample as well. Depending on the application, you might either use longer horizon beta estimates based on monthly data or shorter horizon estimates based on daily returns.
First, we load daily CRSP data. Note that the sample is large compared to the monthly data, so make sure to have enough memory available.
crsp_daily = (pd.read_sql_query(
    sql=("SELECT permno, month, date, ret_excess "
         "FROM crsp_daily"),
    con=tidy_finance,
    parse_dates={"month", "date"})
  .dropna()
)
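If the full daily table does not fit into memory on your machine, one possible workaround (not part of the original workflow) is chunked reading: passing chunksize to pd.read_sql_query() returns an iterator of smaller data frames that you can clean one at a time before concatenating. The chunk size below is an arbitrary illustration.

# Read the daily CRSP sample in chunks of one million rows
crsp_daily_chunks = pd.read_sql_query(
  sql=("SELECT permno, month, date, ret_excess "
       "FROM crsp_daily"),
  con=tidy_finance,
  parse_dates={"month", "date"},
  chunksize=1_000_000
)

# Drop incomplete rows chunk by chunk to keep peak memory usage lower
crsp_daily = pd.concat(
  chunk.dropna() for chunk in crsp_daily_chunks
)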
We also need the daily Fama-French market excess returns.
factors_ff3_daily = (pd.read_sql_query(
    sql="SELECT date, mkt_excess FROM factors_ff3_daily",
    con=tidy_finance,
    parse_dates={"date"})
  .dropna()
)
For the daily data, we consider around 3 months of data (i.e., 60 trading days) and require at least 50 observations. Again, we make sure to keep only relevant data to save memory. However, note that your machine might not have enough memory to read the whole daily CRSP sample; in this case, we refer you to the exercises and encourage you to work with loops.
window_size = 60
min_obs = 50

valid_permnos = (crsp_daily
  .groupby("permno")["permno"]
  .count()
  .reset_index(name="counts")
  .query("counts > @window_size + 1")
  .drop(columns="counts")
)
crsp_daily = (crsp_daily
  .merge(factors_ff3_daily, how="inner", on="date")
  .merge(valid_permnos, how="inner", on="permno")
)
Even though we could now just apply the function using groupby() on the whole CRSP sample, we advise against doing so, as it is computationally quite expensive. Remember that we have to perform rolling-window estimations across all stocks and time periods. However, this estimation problem is an ideal scenario to employ the power of parallelization. Parallelization means that we split the tasks that perform rolling-window estimations across different workers (or cores on your local machine).
permno_groups = (crsp_daily
  .merge(valid_permnos, how="inner", on="permno")
  .dropna()
  .groupby("permno", group_keys=False)
)

beta_daily = (
  pd.concat(
    Parallel(n_jobs=n_cores)
    (delayed(roll_capm_estimation_for_joblib)(name, group)
     for name, group in permno_groups)
  )
  .dropna()
  .rename(columns={"beta": "beta_daily"})
)
Comparing Beta Estimates
What is a typical value for stock betas? To get a sense of the distribution, we illustrate the dispersion of the estimated \(\hat\beta_i\) across different industries and across time below. Figure 2 shows that typical business models across industries imply different exposure to the general market economy. However, there are barely any firms that exhibit a negative exposure to the market factor.
beta_industries = (beta_monthly
  .merge(crsp_monthly, how="inner", on=["permno", "month"])
  .dropna(subset="beta_monthly")
  .groupby(["industry", "permno"])["beta_monthly"]
  .aggregate("mean")
  .reset_index()
)
industry_order = (beta_industries
  .groupby("industry")["beta_monthly"]
  .aggregate("median")
  .sort_values()
  .index.tolist()
)
plot_beta_industries = (
  ggplot(beta_industries,
         aes(x="industry", y="beta_monthly")) +
  geom_boxplot() +
  coord_flip() +
  scale_x_discrete(limits=industry_order) +
  labs(x="", y="",
       title="Firm-specific beta distributions by industry")
)
plot_beta_industries.draw()

[Figure 2: Firm-specific beta distributions by industry.]
Next, we illustrate the time-variation in the cross-section of estimated betas. Figure 3 shows the monthly deciles of estimated betas (based on monthly data) and indicates an interesting pattern: First, betas seem to vary over time in the sense that during some periods, there is a clear trend across all deciles. Second, the sample exhibits periods where the dispersion across stocks increases in the sense that the lower decile decreases and the upper decile increases, which indicates that for some stocks the correlation with the market increases while for others it decreases. Note also here: stocks with negative betas are a rare exception.
beta_quantiles = (beta_monthly
  .groupby("month")["beta_monthly"]
  .quantile(q=np.arange(0.1, 1.0, 0.1))
  .reset_index()
  .rename(columns={"level_1": "quantile"})
  .assign(
    quantile=lambda x: (x["quantile"] * 100).astype(int)
  )
  .dropna()
)
plot_beta_quantiles = (
  ggplot(beta_quantiles,
         aes(x="month", y="beta_monthly", color="factor(quantile)")) +
  geom_line() +
  scale_x_datetime(breaks=date_breaks("10 year"),
                   labels=date_format("%Y")) +
  labs(x="", y="", color="",
       title="Monthly deciles of estimated betas")
)
plot_beta_quantiles.draw()

[Figure 3: Monthly deciles of estimated betas.]
To compare the difference between daily and monthly data, we combine the beta estimates into a single table. Then, we use the table to plot a comparison of beta estimates for our example stocks in Figure 4.
beta = (beta_monthly
  .get(["permno", "month", "beta_monthly"])
  .merge(beta_daily.get(["permno", "month", "beta_daily"]),
         how="outer",
         on=["permno", "month"])
)
beta_comparison = (beta
  .merge(examples, on="permno")
  .melt(id_vars=["permno", "month", "company"],
        value_vars=["beta_monthly", "beta_daily"],
        var_name="name", value_name="value")
  .dropna()
)
plot_beta_comparison = (
  ggplot(beta_comparison,
         aes(x="month", y="value", color="name")) +
  geom_line() +
  facet_wrap("~company", ncol=1) +
  scale_x_datetime(breaks=date_breaks("10 year"),
                   labels=date_format("%Y")) +
  labs(x="", y="", color="",
       title=("Comparison of beta estimates using monthly "
              "and daily data"))
)
plot_beta_comparison.draw()

[Figure 4: Comparison of beta estimates using monthly and daily data.]
The estimates in Figure 4 look as expected. As you can see, how your beta estimates turn out depends heavily on the estimation window and data frequency.
Finally, we write the estimates to our database such that we can use them in later chapters.
(beta
  .to_sql(
    name="beta",
    con=tidy_finance,
    if_exists="replace",
    index=False
  )
)
Whenever you perform some kind of estimation, it also makes sense to do rough plausibility tests. A possible check is to plot the share of stocks with beta estimates over time. This descriptive statistic helps us discover potential errors in our data preparation or estimation procedure. For instance, suppose there was a gap in our output where we do not have any betas. In this case, we would have to go back and check all previous steps to find out what went wrong.
beta_long = (crsp_monthly
  .merge(beta, how="left", on=["permno", "month"])
  .melt(id_vars=["permno", "month"],
        value_vars=["beta_monthly", "beta_daily"],
        var_name="name", value_name="value")
  .groupby(["month", "name"])
  .aggregate(
    share=("value", lambda x: sum(~x.isna()) / len(x))
  )
  .reset_index()
)
plot_beta_long = (
  ggplot(beta_long,
         aes(x="month", y="share", color="name", linetype="name")) +
  geom_line() +
  scale_y_continuous(labels=percent_format()) +
  scale_x_datetime(breaks=date_breaks("10 year"),
                   labels=date_format("%Y")) +
  labs(x=None, y=None, color=None, linetype=None,
       title=("End-of-month share of securities with beta "
              "estimates"))
)
plot_beta_long.draw()

[Figure 5: End-of-month share of securities with beta estimates.]
Figure 5 does not indicate any troubles, so let us move on to the next check.
We also encourage everyone to always look at the distributional summary statistics of variables. You can easily spot outliers or weird distributions when looking at such tables.
"name")["share"].describe() beta_long.groupby(
| name | count | mean | std | min | 25% | 50% | 75% | max |
|---|---|---|---|---|---|---|---|---|
| beta_daily | 755.0 | 0.985748 | 0.047797 | 0.0 | 0.985802 | 0.990375 | 0.993690 | 0.999573 |
| beta_monthly | 755.0 | 0.606504 | 0.213694 | 0.0 | 0.546678 | 0.668273 | 0.753688 | 0.845352 |
The summary statistics also look plausible for the two estimation procedures.
Finally, since we have two different estimators for the same theoretical object, we expect the estimators to be at least positively correlated (although not perfectly, as the estimates are based on different sample periods and frequencies).
"beta_monthly", "beta_daily"]).corr() beta.get([
| | beta_monthly | beta_daily |
|---|---|---|
| beta_monthly | 1.000000 | 0.314997 |
| beta_daily | 0.314997 | 1.000000 |
Indeed, we find a positive correlation between our beta estimates. In the subsequent chapters, we mainly use the estimates based on monthly data, as most readers should be able to replicate them, while the daily data might exceed the memory available on some machines.
Exercises
- Compute beta estimates based on monthly data using 1, 3, and 5 years of data and impose a minimum number of observations of 10, 28, and 48 months with return data, respectively. How strongly correlated are the estimated betas?
- Compute beta estimates based on monthly data using 5 years of data and impose different numbers of minimum observations. How does the share of permno-month observations with successful beta estimates vary across the different requirements? Do you find a high correlation across the estimated betas?
- Instead of using joblib, perform the beta estimation in a loop (using either monthly or daily data) for a subset of 100 permnos of your choice. Verify that you get the same results as with the parallelized code from above.
- Filter out the stocks with negative betas. Do these stocks frequently exhibit negative betas, or do they resemble estimation errors?
- Compute beta estimates for multi-factor models such as the Fama-French three-factor model. For that purpose, extend your regression to \[ r_{i, t} - r_{f, t} = \alpha_i + \sum\limits_{j=1}^k\beta_{i,j}(r_{j, t}-r_{f,t})+\varepsilon_{i, t}, \] where \(r_{j, t}\) are the \(k\) factor returns. Thus, you estimate four parameters (\(\alpha_i\) and the \(k\) slope coefficients). Provide some summary statistics of the cross-section of firms and their exposure to the different factors.