Wolfram Function Repository
Instant-use add-on functions for the Wolfram Language
Function Repository Resource: BayesianLinearRegression
Perform Bayesian linear regression with conjugate priors
ResourceFunction["BayesianLinearRegression"][{{var11, var12, …, y1}, {var21, var22, …, y2}, …}, {f1, f2, …}, {var1, var2, …}] performs a Bayesian linear model fit on the data using basis functions fi with independent variables vari.

ResourceFunction["BayesianLinearRegression"][{{var11, var12, …} → y1, {var21, var22, …} → y2, …}, …] generates the same result.

ResourceFunction["BayesianLinearRegression"][{{var11, var12, …}, {var21, var22, …}, …} → {y1, y2, …}, …] generates the same result.
“PriorParameters”  Values of the hyperparameters corresponding to the prior distribution of the fit coefficients. 
“PosteriorParameters”  Values of the hyperparameters corresponding to the posterior distribution of the fit coefficients. 
“LogEvidence”  The natural log of the marginal likelihood, which is the Bayesian goodness-of-fit measure of the model. 
“Prior”  Prior distributions. 
“Posterior”  Posterior distributions. 
“Basis”  The basis functions supplied by the user. 
“IndependentVariables”  The independent variables supplied by the user. 
“Functions”  Functions that yield distributions when applied to arbitrary hyperparameters. Only included when the "IncludeFunctions" → True option is set. 
“PredictiveDistribution”  Predictive distribution of the data. 
“UnderlyingValueDistribution”  The distribution of the fit line of the data. 
“RegressionCoefficientDistribution”  Marginal distribution of the fit coefficients. 
“ErrorDistribution”  Marginal distribution of the variance (for 1D regression) or of the covariance matrix. 
Generate a Bayesian linear regression on random data:
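A minimal call might look like the following; the uniform random data and the symbol x are illustrative choices:

```wolfram
ResourceFunction["BayesianLinearRegression"][RandomReal[1, {20, 2}], x, x]
```

The result is an Association containing, among other keys, "LogEvidence", "PriorParameters", "PosteriorParameters" and "Posterior".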

Generate test data:
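For example, noisy samples along a line (slope, intercept and noise level are arbitrary choices):

```wolfram
data = Table[{x, 2 x + 1 + RandomVariate[NormalDistribution[0, 0.7]]}, {x, -3, 3, 0.2}];
ListPlot[data]
```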

Fit the data with a first-order polynomial:
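A sketch of such a fit, using the basis {1, x}:

```wolfram
model = ResourceFunction["BayesianLinearRegression"][data, {1, x}, x];
model["LogEvidence"]
```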

Show the predictive distribution of the model and the distribution of the fit line (dashed):
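One way to draw the bands, assuming both distributions depend symbolically on x (the quantile levels are a choice):

```wolfram
With[{predictive = model["Posterior", "PredictiveDistribution"],
  underlying = model["Posterior", "UnderlyingValueDistribution"]},
 Show[
  Plot[Evaluate@Table[InverseCDF[predictive, q], {q, {0.025, 0.5, 0.975}}], {x, -3, 3}],
  Plot[Evaluate@Table[InverseCDF[underlying, q], {q, {0.025, 0.5, 0.975}}], {x, -3, 3},
   PlotStyle -> Dashed],
  ListPlot[data]]]
```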

Plot the joint distribution of the coefficients a and b in the regression equation y==ax+b:
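A sketch; the coefficient order is assumed to follow the basis {1, x} (intercept b first, slope a second), and the plot ranges are guesses:

```wolfram
With[{coefDist = model["Posterior", "RegressionCoefficientDistribution"]},
 ContourPlot[PDF[coefDist, {b, a}], {b, 0, 2}, {a, 1, 3}, PlotPoints -> 25]]
```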

Plot the pdf of the posterior variance of the residuals:
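For a 1D regression, "ErrorDistribution" is a distribution over the residual variance, so its PDF can be plotted directly (the plot range is a guess):

```wolfram
Plot[PDF[model["Posterior", "ErrorDistribution"], var], {var, 0, 2}, PlotRange -> All]
```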

Fit the data with a polynomial of arbitrary degree and compare the prediction bands and log-evidence of fits up to degree 4:
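A sketch that collects the fits by degree; prediction bands for each entry can be drawn as in the earlier example:

```wolfram
models = Association @@ Table[
    deg -> ResourceFunction["BayesianLinearRegression"][data, x^Range[0, deg], x],
    {deg, 0, 4}];
#["LogEvidence"] & /@ models
```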

BayesianLinearRegression can do multivariate regression. For example, generate some 3D data:
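For instance (the underlying curves and the noise are arbitrary choices):

```wolfram
data3D = Table[
   {x, x + RandomReal[{-0.5, 0.5}], x^2 + RandomReal[{-0.5, 0.5}]},
   {x, -3, 3, 0.25}];
ListPointPlot3D[data3D]
```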

Do a regression where the x coordinate is treated as the independent variable and y and z are both dependent on x:
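A sketch using the rule-based input form, assuming it accepts vector-valued outputs for the dependent variables:

```wolfram
model3D = ResourceFunction["BayesianLinearRegression"][
   data3D[[All, {1}]] -> data3D[[All, {2, 3}]], {1, x, x^2}, x];
```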

The predictive joint distribution for y and z is a MultivariateTDistribution:
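Extract it from the posterior:

```wolfram
model3D["Posterior", "PredictiveDistribution"]
```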

Visualize the joint distribution as an animation:
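A sketch that slides x over the data range; the plot ranges are guesses:

```wolfram
With[{dist = model3D["Posterior", "PredictiveDistribution"]},
 ListAnimate@Table[
   ContourPlot[Evaluate@PDF[dist /. x -> x0, {y, z}],
    {y, -5, 5}, {z, -2, 11}, PlotRange -> All,
    PlotLabel -> Row[{"x = ", x0}]],
   {x0, -3, 3, 0.5}]]
```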

Just like LinearModelFit, BayesianLinearRegression constructs a model with a bias term by default:
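For example, inspect the basis that is actually used when only x is supplied:

```wolfram
ResourceFunction["BayesianLinearRegression"][data, x, x]["Basis"]
```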

Force the intercept to be zero:
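Assuming the function follows LinearModelFit and accepts its IncludeConstantBasis option:

```wolfram
ResourceFunction["BayesianLinearRegression"][data, x, x,
  IncludeConstantBasis -> False]["Basis"]
```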

The prior can be specified in the same format as the parameter outputs of the Bayesian linear regression. This can be used to update a model with new observations. To illustrate this, generate some test data and divide the dataset into two parts:
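For example (the quadratic and its noise are arbitrary choices):

```wolfram
allData = Table[
   {x, x^2/2 - x + 3 + RandomVariate[NormalDistribution[]]},
   {x, -5, 5, 0.25}];
{data1, data2} = TakeDrop[allData, Floor[Length[allData]/2]];
```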

Fit the first half of the data:
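A sketch with a quadratic basis:

```wolfram
model1 = ResourceFunction["BayesianLinearRegression"][data1, {1, x, x^2}, x];
model1["PosteriorParameters"]
```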

Use the posterior parameters of model1 as the prior to update the model with the second half of the dataset:
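Assuming the prior is passed through a "PriorParameters" option, mirroring the output key of the same name:

```wolfram
model2 = ResourceFunction["BayesianLinearRegression"][data2, {1, x, x^2}, x,
   "PriorParameters" -> model1["PosteriorParameters"]];
model2["PosteriorParameters"]
```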

The result is exactly the same as when fitting all of the data in a single step:
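Compare against a single fit of the full dataset:

```wolfram
modelAll = ResourceFunction["BayesianLinearRegression"][allData, {1, x, x^2}, x];
model2["PosteriorParameters"] == modelAll["PosteriorParameters"]
```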

When "IncludeFunctions" → True is set, the returned association contains the key "Functions", which holds functions that can be used to generate distributions from arbitrary values of the hyperparameters:
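For example:

```wolfram
model = ResourceFunction["BayesianLinearRegression"][data, {1, x}, x,
   "IncludeFunctions" -> True];
Keys[model["Functions"]]
```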

The posterior distribution is obtained from the predictive distribution function:
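Extract that function:

```wolfram
predictiveFunction = model["Functions", "PredictiveDistribution"]
```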

Apply it to the posterior parameters:
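Evaluating it on the fitted hyperparameters should reproduce the posterior predictive distribution:

```wolfram
predictiveFunction[model["PosteriorParameters"]]
```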

Here is a dataset where it is unclear if the fit should be first or second order:
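For instance, data with a weak quadratic component (the coefficients and noise are arbitrary choices):

```wolfram
data = Table[
   {x, 0.15 x^2 + x + 1 + RandomVariate[NormalDistribution[0, 0.8]]},
   {x, -3, 3, 0.2}];
ListPlot[data]
```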

Fit the data with polynomials up to fourth degree, rank the log-evidences and inspect them:
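A sketch, keyed by polynomial degree:

```wolfram
models = Association @@ Table[
    deg -> ResourceFunction["BayesianLinearRegression"][data, x^Range[0, deg], x],
    {deg, 1, 4}];
logEvidences = ReverseSort[#["LogEvidence"] & /@ models]
```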

Calculate the weights for each fit:
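The weights are the normalized exponentials of the log-evidences; subtracting the maximum first avoids numerical underflow:

```wolfram
weights = With[{shifted = Exp[logEvidences - Max[logEvidences]]},
  shifted/Total[shifted]]
```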

Because the weights for the first and second order fits are so close, the correct Bayesian treatment is to include both in the final regression. Define a mixture of posterior predictive distributions:
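A sketch, with the fits and weights keyed by polynomial degree as computed in the previous steps:

```wolfram
mixture = MixtureDistribution[
   {weights[1], weights[2]},
   {models[1]["Posterior", "PredictiveDistribution"],
    models[2]["Posterior", "PredictiveDistribution"]}]
```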

Compare the mixture fit to the individual polynomial regressions:
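Prediction bands can be drawn from each distribution's quantiles (the levels and plot ranges are choices):

```wolfram
Show[
 Plot[Evaluate@Table[InverseCDF[mixture, q], {q, {0.025, 0.975}}], {x, -3, 3}],
 Plot[Evaluate@Table[InverseCDF[models[1]["Posterior", "PredictiveDistribution"], q],
    {q, {0.025, 0.975}}], {x, -3, 3}, PlotStyle -> Dashed],
 Plot[Evaluate@Table[InverseCDF[models[2]["Posterior", "PredictiveDistribution"], q],
    {q, {0.025, 0.975}}], {x, -3, 3}, PlotStyle -> Dotted],
 ListPlot[data]]
```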

Symbolically calculate the predictive posterior after a single observation for any prior:
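A sketch with a single symbolic observation {x1, y1}; that the fit evaluates fully symbolically is an assumption:

```wolfram
symbolicModel = ResourceFunction["BayesianLinearRegression"][{{x1, y1}}, {1, x}, x];
symbolicModel["Posterior", "PredictiveDistribution"]
```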

Visualize the prediction bands:
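Substitute numbers for the symbolic observation and plot quantiles (the values are illustrative):

```wolfram
With[{dist = symbolicModel["Posterior", "PredictiveDistribution"] /. {x1 -> 1, y1 -> 2}},
 Plot[Evaluate@Table[InverseCDF[dist, q], {q, {0.05, 0.5, 0.95}}], {x, -2, 4}]]
```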
