Function Repository Resource:

MonteCarloSample

Source Notebook

Sample from a probability density function using the Markov chain Monte Carlo (MCMC) method

Contributed by: Brian Lu

ResourceFunction["MonteCarloSample"][f,p,n]

generates n samples from probability density function f starting from point p.

ResourceFunction["MonteCarloSample"][f,{p1,…,pm},n]

generates samples starting from an ensemble consisting of m points.

ResourceFunction["MonteCarloSample"][f,{dist,m},n]

uses probability distribution dist as the prior and samples m points from dist as the initial points.

ResourceFunction["MonteCarloSample"][f,p,{n,burn}]

generates samples but discards the first burn samples.

ResourceFunction["MonteCarloSample"][f,p,{n,burn,i}]

generates samples and takes every ith sample.

Details and Options

The number of arguments that the probability function f takes should match the dimension of the initial points.
The probability function f does not need to be normalized.
When the initial points are specified as {dist,m}, the domain of the prior distribution dist should have the same dimension as the number of arguments of f, and dist should have a probability density function (PDF) in symbolic closed form.
The burn specifies the number of samples to discard at the beginning of the chain, before thermalization has been reached, often referred to as "burn-in". The burn can be specified as Automatic, in which case the number of thermalization steps is determined automatically from the list of probabilities.
The i is a positive integer step size at which samples are kept, known as thinning. The i can be specified as Automatic, in which case it is set to the inverse of the overall acceptance rate, with an upper limit of 10.
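Taken together, burn-in removal and thinning amount to a strided slice of the chain. A minimal Python illustration of the bookkeeping (not the resource function's implementation):

```python
def thin_chain(chain, burn, i):
    """Drop the first `burn` samples, then keep every i-th of the rest."""
    return chain[burn::i]

# A 20-step toy chain: discard the first 4 samples, keep every 5th after that.
print(thin_chain(list(range(20)), burn=4, i=5))  # -> [4, 9, 14, 19]
```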
The following options are supported:
Method | Automatic | the MCMC algorithm to use
"LogProbability" | False | whether the probability function f is specified on a log scale
"TargetAcceptanceRate" | 0.25 | the target rate at which proposed points are accepted, for certain methods
"ListableFunction" | False | whether the probability function f is listable
Parallelization | False | whether parallelization is used in sampling
Possible settings for Method include:
"RandomWalk" | use a fixed probability distribution for drawing proposals
"AdaptiveMetropolis" | use the method of Haario et al.
"EMCEE" | use the method of Foreman-Mackey et al.
"DifferentialEvolution" | use the method of ter Braak
For the method "RandomWalk", the following additional suboption is supported:
"JumpDistribution" | the symmetric probability distribution for drawing proposals
For the method "AdaptiveMetropolis", the following additional suboptions are supported:
"InitialScaleMultiplier" | 1 | the initial scale of the step relative to the default one
"RandomWalkScale" | 0.1 | the absolute scale of steps when a random walk is adopted
For the method "EMCEE", the following additional suboption is supported:
"StretchScale" | 2 | the maximum scale of the stretch moves
For the method "DifferentialEvolution", the following additional suboptions are supported:
"InitialScaleMultiplier" | 2 | the initial scale of the step relative to the default one
"RandomnessScale" | 0.001 | the relative scale of randomness of the steps
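For orientation, the core of the "RandomWalk" method is the classic Metropolis algorithm: draw a jump from a symmetric distribution and accept it with probability min(1, f(proposal)/f(current)). A self-contained single-chain sketch in Python (illustrative only; the resource function's internals may differ):

```python
import math
import random

def metropolis(f, x0, n, scale=1.0):
    """Random-walk Metropolis sampling; f is an unnormalized density."""
    samples, x, fx = [], x0, f(x0)
    for _ in range(n):
        y = x + random.gauss(0.0, scale)   # symmetric Gaussian jump
        fy = f(y)
        if random.random() * fx < fy:      # accept with prob min(1, fy/fx)
            x, fx = y, fy
        samples.append(x)
    return samples

# Sample a standard normal density (normalization constant omitted on purpose).
random.seed(0)
data = metropolis(lambda x: math.exp(-x * x / 2), 0.0, 5000)
print(len(data), sum(data) / len(data))    # mean should be near 0
```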

Examples

Basic Examples (5) 

Generate 10 samples from a Student t distribution with 3 degrees of freedom:

In[1]:=
ResourceFunction["MonteCarloSample"][(3 + #^2)^-2 &, {0.}, 10]
Out[1]=

Generate 1000 samples, discard the first 200 samples, and take every fifth sample:

In[2]:=
Short[data = ResourceFunction[
   "MonteCarloSample"][(3 + #^2)^-2 &, {0.}, {1000, 200, 5}]]
Out[2]=
In[3]:=
Histogram[Flatten[data], {-5, 5, 0.5}, ImageSize -> Small]
Out[3]=

Do automatic burn-in removal and thinning instead:

In[4]:=
Short[data = ResourceFunction[
   "MonteCarloSample"][(3 + #^2)^-2 &, {0.}, {1000, Automatic, Automatic}]]
Out[4]=
In[5]:=
Histogram[Flatten[data], {-5, 5, 0.5}, ImageSize -> Small]
Out[5]=

Sample from a 2D normal distribution starting from an ensemble consisting of 10 random points:

In[6]:=
Short[data = ResourceFunction["MonteCarloSample"][Exp[-(#1^2 + #2^2)/2] &, RandomReal[{-1, 1}, {10, 2}], 20000]]
Out[6]=
In[7]:=
Histogram3D[data, {{-2, 2, 0.5}, {-2, 2, 0.5}}, ImageSize -> Small]
Out[7]=

Specify a prior in addition to the likelihood function:

In[8]:=
likelihood = Exp[-(#1^2 + #2^2)/2] &
prior = UniformDistribution[{{0, 2}, {-2, 2}}]
Out[8]=
Out[9]=
In[10]:=
Short[data = ResourceFunction["MonteCarloSample"][likelihood, {prior, 10}, 20000]]
Out[10]=
In[11]:=
Histogram3D[data, {{-2, 2, 0.5}, {-2, 2, 0.5}}, PlotRange -> {{-2, 2}, {-2, 2}, All}, ImageSize -> Small]
Out[11]=

Options (8) 

Method (3) 

The random walk method takes jumps following a normal distribution by default. A custom distribution can also be specified:

In[12]:=
ResourceFunction["MonteCarloSample"][Exp[-#^2/2] &, {0.}, 10, Method -> {"RandomWalk", "JumpDistribution" -> UniformDistribution[{{-0.5, 0.5}}]}]
Out[12]=

The adaptive Metropolis method takes jumps following an adaptive multinormal distribution that is derived from existing samples:

In[13]:=
Short[data = ResourceFunction["MonteCarloSample"][Exp[-#^2/2] &, RandomReal[{-0.05, 0.05}, {10, 1}], 200, Method -> "AdaptiveMetropolis"]]
Out[13]=
In[14]:=
ListLinePlot[Partition[Flatten[data], 20], ImageSize -> Small]
Out[14]=
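The adaptive idea can be sketched in one dimension: the proposal scale tracks the running variance of the chain itself, multiplied by the classic 2.38 factor of Haario et al. This is a simplified illustration with a made-up function name, not the resource function's implementation:

```python
import math
import random

def adaptive_metropolis_1d(log_p, x0, n, eps=0.01):
    """1-D adaptive Metropolis sketch: the proposal standard deviation
    follows the chain's own spread, scaled by 2.38 (after Haario et al.).
    `eps` is a small variance floor to get the adaptation started."""
    samples, x, lp = [x0], x0, log_p(x0)
    mean, m2 = x0, 0.0                          # Welford running statistics
    for t in range(1, n):
        var = m2 / max(t - 1, 1) + eps          # sample variance so far
        y = x + random.gauss(0.0, 2.38 * math.sqrt(var))
        lpy = log_p(y)
        if random.random() < math.exp(min(0.0, lpy - lp)):
            x, lp = y, lpy
        samples.append(x)
        delta = x - mean                        # update running mean/variance
        mean += delta / (t + 1)
        m2 += delta * (x - mean)
    return samples

random.seed(0)
data = adaptive_metropolis_1d(lambda x: -x * x / 2, 0.0, 4000)
print(sum(data) / len(data))                    # mean should be near 0
```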

The EMCEE and differential evolution methods take jumps given by the displacement between two points in the ensemble multiplied by a scale factor. Increasing the number of initial points may improve the performance of these two methods:

In[15]:=
Short[data = ResourceFunction["MonteCarloSample"][Exp[-#^2/2] &, RandomReal[{-1, 1}, {100, 1}], 4000, Method -> "EMCEE"]]
Out[15]=
In[16]:=
Histogram[Flatten[data], {-5, 5, 0.5}, "PDF", ImageSize -> Small]
Out[16]=
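The stretch move that "EMCEE" is built on (due to Goodman and Weare) can be sketched directly: each walker proposes a point along the line through itself and another randomly chosen walker. A Python illustration (function names are made up; not the resource function's implementation):

```python
import math
import random

def stretch_move(walkers, log_p, a=2.0):
    """One pass of Goodman-Weare stretch moves over an ensemble (sketch)."""
    n, dim = len(walkers), len(walkers[0])
    for k in range(n):
        j = random.choice([m for m in range(n) if m != k])  # helper walker
        z = ((a - 1.0) * random.random() + 1.0) ** 2 / a    # g(z) ~ 1/sqrt(z) on [1/a, a]
        proposal = [walkers[j][d] + z * (walkers[k][d] - walkers[j][d])
                    for d in range(dim)]
        log_accept = (dim - 1) * math.log(z) + log_p(proposal) - log_p(walkers[k])
        if random.random() < math.exp(min(0.0, log_accept)):
            walkers[k] = proposal
    return walkers

# Run an ensemble of 20 walkers on a 1-D standard normal log-density.
random.seed(0)
walkers = [[random.uniform(-1.0, 1.0)] for _ in range(20)]
for _ in range(200):
    walkers = stretch_move(walkers, lambda p: -p[0] ** 2 / 2)
print(len(walkers))   # -> 20: the ensemble size is preserved
```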

LogProbability (2) 

When the probability function evaluates to a tiny number, machine-precision numbers may not be able to represent it accurately, and the samples fail to converge:

In[17]:=
Short[data = ResourceFunction["MonteCarloSample"][Exp[-(# - 50.)^4] &, {0.}, 100]]
Out[17]=
In[18]:=
ListLinePlot[Flatten[data], ImageSize -> Small]
Out[18]=

"LogProbability" -> True indicates that the probability function is on a log scale, and the samples converge to x≈50:

In[19]:=
Short[data = ResourceFunction["MonteCarloSample"][-(# - 50.)^4 &, {0.}, 100, "LogProbability" -> True]]
Out[19]=
In[20]:=
ListLinePlot[Flatten[data], ImageSize -> Small]
Out[20]=
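The failure mode above is plain floating-point underflow: Exp[-(0-50)^4] = e^-6250000 is far below the smallest representable machine number, so every density value, and hence every acceptance ratio, degenerates to zero. On a log scale, only differences of log-probabilities are ever exponentiated, which stays well behaved. A quick Python illustration (assuming IEEE double-precision arithmetic):

```python
import math

# The raw density underflows to exactly 0.0 in double precision:
print(math.exp(-800.0))             # -> 0.0

# On a log scale, only the *difference* of log-probabilities matters:
logp_old = -(0.0 - 50.0) ** 4       # log f at x = 0
logp_new = -(1.0 - 50.0) ** 4       # log f at x = 1
ratio = math.exp(min(0.0, logp_new - logp_old))
print(ratio)                        # -> 1.0: the uphill move is always accepted
```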

TargetAcceptanceRate (1) 

Setting the target acceptance rate is only effective for the adaptive Metropolis and differential evolution methods. A higher target acceptance rate results in a smaller step size, and vice versa:

In[21]:=
Short[data1 = ResourceFunction["MonteCarloSample"][-#^2/2 &, RandomReal[{-1, 1}, {10, 1}], 400, Method -> "AdaptiveMetropolis", "LogProbability" -> True, "TargetAcceptanceRate" -> 0.1]]
Out[21]=
In[22]:=
Short[data2 = ResourceFunction["MonteCarloSample"][-#^2/2 &, RandomReal[{-1, 1}, {10, 1}], 400, Method -> "AdaptiveMetropolis", "LogProbability" -> True, "TargetAcceptanceRate" -> 0.9]]
Out[22]=
In[23]:=
{ListLinePlot[Partition[Flatten[data1], 40]], ListLinePlot[Partition[Flatten[data2], 40]]}
Out[23]=

ListableFunction (1) 

If the probability function is listable, specifying "ListableFunction" -> True and employing a large number of initial points may significantly improve performance:

In[24]:=
AbsoluteTiming[
 ResourceFunction["MonteCarloSample"][Exp[-#^2/2] &, RandomReal[{-1, 1}, {100, 1}], 10000];]
Out[24]=
In[25]:=
AbsoluteTiming[
 ResourceFunction["MonteCarloSample"][Exp[-#^2/2] &, RandomReal[{-1, 1}, {100, 1}], 10000, "ListableFunction" -> True];]
Out[25]=

Parallelization (1) 

Enabling parallelization is only beneficial when the probability function can be run in parallel and each evaluation takes a long time:

In[26]:=
prob[x_] := (Pause[0.1]; Exp[-x^2])
AbsoluteTiming[
 ResourceFunction["MonteCarloSample"][prob, RandomReal[{3, 4}, {10, 1}], 40];]
Out[27]=
In[28]:=
AbsoluteTiming[
 ResourceFunction["MonteCarloSample"][prob, RandomReal[{3, 4}, {10, 1}], 40, Parallelization -> True];]
Out[28]=

Applications (8) 

Fit a distribution (4) 

Get the posterior distribution of two parameters in a Student t distribution:

In[29]:=
points = RandomVariate[StudentTDistribution[0.5, 1.2, 3], 20]
Out[29]=

Choose a uniform prior over μ∈[-2,2], σ∈[0,3]:

In[30]:=
loglikelihood[\[Mu]_, \[Sigma]_] := Quiet@Log@
    Likelihood[StudentTDistribution[\[Mu], \[Sigma], 3], points];
prior = UniformDistribution[{{-2, 2}, {0, 3}}]
Out[31]=

Run MCMC:

In[32]:=
Short[data = ResourceFunction["MonteCarloSample"][
   loglikelihood, {prior, 20}, {10000, Automatic, Automatic}, "LogProbability" -> True]]
Out[32]=
In[33]:=
DensityHistogram[data, {{-1, 2, 0.1}, {0.5, 3, 0.1}}, PlotRange -> {{-1, 2}, {0.5, 3}, All}, FrameLabel -> {"\[Mu]", "\[Sigma]"}, ImageSize -> Small]
Out[33]=

The location of the peak in the posterior distribution matches the result given by EstimatedDistribution:

In[34]:=
EstimatedDistribution[points, StudentTDistribution[\[Mu], \[Sigma], 3], ParameterEstimator -> "MaximumLikelihood"]
Out[34]=

Fit a sinusoidal function (4) 

Get the posterior distribution of amplitude, frequency, phase, and mean of a sinusoidal function:

In[35]:=
f[x_] := 1.5 Sin[3.0 x + 1.0] - 0.6
noise := RandomVariate[NormalDistribution[0, 0.25]]
Short[points = Table[{x, Around[f[x] + noise, 0.25]}, {x, RandomReal[{0, 5}, 40]}]]
Out[37]=
In[38]:=
Show[Plot[f[x], {x, 0, 5}, PlotRange -> {-3, 2}], ListPlot[points], Frame -> True]
Out[38]=

The likelihood is derived under the assumption that each point has a normally distributed error. Define the log-likelihood, and select the initial points for MCMC around the peak of the likelihood function:

In[39]:=
loglikelihood[a_, \[Omega]_, \[Phi]_, b_] := Evaluate@
  Total[-(((a Sin[\[Omega] #1 + \[Phi]] - b) - #2[[1]])^2/(
      2 #2[[2]]^2)) & @@@ points]
Short[initial = # + {1.5, 3.0, 1.0, -0.6} & /@ RandomReal[{-0.1, 0.1}, {50, 4}]]
Out[40]=

Run MCMC:

In[41]:=
Short[data = ResourceFunction["MonteCarloSample"][loglikelihood, initial, {50000, Automatic, Automatic}, Method -> "AdaptiveMetropolis", "LogProbability" -> True, "ListableFunction" -> True]]
Out[41]=

The marginal distribution of all four parameters compared to their input values:

In[42]:=
MapThread[
 Histogram[#, 20, "PDF", Frame -> True, FrameLabel -> {#2, "PDF"}, Epilog -> {Red, InfiniteLine[{#3, 0}, {0, 1}]}] &, {Transpose@
   data, {"amplitude", "frequency", "phase", "mean"}, {1.5, 3.0, 1.0, 0.6}}]
Out[42]=

Properties and Relations (1) 

The normalization of the probability function does not affect the result:

In[43]:=
normalized[x_] := 1/(\[Pi] (1 + x^2))
unnormalized[x_] := 1/(1 + x^2)
Histogram[Flatten[#], {-3, 3, 0.5}] & /@
 {ResourceFunction["MonteCarloSample"][normalized, RandomReal[{-1, 1}, {10, 1}], 8000],
  ResourceFunction["MonteCarloSample"][unnormalized, RandomReal[{-1, 1}, {10, 1}], 8000]}
Out[44]=

Possible Issues (2) 

If all of the initial points lie on a hyperplane, all samples will stay on that hyperplane when the EMCEE or differential evolution method is used:

In[45]:=
Short[initial = RandomReal[{-1, 1}, {50, 2}] . {{1, 0, 0.5}, {0, 1, 0.5}}]
Out[45]=
In[46]:=
Short[data = ResourceFunction["MonteCarloSample"][-(#1^2 + #2^2 + #3^2) &, initial, 500, Method -> "EMCEE", "LogProbability" -> True]]
Out[46]=
In[47]:=
{ListPointPlot3D[initial, BoxRatios -> Automatic], ListPointPlot3D[data, BoxRatios -> Automatic]}
Out[47]=

Automatic burn-in removal is done by comparing the log-posteriors with the median log-posterior. In some cases, it may keep samples for which thermalization has not been reached:

In[48]:=
Short[data1 = Flatten@ResourceFunction["MonteCarloSample"][Exp[-#^2/2] &, RandomReal[{0, 0.0001}, {200, 1}], {20000, Automatic}, Method -> "AdaptiveMetropolis"]]
Out[48]=
In[49]:=
Short[normal = RandomVariate[NormalDistribution[], 20000]]
Out[49]=
In[50]:=
Histogram[{data1, normal}, {-3.1, 3.1, 0.2}, "PDF", ImageSize -> Small, ChartLegends -> {"MCMC", "Normal"}]
Out[50]=

To mitigate this problem, spread the initial points over a wider range:

In[51]:=
Short[data2 = Flatten@ResourceFunction["MonteCarloSample"][Exp[-#^2/2] &, RandomReal[{-2, 2}, {100, 1}], {20000, Automatic}, Method -> "AdaptiveMetropolis"]]
Out[51]=
In[52]:=
Histogram[{data2, normal}, {-3.1, 3.1, 0.2}, "PDF", ImageSize -> Small, ChartLegends -> {"MCMC", "Normal"}]
Out[52]=
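A median-based burn-in rule of the kind described above can be sketched as follows (an assumed heuristic for illustration; the resource function's exact criterion may differ):

```python
def auto_burn(log_probs):
    """Return a guessed burn-in length: the first index at which the
    log-posterior reaches the median of the whole chain (an assumed
    heuristic, not necessarily the resource function's exact rule)."""
    med = sorted(log_probs)[len(log_probs) // 2]
    for k, lp in enumerate(log_probs):
        if lp >= med:
            return k
    return 0

# A chain that climbs from a poor start and then fluctuates near its mode:
chain = [-100.0, -50.0, -10.0, -1.0, -2.0, -1.5, -1.0, -2.0]
print(auto_burn(chain))  # -> 3: the first three samples are treated as burn-in
```

A rule like this can keep unthermalized samples when the chain starts at a high-probability point, since the median log-posterior is then reached immediately and nothing is discarded.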

Publisher

Brian Lu

Version History

  • 1.0.0 – 05 July 2022

Source Metadata

Related Resources

License Information