Function Repository Resource:

TensorTrainDecomposition

Source Notebook

Decompose a multidimensional numerical array into a tensor train (TT) of low-rank cores

Contributed by: Ruben Ranval

ResourceFunction["TensorTrainDecomposition"][tensor]

decomposes tensor into a list of low-rank cores.

Details and Options

TensorTrainDecomposition decomposes a numerical array of any rank into a Tensor Train (also known as a Matrix Product State) representation.
The result is a list of rank-3 arrays {A1, A2, …, An}, where n is the number of dimensions of the input tensor.
The bond dimension refers to the size of the internal contracted indices connecting adjacent tensor cores. It effectively dictates the rank and compression level of the Tensor Train.
Each core Ak has dimensions {χk-1, dk, χk}, where dk is the size of the kth dimension of the input tensor and χk is the bond dimension between cores k and k+1.
The first core has χ0=1 and the last core has χn=1.
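The sequential construction of these cores can be sketched with the standard TT-SVD procedure. The following NumPy sketch is an illustration of the general algorithm, not the function's actual implementation: the tensor is repeatedly reshaped into a matrix and factored, peeling off one rank-3 core per mode.

```python
# Illustrative TT-SVD sketch in NumPy (not the Wolfram implementation):
# reshape the remainder into a (chi*d, rest) matrix, factor it with an SVD,
# and keep U as the next core; the remainder S.V^T carries over to the next mode.
import numpy as np

def tt_decompose(tensor):
    dims = tensor.shape
    cores = []
    chi = 1                                  # chi_0 = 1
    rest = np.asarray(tensor)
    for d in dims[:-1]:
        rest = rest.reshape(chi * d, -1)
        u, s, vt = np.linalg.svd(rest, full_matrices=False)
        chi_next = len(s)                    # exact: keep every singular value
        cores.append(u.reshape(chi, d, chi_next))
        rest = np.diag(s) @ vt               # remainder for the next step
        chi = chi_next
    cores.append(rest.reshape(chi, dims[-1], 1))   # chi_n = 1
    return cores

cores = tt_decompose(np.random.rand(2, 3, 4))
print([c.shape for c in cores])   # -> [(1, 2, 2), (2, 3, 4), (4, 4, 1)]
```

Contracting the cores back along their shared bond indices reproduces the original tensor up to floating-point error, matching the exact (untruncated) default behavior described above.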
TensorTrainDecomposition accepts the following options:
Method              "SVD"      decomposition method
"MaxBondDimension"  Infinity   maximum bond dimension χ at each core
Tolerance           0          truncation threshold
With Method → "QR", the function ignores "MaxBondDimension" and Tolerance and returns the exact decomposition. QRDecomposition does not provide singular values, so truncation is not available.
With the default settings, TensorTrainDecomposition returns an exact decomposition with no truncation.
Setting only "MaxBondDimension" controls memory: bond dimensions are capped at the specified value.
Setting only Tolerance controls accuracy: bond dimensions adapt automatically to meet the error target.
With Method → "SVD", the smallest singular values are discarded as long as the sum of their squares does not exceed ε^2, where ε is the value specified by Tolerance.
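This truncation rule at a single SVD step can be sketched as follows. The NumPy function below is a hypothetical illustration (the name `truncated_rank` and its signature are not part of the resource function): it finds the largest tail of singular values whose squared sum stays within ε^2, discards it, and then applies the "MaxBondDimension" cap.

```python
# Sketch of the Tolerance truncation rule at one SVD step (illustrative only):
# discard the smallest singular values while the sum of their squares stays
# at or below eps^2, then cap the kept rank at max_bond.
import numpy as np

def truncated_rank(s, eps=0.0, max_bond=None):
    tail = np.cumsum(s[::-1] ** 2)[::-1]   # tail[i] = sum of s[i:]**2
    keep = int(np.sum(tail > eps ** 2))    # drop any tail with norm^2 <= eps^2
    keep = max(keep, 1)                    # always keep at least one value
    if max_bond is not None:
        keep = min(keep, max_bond)         # "MaxBondDimension" cap
    return keep

s = np.array([1.0, 0.5, 0.05, 0.05])
print(truncated_rank(s))                  # -> 4 (eps = 0: exact, keep all)
print(truncated_rank(s, eps=0.1))         # -> 2 (0.05^2 + 0.05^2 <= 0.1^2)
print(truncated_rank(s, max_bond=2))      # -> 2 (capped regardless of eps)
```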
For more mathematical background on this algorithm, see the Wikipedia articles on Tensor Network and Matrix Product State.

Examples

Basic Examples (2) 

Decompose a simple 2×3×4 tensor into an exact (untruncated) Tensor Train:

In[1]:=
tensor = RandomReal[1, {2, 3, 4}];
tensor // MatrixForm
Out[2]=
In[3]:=
ResourceFunction["TensorTrainDecomposition"]@tensor
Out[3]=

Inspect the dimensions of the underlying cores forming the Tensor Train:

In[4]:=
Dimensions /@ %
Out[4]=

Options (4) 

MaxBondDimension (2) 

Force a maximum bond dimension (rank limit) to compress a large tensor and save memory:

In[5]:=
largeTensor = RandomReal[{-1, 1}, {4, 4, 4, 4}];
In[6]:=
ResourceFunction["TensorTrainDecomposition"][largeTensor, "MaxBondDimension" -> 2]
Out[6]=

Note that the internal bond dimensions are capped at 2:

In[7]:=
Dimensions /@ %
Out[7]=

Tolerance (1) 

Discard small singular values based on a truncation error threshold, letting the bond dimensions adapt to the target accuracy:

In[8]:=
noisyTensor = RandomReal[1, {5, 5, 5}];
In[9]:=
ResourceFunction["TensorTrainDecomposition"][noisyTensor, Tolerance -> 0.1]
Out[9]=

Method (1) 

Use "QR" instead of "SVD" to perform an exact transformation without truncation (note that "MaxBondDimension" and Tolerance will be ignored):

In[10]:=
ResourceFunction["TensorTrainDecomposition"][RandomReal[1, {2, 3, 4}], Method -> "QR"]
Out[10]=

Possible Issues (2) 

TensorTrainDecomposition respects exact inputs (such as integers or fractions), which causes the underlying SingularValueDecomposition to attempt symbolic root-finding. For large matrices this can be prohibitively slow:

In[11]:=
exactTensor = RandomInteger[1, {2, 3, 4}];
In[12]:=
AbsoluteTiming[
 ttSlow = ResourceFunction["TensorTrainDecomposition"]@exactTensor;
 ]
Out[12]=

Use N to convert exact inputs to machine precision for faster execution:

In[13]:=
AbsoluteTiming[
 ttFast = ResourceFunction["TensorTrainDecomposition"]@N@exactTensor;
 ]
Out[13]=

Publisher

Ruben Ranval

Requirements

Wolfram Language 13.0 (December 2021) or above

Version History

  • 1.0.0 – 08 May 2026

Source Metadata

Related Resources

Author Notes

TensorTrainDecomposition respects the data type of the given tensor. If provided with exact integers, fractions, or symbolic variables, the underlying singular value decomposition attempts an exact symbolic factorization. While this is useful for small theoretical calculations, computing exact roots of characteristic polynomials for large matrices scales exponentially and will likely freeze the kernel. For real-world data, large tensors, or when execution speed is critical, you should explicitly convert the tensor to machine-precision floating-point numbers using N.

License Information