Wolfram Language Paclet Repository

Community-contributed installable additions to the Wolfram Language

QuantumFramework

Quantum Optimization

  • Quantum Natural Gradient Descent
  • Examples Custom Functions
  • Quantum Linear Solver
In this Tech Note, we document the implementation and use of essential functions from the Wolfram Language Example Repository for quantum optimization algorithms. By providing a comprehensive overview and usage guidelines for these functions, we aim to introduce both new and experienced users to quantum optimization techniques and quantum computing research.
In[92]:=
<<Wolfram`QuantumFramework`ExampleRepository`
Quantum Natural Gradient Descent
Gradient-based optimization methods represent a cornerstone in the field of numerical optimization, offering powerful techniques to minimize or maximize objective functions in various domains, ranging from machine learning and deep learning to physics and engineering. These methods utilize the gradient, or derivative, of the objective function with respect to its parameters to iteratively update them in a direction that reduces the function's value.
This Tech Note will include a brief introduction to the Fubini-Study metric tensor and Quantum Natural Gradient Descent. We will provide illustrative examples and compare these methods to regular gradient descent algorithms.

FubiniStudyMetricTensor

FubiniStudyMetricTensor[QuantumState[…], opts]
calculate the Fubini-Study metric tensor, as defined by the VQE approach, from a QuantumState with defined parameters
The Fubini-Study metric tensor plays a crucial role in Quantum Natural Gradient Descent. Originating from the field of differential geometry, the Fubini-Study metric tensor enables the quantification of distances between quantum states on the complex projective Hilbert space.
In the context of Quantum Natural Gradient Descent, this metric tensor serves as a fundamental tool for characterizing the geometry of the parameter space associated with parameterized quantum circuits. By incorporating the Fubini-Study metric tensor into the optimization process, this algorithm effectively accounts for the curvature of the quantum state manifold, enabling more precise and efficient updates of circuit parameters.
Considering a variational state |ϕ(θ)〉, the corresponding Fubini-Study metric is:

g_ij = Re[〈∂_i ϕ|∂_j ϕ〉 − 〈∂_i ϕ|ϕ〉〈ϕ|∂_j ϕ〉]
where |∂_i ϕ(θ)〉 = ∂|ϕ(θ)〉/∂θ_i.
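The definition above can be checked numerically. The following is a minimal, language-agnostic sketch in Python/NumPy (it is not part of the paclet); it estimates g_ij with central-difference derivatives of the single-qubit state used in the example below.

```python
import numpy as np

def state(theta1, theta2):
    """Variational state |phi> = cos(t1)|0> + exp(2 i t2) sin(t1)|1>."""
    return np.array([np.cos(theta1), np.exp(2j * theta2) * np.sin(theta1)])

def fubini_study_metric(state_fn, params, eps=1e-6):
    """g_ij = Re[<d_i phi|d_j phi> - <d_i phi|phi><phi|d_j phi>],
    with the state derivatives taken by central finite differences."""
    params = np.asarray(params, dtype=float)
    phi = state_fn(*params)
    n = len(params)
    d = []
    for i in range(n):
        shift = np.zeros(n)
        shift[i] = eps
        d.append((state_fn(*(params + shift)) - state_fn(*(params - shift))) / (2 * eps))
    g = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            g[i, j] = np.real(np.vdot(d[i], d[j]) - np.vdot(d[i], phi) * np.vdot(phi, d[j]))
    return g

g = fubini_study_metric(state, [np.pi / 12, np.pi / 12])
# analytically, g = diag(1, sin^2(2*theta1)) for this state
```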

Example

Implement a simple single qubit state:
In[3]:=
state = QuantumState[{Cos[θ1], Exp[2 I θ2] Sin[θ1]}, "Parameters" -> {θ1, θ2}]
Out[3]=
QuantumState[Pure state, Qudits: 1, Type: Vector, Dimension: 2]

In[4]:=
state["Formula"]
Out[4]=
Cos[θ1]|0〉 + E^(2 I θ2) Sin[θ1]|1〉
Apply the FubiniStudyMetricTensor function to obtain the calculated QuantumOperator:
In[5]:=
FubiniStudyMetricTensor[state]
Out[5]=
QuantumOperator[Pure map, Dimension: 2→2, Order: {1}→{1}]

In[6]:=
FubiniStudyMetricTensor[state,"MatrixForm"]
Out[6]//MatrixForm=
1	0
0	Sin[2θ1]^2

Properties

"Matrix"
obtain the corresponding Fubini-Study metric tensor matrix as a list
"MatrixForm"
obtain the corresponding Fubini-Study metric tensor matrix in MatrixForm
"Parameters"
obtain the parameters used for the differentiation process
"SparseArray"
obtain the corresponding Fubini-Study metric tensor matrix as a SparseArray
Use "Matrix" and "MatrixForm" to obtain a simplified matrix expression, which assumes real-valued parameters:
In[8]:=
FubiniStudyMetricTensor[state,"Matrix"]
Out[8]=
{{1, 0}, {0, Sin[2θ1]^2}}
In[9]:=
FubiniStudyMetricTensor[state,"MatrixForm"]
Out[9]//MatrixForm=
1	0
0	Sin[2θ1]^2
Use "Parameters" to obtain the parameters pre-defined in the QuantumState used as input:
In[10]:=
FubiniStudyMetricTensor[state,"Parameters"]
Out[10]=
{θ1,θ2}
Use "SparseArray" for a non-simplified result:
In[20]:=
FubiniStudyMetricTensor[state,"SparseArray"]
Out[20]=
SparseArray[Specified elements: 4, Dimensions: {2, 2}]

In[21]:=
% // Normal // Short
Out[21]//Short=
(unsimplified symbolic matrix entries involving θ1, θ2 and their conjugates; output abbreviated by Short)
Request all of the properties at once using All as the second argument:
In[12]:=
FubiniStudyMetricTensor[state,All]
Out[12]=
ResultQuantumOperator
Pure map
​
Dimension: 2→2
Order: {1}→{1}
,Matrix{{1,0},{0,
2
Sin[2θ1]
}},MatrixForm
1
0
0
2
Sin[2θ1]
,Parameters{θ1,θ2},SparseArraySparseArray
Specified elements: 4
Dimensions: {2,2}


QuantumNaturalGradientDescent

Quantum Natural Gradient-Based Optimization Methods represent a cutting-edge approach to optimizing parameterized quantum circuits in the field of quantum computing. Unlike classical gradient-based methods, Quantum Natural Gradient techniques account for the unique geometry of the quantum state manifold, mitigating issues such as barren plateaus and enabling faster convergence rates.
An optimization step in a gradient descent is given by:
θ_(t+1) = θ_t − η∇ℒ(θ)
where ℒ(θ) is the cost function with parameters θ, and η is the step rate. Each step in this approach assumes a flat Euclidean space, which leads to non-unique parametrizations that can disrupt the optimization, especially near singular points. To avoid this problem we must use non-Euclidean parameter spaces, performing what is called a natural gradient descent.
The quantum state space features an invariant metric tensor referred to as the Fubini-Study metric tensor g_ij, which can be used to develop a quantum version of natural gradient descent. Each optimization step is then given by:

θ_(t+1) = θ_t − η g⁺(θ)∇ℒ(θ)

where g⁺ is the pseudo-inverse of the Fubini-Study metric tensor.
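The update rule can be sketched in a few lines of language-agnostic Python/NumPy (not part of the paclet); the gradient values and the diagonal metric below are placeholder assumptions taken from the single-qubit example in this note.

```python
import numpy as np

def natural_gradient_step(theta, grad, metric, eta=0.05):
    """One quantum natural gradient step:
    theta <- theta - eta * pinv(g(theta)) . grad, g^+ the pseudo-inverse."""
    return theta - eta * np.linalg.pinv(metric) @ grad

theta = np.array([np.pi / 12, np.pi / 12])
grad = np.array([0.3, 0.1])                    # placeholder gradient values
g = np.diag([1.0, np.sin(2 * theta[0]) ** 2])  # metric of the example state
new_theta = natural_gradient_step(theta, grad, g)
# the theta2 component moves 1/sin^2(2*theta1) = 4 times farther than in
# plain gradient descent, compensating for the curved parameter space
```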
QuantumNaturalGradientDescent[f, metric, opts]
calculate the gradient descent of f using the given metric tensor for the parameter space
We will briefly demonstrate how to apply these functions in the Wolfram quantum framework.
The QuantumNaturalGradientDescent function follows the fundamental process of the Variational Quantum Eigensolver (VQE), which uses a brief quantum circuit U(θ), characterized by parameters θ = {θ_1, …, θ_m}, and iteratively adjusts θ to minimize the average energy or cost function:

f(θ) = 〈ϕ(θ)|H|ϕ(θ)〉

for the ansatz |ϕ(θ)〉 = U(θ)|0〉.

Example

Implement a simple single qubit state:
In[3]:=
state = QuantumState[{Cos[θ1], Exp[2 I θ2] Sin[θ1]}, "Parameters" -> {θ1, θ2}];
In[4]:=
state["Formula"]
Out[4]=
Cos[θ1]|0〉 + E^(2 I θ2) Sin[θ1]|1〉
Calculate the Fubini-Study metric tensor:
In[4]:=
metric=FubiniStudyMetricTensor[state]
Out[4]=
QuantumOperator[Pure map, Dimension: 2→2, Order: {1}→{1}]

In[6]:=
FubiniStudyMetricTensor[state,"Matrix"]
Out[6]=
{{1, 0}, {0, Sin[2θ1]^2}}
Calculate the |ϕ(θ)〉 state vector and its conjugate transpose:
In[5]:=
|ϕ〉 = state["StateVector"]; 〈ϕ| = ConjugateTranspose[state["StateVector"]];
Implement a cost function 〈ϕ(θ)|H|ϕ(θ)〉:
In[6]:=
cost[θ1_, θ2_] = 〈ϕ| . PauliMatrix[1] . |ϕ〉;
In[7]:=
FullSimplify[cost[θ1,θ2],{θ1,θ2}∈Reals]
Out[7]=
Cos[2θ2]Sin[2θ1]
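As a cross-check of the kind of iteration QuantumNaturalGradientDescent performs, here is a minimal language-agnostic sketch in Python/NumPy (not the paclet's implementation) that minimizes this same simplified cost with the metric computed above:

```python
import numpy as np

def cost(t1, t2):
    # simplified cost from the example: Cos[2 θ2] Sin[2 θ1]
    return np.cos(2 * t2) * np.sin(2 * t1)

def metric(t1, t2):
    # Fubini-Study metric tensor computed above: {{1, 0}, {0, Sin[2 θ1]^2}}
    return np.diag([1.0, np.sin(2 * t1) ** 2])

def num_grad(f, theta, eps=1e-6):
    # central finite differences (the default when no gradient is supplied)
    return np.array([(f(*(theta + e)) - f(*(theta - e))) / (2 * eps)
                     for e in np.eye(len(theta)) * eps])

def qngd(f, metric, theta0, eta=0.05, steps=150):
    # θ <- θ - η g⁺(θ) ∇f(θ), recording every visited point
    path = [np.asarray(theta0, dtype=float)]
    for _ in range(steps):
        th = path[-1]
        path.append(th - eta * np.linalg.pinv(metric(*th)) @ num_grad(f, th))
    return np.array(path)

qp = qngd(cost, metric, [np.pi / 12, np.pi / 12])
costs = [cost(*p) for p in qp]
# the cost decreases from ~0.43 toward the minimum -1
```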
Use QuantumNaturalGradientDescent to minimize the function:
In[12]:=
qp = QuantumNaturalGradientDescent[cost, metric, "InitialPoint" -> {π/12, π/12}, "LearningRate" -> 0.05];
QuantumNaturalGradientDescent returns all the parameters obtained until convergence. We can check how the minimization evolved:
In[13]:=
ListLinePlot[cost @@@ qp, FrameLabel -> {"Optimization steps", "f"}, Frame -> True, GridLines -> Automatic]
Out[13]=
(plot showing the cost function decreasing over the optimization steps)
Check the optimization results:

Options

Gradient
If the gradient is not specified, it is calculated numerically using a central finite difference algorithm.
InitialPoint
Specify the initial point at which to start the iteration; compare the following gradient descent setups:
Analyze the parameter evolution with different starting points:
LearningRate
Specify the step size (η) taken during each iteration; compare the following gradient descent setups:
In this case, the optimization is done almost instantly by "LearningRate" -> 0.1:
Verify the trajectories generated by "LearningRate" -> 0.5 and "LearningRate" -> 0.1:
MaxIterations
Specify the maximum number of iterations to use:

Results Overview

Verify the evolution of the parameters for each case:

GradientDescent

Gradient Descent aims to minimize a given objective function by iteratively updating the parameters in the direction of the steepest descent of the function. This simplicity and effectiveness render Gradient Descent indispensable in the realm of optimization, serving as the basis for numerous advanced optimization algorithms.

Example

Implement a simple function:
Set initial parameters and use GradientDescent to minimize the function:
GradientDescent returns all the parameters obtained until convergence. We can check how the minimization evolved:
The last parameters obtained correspond to the minimized function:
We can contrast the result using NMinimize:

Options

Gradient
If the gradient is not specified, it is calculated numerically using a central finite difference algorithm.
MaxIterations
Specify the maximum number of iterations to use:
LearningRate
Specify the step size (η) taken during each iteration; compare the following gradient descent setups:
In this case, the optimization is done almost instantly by "LearningRate" -> 0.5:
We can visually verify that η = 0.9 was not efficient in finding the solution:
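The plain gradient descent described above, with its central-finite-difference default, can be sketched in language-agnostic Python/NumPy (not the paclet's implementation); the quadratic test function is an assumption chosen for illustration.

```python
import numpy as np

def gradient_descent(f, theta0, eta=0.1, steps=200, eps=1e-6):
    # θ <- θ - η ∇f(θ); the gradient is estimated with central finite
    # differences when no analytic gradient is supplied
    theta = np.asarray(theta0, dtype=float)
    path = [theta.copy()]
    for _ in range(steps):
        grad = np.array([(f(*(theta + e)) - f(*(theta - e))) / (2 * eps)
                         for e in np.eye(len(theta)) * eps])
        theta = theta - eta * grad
        path.append(theta)
    return np.array(path)

# a simple convex function with minimum at (3, -1)
f = lambda x, y: (x - 3) ** 2 + (y + 1) ** 2
path = gradient_descent(f, [0.0, 0.0])
```

Too large a step size makes the iterates overshoot (the η = 0.9 case above); here any η below 1 contracts toward the minimum.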
Quantum Linear Solver
We will briefly demonstrate how to apply the QuantumLinearSolve function in the Wolfram quantum framework.

Details and Options

QuantumLinearSolve utilizes a multiplexer-based variational quantum linear solver algorithm. The primary distinctions from a conventional variational quantum linear solver algorithm are as follows:
  • The solution to the linear system is encoded directly in the amplitudes of the resultant quantum state.
  • This approach simplifies the standard variational quantum linear solver by reducing the need for multiple circuits used in the real-imaginary decomposition of the solution and the term-by-term computation within quantum circuits. A detailed step-by-step implementation is outlined in this documentation.

The basic steps to implement the algorithm are the following:

Algorithm Implementation

Ansatz
Generate a variational state in 4D:
Show its formula:
Represent it directly in a quantum circuit:
Request "Ansatz" from the QuantumLinearSolve function:
All parameters in the above equation are real.
Multiplexer
Use the QuantumOperator property to obtain the Pauli decomposition of matrix m:
It is possible to apply the tensor product of Pauli matrices as controlled operations:
Request "CircuitOperator" from the QuantumLinearSolve function to obtain the variational circuit, including the multiplexer and the variational ansatz:
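The Pauli decomposition that feeds the multiplexer can be illustrated numerically. This Python/NumPy sketch (not part of the paclet) shows the single-qubit case with an assumed example matrix; for an n-qubit matrix the same formula applies to tensor products of Paulis, with normalization 1/2^n.

```python
import numpy as np

# single-qubit Pauli basis
paulis = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_decompose(m):
    """Coefficients c_P such that m = sum_P c_P P, via c_P = Tr[P m]/2."""
    return {name: np.trace(p @ m) / 2 for name, p in paulis.items()}

m = np.array([[1.0, 2.0], [2.0, -1.0]])  # assumed example matrix
coeffs = pauli_decompose(m)
# here m = 2 X + 1 Z
```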
Cost Function
In other words, this cost function does not ensure that the amplitudes of both states are identical, but rather that they are proportional up to a global phase:
In order to correct our result, we need to find the global phase and apply it, along with the normalization terms for both b and m, once the calculation is done, such that:
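The phase-and-normalization correction can be sketched numerically; this Python/NumPy fragment (an illustration, not the paclet's internal code) recovers the solution x of m.x == b from a normalized state v that is proportional to x up to a global phase:

```python
import numpy as np

def corrected_solution(m, b, v):
    """Recover x with m.x == b from a normalized state v proportional to x
    up to a global phase: x = gamma*v, where the least-squares scale gamma
    absorbs the global phase and the normalizations of b and m."""
    mv = m @ v
    gamma = np.vdot(mv, b) / np.vdot(mv, mv)
    return gamma * v

# demonstration: a known solution spoiled by a global phase and a norm
rng = np.random.default_rng(1)
m = rng.normal(size=(4, 4))
x_true = rng.normal(size=4) + 1j * rng.normal(size=4)
b = m @ x_true
v = np.exp(0.7j) * x_true / np.linalg.norm(x_true)  # what the ansatz encodes
x = corrected_solution(m, b, v)
```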

Options

Ansatz
Compare the result with the classical method:
GlobalPhaseAccuracy
Let's indicate a high accuracy:
AccuracyGoal & PrecisionGoal
We can change them in order to get less precision but faster timing:
Compare both approximate results with the original result:
Method
Heuristic methods include:
Some methods may give suboptimal results for certain problems:
WorkingPrecision

Example

Simple example
Generate a random 4×4 real matrix:
Generate a random complex vector of length 4:
When running the above code, you may see a progress box describing the steps and estimated time.
Compare the results:
Properties
You can request the components used during the calculation using a third property argument:
Show the ansatz quantum state used for parameter optimization:
Show the quantum circuit used for parameter optimization:
Request more than one property:
Examples Custom Functions

Fubini-Study Metric Tensor (Alternative Methods)

As discussed above, the Fubini-Study metric tensor characterizes the geometry of the parameter space associated with parameterized quantum circuits; incorporating it into the optimization process accounts for the curvature of the quantum state manifold. The functions below provide alternative ways of computing it.

Example

Considering the following variational circuit:
Obtain parameterized layers, indicating the variational quantum circuit and the parameters it depends on:
Calculate the Fubini-Study metric tensor using the layers, the parameters and the numerical values to be used for each parameter:

Stochastic Parameter Shift Rule

The Stochastic Parameter Shift Rule (SPSR) represents a powerful optimization technique specifically tailored for parameterized quantum circuits. Unlike conventional gradient-based methods, SPSR offers a stochastic approach to computing gradients, making it particularly well-suited for scenarios involving noisy quantum devices or large-scale quantum circuits. At its core, SPSR leverages the principles of quantum calculus to estimate gradients by probabilistically sampling from parameter space and evaluating the quantum circuit's expectation values. By incorporating random perturbations in parameter values and exploiting symmetry properties, SPSR effectively mitigates the detrimental effects of noise and provides robust optimization solutions.
On the other hand, the Approximate Stochastic Parameter Shift Rule (ASPSR) emerges as a pragmatic solution to address the computational complexity associated with exact Stochastic Parameter Shift Rule (SPSR) calculations, particularly in scenarios involving high-dimensional parameter spaces or resource-constrained environments. ASPSR strategically balances computational efficiency with optimization accuracy by employing approximation techniques to streamline gradient estimation procedures. By leveraging simplified or truncated calculations while preserving the essential characteristics of SPSR, ASPSR facilitates faster gradient computations without significantly compromising optimization quality.
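The deterministic two-point parameter-shift rule that SPSR generalizes can be sketched in Python/NumPy (an illustration under assumed names, not the paclet's implementation): for a rotation generated by a Pauli matrix, two shifted circuit evaluations give the exact gradient.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rx(theta):
    """Single-qubit rotation exp(-i theta X / 2)."""
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * X

def expectation(theta):
    """f(theta) = <0| Rx(theta)^dag Z Rx(theta) |0> = cos(theta)."""
    psi = rx(theta) @ np.array([1, 0], dtype=complex)
    return np.real(np.vdot(psi, Z @ psi))

def parameter_shift_grad(f, theta, shift=np.pi / 2):
    """Exact gradient from two shifted evaluations; SPSR replaces the
    fixed shifts with randomized ones sampled along the evolution."""
    return (f(theta + shift) - f(theta - shift)) / 2

grad = parameter_shift_grad(expectation, 0.4)
# equals -sin(0.4), the exact derivative of cos(theta)
```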

Example using SPSRGradientValues

For this example we will use the following generator:
We will differentiate with respect to θ1. Implement the corresponding matrix, fixing the other parameters:
Calculate the gradient using SPSRGradientValues, indicating the Pauli matrix associated with θ1:
Check the result and find a suitable fit:

Example using ASPSRGradientValues

For this example we will use the following generator:
We will differentiate with respect to θ1. Implement the corresponding matrix, fixing the other parameters:
For this algorithm we need the operator not associated with θ1:
Calculate the gradient using ASPSRGradientValues, indicating the Pauli matrix associated with θ1 and the H operator defined before:
Check the result and find a suitable fit:

© 2025 Wolfram. All rights reserved.