Wolfram Language Paclet Repository

Community-contributed installable additions to the Wolfram Language


QuantumFramework

Quantum Optimization
Variational Circuits
Quantum Natural Gradient Descent
Ansatz Circuits
Quantum Linear Solver
Variational Quantum Eigensolver (VQE)
Examples Custom Functions
Quantum Approximate Optimization Algorithm (QAOA)
This tech note documents the functions used in the implementation of quantum optimization algorithms. It systematically outlines the core features, methodologies, and application contexts of the framework, offering insight into its integration within the quantum computational paradigm.
By providing an overview and usage guidelines for these functions, we aim to introduce both new and experienced users to quantum optimization techniques and quantum computing research.
In[19]:=
<< Wolfram`QuantumFramework`
<< Wolfram`QuantumFramework`QuantumOptimization`
Variational Circuits
The area of quantum optimization depends heavily on variational algorithms. These algorithms work by exploring and comparing variational states |ϕ(θ)〉 that depend on a set of n parameters {θ}.
You can build these variational states by applying a parametrized, or variational, quantum circuit V(θ) to a fixed initial state such as |0〉:
|ϕ(θ)〉 = V(θ)|0〉
We aim to train these variational quantum circuits using an optimizer in conjunction with a cost function that reflects the objective of our optimization. Many of the algorithms presented here are Hybrid Algorithms, meaning they combine a quantum circuit with a classical optimizer.
The Wolfram Quantum Framework supports "free parameters" in its quantum circuits. To implement a variational quantum circuit, you only need to specify these tunable parameters using the "Parameters" option:
In[3]:=
vqc = QuantumCircuitOperator[{"00", "RX"[θ1] → {1}, "RY"[θ2] → {2}}, "Parameters" → {θ1, θ2}];
In[100]:=
vqc["Diagram"]
Out[100]=
The parameters defined for a Wolfram Quantum Framework operator are inherited, so you do not need to redefine them in later steps. For example, we can compute the resulting |ϕ(θ)〉 state from our defined variational circuit:
In[4]:=
vqs=vqc[]
Out[4]=
QuantumState[ Pure state | Qudits: 2 | Type: Vector | Dimension: 4 ]
We can verify the values of the "Parameters" option:
In[5]:=
vqs["Parameters"]
Out[5]=
{θ1,θ2}
This option simplifies the repeated execution of the circuit by using the symbolic computation capabilities provided by the Wolfram Quantum Framework:
In[6]:=
vqc["Formula"]
Out[6]=
Cos[θ1/2] Cos[θ2/2] |00〉 + Cos[θ1/2] Sin[θ2/2] |01〉 − Cos[θ2/2] Sin[θ1/2] |10〉 − Sin[θ1/2] Sin[θ2/2] |11〉
Replace the θ1 value as follows:
In[7]:=
vqc[θ10]["Formula"]
Out[7]=
Cos[θ2/2] |00〉 + Sin[θ2/2] |01〉
As mentioned earlier, you can use the parameters at any stage of your algorithm, as they are carried through each step:
In[91]:=
vqs["Formula"]
Out[91]=
Cos[θ1/2] Cos[θ2/2] |00〉 + Cos[θ1/2] Sin[θ2/2] |01〉 − Cos[θ2/2] Sin[θ1/2] |10〉 − Sin[θ1/2] Sin[θ2/2] |11〉
In[8]:=
vqs[θ20,θ10]["Formula"]
Out[8]=
|00〉
You can directly replace the values for each parameter following the same order you defined them:
In[95]:=
vqs[0,0]["Formula"]
Out[95]=
|00〉
In[99]:=
vqs[a,b]["Formula"]
Out[99]=
Cos[a/2] Cos[b/2] |00〉 + Cos[a/2] Sin[b/2] |01〉 − Cos[b/2] Sin[a/2] |10〉 − Sin[a/2] Sin[b/2] |11〉
In the following sections, we will explore variational algorithms, which will require an understanding of how to properly design (ansatz circuits) and train (classical optimizers) these variational circuits.
Ansatz Circuits
In the context of quantum optimization within the Wolfram Quantum Framework, an ansatz refers to a variational quantum circuit designed to generate a trial state, serving as an initial approximation for solving or optimizing a given problem. This trial state represents an informed initial guess, forming the foundation for iterative refinement.
Typically, the ansatz is characterized by a subroutine that consists of a predefined sequence of quantum gates applied to selected qubits. While the ansatz establishes the foundational structure of the circuit, the specific types of gates and their associated free parameters are progressively optimized through a variational process. This adaptive refinement is integral to achieving optimal solutions within the quantum computational framework.
As an example, consider a hardware-efficient ansatz (HEA):
In[17]:=
HEA = QuantumCircuitOperator[{"00", "RY"[2θ1] → {1}, "RY"[2θ2] → {2}, "CNOT" → {1, 2}, "RY"[2θ3] → {1}, "RY"[2θ4] → {2}}, "Parameters" → {θ1, θ2, θ3, θ4}];
In[18]:=
HEA["Diagram"]
Out[18]=
In order to generate multiple layers of parametrized gates or controlled gates, you can use the following functionalities:

GenerateParameters[n,m,opts]
generates n×m parameters, for n qubits and m layers, named θi for i ∈ {1, …, n×m}

ParametrizedLayer[gate,qubits,index,opts]
ParametrizedLayer[gate,index,opts]
generates a layer of parameterized gates on the specified qubits, with parameters denoted θi for i ∈ index; if only index is provided (with no qubits), a layer of gates of the same length is generated for all qubits

EntanglementLayer[cgate,qubits]
generates a layer of controlled cgate gates connecting the given qubits

ParametrizedLayer & GenerateParameters

ParametrizedLayer
The ParametrizedLayer function reduces redundant code in layered quantum circuit architectures by applying parameterized gates and automatically generating the necessary parameters.
It is possible to specify both the qubits on which the parameterized gates will be applied and the index for each parameter:
In[344]:=
ParametrizedLayer["RY",Range[1,3],{a,b,c}]
Out[344]=
Sequence[RY[θa] → {1}, RY[θb] → {2}, RY[θc] → {3}]
In[365]:=
ParametrizedLayer["RY",Range[97,99],Range[3]]
Out[365]=
Sequence[RY[θ1] → {97}, RY[θ2] → {98}, RY[θ3] → {99}]
Alternatively, you can specify only the parameter indices, and the gates will be generated sequentially for each qubit, starting from qubit 1:
In[371]:=
ParametrizedLayer["RY",Range[4,6]]
Out[371]=
Sequence[RY[θ4] → {1}, RY[θ5] → {2}, RY[θ6] → {3}]
In[372]:=
QuantumCircuitOperator[{%}]["Diagram", ImageSize → Small]
Out[372]=
GenerateParameters
The GenerateParameters function generates a list of symbols; you only need to specify the number of qubits and layers in the quantum circuit:
In[210]:=
GenerateParameters[3,3]
Out[210]=
{θ1,θ2,θ3,θ4,θ5,θ6,θ7,θ8,θ9}
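The naming and layering convention can be mimicked in a few lines; this hypothetical Python helper (not part of the paclet) shows how n×m parameter names line up with per-layer index ranges such as Range[6, 10] or Range[11, 15] used below:

```python
def generate_parameters(n, m, symbol="θ"):
    # n qubits, m layers -> parameter names θ1 .. θ(n*m)
    return [f"{symbol}{i}" for i in range(1, n * m + 1)]

def layer_indices(n, layer):
    # 1-based parameter indices consumed by a given layer (1-based)
    start = (layer - 1) * n + 1
    return list(range(start, start + n))

params = generate_parameters(3, 3)   # nine names, θ1 .. θ9
second_layer = layer_indices(3, 2)   # indices for the second layer
```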
Example
In[382]:=
basicentanglement = QuantumCircuitOperator[{"00000", ParametrizedLayer["RY", Range[5]], EntanglementLayer["CNOT", Range[5], "Entanglement" → "Linear"], "Barrier", ParametrizedLayer["RY", Range[6, 10]], EntanglementLayer["CNOT", Range[5], "Entanglement" → "Linear"], "Barrier", ParametrizedLayer["RY", Range[11, 15]]}, "Parameters" → GenerateParameters[5, 3]];
Options
The GenerateParameters and ParametrizedLayer functions use "θ" as the default symbol; use the "Symbol" option to change it:

EntanglementLayer

The EntanglementLayer function is designed to reduce redundant code in layered quantum circuit architectures by facilitating controlled operations between qubits.
The EntanglementLayer function returns a Sequence of multiple Rules, each representing a controlled gate applied to connected qubits according to the chosen entanglement strategy. Currently, the function supports only named controlled gates between two qubits.
Entanglement
The "Entanglement" option specifies the strategy used for entanglement. It supports the following values:
Variational Quantum Eigensolver (VQE)
The variational quantum eigensolver (VQE) is a hybrid algorithm that combines classical and quantum computing to determine the ground-state energy of a Hamiltonian. It uses quantum circuits to calculate expected energy values and classical optimization techniques to minimize that energy.
VQE plays a crucial role in quantum optimization, particularly in solving complex problems in quantum chemistry and materials science. By enabling efficient simulations of intricate systems, VQE can address optimization tasks that are challenging for classical methods alone.
Given a Hamiltonian operator H, the method consists of two main components: a quantum subroutine that prepares the trial state |ϕ(θ)〉 and evaluates its expected energy, and a classical optimizer that updates θ.
The method involves iteratively adjusting θ to minimize the average energy, or cost function, ℒ(θ).
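Written out, the cost function is the expectation value of H in the trial state (standard VQE notation, consistent with the ansatz circuits defined earlier):

```latex
\mathcal{L}(\theta) = \langle \phi(\theta) | H | \phi(\theta) \rangle
                    = \langle 0 | V^{\dagger}(\theta) \, H \, V(\theta) | 0 \rangle
```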

Application

Define the ansatz circuit V(θ):
Implement the cost function ℒ as stated before:
Calculate the ground-state eigenvalue of H by minimizing ℒ:

Visualizations

To visualize the optimization process, we need all the parameter values during the evolution; in this case we will use a simple gradient descent method:
It is also useful to calculate the cost function value at each step of the parameter evolution:
Cost curve
We can visualize the evolution of our optimization over time in a cost curve, also known as a loss curve:
Parameter Space
We can visualize the evolution of our optimization in the parameter space in a Contour Plot:
Gradient Vector Plot
The stream plot of the gradient of our cost function can provide insight into the behavior of the evolution as observed in its parameter space:
Quantum Approximate Optimization Algorithm (QAOA)
The Quantum Approximate Optimization Algorithm (QAOA) is a well-studied approach for solving combinatorial optimization problems.
The QAOA process involves the following steps:
  • Defining the Oracles:
  • Applying the Oracles alternately in layers, forming the unitary:
  • Quantum Combinatorial Optimization
    The combinatorial optimization problem involves finding an optimal solution from a finite set of possibilities. This problem can be formulated as maximizing an objective function, which is expressed as a sum of Boolean functions.
    For a problem with n bits and m clauses, the goal is to find an n-bit string z that maximizes the function:
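In standard QAOA notation, with Boolean clause functions C_α, the objective takes the form:

```latex
C(z) = \sum_{\alpha = 1}^{m} C_{\alpha}(z),
\qquad z \in \{0, 1\}^{n}, \quad C_{\alpha}(z) \in \{0, 1\}
```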
    Approximate optimization aims to find a near-optimal solution to this problem, which is often NP-hard. The approximate solution is an n-bit string z that closely maximizes the objective function C(z).

    Max-Cut Problem

    The Max-Cut problem is a well-known optimization problem in graph theory: partition the vertex set V of a graph into two disjoint subsets such that the number of edges connecting nodes in different subsets is maximized.
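For small graphs, Max-Cut can be solved exactly by enumeration; this classical Python sketch makes the objective explicit (an edge contributes 1 when its endpoints land in different subsets), and serves as a reference for the quantum results later in this section:

```python
from itertools import product

def max_cut(n, edges):
    """Brute-force Max-Cut: try every bipartition z in {0,1}^n."""
    best_value, best_cut = -1, None
    for z in product((0, 1), repeat=n):
        # count edges crossing the partition
        value = sum(1 for i, j in edges if z[i] != z[j])
        if value > best_value:
            best_value, best_cut = value, z
    return best_value, best_cut

# 4-node cycle graph: the optimal cut alternates the partition, cutting all 4 edges
value, cut = max_cut(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
```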
    Formulating Max-Cut with Classical Binary Variables
    Formulating Max-Cut with Quantum Mechanics
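The two formulations named above take the following standard forms (a sketch in the usual conventions: z_i ∈ {0, 1} labels the subset of vertex i, and Z_i is the Pauli-Z operator acting on qubit i):

```latex
C(z) = \sum_{(i,j) \in E} \bigl[ z_i (1 - z_j) + z_j (1 - z_i) \bigr],
\qquad
H_C = \frac{1}{2} \sum_{(i,j) \in E} \bigl( 1 - Z_i Z_j \bigr)
```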
    Exploring Ansatz Intuition
    To implement the Max-Cut problem on a quantum computer, we start by putting the nodes in superposition. This ensures that each node (qubit) has an equal probability of being in either partition (A or B).
    We apply a Hadamard gate (H) to each qubit. This transforms the initial state |0〉 of all qubits into a superposition of all possible cuts.
    Some initial ideas involve searching for a periodic pattern, such as 0101 or 1010, within the bitstring. To achieve this, we need to establish connections between nodes or qubits, resembling the structure of the cost Hamiltonian. As a first idea, we can evaluate the gate sequence CNOT + RZ + CNOT:
    We can verify that it distinguishes the possible correct solutions from the non-periodic bitstrings in order to solve the problem:
    This set of gates is also called an RZZ gate:
    Now that we understand this, we can apply it to every pair of qubits connected in the graph. In other words, we apply the RZZ gate to every edge in the graph. This ensures that the quantum state evolves according to the problem constraints.
    The previous step does not explore all possible cuts because some amplitudes remain unchanged, so we need to mix things a little bit!
    Exploring the cost and mix operators:
    The final resulting state obtained when the initial state is applied:
    Example: 4-Edge Graph
    Find the Max-Cut for the following graph:
    We already implemented the Cost Hamiltonian in the previous section:
    Initial state preparation for superposition:
    We can use Wolfram Mathematica's Graph functionalities to establish connections between qubits within the Cost-Circuit:
    This layer of gates actually corresponds to the operator generated by the Cost Hamiltonian we defined earlier:
    To ensure full exploration, we need a mixer Hamiltonian that allows transitions between different states:
    Define the variational states to calculate the Cost function:
    Let's try out the NMaximize optimizer:
    Visualize the most probable solutions:
    We obtain an initial approximate result. We detect that the solutions with higher probability are the correct answers. However, the exact solution for this problem is C=4, so we haven't reached the desired outcome yet.
    We can enhance the precision of our solution by increasing the number of QAOA layers:
    We can use the Wolfram Quantum Framework's "Parameters" option to rename the gate parameters, allowing us to distinguish one layer from another.
    Implement the cost function:
    Optimize:
    Now, we obtain the exact result.
    This solution represents the cut shown in the first image of this section:
    Example: 6-Edge Graph
    Find the Max-Cut for the following graph:
    In this more challenging example, we have a graph with six nodes, which results in a much larger cost Hamiltonian:
    Implement directly the QAOA circuit using the Graph functionalities:
    Implement the symbolic cost function using the Cost Hamiltonian and the resulting parametrized state from the previous circuit:
    Use NMaximize for the optimization:
    We obtain multiple possible solutions; however, with just a single QAOA layer, we observe four solutions with higher probability:
    Solve it classically:
    The quantum algorithm does not achieve the maximum value found by the classical solution, but we can analyze the four most probable solutions indicated by the ProbabilityPlot:
    Quantum Natural Gradient Descent
    Gradient-based optimization methods represent a cornerstone in the field of numerical optimization, offering powerful techniques to minimize or maximize objective functions in various domains, ranging from machine learning and deep learning to physics and engineering. These methods utilize the gradient, or derivative, of the objective function with respect to its parameters to iteratively update them in a direction that reduces the function's value.
    This section will include a brief introduction to the Gradient Descent methods, the Fubini-Study metric tensor and Quantum Natural Gradient Descent. We will provide illustrative examples and compare these methods to regular gradient descent algorithms.

    Gradient Descent

    Gradient Descent aims to minimize a given objective function by iteratively updating the parameters in the direction of the steepest descent of the function. This simplicity and effectiveness render Gradient Descent indispensable in the realm of optimization, serving as the basis for numerous advanced optimization algorithms.
    An optimization step in a gradient descent is given by:
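In its usual form, the update reads:

```latex
\theta_{k+1} = \theta_{k} - \eta \, \nabla_{\theta} \mathcal{L}(\theta_{k})
```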
    where ℒ(θ) is the cost function with parameters θ, and η is the learning rate (step size).

    Example

    Implement a simple function
    Set initial parameters and use GradientDescent to minimize the function:
    GradientDescent returns all the parameters obtained until convergence. We can check how the minimization evolved:
    The last parameters obtained correspond to the minimized function:
    We can contrast the result using NMinimize:
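The same procedure can be sketched outside the framework; this Python version (a classical illustration, not the paclet's GradientDescent) uses a central finite-difference gradient and records every iterate, so the minimization's evolution can be inspected just as described above:

```python
def gradient_descent(f, x0, eta=0.1, steps=200, h=1e-6):
    """Minimize f: R^n -> R; return the full parameter trajectory."""
    xs = [list(x0)]
    for _ in range(steps):
        x = xs[-1]
        grad = []
        for i in range(len(x)):
            xp, xm = list(x), list(x)
            xp[i] += h
            xm[i] -= h
            grad.append((f(xp) - f(xm)) / (2 * h))  # central finite difference
        # step in the direction of steepest descent
        xs.append([xi - eta * gi for xi, gi in zip(x, grad)])
    return xs

# toy cost: (x - 1)^2 + (y + 2)^2, minimum at (1, -2)
trajectory = gradient_descent(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2, [0.0, 0.0])
final = trajectory[-1]
```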

    Options

    Gradient
    If the gradient is not specified, it is calculated numerically using a central finite-difference algorithm.
    MaxIterations
    Specify the maximum number of iterations to use:
    LearningRate
    Specify the step size (η) taken during each iteration; compare the following gradient descent setups:
    In this case, the optimization converges almost instantly with "LearningRate" → 0.5:
    We can visually verify that η = 0.9 was not efficient at finding the solution:

    Natural Gradient Descent

    When using regular gradient descent, we assume a flat (Euclidean) parameter space. The algorithm does not consider the intrinsic geometry of the cost function, which can result in non-unique parametrizations and an inefficient search for the minimum, especially near singular points.
    To account for the geometry of our cost function, we can generalize the Gradient Descent to the Natural Gradient Descent by incorporating a metric tensor ℱ. Specifically, we utilize the inverse of the corresponding metric tensor to adjust the gradient direction. This approach can lead to improved optimization results.
    The step of the Natural Gradient Descent is given by:
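In standard form:

```latex
\theta_{k+1} = \theta_{k} - \eta \, \mathcal{F}^{-1}(\theta_{k}) \, \nabla_{\theta} \mathcal{L}(\theta_{k})
```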
    where η is the learning rate, ∇ℒ(θ) is the gradient of the cost function, and ℱ(θ) is the metric tensor that informs us about the geometry of the parameter space.

    The Fubini–Study Metric Tensor

    The Fubini-Study metric tensor plays a crucial role in Quantum Natural Gradient Descent. Originating from the field of differential geometry, the Fubini-Study metric tensor enables the quantification of distances between quantum states on the complex projective Hilbert space.
    In the context of Quantum Natural Gradient Descent, this metric tensor serves as a fundamental tool for characterizing the geometry of the parameter space associated with parameterized quantum circuits. By incorporating the Fubini-Study metric tensor into the optimization process, this algorithm effectively accounts for the curvature of the quantum state manifold, enabling more precise and efficient updates of circuit parameters.
    Considering a variational state |ϕ(θ)〉, its Fubini-Study metric is:
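In the usual form, with ∂_i ≡ ∂/∂θ_i:

```latex
\mathcal{F}_{ij}(\theta) = \operatorname{Re} \Bigl[
  \langle \partial_i \phi(\theta) | \partial_j \phi(\theta) \rangle
  - \langle \partial_i \phi(\theta) | \phi(\theta) \rangle
    \langle \phi(\theta) | \partial_j \phi(\theta) \rangle
\Bigr]
```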

    Example

    Implement a simple single qubit state:
    Apply the FubiniStudyMetricTensor function to obtain the calculated QuantumOperator:

    Properties

    Use "Matrix" and "MatrixForm" to obtain a simplified matrix expression, which assumes real-valued parameters:
    Use "Parameters" to obtain the parameters pre-defined in the QuantumState used as input:
    Use "SparseArray" for a non-simplified result:
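As a numeric cross-check of the single-qubit case, the metric can also be estimated from finite-difference derivatives of the state; this Python sketch (not the paclet's FubiniStudyMetricTensor) uses |ϕ(θ)〉 = RY(θ)|0〉, for which the Fubini-Study metric is the constant 1/4:

```python
import math

def state(theta):
    # |phi(theta)> = RY(theta)|0> = [cos(theta/2), sin(theta/2)]
    return [math.cos(theta / 2), math.sin(theta / 2)]

def fubini_study(theta, h=1e-5):
    # F = <dphi|dphi> - |<phi|dphi>|^2 for a real state,
    # with the derivative taken by central differences
    phi = state(theta)
    dphi = [(p - m) / (2 * h) for p, m in zip(state(theta + h), state(theta - h))]
    overlap = sum(a * b for a, b in zip(phi, dphi))
    return sum(d * d for d in dphi) - overlap ** 2

metric = fubini_study(0.7)  # independent of theta for this state
```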

    Quantum Natural Gradient Descent

    The Quantum Natural Gradient optimization method represents a cutting-edge approach to optimizing parameterized quantum circuits in the field of quantum computing. Unlike classical gradient-based methods, Quantum Natural Gradient techniques account for the unique geometry of the quantum state manifold, mitigating issues such as barren plateaus and enabling faster convergence rates.
    We will briefly demonstrate how to apply all these functions in the Wolfram Quantum Framework.

    Example

    Implement a simple single qubit state:
    Calculate the Fubini-Study metric tensor:
    Implement a cost function 〈ϕ(θ)|H |ϕ(θ)〉 :
    Use QuantumNaturalGradientDescent to minimize the function:
    QuantumNaturalGradientDescent returns all the parameters obtained until convergence. We can check how the minimization evolved:
    Check optimization results:

    Options

    Gradient
    If the gradient is not specified, it is calculated numerically using a central finite-difference algorithm.
    InitialPoint
    Specify the initial point to start the iteration; compare the following gradient descent setups:
    Analyze the parameter evolution with different starting points:
    LearningRate
    Specify the step size (η) taken during each iteration; compare the following gradient descent setups:
    In this case, the optimization converges almost instantly with "LearningRate" → 0.1:
    Verify the trajectories generated by "LearningRate" → 0.5 and "LearningRate" → 0.1:
    MaxIterations

    Results Overview

    Verify the evolution of the parameters for each case:
    Quantum Linear Solver
    We will briefly demonstrate how to apply all these functions in the Wolfram Quantum Framework.

    Details and Options

    QuantumLinearSolve utilizes a multiplexer-based variational quantum linear solver algorithm. The primary distinctions from a conventional variational quantum linear solver algorithm are as follows:
  • The solution to the linear system is encoded directly in the amplitudes of the resultant quantum state.
  • This approach simplifies the standard variational quantum linear solver by reducing the need for multiple circuits used in the real-imaginary decomposition of the solution and the term-by-term computation within quantum circuits. A detailed step-by-step implementation is outlined in this documentation.
    The basic steps to implement the algorithm are the following:
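Since QuantumLinearSolve encodes the solution in the amplitudes of the output state, a useful classical reference point is the normalized solution vector itself; this Python sketch (Gaussian elimination on a small system, not the paclet's variational algorithm) computes that target:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):  # back substitution
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def normalized(x):
    # the amplitudes of the target quantum state are x / ||x||
    norm = sum(v * v for v in x) ** 0.5
    return [v / norm for v in x]

x = solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])  # exact solution (0.8, 1.4)
amps = normalized(x)
```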

    Algorithm Implementation

    Ansatz
    Generate a variational state in 4D:
    Show its formula:
    Represent it directly in a quantum circuit:
    Request "Ansatz" from QuantumLinearSolve function:
    All parameters in the above equation are real.
    Multiplexer
    Use QuantumOperator property to obtain the Pauli decomposition of matrix m:
    It is possible to apply the tensor product of Pauli matrices as controlled operations:
    Request "CircuitOperator" from QuantumLinearSolve function to obtain the variational circuit including the multiplexer and the variational ansatz:
    Cost Function
    In other words, this cost function does not ensure that the amplitudes of both states are identical, but rather that they are proportional by a global phase:
    To correct our result, we need to find the global phase and apply it, along with the normalization terms for both b and m, once the calculation is done, such that:

    Options

    Ansatz
    Compare the result with the classical method:
    GlobalPhaseAccuracy
    Let's indicate a high accuracy:
    AccuracyGoal & PrecisionGoal
    We can relax them in order to get less precision but faster timing:
    Compare both approximate results with the original result:
    Method
    Heuristic methods include:
    Some methods may give suboptimal results for certain problems:
    WorkingPrecision

    Example

    Simple example
    Generate a random 4×4 real matrix:
    Generate a random complex vector of length 4:
    When running the above code, you may see a progress box describing the steps and estimated time.
    Compare the results:
    Properties
    Examples Custom Functions

    Fubini-Study Metric Tensor (Alternative Methods)

    As introduced in the Quantum Natural Gradient Descent section, the Fubini-Study metric tensor quantifies distances between quantum states on the complex projective Hilbert space and characterizes the geometry of the parameter space of a parameterized quantum circuit. Here we compute it using alternative methods.

    Example

    Considering the following variational circuit
    Obtain parameterized layers indicating the variational quantum circuit and the parameters it depends on:
    Calculate the Fubini-Study metric tensor using the layers, the parameters and the numerical values to be used for each parameter:

    Stochastic Parameter Shift–Rule

    The Stochastic Parameter Shift Rule (SPSR) represents a powerful optimization technique specifically tailored for parameterized quantum circuits. Unlike conventional gradient-based methods, SPSR offers a stochastic approach to computing gradients, making it particularly well-suited for scenarios involving noisy quantum devices or large-scale quantum circuits. At its core, SPSR leverages the principles of quantum calculus to estimate gradients by probabilistically sampling from parameter space and evaluating the quantum circuit's expectation values. By incorporating random perturbations in parameter values and exploiting symmetry properties, SPSR effectively mitigates the detrimental effects of noise and provides robust optimization solutions.
    On the other hand, the Approximate Stochastic Parameter Shift Rule (ASPSR) emerges as a pragmatic way to address the computational complexity of exact SPSR calculations, particularly in high-dimensional parameter spaces or resource-constrained environments. ASPSR balances computational efficiency with optimization accuracy by employing approximation techniques to streamline gradient estimation. By using simplified or truncated calculations while preserving the essential characteristics of SPSR, it enables faster gradient computations without significantly compromising optimization quality.
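Before the stochastic variants, it helps to see the deterministic two-term parameter-shift rule concretely; this Python sketch (a classical illustration, not the paclet's SPSR functions) uses E(θ) = ⟨0|RY(θ)† Z RY(θ)|0⟩ = cos θ, whose derivative the rule recovers exactly from two shifted circuit evaluations:

```python
import math

def expectation(theta):
    # E(theta) = <0| RY(theta)^T Z RY(theta) |0>
    # = P(|0>) - P(|1>) = cos^2(theta/2) - sin^2(theta/2) = cos(theta)
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return c * c - s * s

def parameter_shift(theta):
    # two-term parameter-shift rule with shift pi/2:
    # dE/dtheta = [E(theta + pi/2) - E(theta - pi/2)] / 2
    return (expectation(theta + math.pi / 2) - expectation(theta - math.pi / 2)) / 2

grad = parameter_shift(0.3)  # analytically equal to -sin(0.3)
```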

    Example using SPSRGradientValues

    For this example we will use the following generator:
    We will differentiate with respect to θ1. Implement the corresponding matrix, fixing the other parameters:
    Calculate the gradient using SPSRGradientValues indicating the Pauli matrix associated with θ1:
    Check the result and find a suitable fit:

    Example using ASPSRGradientValues

    For this example we will use the following generator:
    We will differentiate with respect to θ1. Implement the corresponding matrix, fixing the other parameters:
    For this algorithm we need the operator not associated with θ1:
    Calculate the gradient using ASPSRGradientValues, indicating the Pauli matrix associated with θ1 and the H operator defined before:
    Check the result and find a suitable fit:

    © 2025 Wolfram. All rights reserved.
