Wolfram Language Paclet Repository

Community-contributed installable additions to the Wolfram Language


NonlinearCholeskyFactorization

Guides

  • Guide to ZigangPan`NonlinearCholeskyFactorization`

Symbols

  • approximateHJIequation
  • approximatenonlinearCholeskyfactorization
  • backsteppinglocaloptimalmatchingglobalinverseoptimal
  • backsteppinglocaloptimalmatchingglobalinverseoptimalNew
  • backsteppinglocaloptimalmatchingglobalinverseoptimalN
  • backsteppinglocaloptimalmatchingglobalinverseoptimalNNew
  • expandseriesntruncate
  • monomialsofgivenorder
ZigangPan`NonlinearCholeskyFactorization`
backsteppinglocaloptimalmatchingglobalinverseoptimalNNew
​
{V,α,qcheck,rcheck}=backsteppinglocaloptimalmatchingglobalinverseoptimalNNew[f,g,h,q,r,γ,xc,m,x1m,x1M,ϵ]
calculates the locally optimal matching up to order m and globally inverse optimal control law for the system xc'[t]=f[xc]+g[xc]u+h[xc]w with cost function J = ∫₀^t (q[xc[τ]]+u[τ].r[xc[τ]].u[τ]-γ^2 w[τ].w[τ]) dτ.
​
  • f[xc]={a1[x1] x2+f1[x1], a2[x1,x2] x3+f2[x1,x2], ..., fn[x1,...,xn]}
  • g[xc]={{0},{0},...,{b[xc]}}
  • h[xc]={h1[x1], h2[x1,x2], ..., hn[x1,...,xn]}
  • f[0]=0; a1[x1]≠0; a2[x1,x2]≠0; ...; b[xc]≠0
  • q[xc] is positive definite and proper with leading positive definite quadratic approximants on (x1m,x1M)×ℝ^(n-1)
  • r[xc] is positive definite for all xc; γ>0; ϵ>0; x1m<0; x1M>0
  • xc={x1,x2,...,xn}, where each component is assumed to be a scalar. All functions are assumed to be formulas rather than pure functions.
  • xc ranges over (x1m,x1M)×ℝ^(n-1); f, g, and h are defined on this set; q is a real-valued, positive definite, and proper function on it; r takes positive definite values on it.
  • When ϵ=m, x1m=-∞, and x1M=∞, the command is equivalent to backsteppinglocaloptimalmatchingglobalinverseoptimalNew[f,g,h,q,r,γ,xc,m].
  • The returned variables: V = the formula of the resulting value function; α = the formula of the resulting control law; qcheck = the formula of the resulting weighting function on the state variables; rcheck = the formula of the resulting weighting function on the control variables.
  • Compared to the backsteppinglocaloptimalmatchingglobalinverseoptimalN function, this design results in a smaller controller magnitude.
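For reference, the system and cost functional described above can be written out explicitly; this is only a restatement of the usage information, with x = xc, u the control input, and w the disturbance input:

    \dot{x} = f(x) + g(x)\,u + h(x)\,w, \qquad
    J = \int_0^{t} \Bigl( q(x(\tau)) + u(\tau)^{\top} r(x(\tau))\, u(\tau) - \gamma^{2}\, w(\tau)^{\top} w(\tau) \Bigr)\, d\tau ,

where f, g, and h have the lower-triangular (strict-feedback) structure listed above:

    f(x) = \bigl( a_1(x_1)\,x_2 + f_1(x_1),\; a_2(x_1,x_2)\,x_3 + f_2(x_1,x_2),\; \dots,\; f_n(x_1,\dots,x_n) \bigr),
    \quad
    g(x) = \bigl( 0,\, \dots,\, 0,\, b(x) \bigr)^{\top},
    \quad
    h(x) = \bigl( h_1(x_1),\; h_2(x_1,x_2),\; \dots,\; h_n(x_1,\dots,x_n) \bigr)^{\top}.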
​
Examples (1)
Basic Examples (1)
In[1]:=
Needs["ZigangPan`DifferentialEquationSolver`"]
Needs["ZigangPan`NonlinearCholeskyFactorization`"]
In[2]:=
xc={x1,x2};x1m=-2;x1M=2;
In[3]:=
$Assumptions=((x1|x2)∈Reals);
In[4]:=
fe[x1_,x2_]:={x2+x1^2,0}
In[5]:=
ge[x1_,x2_]:={{0},{1}}
In[6]:=
he[x1_,x2_]:={{1},{0}}
In[7]:=
qe[x1_,x2_]:=x1^2/(x1-x1m)/(x1M-x1)+x2^2
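In this example qe is a barrier-type weight: it is positive definite on the strip (x1m,x1M)×ℝ and grows without bound as x1 approaches either endpoint of the interval, consistent with the domain restriction in the notes above. A quick check, using only the definitions already entered (no new symbols):

Limit[qe[x1, 0], x1 -> x1M, Direction -> "FromBelow"]  (* diverges to Infinity as x1 approaches x1M = 2 from inside the interval *)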
In[8]:=
γ=4;
In[9]:=
f=Apply[fe,xc];
In[10]:=
g=Apply[ge,xc];
In[11]:=
h=Apply[he,xc];
In[12]:=
r={{1}};
In[13]:=
q=Apply[qe,xc];
In[14]:=
s=(g.Inverse[r].Transpose[g]-h.Transpose[h]/γ^2)/4;
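For orientation: with this example's constant g, h, r and γ = 4, the matrix s defined in In[14] is constant; it is the same combination (g.Inverse[r].Transpose[g] - h.Transpose[h]/γ^2)/4 that reappears, with ans〚4〛 (rcheck) in place of r, in the residual plotted in In[22] below. A quick evaluation using only quantities already defined:

s  (* -> {{-1/64, 0}, {0, 1/4}} for g = {{0},{1}}, h = {{1},{0}}, r = {{1}}, γ = 4 *)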
In[15]:=
Va4=approximateHJIequation[f,s,q,xc,4]
Out[15]=
0.74566078353098269398742583173524076969324162745495 x1^2 + 1.7999503770403896165422532400554460798401124030983 x1^3 + 3.5945892008338612327827274794148968365669328969 x1^4 + 1.0672406012816462175243920149575540149877246679402 x1 x2 + 3.266320905623785474113185224022831513184551539736 x1^2 x2 + 7.3475915187228446155193927714803611977985262610 x1^3 x2 + 1.4439658982677348429739207447791524129284836319254 x2^2 + 2.0943483047946629482898669389150365008839723100133 x1 x2^2 + 5.6872646708997637786177347590696430475364888449 x1^2 x2^2 + 0.4995957038976601482760750899761287023393977940874 x2^3 + 1.97649982972168486501823539943565389418250749619 x1 x2^3 + 0.268247983506959966810971984491731130085057481673 x2^4
In[16]:=
ans=backsteppinglocaloptimalmatchingglobalinverseoptimalNNew[f,g,h,q,r,γ,xc,3,x1m,x1M,3];
In[17]:=
Vx=D[ans〚1〛,{xc}];
In[18]:=
Plot3D[ans〚1〛,{x1,-2,2},{x2,-5,5},PlotRange->{0,3}]
Out[18]= (3D surface plot of the value function V = ans〚1〛)
In[19]:=
Plot3D[ans〚4〛,{x1,-2,2},{x2,-5,5},PlotRange->{0,2}]
Out[19]= (3D surface plot of the control weighting rcheck = ans〚4〛)
In[20]:=
Plot3D[ans〚3〛,{x1,-2,2},{x2,-5,5},PlotRange->{0,20}]
Out[20]= (3D surface plot of the state weighting qcheck = ans〚3〛)
In[21]:=
Plot3D[ans〚2〛,{x1,-2,2},{x2,-5,5},PlotRange->{-60,10}]
Out[21]= (3D surface plot of the control law α = ans〚2〛)
In[22]:=
Plot3D[Vx.f-Vx.(g.Inverse[ans〚4〛].Transpose[g]-1/γ^2h.Transpose[h]).Vx/4+ans〚3〛,{x1,-1.99,1.99},{x2,-5,5},PlotRange->{-0.1,0.1}]
Out[22]= (3D surface plot of the HJI-type residual formed from V, qcheck, and rcheck)
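As an optional further check (not part of the paclet's documented workflow), the returned control law can be applied to the example system with the disturbance set to zero and the closed loop integrated numerically. The sketch below assumes, as stated in the notes above, that ans〚2〛 is a formula in x1 and x2; the initial condition and the plot are illustrative only.

(* Optional closed-loop check: apply u = α (ans〚2〛) with disturbance w = 0 and integrate numerically. *)
αx = First[Flatten[{ans〚2〛}]];          (* α as a scalar formula, whether it is returned bare or wrapped in a list *)
rhs = fe[x1, x2] + ge[x1, x2].{αx};      (* closed-loop right-hand side {x1', x2'} as formulas in x1, x2 *)
sol = NDSolve[{
    x1'[t] == (rhs〚1〛 /. {x1 -> x1[t], x2 -> x2[t]}),
    x2'[t] == (rhs〚2〛 /. {x1 -> x1[t], x2 -> x2[t]}),
    x1[0] == 1, x2[0] == -1}, {x1, x2}, {t, 0, 10}];
ParametricPlot[Evaluate[{x1[t], x2[t]} /. sol], {t, 0, 10}, AxesLabel -> {"x1", "x2"}]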
