\chapter{HJB equation and viscosity solution}\label{Chapter4}
%\blindtext
\minitoc% Creating an actual minitoc
\vspace{5em}
In this chapter we present a brief introduction to the use of the \textbf{dynamic programming principle} for solving stochastic control problems.
The fundamental equation of dynamic programming is a nonlinear evolution equation for the \textbf{value function}.
The value function is the optimum value of the payoff considered as a function of the initial data.
This principle was introduced in the 1950s by Bellman, see \cite{Bellman}.
For controlled Lévy processes,
the approach yields a certain nonlinear PIDE, usually of first or second order, called \textbf{Hamilton-Jacobi-Bellman} (HJB) equation.
The theory of viscosity solutions provides a convenient framework in which to study HJB equations.
Typically, the value function is not smooth enough to satisfy the HJB in the classical sense. However, under certain assumptions, it is the unique viscosity
solution of the HJB equation with appropriate boundary conditions. A general review of the theory of viscosity solutions is presented in \cite{CIL92}.
The notion of viscosity solution was first introduced in \cite{CL83} for first order partial differential equations. The theory was
soon extended to second order PDEs in \cite{PLL83}.
Uniqueness results for general second order equations are presented in \cite{Je88}, \cite{Is89} and \cite{IsLi90}.
The main development of these works is \emph{Ishii's lemma}, which plays a key role in most of the uniqueness proofs.
The viscosity solution theory has been further extended in \cite{Soner86b}, \cite{Soner86} and \cite{Sayah91} to piecewise deterministic processes with random jumps.
An important paper for viscosity solutions of Lévy-type PIDEs (PIDEs involving the generator of a Lévy process) is \cite{BaIm08}, where Ishii's
lemma is extended.
Further important contributions include \cite{Ph98}, which analyzes HJB equations for optimal stopping problems under Lévy processes, and \cite{CoVo05}, which analyzes
the solution of linear PIDEs derived from option pricing problems (plain vanilla and barrier options) for finite and infinite activity Lévy processes.
\section{Optimal control framework}\label{Optimal_control_framework}
We consider a framework where the state of the system $\{X_t\}_{t \geq 0}$ is governed by the following controlled SDE with values in $\R^n$:
\begin{align}\label{controlled_SDE}
dX_t =& \; b(t,X_{t-},\alpha_t) dt + \sigma (t,X_{t-},\alpha_t) dW_t \\ \nonumber
&+ \int_{\R^n} \gamma(t,X_{t-},\alpha_t,z) \tilde N(dt,dz).
\end{align}
where $\{W_t\}_{t \geq 0}$ is a $d$-dimensional Brownian motion.
The coefficients $b: [0,T]\times \R^n \times A \to \R^n$, $\sigma: [0,T]\times \R^n \times A \to \R^{n \times d}$,
$\gamma: [0,T]\times \R^n \times A \times \R^n \to \R^{n}$ are continuous functions with respect to $(t,x,a)$ and $\gamma(t,x,a,\cdot)$ is also bounded
uniformly in $a \in A$ in any neighborhood of $z=0$.
In this chapter we use the same compact notation introduced for (\ref{Levy_diff}), where all the indices are suppressed. The expression (\ref{controlled_SDE}) corresponds to the
component-wise equation
\begin{align}
dX^i_t =& \; b^i(t,X_{t-},\alpha_t) dt + \sum_{j=1}^d \sigma^{i,j} (t,X_{t-},\alpha_t) dW^j_t \\ \nonumber
&+ \int_{\R^n} \gamma^{i}(t,X_{t-},\alpha_t,z) \tilde N(dt,dz).
\end{align}
for $1 \leq i \leq n$.
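Although the treatment in this chapter is purely analytical, the dynamics (\ref{controlled_SDE}) can be simulated directly. The following minimal sketch (an illustration, not part of the original text) applies an Euler scheme to a one-dimensional controlled jump-diffusion under a Markov control; the function \texttt{simulate} and all coefficient choices are hypothetical, and for simplicity the jump part is an uncompensated compound Poisson process rather than the compensated measure $\tilde N$.

```python
import numpy as np

# Minimal simulation sketch (illustrative only, not from the text): an Euler
# scheme for a 1-d controlled jump-diffusion under a Markov control
# a = alpha(t, X_{t-}).  For simplicity the jump part is an uncompensated
# compound Poisson process (finite activity) instead of the compensated
# measure used in the chapter; all coefficient choices are hypothetical.

def simulate(alpha, b, sigma, jump_rate, jump_size, x0, T=1.0, n_steps=200, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x, path = x0, [x0]
    for k in range(n_steps):
        t = k * dt
        a = alpha(t, x)                        # Markov control evaluated at (t, X_{t-})
        dW = rng.normal(0.0, np.sqrt(dt))      # Brownian increment
        n_jumps = rng.poisson(jump_rate * dt)  # number of jumps in [t, t+dt)
        dJ = jump_size(rng, n_jumps).sum() if n_jumps > 0 else 0.0
        x = x + b(t, x, a) * dt + sigma(t, x, a) * dW + dJ
        path.append(x)
    return np.array(path)

# Hypothetical linear coefficients, consistent with the Lipschitz condition (C1).
path = simulate(
    alpha=lambda t, x: -x,                     # mean-reverting feedback control
    b=lambda t, x, a: a,                       # drift driven by the control
    sigma=lambda t, x, a: 0.2,                 # constant volatility
    jump_rate=3.0,                             # Poisson intensity
    jump_size=lambda rng, n: rng.normal(0.0, 0.1, size=n),
    x0=1.0,
)
print(path.shape)  # (201,)
```

Any Lipschitz feedback $\alpha(t,x)$ can be swapped in; the scheme only requires pointwise evaluation of the coefficients.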
In the following, we will work with the \emph{Wiener-Poisson filtration} $\{\mathcal{F}_{t}\}_{t\geq 0}$ obtained from the natural filtrations of the Brownian motion and the compensated
Poisson random measure.
We define the filtration $\{\mathcal{F}_{t}\}_{t\geq 0}$ as the right-continuous completed version\footnote{
A filtration is said to be complete if it contains all the sets with $\PP$-measure zero.
A filtration is right-continuous if $\mathcal{F}_{t^+} = \mathcal{F}_{t}$ for all $t\geq 0$.
Given a filtration, we can always enlarge it such that it satisfies the completeness and right continuity property.
The right-continuity property is always satisfied by filtrations generated by Lévy processes (see Theorem 2.1.10 in \cite{Applebaum}).}
of $\{ \mathcal{F}_{t}^W \otimes \mathcal{F}_{t}^{\tilde N} \}_{t\geq 0} $, where
$\mathcal{F}_{t}^W = \sigma\{W_s : 0\leq s \leq t\}$ and
$\mathcal{F}_{t}^{\tilde N} = \sigma \bigl\{ \tilde N(s,B) : 0\leq s \leq t, B \in \mathcal{B}(\R^n \backslash \{ 0 \}) \bigr\}$.
For more information on the Wiener-Poisson filtration we refer to Section 5.1 of \cite{BoTo11}.
We indicate with $\mathcal{A}$ the space of controls.
This space comprises all predictable processes $\alpha: [0,T]\times \Omega \to A$, with respect to $\{\mathcal{F}_{t}\}_{t\geq 0}$.
In this section we assume a compact\footnote{The assumption of a compact set $A$ is a strong assumption. It will be relaxed in the next sections.} set $A \subset \R^m$.
In this thesis we only consider Markovian controls, such that $\alpha_t := \alpha(t,X_{t^-}) $.
The function $\alpha : [0,T] \times \R^n \to A$ is measurable and,
in order to guarantee existence and uniqueness of a solution of (\ref{controlled_SDE}), we assume it is Lipschitz with respect to $x$.
In Section \ref{existence_uniqueness}, we presented existence and uniqueness conditions for a time-homogeneous SDE.
Following Chapter 3.3 of \cite{Skorohod}, we can extend
those conditions for the more general SDE in (\ref{controlled_SDE}).
Let us consider a function $\rho : \R^n \to \R $ satisfying (\ref{rho}).
We have the following:
\begin{itemize}
\item[(C1)] \textbf{Lipschitz condition}: there exists $K >0$ such that $\forall x,y \in \R^n$, $\forall t \in [0,T]$ and $\forall a \in A$
\begin{align}\label{Lipschitz_control}
&|b(t,x,a) - b(t,y,a)| + || \sigma(t,x,a) - \sigma(t,y,a) || \leq K |x-y|,\\
& |\gamma(t,x,a,z) - \gamma(t,y,a,z)| \leq |\rho(z)|\,|x-y|. \label{Lipschitz_control2}
\end{align}
\end{itemize}
\begin{Theorem}
The assumptions (\ref{Lipschitz_control}) and (\ref{Lipschitz_control2})
ensure that for each Lipschitz Markov control $\alpha \in \mathcal{A}$, there exists a unique strong solution of (\ref{controlled_SDE}) with given initial
conditions.
\end{Theorem}
See the discussion in Chapter 3.3 of \cite{Skorohod} (page 156).
Note that the Lipschitz conditions (\ref{Lipschitz_control}) and (\ref{Lipschitz_control2}) together with the continuity of $b$, $\sigma$, $\gamma$, imply the growth conditions:
\begin{align}
& |b(t,x,a)| + ||\sigma(t,x,a)|| \leq \tilde K (1+|x|), \\
& | \gamma(t,x,a,z) | \leq |\tilde \rho(z)| (1+|x|), \label{Growth_control2}
\end{align}
for $\tilde \rho$ satisfying (\ref{rho}) and $\tilde K > 0$.
\begin{Definition}
Let us indicate with $\mathcal{T}_{t_1,t_2}$ the \textbf{set of all stopping times} in $[t_1,t_2]$ adapted to $\{\mathcal{F}_s\}_{s\in [t_1,t_2]}$.
\end{Definition}
\begin{Theorem}
Let us consider the process described by (\ref{controlled_SDE}) and assume that the hypotheses (\ref{Lipschitz_control}) and (\ref{Lipschitz_control2}) are satisfied.
For any $k \in [0,2]$, there exists $C>0$ such that $\forall t \in [0,T]$, for $h \geq t$, $x,y \in \R^n$,
$\alpha \in \mathcal{A}$ and $\tau \in \mathcal{T}_{t,h}$
\begin{align}
\E_{t,x}\bigl[ |X_{\tau}|^k \bigr] &\leq C (1+|x|^k) \\ \label{cond_2}
\E_{t,x}\bigl[ |X_{\tau} - x|^k \bigr] &\leq C (1+|x|^k)(h-t)^{k/2} \\ \label{cond_3}
\E_{t,x}\biggl[ \biggl( \sup_{t \leq s\leq h} |X_{s} - x| \biggr)^k \, \biggr] &\leq C (1+|x|^k)(h-t)^{k/2} \\ \label{cond_4}
\E_{t,x,y}\bigl[ |X_{\tau} - Y_{\tau}|^k \bigr] &\leq C |x-y|^k
\end{align}
\end{Theorem}
\noindent where we used the simple notation $\E_{t,x}[\cdot]$ to indicate the expectation conditioned on the initial value $X_t=x$.
This theorem corresponds to the Lemma 3.1 in \cite{Ph98}.
Lévy processes and, more generally, all controlled processes with dynamics (\ref{controlled_SDE}),
are \textbf{stochastically continuous}, i.e.\ they satisfy the following corollary.
\begin{Corollary}\label{stochastic_theorem}
A process $\{X_t\}_{t\geq0}$ starting at $x=X_0$ and solution of (\ref{controlled_SDE}), for each $ \alpha \in \mathcal{A}$ taking values in the compact set $A$,
and for each $\rho>0$, satisfies
\begin{equation}
\PP \biggl( | X_t - x | \geq \rho \biggr) \underset{t\to0}{\longrightarrow} 0.
\end{equation}
\end{Corollary}
\begin{proof}
By the Markov inequality
\begin{align*}
\PP \biggl( | X_t - x | \geq \rho \biggr) & \leq \frac{\E_{x}\biggl[ |X_t - x| \biggr]}{\rho} \\
& \leq \frac{C}{\rho} (1+|x|)\sqrt{t}
\end{align*}
where we used (\ref{cond_2}), with $C>0$. Taking the limit $t\to 0$ proves the corollary.
\end{proof}
Now we are interested in the infinitesimal generator associated with the controlled SDE (\ref{controlled_SDE}). \\
For $\delta > 0$, let us define the following two useful integral operators. \\ \noindent
For $\phi \in C^{1,2}([0,T] \times \R^n) $ we define the operator:
\begin{equation}\label{integral_1}
\mathcal{I}^{1,\delta,a}(t,x,\phi) = \int_{|z| \leq \delta}
\biggl[ \phi(t,x + \gamma(t,x,a,z)) - \phi(t,x) - D_x \phi(t,x) \cdot \gamma(t,x,a,z) \biggr] \nu(dz)
\end{equation}
where $D_x \phi$ corresponds to the gradient of $\phi$.\\
For $\phi \in \mathcal{C}_2([0,T] \times \R^n) $ (see Definition \ref{Cp}) and $p \in \R^n$, we define the operator:
\begin{equation}\label{integral_2}
\mathcal{I}^{2,\delta,a}(t,x,p,\phi) = \int_{|z| > \delta}
\biggl[ \phi(t,x+ \gamma(t,x,a,z)) - \phi(t,x) - p \cdot \gamma(t,x,a,z) \biggr] \nu(dz)
\end{equation}
We can check that the integral operators $\mathcal{I}^{1,\delta,a}$ and $\mathcal{I}^{2,\delta,a}$ are well defined.
Let us consider the integral (\ref{integral_1}). For $|z| \leq \delta$ we know that $\gamma(t,x,a,z)$ is bounded, therefore we set
$ \bar \gamma(t,x,a) = \sup_{|z| \leq \delta} |\gamma(t,x,a,z)|$, for fixed $t$, $x$ and uniformly in $a$.
We can use a first order Taylor approximation on $\phi(t,x + \gamma(t,x,a,z))$ and consider the Lagrange remainder.
For some $y$ on the segment between $x$ and $x + \gamma(t,x,a,z)$, we can write:
\begin{align*}
& \bigg|\phi(t,x + \gamma(t,x,a,z)) - \phi(t,x) - D_x \phi(t,x) \cdot \gamma(t,x,a,z) \bigg| \\
& = \bigg| \frac{1}{2} \gamma(t,x,a,z)^T \cdot D_{xx} \phi(t,y) \cdot \gamma(t,x,a,z) \bigg| \\ % \sup_{y \in B(x,\rho(z)(1+|x|) )}
& \leq \frac{1}{2} \; \bigg|\bigg| D_{xx} \phi(t,y) \bigg|\bigg| \; \big| \gamma(t,x,a,z) \big|^2 \\
& \leq \frac{1}{2} \; \bigg|\bigg| D_{xx} \phi(t,y) \bigg|\bigg| \; \big| \rho(z) (1+|x|) \big|^2 \\
& \leq \frac{1}{2} M \; \big| \rho(z) (1+|x|) \big|^2.
\end{align*}
where we used the Schwarz inequality and $M = \sup_{|y - x| \leq \bar \gamma(t,x,a)} || D_{xx} \phi(t,y) ||$.
Thanks to (\ref{rho}) we can see that the integral is well defined.
Let us consider the integral (\ref{integral_2}) on $|z| > \delta$.
\begin{align*}
& \bigg|\phi(t,x + \gamma(t,x,a,z)) - \phi(t,x) - p \cdot \gamma(t,x,a,z) \bigg| \\
& \leq \big| \phi(t,x + \gamma(t,x,a,z))\big| + \big| \phi(t,x)\big| + \big|p \cdot \gamma(t,x,a,z) \big| \\
& \leq C (1 + |\rho(z)|^2)(1 + |x|^2).
\end{align*}
where we used the quadratic growth of $\phi \in \mathcal{C}_2([0,T] \times \R^n)$ and the bound (\ref{Growth_control2}) for $|\gamma(t,x,a,z)|$.
Again, thanks to (\ref{rho}), we verify that the integral is well defined.
Finally, for $\phi \in C^{1,2}([0,T] \times \R^n) \bigcap \mathcal{C}_2([0,T] \times \R^n)$ we can define the
\textbf{controlled integro-differential operator} as:
\begin{equation}\label{int_diff_oper}
\LL^{a}(t,x,\phi) := \mathcal{D}^{a} \phi(t,x) + \mathcal{I}^{a}(t,x,\phi)
\end{equation}
with
\begin{equation}
\mathcal{D}^{a} \phi(t,x) := D_x \phi(t,x) \cdot b(t,x,a) + \frac{1}{2} \mbox{Tr} \biggl[ \sigma(t,x,a)^T \, D_x^2 \phi(t,x) \, \sigma(t,x,a) \biggr]
\end{equation}
where \emph{Tr} indicates the trace of a matrix, and
\begin{align}\label{int_oper}
\mathcal{I}^{a}(t,x,\phi) &:= \mathcal{I}^{1,\delta,a}(t,x,\phi) + \mathcal{I}^{2,\delta,a}(t,x,D_x \phi,\phi)\\ \nonumber
&= \int_{\R^n}
\biggl[ \phi(t,x + \gamma(t,x,a,z)) - \phi(t,x) - D_x \phi(t,x) \cdot \gamma(t,x,a,z) \biggr] \nu(dz).
\end{align}
Starting from the definition of the infinitesimal generator and using the It\^o lemma,
it is possible to prove that the operator (\ref{int_diff_oper}) is the infinitesimal generator of the process
(\ref{controlled_SDE}); see \cite{Skorohod}, pages 179-180.\\
This operator is a generalization of the previously defined operators (\ref{genLevy}) and (\ref{gen_jumpdiff}).
We have seen that those operators are well defined for functions that vanish at infinity.
Under the additional assumption that the Lévy process has finite second moment, the operator
(\ref{int_diff_oper}) is well defined also for functions with quadratic growth, as shown above.
\noindent
The \textbf{Lévy operator} (\ref{genLevy}) is a special case of (\ref{int_diff_oper}), obtained by setting $\gamma(t,x,a,z) = z$, $\delta = 1$ and $p=0$.
The integral term becomes:
\begin{align*}
\mathcal{I}_L(t,x,\phi) &= \mathcal{I}^{1,1,0}(t,x,\phi) + \mathcal{I}^{2,1,0}(t,x,0,\phi)\\
&= \int_{\R^n} \biggl[ \phi(t,x+ z) - \phi(t,x) - D_x \phi(t,x) \cdot z \mathbbm{1}_{|z|<1}(z) \biggr] \nu(dz)
\end{align*}
The finite second moment assumption ensures that (\ref{rho}) is finite for $\rho(z) = z$, thanks to Theorem \ref{assumptionM}.
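In a simple case the integral term of the generator can be verified numerically. The sketch below (an illustration, not from the text) takes $\phi(x) = x^2$ and a hypothetical finite-activity Lévy measure $\nu(dz) = \lambda \, N(0,\delta^2)(dz)$, i.e.\ compound Poisson with Gaussian jump sizes; for this choice the integrand reduces to $z^2$ and the operator equals $\int z^2 \, \nu(dz) = \lambda \delta^2$ exactly, so the quadrature can be checked against a closed form. The parameters $\lambda$, $\delta$ are arbitrary.

```python
import numpy as np

# Numerical check (illustrative, hypothetical parameters): for phi(x) = x^2
# the integral term of the generator,
#   I(x, phi) = int [ phi(x+z) - phi(x) - phi'(x) z ] nu(dz),
# has integrand z^2, so for nu(dz) = lam * N(0, delta^2)(dz) the exact
# value is lam * delta**2, independent of x.

lam, delta = 2.0, 0.5
zs = np.linspace(-6 * delta, 6 * delta, 20001)   # truncate negligible tails
dz = zs[1] - zs[0]
nu_density = lam * np.exp(-zs**2 / (2 * delta**2)) / np.sqrt(2 * np.pi * delta**2)

phi = lambda x: x**2
x = 1.3
integrand = phi(x + zs) - phi(x) - 2 * x * zs    # phi'(x) = 2x
approx = np.sum(integrand * nu_density) * dz     # Riemann sum

print(abs(approx - lam * delta**2) < 1e-4)       # True
```

The same quadrature can be reused for any $\phi$ with quadratic growth, provided the tails of $\nu$ are integrable as required by (\ref{rho}).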
\subsection{Dynamic Programming Principle}\label{DPP_section}
Define the cylindrical region $Q = [t_0,T) \times \mathcal{O}$, with $\mathcal{O} \subseteq \R^n$ an open set.
We define the first exit time from $Q$ as
\begin{equation}\label{exit_time_def}
\tau = \inf \{ s \geq t_0: (s,X_s) \not\in Q \}.
\end{equation}
Note that $\tau$ is the exit time from $\OO$ if $X_s$ exits before time $T$. If $X_s \in \OO$ $\forall s \in [t_0,T)$, then $\tau = T$. If $\OO = \R^n$, then the set $Q$
has no lateral boundary but only the terminal boundary $\{ T \} \times \R^n$.
\noindent
Let us consider two continuous functions $f:[0,T]\times \R^n \times A \to \R$ and $g: \bigl( [0,T)\times \R^n \backslash \mathcal{O} \bigr) \bigcup \bigl( \{T\} \times \R^n \bigr) \to \R$
satisfying the conditions:
% % \begin{equation}\label{Lipschitz_f}
% % |f(t,x,a) - f(s,y,a)| + |g(t,x)-g(s,y)| \leq C (|t-s|+|x-y|).
% % \end{equation}
\begin{equation}\label{linear_growth_f}
|f(t,x,a)| \leq C (1+|x|^p), \quad \quad |g(t,x)| \leq \tilde C (1+|x|^p)
\end{equation}
for suitable non-negative constants $C$, $\tilde C$ and $p$.\\
Let us denote by $\mathcal{A}_{t,x}$ the set of processes $\alpha \in \mathcal{A}$ such that
\begin{equation}
\E_{t,x} \biggl[ \int_{t}^{\tau} |f(s,X_s,\alpha_s)| ds + g(\tau, X_{\tau}) \biggr] < \infty.
\end{equation}
\begin{Remark}
Under the current assumptions of a compact set $A$, continuous function $f$ and finite second moment of $\{X_{t}\}_{t\in [t_0,T]}$,
we can use (\ref{linear_growth_f}) with $p\in [0,2]$ and (\ref{cond_3}), to check that $\mathcal{A}_{t,x} = \mathcal{A}$.
\end{Remark}
We can define the \textbf{objective function}
\begin{equation}\label{cost_functional}
J(t,x;\alpha) = \E_{t,x} \biggl[ \int_t^{\tau} f(s,X_s,\alpha_s) ds + g(\tau, X_{\tau}) \biggr].
\end{equation}
% where $f:[0,T]\times \R^n \times A \to \R$ and $g: \bigl( [0,T)\times \R^n \backslash \mathcal{O} \bigr) \bigcup \bigl( \R^n \times \{T\} \bigr) \to \R$
% are continuous bounded functions.\\
Our goal is to maximize the objective function over the set $\mathcal{A}_{t,x}$. Let us introduce the \textbf{value function}
\begin{equation}\label{general_value_function}
V(t,x) = \sup_{\alpha \in \mathcal{A}_{t,x}} J(t,x;\alpha).
\end{equation}
If the supremum is attained for a control $\alpha \in \mathcal{A}_{t,x}$, then this control is called the
\textbf{optimal control} and is indicated by $\alpha^*$. We can write $V(t,x) = J(t,x; \alpha^*)$.\\
We always assume that the value function is
measurable. The continuity of $V(t,x)$ can be proved under different assumptions. For instance, in Lemma 3.15 of \cite{Skorohod} the continuity is proved
for $p=0$ in (\ref{linear_growth_f}). In Proposition 3.3 of \cite{Ph98} the author proves the continuity of $V(t,x)$ assuming Lipschitz $f$ and $g$.
\newline
\noindent
The dynamic programming principle (DPP) is a fundamental principle in the theory of stochastic control. It is formulated as follows.\\
\textbf{Dynamic Programming Principle}:
\emph{
\begin{itemize}
\item For all $\alpha \in \mathcal{A}_{t,x}$ and all stopping times $\theta \in \mathcal{T}_{t,\tau}$
\begin{equation}\label{DPP1}
V(t,x) \geq \E_{t,x} \biggl[ \int_t^{\theta} f(s,X_s,\alpha_s) ds + V(\theta, X_{\theta}) \biggr].
\end{equation}
\item For all $\epsilon > 0$, there exists $\alpha \in \mathcal{A}_{t,x}$ such that $\forall \theta \in \mathcal{T}_{t,\tau}$
\begin{equation}\label{DPP2}
V(t,x) \leq \E_{t,x} \biggl[ \int_t^{\theta} f(s,X_s,\alpha_s) ds + V(\theta, X_{\theta}) \biggr] + \epsilon.
\end{equation}
\end{itemize}
Since $\epsilon$ is arbitrary, we can write the DPP in the following form
\begin{equation}\label{DPP3}
V(t,x) = \sup_{\alpha \in \mathcal{A}_{t,x}} \E_{t,x} \biggl[ \int_t^{\theta} f(s,X_s,\alpha_s) ds + V(\theta, X_{\theta}) \biggr].
\end{equation} }
The proof of the DPP is very technical and can be found in many textbooks and articles on stochastic control theory, with slight variations in the hypotheses.
For instance, a proof for diffusion processes can be found in Chapter 3.3 of \cite{Pham},
where the author assumes the growth condition (\ref{linear_growth_f}) with $p=2$.
In \cite{FlemingSoner}, the authors present several problems under diffusion dynamics assuming (\ref{linear_growth_f}).
However, in the proof provided in Chapter \rom{4}.7, $f$ and $g$ are assumed to be bounded. \\
A discussion on the proof of the DPP for processes of type (\ref{controlled_SDE}) and with growth conditions (\ref{linear_growth_f}) with $p=1$ can be found in \cite{Ph98},
where the author considers a more general combined optimal control and optimal stopping problem.
Another proof in this direction is presented in \cite{Zalin11}, where the author considers pure jump symmetric processes.
In \cite{Gol16} the authors consider processes of type (\ref{controlled_SDE}) with no assumptions on the finiteness of the moments, but assuming $f$ and $g$ bounded.\\
Note that all these proofs assume a compact control set $A$.\\
An admissible control $\alpha \in \mathcal{A}_{t,x}$ is $\epsilon$-optimal conditioned on $(t,x)$ if and only if it is $\epsilon$-optimal
on every $\mathcal{A}_{\theta, X_{\theta}}$ with
$\theta \in \mathcal{T}_{t,\tau}$. In order to determine the $\epsilon$-optimal control $\alpha_t$, it suffices to consider the DPP with a stopping time $\theta$
arbitrarily close to $t$.
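The DPP also suggests a numerical scheme: discretize time and state and apply (\ref{DPP3}) backward from the terminal condition. The following sketch (an illustration with hypothetical coefficients, not a construction from the references) performs this backward induction for a one-dimensional controlled diffusion, approximating each Euler step by an up/down binomial move.

```python
import numpy as np

# Backward induction sketch of the DPP (illustrative, hypothetical model):
# time and state are discretized, and V is computed from the terminal
# condition via  V(t, x) = max_a E[ f(t, x, a) dt + V(t + dt, X_{t+dt}) ].

T, n_t = 1.0, 50
dt = T / n_t
xs = np.linspace(-2.0, 2.0, 81)             # state grid
controls = np.linspace(-1.0, 1.0, 21)       # compact control set A
sigma = 0.3

f = lambda t, x, a: -(x**2 + 0.1 * a**2)    # running reward (hypothetical)
g = lambda x: -x**2                          # terminal reward (hypothetical)

V = g(xs)                                    # terminal condition V(T, x) = g(x)
for k in range(n_t - 1, -1, -1):
    t = k * dt
    best = np.full_like(xs, -np.inf)
    for a in controls:
        # One Euler step dX = a dt + sigma dW, with dW replaced by an
        # up/down move of size sqrt(dt), each with probability 1/2.
        x_up = np.clip(xs + a * dt + sigma * np.sqrt(dt), xs[0], xs[-1])
        x_dn = np.clip(xs + a * dt - sigma * np.sqrt(dt), xs[0], xs[-1])
        cont = 0.5 * (np.interp(x_up, xs, V) + np.interp(x_dn, xs, V))
        best = np.maximum(best, f(t, xs, a) * dt + cont)
    V = best

print(V[40])   # approximate V(0, x=0); non-positive since f, g <= 0
```

The inner maximum over \texttt{controls} is the discrete analogue of the supremum over $\mathcal{A}_{t,x}$ with $\theta = t + dt$, which is exactly the infinitesimal use of the DPP formalized by the HJB equation in the next section.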
\subsection{HJB equation (formal derivation)}
In this section we assume that the value function is continuously differentiable and obtain a formal nonlinear PIDE satisfied by the value function.
In general however, the value function is not necessarily differentiable, and the notion of \emph{viscosity solution} should be considered.
The Hamilton-Jacobi-Bellman equation (HJB) is the infinitesimal version of the DPP. It describes the local behavior of the value function when the stopping time $\theta$ converges to $t$.
The HJB equation is also called \textbf{dynamic programming equation}.\\
The following heuristic derivation is taken from Section \rom{3}.7 of \cite{FlemingSoner}, where the authors assume the existence of the optimal control.
For this derivation it is necessary to assume that the control is continuous and of Markov type.
\newline
\noindent
In the DPP (\ref{DPP1}),
let us consider the stopping time $\theta = t + h$, with $h>0$ and such that $\theta \in \mathcal{T}_{t,\tau}$, and a constant control $\alpha_s = a$ for $s \in [t,\theta]$.
Subtracting $V(t,x)$ from both sides yields
\begin{align*}
0 \geq& \; \E_{t,x} \biggl[ \int_t^{t+h} f(s,X_s,a) ds + V(t+h, X_{t+h}) - V(t,x) \biggr] \\
=& \; \E_{t,x} \biggl[ \int_t^{t+h} f(s,X_s,a)
+ \biggl( \frac{\partial V}{\partial s} + \LL^{a}V \biggr)(s, X_{s}) \, ds \biggr]
\end{align*}
where we used the Dynkin formula (Eq. \ref{Dynkin_formula}), assuming $V \in C^{1,2}$, with $\LL^{a}$ the integro-differential operator (\ref{int_diff_oper}).
Dividing by $h$, sending $h \to 0$ and using the mean value theorem, we obtain the equation:
\begin{equation}
\frac{\partial V(t,x)}{\partial t} + \biggl( f(t,x,a) + \LL^{a}V(t,x) \biggr) \leq 0.
\end{equation}
Since this holds true for every $a \in A$, we can write
\begin{equation}\label{ineq_HJB}
-\frac{\partial V(t,x)}{\partial t} - \sup_{a \in A} \biggl( f(t,x,a) + \LL^{a}V(t,x) \biggr) \geq 0.
\end{equation}
On the other hand, if we suppose that the optimal control $\alpha^*$ exists, then:
\begin{equation}
0 = \E_{t,x} \biggl[ \int_t^{t+h} f(s,X^*_s,\alpha^*_s) ds + V(t+h, X^*_{t+h}) - V(t,x) \biggr],
\end{equation}
where $X^*$ is the optimal state, solution of (\ref{controlled_SDE}), under the control $\alpha^*$.
Using similar arguments as above we obtain:
\begin{equation}\label{opt_HJB}
-\frac{\partial V(t,x)}{\partial t} - \biggl( f(t,x,\alpha_t^*) + \LL^{\alpha_t^*} V(t,x) \biggr) = 0.
\end{equation}
We can combine the two results (\ref{ineq_HJB}) and (\ref{opt_HJB}) in a single compact equation, i.e.\ the \textbf{dynamic programming equation}:
\begin{equation}\label{DPE}
-\frac{\partial V(t,x)}{\partial t} - \sup_{a \in A} \biggl( f(t,x,a) + \LL^{a}V(t,x) \biggr) = 0.
\end{equation}
The terminal and lateral boundary conditions of this nonlinear equation are:
\begin{equation}\label{DPE_term_cond}
V(T,x) = g(T,x),
\end{equation}
\begin{equation}\label{DPE_lateral_cond}
V(t,x) = g(t,x) \quad \mbox{ for } \quad (t,x) \in [0,T)\times (\R^n \backslash \mathcal{O}).
\end{equation}
\begin{Remark}
The assumptions used in the derivation of (\ref{DPE}) are very restrictive.
In general, the HJB (\ref{DPE}) is well defined also for non-compact sets $A$, as long as the supremum in $a\in A$ is attained.
\end{Remark}
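To illustrate how (\ref{DPE}) determines both the value function and an optimal Markov control, consider the following elementary diffusive example (no jump component), which is not taken from the references above but can be solved in closed form. Let $n=d=m=1$, $\OO = \R$, and
\begin{equation*}
dX_t = \alpha_t \, dt + \sigma \, dW_t, \qquad f(t,x,a) = -\lambda a^2, \qquad g(T,x) = -x^2,
\end{equation*}
with constants $\sigma, \lambda > 0$ (here $A = \R$ is not compact, but the supremum below is attained). The HJB equation (\ref{DPE}) reads
\begin{equation*}
-\frac{\partial V}{\partial t} - \sup_{a \in \R} \Bigl( -\lambda a^2 + a \, D_x V \Bigr) - \frac{\sigma^2}{2} D_{xx} V = 0,
\end{equation*}
and the supremum is attained at $a^* = D_x V / (2\lambda)$. Substituting the quadratic ansatz $V(t,x) = -p(t) x^2 + q(t)$ gives the ODEs $p' = p^2/\lambda$ and $q' = \sigma^2 p$, with terminal conditions $p(T)=1$, $q(T)=0$, whose solution yields
\begin{equation*}
V(t,x) = -\frac{\lambda \, x^2}{\lambda + T - t} - \sigma^2 \lambda \log \Bigl( 1 + \frac{T-t}{\lambda} \Bigr),
\qquad \alpha^*(t,x) = -\frac{x}{\lambda + T - t}.
\end{equation*}
One can check directly that this $V$ is smooth and satisfies (\ref{DPE}) together with the terminal condition (\ref{DPE_term_cond}), so in this case the formal derivation is fully rigorous.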
% In Section \ref{singular_control} we will see that when
% $A$ is unbounded and the supremum in $a\in A$ is not finite, the HJB equation becomes a variational inequality.
\section{Singular control}\label{singular_control}
In the previous section, the theory of stochastic control for generalized jump-diffusion processes is presented under the assumption that the control process
takes values in a compact space.
In this section this assumption is relaxed, and the control space $A$ is assumed to be unbounded.
When the problem coefficients are linear functions of the control, the supremum in (\ref{DPE}) can be infinite.
In Section \ref{singular_sec1}
we present a formal derivation of the HJB equation when $A$ is unbounded and the control acts linearly. This equation has the form of a \textbf{variational inequality}.
A rigorous basis to this derivation can be given a posteriori by means of a verification theorem, or by considering the viscosity solution framework.
For more details we refer to Chapter \rom{8}.4 of \cite{FlemingSoner} or to Theorem 5.2 of \cite{OksendalSulem}.
When considering $A$ unbounded and a linear dependence of the coefficients on $\alpha$, in general there are no optimal controls, and
$\epsilon$-optimal controls may take arbitrarily large values.
For this reason it is convenient to reformulate the problem by introducing a new control variable $\xi$ defined as
\begin{equation}
\xi_t = \int_0^t \hat \alpha_s du_s,
\end{equation}
with
\begin{equation}
\hat \alpha_s := \begin{cases}
\frac{\alpha_s}{|\alpha_s|} & \mbox{if } \alpha_s \not= 0 \\
0 & \mbox{if } \alpha_s = 0 .
\end{cases}
\quad \mbox{ and } \quad
u_t = \int_0^t |\alpha_s| ds.
\end{equation}
In order to obtain optimal controls, we enlarge the class of controls to admit processes $\xi_t$ which may not be absolutely continuous functions of $t$.
We assume that $\xi_t$ is a càdlàg, predictable, non-decreasing process with bounded variation on $[0,T]$.
In Section \ref{singular_sec2} we present the general formulation for \textbf{singular control} problems.
The name \emph{singular} comes from the fact that the control process $\{\xi_t\}_{t \geq 0}$ can be singular with respect to the Lebesgue measure $dt$.
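To see why singular controls arise naturally, consider the following simple illustration (not taken from the references): for $m=1$ and fixed $t_0 \in (0,T)$, take the absolutely continuous controls $\alpha^{(k)}_s = k \, \mathbbm{1}_{[t_0, t_0 + 1/k]}(s)$. Then
\begin{equation*}
\xi^{(k)}_t = \int_0^t \alpha^{(k)}_s \, ds = k \, \bigl( (t - t_0)^+ \wedge \tfrac{1}{k} \bigr)
\underset{k \to \infty}{\longrightarrow} \mathbbm{1}_{(t_0, \infty)}(t),
\end{equation*}
so the limiting control is a unit jump at $t_0$, which is not absolutely continuous with respect to $dt$. Enlarging the class of controls as above makes such limits admissible.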
\subsection{Singular control formulation}\label{singular_sec2}
Let us consider a state system governed by the following SDE:
\begin{align}\label{singular_SDE}
dX_t =& \hat b(t,X_{t-}) dt + \hat \sigma (t,X_{t-}) dW_t \\ \nonumber
&+ \int_{\R^n} \hat \gamma(t,X_{t-},z) \tilde N(dt,dz) + \kappa(t,X_{t-}) d\xi_t .
\end{align}
where the coefficients
$\hat b: [0,T]\times \R^n \to \R^n$, $\hat \sigma: [0,T]\times \R^n \to \R^{n \times d}$,
$\hat \gamma: [0,T]\times \R^n \times \R^n \to \R^{n}$, $\kappa : [0,T] \times \R^n \to \R^{n \times m}$
are continuous in $t$ and Lipschitz-continuous in $x$.
The function $\kappa(t,x)$ must also satisfy the following:
\begin{itemize}
\item \textbf{Frobenius condition}:
\begin{equation}\label{Frobenius}
\frac{\partial \kappa^i(t,x)}{\partial x} \kappa^j(t,x) = \frac{\partial \kappa^j(t,x)}{\partial x} \kappa^i(t,x) \quad \mbox{for each} \quad 1 \leq i, j \leq m,
\end{equation}
\end{itemize}
The Frobenius condition requires that the column vector fields of the $n\times m$ matrix $\kappa(t,x)$ commute. For more information on this topic we refer to Chapter 3.3 of \cite{Arutyunov}.
The $m$-dimensional process $\xi : [0,T] \times \Omega \to \R^m$ is an $\{\mathcal{F}_t\}$-predictable, càdlàg,
non-decreasing process with bounded variation, with $\xi_{0^-} = 0$.
We also assume that
\begin{equation}\label{singular_moments}
\E\bigl[ (\xi_T)^2 \bigr] < \infty
\end{equation}
and denote with $\Pi$ the space of all controls $\xi$.
The proof of existence and uniqueness of a solution
of Eq. (\ref{singular_SDE}) can be found in \cite{DoDa76}, where it is also proved that the solution has finite second moment
(see Step 3a of Lemma 1).
Alternative proofs, where some of the assumptions above are slightly relaxed, can be found in \cite{Gal78} and \cite{GyKr80}.
For $\tau$ defined in (\ref{exit_time_def}), the objective function is:
\begin{equation}\label{singular_cost_functional}
J^{\xi}(t,x) := \E_{t,x} \biggl[ \int_t^{\tau} \hat f(s,X_{s}) ds + \int_t^{\tau} h(s,X_{s^-}) d\xi_s + g(\tau, X_{\tau}) \biggr],
\end{equation}
where $\hat f:[0,T]\times \R^n \to \R$, $h: [0,T] \times \R^n \to \R^{m}$
and $g: \bigl( [0,T)\times \R^n \backslash \mathcal{O} \bigr) \bigcup \bigl( \{T\} \times \R^n \bigr) \to \R$ are continuous functions bounded by
polynomial functions of a suitable order $p\geq0$, in order to guarantee $J^{\xi}(t,x) < \infty$.
% Let us define the set
% \begin{equation}
% \Pi_{t,x} = \{ \xi \in \Pi : J^{\xi}(t,x) < \infty \}.
% \end{equation}
The value function is
\begin{equation}\label{controlled_VF}
V(t,x) := \sup_{\xi \in \Pi} J^{\xi}(t,x).
\end{equation}
A straightforward modification of (\ref{DPP1}) and (\ref{DPP2}) yields the
\textbf{DPP for singular control}:
\emph{
\begin{itemize}
\item For all $\xi \in \Pi$ and all stopping times $\theta \in \mathcal{T}_{t,\tau}$:
\begin{equation}\label{DPP11}
V(t,x) \geq \E_{t,x} \biggl[ \int_t^{\theta} \hat f(s,X_s) ds + \int_t^{\theta} h(s,X_{s^-}) d\xi_s + V(\theta, X_{\theta}) \biggr].
\end{equation}
\item For all $\epsilon > 0$, there exists $\xi \in \Pi$ such that $\forall \theta \in \mathcal{T}_{t,\tau}$:
\begin{equation}\label{DPP22}
V(t,x) \leq \E_{t,x} \biggl[ \int_t^{\theta} \hat f(s,X_s) ds + \int_t^{\theta} h(s,X_{s^-}) d\xi_s + V(\theta, X_{\theta}) \biggr] + \epsilon.
\end{equation}
\end{itemize}
}
% \begin{center}
% \begin{riquadro}{12cm}
% \textbf{Assumption:}\\
% We assume that the value function (\ref{controlled_VF}) satisfies the DPP for singular control problem, under the hypothesis presented in this section.
% For the purpose of this thesis, we require that it holds for $f = h = 0$ and for $g$ satisfying
% \begin{equation}\label{assumption_DPP}
% \E_{t,x} \biggl[ g(\tau, X_{\tau}) \biggr] \,< \, \infty
% \end{equation}
% for all $\xi \in \Pi_{t,x}$.
% \end{riquadro}
% \end{center}
To our knowledge, there is no proof of the DPP for singular control under the assumptions presented in this section. \\
In \cite{FlemingSoner} the authors formulate the DPP in Chapter \rom{8}.5 for diffusion processes, without providing a proof.\\
In \cite{Kab16} the authors prove the DPP for a singular-regular control problem with infinite horizon.
The singular control influences only the state dynamics, described by a geometric Lévy process. The function $f$, instead, is affected by the regular control i.e.
$f(t,\alpha_t) = e^{-\beta t} \mathcal{U}(\alpha_t)$, with $\beta>0$ and $\mathcal{U} : A \to \R^+$ a concave function such that $\mathcal{U}(0)=0$ and
$\frac{\mathcal{U}(x)}{x}\to 0$ as $x\to \infty$.\\
%We argue that the proof of the DPP in our case, can be obtained by a slight modification of the proof in \cite{Kab16}.
\subsection{Derivation of the variational inequality}\label{singular_sec1}
For $(t,x,p,M,I) \in Q \times \R^n \times S^n \times \R$, let us define the Hamiltonian function:\footnote{$S^n$ is the set of symmetric matrices of dimension $n$.}
\begin{align}\label{Hamiltonian}
\mathcal{H}(t,x,p,M,I) =& \sup_{a \in A} \biggl( f(t,x,a) \, + b(t,x,a) p \\ \nonumber
&+ \frac{1}{2} \mbox{Tr} \bigl( \sigma(t,x,a)\sigma^T(t,x,a) \, M \bigr) +I \biggr)
\end{align}
where $f$, $b$, $\sigma$ and $A$ are defined as in Sections \ref{Optimal_control_framework} and \ref{DPP_section}.
Using (\ref{DPE}), we can write:
\begin{align} \nonumber
0 = & - \frac{\partial V(t,x)}{\partial t} - \mathcal{H}\bigl( t,x,D_x V, D_{xx} V, \mathcal{I}^a(t,x,V) \bigr)
\end{align}
where $\mathcal{I}^a(t,x,V)$ is the integral operator (\ref{int_oper}).
If we relax the assumption of a compact control space and consider $A$ to be unbounded, the Hamiltonian (\ref{Hamiltonian}) may be infinite.
Let us assume that $A$ is a closed cone in $\R^m$ i.e.
\begin{equation}\label{unbounded_set}
a \in A, \quad \lambda > 0 \Longrightarrow \lambda a \in A.
\end{equation}
Let us assume also that the control enters the dynamics of the system and the function $f$ linearly, as follows:
\begin{align}\label{linear_control}
& b(t,x,a) = \hat b(t,x) + \kappa(t,x) a, \quad \sigma(t,x,a) = \hat \sigma(t,x), \\ \nonumber
& \gamma(t,x,a,z) = \hat \gamma(t,x,z), \quad f(t,x,a) = \hat f(t,x) + h(t,x) a,
\end{align}
where $\kappa : [0,T] \times \R^n \to \R^{n \times m}$ is continuous and Lipschitz and $h: [0,T] \times \R^n \to \R^{m}$ is a continuous function with polynomial growth of the same order
as $f$.
Let us denote by $\hat I$ the integral operator containing $\hat \gamma(t,x,z)$.
The Hamiltonian becomes
\begin{align*}
\mathcal{H}(t,x,p,M,\hat I) =& \biggl( \hat f(t,x) \, + p \, \hat b(t,x) \\ \nonumber
&+ \frac{1}{2} \mbox{Tr} \bigl( \hat \sigma(t,x)\hat \sigma^T(t,x) \, M \bigr) +\hat I \biggr) +
\underbrace{\sup_{a \in A} \biggl( \bigl( p \, \kappa(t,x) + h(t,x) \bigr) a \biggr) }_{\hat H(t,x,p)}.
\end{align*}
If for some $a\in A$ and fixed $(t,x)$ we have $\bigl( p \, \kappa(t,x) + h(t,x) \bigr) a>0$, then $\hat H(t,x,p) = \infty$, since by the cone property (\ref{unbounded_set}) we can scale $a$ by an arbitrarily large $\lambda>0$. We can define
\begin{equation}\label{set_K}
H(t,x,p) = \sup_{a \in \hat K} \biggl( \bigl( p \, \kappa(t,x) + h(t,x) \bigr) a \biggr) \quad \mbox{ with } \quad \hat K = \{ a \in A: |a|=1 \}
\end{equation}
such that
\begin{equation}
\hat H(t,x,p) = \begin{cases}
\infty, & \mbox{if } H(t,x,p) >0 \\
0, & \mbox{if } H(t,x,p) \leq 0 .
\end{cases}
\end{equation}
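The dichotomy above can be illustrated numerically. The following sketch is not part of the thesis: it assumes the closed cone $A = \R^2_+$ (the nonnegative orthant) and a fixed coefficient vector standing in for $p \, \kappa(t,x) + h(t,x)$; the helper `H_on_unit_cone` is a hypothetical Monte-Carlo approximation of $H$ over $\hat K$.

```python
import numpy as np

def H_on_unit_cone(c, n_samples=20_000, seed=0):
    """Monte-Carlo approximation of H(c) = sup{ c.a : a in A, |a| = 1 }
    for the closed cone A = R^2_+ (nonnegative orthant)."""
    rng = np.random.default_rng(seed)
    a = np.abs(rng.normal(size=(n_samples, 2)))     # directions inside the cone
    a /= np.linalg.norm(a, axis=1, keepdims=True)   # restrict to the unit sphere
    return float(np.max(a @ c))

# If H(c) > 0, there is a unit direction a with c.a > 0; since lambda*a
# stays in the cone for every lambda > 0, the sup over all of A is +infinity.
assert H_on_unit_cone(np.array([1.0, -2.0])) > 0    # here hat-H = +infinity

# If H(c) <= 0, then c.a <= 0 on the whole cone and the sup is attained
# at a = 0, giving hat-H = 0.
assert H_on_unit_cone(np.array([-1.0, -2.0])) <= 0  # here hat-H = 0
```

The scaling argument in the first comment is exactly why restricting the supremum to the unit sphere $\hat K$ loses no information: only the sign of $H$ matters.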
Combining this dichotomy with the HJB (\ref{DPE}) evaluated at $a=0$, we obtain the following two conditions:
\begin{equation}
\begin{cases}
\frac{\partial V(t,x)}{\partial t} + \hat f(t,x) + \LL^{a=0}V(t,x) \leq 0, \\
H\bigl(t,x, D_x V(t,x) \bigr) \leq 0.
\end{cases}
\end{equation}
Now, suppose $H(t,x,D_x V(t,x)) < 0$. Then $\hat H\bigl(t,x,D_x V(t,x)\bigr)=0$, the optimal control is $a=0$, and the uncontrolled HJB equation holds with equality:
\begin{equation}
H\bigl( t,x, D_x V(t,x) \bigr) < 0 \quad \Longrightarrow \quad \frac{\partial V(t,x)}{\partial t} + \hat f(t,x) + \LL^{a=0}V(t,x) = 0.
\end{equation}
The two conditions above can be combined in a more compact form. We obtain the following \textbf{variational inequality}:
\begin{equation}\label{variational_inequality}
\max \biggl\{ \frac{\partial V(t,x)}{\partial t} + \hat f(t,x) + \LL^{a=0}V(t,x) \,,\, H\bigl( t,x, D_x V(t,x) \bigr) \biggr \} =0 \quad \mbox{for} \quad (t,x)\in Q.
\end{equation}
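Since $\max\{A,B\}=0$ holds if and only if both terms are nonpositive and at least one of them vanishes, (\ref{variational_inequality}) can equivalently be read as a complementarity system:

```latex
% Equivalent complementarity formulation of the variational inequality:
% max{A, B} = 0  <=>  A <= 0,  B <= 0,  and  A*B = 0.
\begin{align*}
&\frac{\partial V(t,x)}{\partial t} + \hat f(t,x) + \LL^{a=0}V(t,x) \leq 0,
\qquad H\bigl(t,x, D_x V(t,x)\bigr) \leq 0, \\
&\biggl( \frac{\partial V(t,x)}{\partial t} + \hat f(t,x) + \LL^{a=0}V(t,x) \biggr)
\cdot H\bigl(t,x, D_x V(t,x)\bigr) = 0 \quad \mbox{for} \quad (t,x)\in Q.
\end{align*}
```

The product form in the second line encodes "at least one term vanishes", because both factors are nonpositive.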
Under certain assumptions, it can be proved that the value function (\ref{controlled_VF}) is a viscosity solution of the variational inequality
(\ref{variational_inequality}).
\section{Definition of viscosity solution}\label{viscosity_solution_section}
This section is dedicated to the definition of viscosity solutions.
In the literature there are different definitions of viscosity solution depending on the context.
For instance, the theory presented in \cite{Pham} considers only the PDE case, while in \cite{Cont}
only linear PIDEs are considered.
Other important references are \cite{FlemingSoner}, \cite{Ph98} and \cite{BaIm08} among others.
In the general approach of discontinuous viscosity solutions, there is no
need to prove the continuity of the value function $V$ a priori: continuity follows from
a strong comparison principle.
\noindent
Here we present the definition for continuous viscosity solutions introduced in \cite{Kab16}, which is suitable for the problem proposed in Chapter \ref{Chapter5} of this thesis.
\newline
% Recall the definition of semi-continuous function:
% \begin{Definition}
% Suppose $\mathcal{X}$ is a topological space and $x_0$ is a point in $\mathcal{X}$.
% The function $f: \mathcal{X} \to \R$ is \textbf{upper semi-continuous} (USC) in $x_0$ if for every $\epsilon > 0$ exists a neighborhood of $x_0$, $U_{x_0}$, such that
% $$ f(x) \leq f(x_0) + \epsilon \quad \quad \forall x \in U_{x_0}. $$
% \end{Definition}
% \begin{Definition}
% Under the same assumptions,
% the function $f: \mathcal{X} \to \R$ is \textbf{lower semi-continuous} (LSC) in $x_0$ if for every $\epsilon > 0$ exists a neighborhood of $x_0$, $U_{x_0}$, such that
% $$ f(x) \geq f(x_0) - \epsilon \quad \quad \forall x \in U_{x_0}. $$
% \end{Definition}
\noindent
Let us consider a general parabolic problem:
\begin{equation}\label{parabolic_PIDE}
\begin{cases}
F(t,x,u,D_t u,D_x u,D_{xx}u,\I(t,x,u)) = 0 \quad \mbox{ for } \quad (t,x) \in Q \\
u(t,x) = g(t,x) \quad \mbox{ for } \quad (t,x) \not \in Q
\end{cases}
\end{equation}
where $g \in C^0 \bigcap \mathcal{C}_2([0,T] \times (\R^n \backslash \OO))$ is a given function and
$F$ is a continuous function that satisfies the following local and nonlocal ellipticity/parabolicity conditions.\footnote{Although we use the same notation,
the functions and variables introduced in this section are not the same as those of the previous sections.}
For all $t\in [0,T); \; x \in \OO; \; r,\hat r \in \R; \; q,\hat q \in \R; \;
p \in \R^n; \; M,\hat M \in \SI^n; \; \I, \hat \I \in \R$:
\begin{itemize}
\item $r \leq \hat r \; \Longrightarrow \; F(t,x,r,q,p,M,\I) \leq F(t,x,\hat r,q,p,M,\I) $
\item $q \leq \hat q \; \Longrightarrow \; F(t,x,r,q,p,M,\I) \geq F(t,x,r,\hat q,p,M,\I) $
\item $M \leq \hat M \; \Longrightarrow \; F(t,x,r,q,p,M,\I) \geq F(t,x,r,q,p,\hat M,\I) $
\item $\I \leq \hat \I \; \Longrightarrow \; F(t,x,r,q,p,M,\I) \geq F(t,x,r,q,p,M,\hat \I). $
\end{itemize}
where the matrix ordering is understood in the following sense:
$$ \hat M \geq M \Leftrightarrow \hat M - M \mbox{ is positive semi-definite.} $$
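As a concrete illustration (not taken from the references above), consider the linear operator $F = \beta r - q - \frac{1}{2}\mbox{Tr}(M) - I$, a discounted heat-type operator with a hypothetical discount rate $\beta > 0$; the arguments $t$, $x$ and $p$ are dropped since this $F$ does not depend on them. The sketch below checks the four monotonicity conditions numerically, building $\hat M \geq M$ by adding a positive semi-definite perturbation:

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 0.5   # hypothetical discount rate, beta > 0

def F(r, q, M, I):
    """Discounted heat-type operator: F = beta*r - q - 0.5*Tr(M) - I."""
    return beta * r - q - 0.5 * np.trace(M) - I

r, q, I = 1.0, 0.3, 0.2
M = rng.normal(size=(3, 3))
M = (M + M.T) / 2.0                 # a symmetric matrix in S^3

# Build M_hat >= M in the semi-definite order: B @ B.T is always PSD.
B = rng.normal(size=(3, 3))
M_hat = M + B @ B.T

assert F(r, q, M, I) <= F(r + 1.0, q, M, I)   # nondecreasing in r
assert F(r, q, M, I) >= F(r, q + 1.0, M, I)   # nonincreasing in q
assert F(r, q, M, I) >= F(r, q, M_hat, I)     # nonincreasing in M
assert F(r, q, M, I) >= F(r, q, M, I + 1.0)   # nonincreasing in I
```

The $M$-monotonicity reduces to $\mbox{Tr}(\hat M) \geq \mbox{Tr}(M)$, since the trace of a positive semi-definite perturbation is nonnegative.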
Having in mind our specific problem, we will assume that the viscosity subsolution and supersolution are continuous on $[0,T]\times \R^n$.
We can now introduce the definitions:
\begin{Definition}
A continuous function $u$ is a \textbf{viscosity subsolution} of (\ref{parabolic_PIDE})
if for any $(\bar t, \bar x) \in [0,T]\times \R^n$ and any test function $ \phi \in C^{1,2}([0,T] \times \R^n) \bigcap \mathcal{C}_2([0,T] \times \R^n)$
such that $u-\phi$ has a global maximum at $(\bar t,\bar x)$ the following is satisfied:
\begin{align}\label{subsolution}
& F\biggl( \bar t,\bar x,u(\bar t,\bar x),D_t \phi(\bar t,\bar x), D_x \phi(\bar t,\bar x),D_{xx}\phi(\bar t,\bar x),\I(\bar t, \bar x,\phi(\bar t,\bar x))\biggr) \leq 0 \\ \nonumber
& \quad \quad \mbox{ for } \quad (\bar t,\bar x) \in Q.
\end{align}
\end{Definition}
\noindent
In the same way we define:
\begin{Definition}
A continuous function $u$ is a \textbf{viscosity supersolution} of (\ref{parabolic_PIDE})
if for any $(\bar t, \bar x) \in [0,T]\times \R^n$ and any test function $ \phi \in C^{1,2}([0,T] \times \R^n) \bigcap \mathcal{C}_2([0,T] \times \R^n)$
such that $u-\phi$ has a global minimum at $(\bar t,\bar x)$ the following is satisfied:
\begin{align}\label{supersolution}
& F\biggl( \bar t,\bar x,u(\bar t,\bar x),D_t \phi(\bar t,\bar x), D_x \phi(\bar t,\bar x),D_{xx}\phi(\bar t,\bar x),\I(\bar t, \bar x,\phi(\bar t,\bar x))\biggr) \geq 0 \\ \nonumber
& \quad \quad \mbox{ for } \quad (\bar t,\bar x) \in Q.
\end{align}
\end{Definition}
\noindent
When a function is both a viscosity subsolution and supersolution, it is called a \textbf{viscosity solution}.
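A classical one-dimensional example, not discussed in the references above, helps to fix ideas: $u(x) = |x|$ is a viscosity solution of the first-order equation $1 - |u'(x)| = 0$, while it fails the supersolution property for the opposite sign $|u'(x)| - 1 = 0$ at the kink. The sketch below checks the test-function conditions on a grid, using quadratic test functions $\phi(x) = px + cx^2$; all helper names are hypothetical.

```python
import numpy as np

# Candidate viscosity solution: u(x) = |x| for F(u') = 1 - |u'| = 0.
u = np.abs
F = lambda p: 1.0 - abs(p)

xs = np.linspace(-1.0, 1.0, 2001)          # grid around the kink at x = 0

def touches_from_below(p, c):
    """Does phi(x) = p*x + c*x**2 touch u from below at x = 0,
    i.e. does u - phi attain its (grid) minimum at the middle node?"""
    phi = p * xs + c * xs**2
    return int(np.argmin(u(xs) - phi)) == len(xs) // 2

# Supersolution condition at the kink: every quadratic test function
# touching from below has slope |p| <= 1, hence F(p) = 1 - |p| >= 0.
for p in np.linspace(-2.0, 2.0, 41):
    for c in (-1.0, 0.0, 0.5):
        if touches_from_below(p, c):
            assert F(p) >= -1e-9

# With the opposite sign G(p) = |p| - 1, the test function phi(x) = -x**2
# touches u from below at the kink but G(0) = -1 < 0: the supersolution
# condition fails, so |x| is NOT a viscosity solution of |u'| - 1 = 0.
G = lambda p: abs(p) - 1.0
assert touches_from_below(0.0, -1.0) and G(0.0) < 0
```

The subsolution condition holds trivially at the kink: no smooth $\phi$ can touch $|x|$ from above at $0$, so there is nothing to check there, while away from $0$ one has $|u'| = 1$ exactly.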
% \begin{align}\label{subsolution}
% & F\biggl( \bar t,\bar x,u(\bar t,\bar x),D_t \phi(\bar t,\bar x), D_x \phi(\bar t,\bar x),D_{xx}\phi(\bar t,\bar x),\I(\bar t, \bar x,\phi(\bar t,\bar x))\biggr) \leq 0 \\ \nonumber
% & \quad \quad \mbox{ for } \quad (\bar t,\bar x) \in Q, \\ \nonumber
% & \min \biggl\{ F \biggl( \bar t,\bar x,u(\bar t,\bar x), D_t \phi(\bar t,\bar x), D_x \phi(\bar t,\bar x),
% D_{xx}\phi(\bar t,\bar x),\I(\bar t, \bar x,\phi(\bar t,\bar x)) \biggr) ,\\ \nonumber
% & \quad \quad u(\bar t,\bar x) - g(\bar t,\bar x) \biggr\} \leq 0 \quad \mbox{ for } \quad (\bar t,\bar x) \in \partial Q, \\ \nonumber
% & u(\bar t,\bar x) - g(\bar t,\bar x) \leq 0 \quad \mbox{ for } \quad (\bar t,\bar x) \not \in \bar Q.
% \end{align}
%
% \begin{align}\label{supersolution}
% & F\biggl( \bar t,\bar x,u(\bar t,\bar x),D_t \phi(\bar t,\bar x), D_x \phi(\bar t,\bar x),D_{xx}\phi(\bar t,\bar x),\I(\bar t, \bar x,\phi(\bar t,\bar x))\biggr) \geq 0 \\ \nonumber
% & \quad \quad \mbox{ for } \quad (\bar t,\bar x) \in Q, \\ \nonumber
% & \max \biggl\{ F \biggl( \bar t,\bar x,u(\bar t,\bar x), D_t \phi(\bar t,\bar x), D_x \phi(\bar t,\bar x),
% D_{xx}\phi(\bar t,\bar x),\I(\bar t, \bar x,\phi(\bar t,\bar x)) \biggr) ,\\ \nonumber
% & \quad \quad u(\bar t,\bar x) - g(\bar t,\bar x) \biggr\} \geq 0 \quad \mbox{ for } \quad (\bar t,\bar x) \in \partial Q, \\ \nonumber
% & u(\bar t,\bar x) - g(\bar t,\bar x) \geq 0 \quad \mbox{ for } \quad (\bar t,\bar x) \not \in \bar Q.
% \end{align}
\section{Chapter conclusions}
This chapter serves as a brief introduction to the theory of stochastic control for processes with jump and diffusion components.
The topics in the initial Section \ref{Optimal_control_framework} follow closely the presentations given in \cite{Skorohod} and \cite{Ph98}.
In the same section, we introduce the important \emph{dynamic programming principle}, and derive the HJB equation for ``regular'' controls.
In this presentation we assume a compact control set.
In Section \ref{singular_control} we extend the theory to \emph{singular controls}.
In this framework the control set is assumed to be unbounded, and the class of controls is enlarged to include processes that are not absolutely continuous.
We also present the dynamic programming principle for singular control problems, and assume that it is satisfied by the value function.
This assumption is important because the DPP will be used in the proof of existence of a viscosity solution in Section \ref{Existence_viscosity}.
For completeness, we started the presentation of this chapter with the theory for regular controls.
However, in this thesis we are mainly interested in the theory for singular controls, since it covers the portfolio optimization problems with transaction costs
that we formulate in Chapter \ref{Chapter5}.\\
The end of the chapter is dedicated to the definition of viscosity solution.
% The existence proof frequently makes use of stopping times to ensure that a stochastic
% process $\{X_t\}_{t\geq0}$, started at $x = X_0$, is contained in some small set. This works for continuous processes, because for a stopping time defined as
% \begin{equation}
% \tau_{\rho} = \inf\{ s \geq 0 : |X_s - x| \geq \rho \} \wedge N
% \end{equation}
% with $\rho >0$ and $N > 0$, the condition $|X_{\tau_{\rho}} - x| \leq \rho$ always holds. For a process including (non-predictable) jumps, however,
% $|X_{\tau_{\rho}} - x|$ may be greater than $\rho$.
%
% Lévy processes and, more in general, all the controlled processes with dynamics as in Eq. (\ref{controlled_SDE}),
% are \textbf{stochastically continuous}, which
% means that the probability of $X_{t}$ being outside $\mathcal{B}(x, \rho )$ converges to zero, if $t\to 0$.