\chapter{Modeling with Lévy Processes}\label{Chapter1}
%\blindtext
\minitoc% Creating an actual minitoc
\vspace{5em}
In mathematical finance, Lévy processes are a powerful tool to describe the observed reality of financial markets.\\
It is common to model the market dynamics in a continuous time setting by means of the \emph{log-returns}
\begin{equation}
\log S_{t+\Delta t} - \log S_t = \log \left(\frac{S_{t+\Delta t}}{S_t}\right),
\end{equation}
where $S_t$ is the spot price of a financial asset at time $t$.\\
Log-returns are preferred to the \emph{relative price change} $(S_{t+\Delta t} - S_t )/S_t$ because the sum of log-returns
over $n$ consecutive periods equals the log-return over the whole period $n \Delta t$.
A further reason to use log-returns, rather than modeling the prices $\{S_t\}_{t \geq0}$ directly, is that they have better statistical properties.
Furthermore, log-returns can assume negative values, and thus can be modeled by distributions
with ``nicer'' analytical properties.\\
Since the renowned paper of \cite{BS73}, a common assumption is that the log-returns over a period of length $t$ are
normally distributed as $\mathcal{N}(\mu t,\sigma^2 t)$.
This is largely due to the fact that the normal distribution, as well as the continuous-time process
it generates (Brownian motion), has nice analytical properties.
Under this assumption, the dynamics of log-returns follows the Brownian motion process
\begin{equation}\label{GBM}
\log \left( \frac{S_t}{S_0} \right) = \mu t + \sigma W_t ,
\end{equation}
with constant drift $\mu \in \R$ and constant volatility $\sigma >0$, where $\{W_t\}_{t \geq0}$ is a \emph{standard Brownian motion}.\\
This model guarantees the positivity of prices. By It\={o}'s formula, we obtain the stochastic differential equation for the price
\begin{equation}\label{GBM_sde}
\frac{d S_t}{S_t} = (\mu + \frac{1}{2} \sigma^2) d t + \sigma dW_t .
\end{equation}
The process $\{S_t\}_{t \geq0}$ is called \emph{geometric Brownian motion}.\\
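The dynamics (\ref{GBM}) can be simulated directly, since Brownian increments on a uniform grid are i.i.d.\ Gaussian. The following is a minimal NumPy sketch with purely illustrative parameter values (not calibrated to any market data):

```python
import numpy as np

def simulate_gbm_log_returns(mu, sigma, T, n_steps, n_paths, seed=0):
    """Simulate log(S_t / S_0) = mu*t + sigma*W_t on a uniform time grid."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Brownian increments are i.i.d. N(0, dt)
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    t = dt * np.arange(1, n_steps + 1)
    return mu * t + sigma * np.cumsum(dW, axis=1)

# At time T the simulated log-return is exactly N(mu*T, sigma^2*T)
paths = simulate_gbm_log_returns(mu=0.05, sigma=0.2, T=1.0, n_steps=252, n_paths=20000)
```

The sample mean and standard deviation of the terminal log-returns should be close to $\mu T$ and $\sigma\sqrt{T}$.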
A thorough look at data from various areas of finance reveals that the normality assumption
is not a very good approximation of reality.
Indeed, empirical return distributions have substantially
more mass around the origin and along the tails (\emph{heavy tails}), see \cite{Cont01}.
This means that the normal distribution underestimates the probability of large positive and negative returns.
In real markets, instead, returns frequently exhibit high peaks, which become more and more evident
at short time scales.
These peaks correspond to sudden large changes in the price, which cannot be reproduced by the dynamics of Brownian motion.\\
Brownian motion, which is a scale-invariant process with continuous paths, can be a good approximation at long
time scales ($\Delta t \sim$ months to years), but it is not a good model to reproduce
the peaks of the log-returns at short time scales.
In the last thirty years, a lot of research has been done on processes with jumps and their applications to financial derivatives.\\
Lévy processes belong to the larger family of semimartingales.
If $\{X_t\}_{t \ge 0}$ is a Lévy process, relevant quantities such as the stochastic integral $\int_0^t \phi_s\, dX_s$ or a non-linear function
$f(t,X_t)$ are in general no longer Lévy processes. For some applications it is therefore important to consider the larger class of \emph{semimartingales}, which is closed
with respect to integration and non-linear transformations.
A general theory for semimartingales is presented in the books
of \cite{Protter} and \cite{JacodShi}.\\
\cite{Sato} is a complete reference book for the theory of Lévy
processes and their analytical properties.
\cite{Applebaum} presents Lévy processes with more emphasis on stochastic calculus and stochastic differential equations (SDEs).
For a comprehensive guide to applications of Lévy processes in finance some good sources are the books of
\cite{Cont} and \cite{Schoutens}.\\
Among the most popular Lévy processes applied to finance, it is worth mentioning:
\begin{itemize}
\item[-] the Merton jump-diffusion model \cite{Me76};
\item[-] the Kou jump-diffusion model \cite{Kou02};
\item[-] the $\alpha$-stable processes \cite{Ma63}, \cite{BoPoCo97}, \cite{alpha09};
\item[-] the Variance-Gamma (VG) process \cite{MaSe90}, \cite{MCC98};
\item[-] the Normal-Inverse-Gaussian (NIG) process \cite{BN97};
\item[-] the hyperbolic Lévy processes \cite{EbKe95};
\item[-] the Carr-Geman-Madan-Yor (CGMY) model \cite{CGMY02}.
\end{itemize}
This chapter reviews the most important points concerning the theory of Lévy processes and the stochastic calculus applied to jump processes.
A remarkable emphasis is given to the presentation of the exponential Lévy models used in this thesis:
the \emph{Merton model} and the \emph{Variance Gamma model}.
\section{Properties of Lévy processes}
\subsection{Basic definitions}
Let $\{X_t\}_{t \ge 0}$ be a stochastic process taking values in $\R^n$,
defined on a probability space $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t \ge 0},\PP)$,
where we assume $\{\mathcal{F}_t\}_{t \ge 0}$ is the natural filtration\footnote{The natural filtration is defined as $\mathcal{F}_{t} = \sigma\{X_s :
0\leq s \leq t\} $. Any process $\{X_t\}_{t \geq 0}$ is adapted to its own filtration.}.\\
\begin{Definition}\label{LevyDef}
We say that $\{X_t\}_{t \ge 0}$ is a \textbf{Lévy Process} if the following conditions hold:
\begin{itemize}
\item[(\textbf{L1})] $X_{0} = 0$.
% \item[(\textbf{L2})] $\{X_t\}_{t \ge 0}$ has independent and stationary increments:\\ For each sequence $t_1,t_2,...,t_n$ such that $0<t_1<t_2 <... <t_n<\infty$
% $$ X_{t_{j+1}} - X_{t_j} \mbox{are independent.} $$
% $$ X_{t_{j+1}} - X_{t_j} \; \overset{d}{=} \; X_{t_{j+1}- t_{j} }. $$
\item[(\textbf{L2})] $\{X_t\}_{t \ge 0}$ has independent increments i.e. $X_t - X_s$ is independent of $\mathcal{F}_s$ for any $t > s \geq 0$.
\item[(\textbf{L3})] $\{X_t\}_{t \ge 0}$ has stationary increments i.e. for any $s,t \geq 0$, the distribution of $X_{t+s} - X_t$ does not depend on $t$.
\item[(\textbf{L4})] $\{X_t\}_{t \ge 0}$ is stochastically continuous: $\forall \epsilon > 0 $ and $\forall t \ge 0$ $$\lim_{h\to 0} \PP(|X_{t+h}-X_t| > \epsilon)=0. $$
\end{itemize}
\end{Definition}
It is well known (see \cite{Protter}, Chapter 1, Theorem 30) that a Lévy process has a modification with ``càdlàg''
paths, i.e. paths which are right-continuous and have left limits. \\
Lévy processes are intrinsically connected with infinitely divisible distributions. In particular, the Lévy-Khintchine formula
for infinitely divisible random variables is an essential tool for the classification of Lévy processes by the form of their
characteristic functions.\\
In the following, let us present some definitions. In $\R^n$ we indicate a vector of random variables with $X = (X^1, ..., X^n)$.\footnote{
Let us recall some basic definitions. We denote the inner product for $x,y \in \R^n$ by
$$(x,y) := \sum_{i=1}^n x_i y_i ,$$ and the Euclidean norm by $$|x| := \sqrt{(x,x)}.$$ }
\begin{Definition} \label{chf}
Let $X$ be a random variable taking values in $\R^n$.\\
The \textbf{Characteristic function} $\phi_X:\R^n \to \C$ of $X$ is defined by
\begin{align}
\phi_{X}(u) &= \E [e^{i(u,X)}] \nonumber \\
&= \int_{\Omega} e^{i(u,X)} \PP(d\omega) \nonumber \\
&= \int_{\R^n} e^{i(u,x)} f_X(x)\, dx,
\end{align}
for each $u \in \R^n$, where $f_X$ denotes the \textbf{probability density function} (pdf) of $X$ (when it exists).
\end{Definition}
For each $1 \leq j \leq n$ and $p \in \N$, if $\E\bigl[ |(X^j)^{p}| \bigr] < \infty$, then
\begin{equation}\label{moments}
\E\biggl[ (X^j)^{p} \biggr] = i^{-p}\frac{\partial^p}{\partial u_j^{p}} \phi_X(u) \biggr|_{u=0} .
\end{equation}
With this property it is straightforward to compute the moments of each component of the random vector, as long as we know the analytic form
of the characteristic function.
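As an illustration of (\ref{moments}), the moments can be recovered numerically by differentiating the characteristic function at $u=0$. The sketch below is 1-dimensional and uses central finite differences in place of exact derivatives; the $\mathcal{N}(1.5, 0.5^2)$ example is purely illustrative:

```python
import numpy as np

def moment_from_cf(cf, p, h=1e-4):
    """Approximate E[X^p] (p = 1 or 2) from the characteristic function,
    via the relation E[X^p] = i^{-p} (d^p/du^p) phi_X(u) at u = 0."""
    if p == 1:
        d = (cf(h) - cf(-h)) / (2.0 * h)          # central first difference
    elif p == 2:
        d = (cf(h) - 2.0 * cf(0.0) + cf(-h)) / h**2  # central second difference
    else:
        raise ValueError("only p = 1, 2 in this sketch")
    return (d / 1j**p).real

# Characteristic function of a N(1.5, 0.5^2) random variable
cf_normal = lambda u: np.exp(1j * 1.5 * u - 0.5 * 0.5**2 * u**2)

m1 = moment_from_cf(cf_normal, 1)   # ~ 1.5 (the mean)
m2 = moment_from_cf(cf_normal, 2)   # ~ 1.5^2 + 0.5^2 = 2.5 (second moment)
```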
The following properties hold for all $p>0$:
\begin{itemize}
\item $\E \bigl[ |X|^p \bigr] < \infty $ if and only if $\E \bigl[ |(X^j)|^p \bigr] < \infty $ for each $1 \leq j \leq n$.
\item If $\E \bigl[ |X|^p \bigr] < \infty $ then $\E \bigl[ |X|^q \bigr] < \infty $ for all $0 < q < p$.
\end{itemize}
For more information see Sections 1.1.2 and 1.1.6 of \cite{Applebaum}.
\begin{Definition}\label{inf_div}
Let $X$ be a random variable taking values in $\R^n$.
We say that X is \textbf{infinitely divisible} if for all $m \in \N$ there exist i.i.d. random variables $X_1^{(m)},...,X_m^{(m)}$
such that
\begin{equation}
X \overset{d}{=} X_1^{(m)} + ... + X_m^{(m)}.
\end{equation}
The superscript $(m)$ serves as a reminder that the random variables depend on the initial choice of $m\in \N$.
\end{Definition}
\begin{Theorem}
For each $t\geq0$, the random variable $X_t$ of a Lévy process $\{X_t\}_{t \ge 0}$ is infinitely divisible.
\end{Theorem}
\begin{proof}
For each $m \in \N$ we can write
$$ X_t = \bigl( X_1^{(m)} \bigr)_t + ... + \bigl( X_m^{(m)} \bigr)_t $$
where $ \bigl( X_k^{(m)} \bigr)_t = X_{\frac{kt}{m}} - X_{\frac{(k-1)t}{m}} $ are i.i.d. by Definition \ref{LevyDef}.
\end{proof}
The converse implication also holds.
\begin{Theorem}
Every infinitely divisible distribution is the distribution of $X_1$ for some Lévy process $\{X_t\}_{t \ge 0}$.
\end{Theorem}
A proof of the previous theorem can be found in \cite{Applebaum} (Corollary 1.4.6).
Other properties of infinitely divisible distributions and their connections with Lévy processes can be found in Chapter 2 of \cite{Sato}.\\
\subsection{Lévy-Khintchine representation}
We now present a beautiful formula, first established by Paul Lévy and A.~Ya.~Khintchine in the 1930s,
which gives a characterization of every infinitely divisible random variable.\\
\begin{Definition} \label{Levy_measure}
Let $\nu(dx)$ be a Borel measure on $\R^n$. We say it is a \textbf{Lévy measure} if it satisfies
\begin{equation}
\nu (\{ 0 \} ) = 0,
\end{equation}
\begin{equation} \label{Levy_m}
\int_{\R^n} (1\wedge |x|^2) \, \nu(dx) < \infty.
\end{equation}
\end{Definition}
The characteristic function of an infinitely divisible random variable admits the following \textbf{Lévy-Khintchine representation}:
\begin{Theorem}
Let $X$ be an infinitely divisible random variable. Then there exist $b\in \R^n$, a symmetric positive semi-definite $n\times n$ matrix $A$
and a Lévy measure $\nu$ on $\R^n$, such that $\forall u \in \R^n$:
\begin{align}
\phi_X(u) &= \mathbb{E} [e^{i(u,X)}] \nonumber \\
&= \exp \left[ i(b,u) - \frac{1}{2}(u,Au) + \int_{\R^n}
\bigl( e^{i(u,x)} -1 -i(u,x) \mathbbm{1}_{(|x|<1)}(x) \bigr) \nu(dx) \right] \\
&= e^{\eta(u)}. \nonumber
\end{align}
\end{Theorem}
A proof can be found in \cite{Applebaum} (Theorem 1.2.14).
We call the map $\eta : \R^n \to \C$, the \textbf{Lévy symbol}.
Now we can easily obtain the Lévy-Khintchine representation for a Lévy process.
\begin{Theorem}
If $\{X_t\}_{t \ge 0}$ is a Lévy process, then
\begin{equation}\label{Levy_Kint}
\phi_{X_t}(u) = e^{t \eta(u)},
\end{equation}
where $\eta$ is the Lévy symbol of the random variable $X_1$.
\end{Theorem}
For a proof of this theorem see Theorem 1.3.3 in \cite{Applebaum}.
The triplet $(b, A, \nu)$ is called \textbf{Lévy triplet}, and completely characterizes the Lévy process.
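The representation (\ref{Levy_Kint}) can be checked empirically on a simple example. The sketch below (hypothetical parameters) compares a Monte Carlo estimate of $\E[e^{iuX_t}]$ with $e^{t\eta(u)}$ for a compound Poisson process with standard normal jumps, whose Lévy symbol has the closed form used in the code:

```python
import numpy as np

# Compound Poisson process with rate lam and standard normal jumps:
# nu(dx) = lam * N(0,1)(dx), and Levy symbol eta(u) = lam * (e^{-u^2/2} - 1).
rng = np.random.default_rng(1)
lam, t, u = 2.0, 1.5, 0.8

# Sample X_t = sum of a Poisson(lam*t) number of N(0,1) jumps
n_jumps = rng.poisson(lam * t, size=100000)
X_t = np.array([rng.normal(size=k).sum() for k in n_jumps])

eta = lam * (np.exp(-0.5 * u**2) - 1.0)
empirical_cf = np.exp(1j * u * X_t).mean()
exact_cf = np.exp(t * eta)          # phi_{X_t}(u) = e^{t * eta(u)}
```

The two quantities should agree up to Monte Carlo error of order $1/\sqrt{\text{samples}}$.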
\subsection{Random measures}\label{random_measures}
A convenient tool for analyzing the jumps of a Lévy process is the random
measure of the jumps.
The jump process $\{\Delta X_t\}_{t \ge 0}$ associated to the Lévy process $\{X_t\}_{t \ge 0}$ is
defined, for each $t \geq 0$ , by
\begin{equation}\label{jump}
\Delta X_t = X_t - X_{t^-}
\end{equation}
where $X_{t^-} = \lim_{s\uparrow t} X_s $.\\
In the following, we indicate with $\mathcal{B}(\R^n)$
the Borel $\sigma$-algebra of $\R^n$, i.e. the smallest $\sigma$-algebra of subsets of $\R^n$ that contains all the open sets.
\begin{Definition}
Consider a set $A \in \mathcal{B}(\R^n \backslash \{ 0 \})$.
We define the \textbf{random measure} of the jumps of the process $\{X_t\}_{t \ge 0}$ by
\begin{align}
N^X(t,A)(\omega) &= \# \{ s \in [0,t] \, : \; \Delta X_s(\omega) \in A \} \\
&= \sum_{0 \leq s \leq t} \mathbbm{1}_A(\Delta X_s(\omega)) . \nonumber
\end{align}
\end{Definition}
For each $\omega \in \Omega$ and for each $0 \leq t < \infty$, the map
$$ A \to N^X(t,A)(\omega) $$
is a counting measure on $\mathcal{B}(\R^n \backslash \{ 0 \})$. %We indicate with $N^X(dt,dx)(w)$ the differential form.
We say that $A\in \mathcal{B}(\R^n \backslash \{ 0 \})$ is \emph{bounded below} if $0 \not \in \bar A$ (zero does not belong to the closure of $A$).
\begin{itemize}
\item For each $A$ bounded below, the process $\bigl \{ N^X(t,A)(\omega) \bigr \}_{t\geq 0}$ is a Poisson process with intensity
\begin{equation}
\mu(A) = \mathbb{E}[N^X(1,A) ]
\end{equation}
\item If $A_1, ..., A_m \in \mathcal{B}(\R^n \backslash \{ 0 \})$ are disjoint and bounded below and $t_1, ..., t_m \in \R^+$ are distinct, then
the random variables $N^X(t_1,A_1), ..., N^X(t_m,A_m)$ are independent.
\end{itemize}
(see \cite{Applebaum} Theorem 2.3.5). \\
A random measure satisfying the properties above is called \textbf{Poisson random measure}.\\
If $A$ is not bounded below, it is possible to have $\mu(A) = \infty$.
We can also define the \textbf{Compensated Poisson random measure}. For each $t \geq 0$ and $A$ bounded below, let us define:
\begin{equation}
\tilde{N}(t,A) = N(t,A) - t\mu(A).
\end{equation}
This is a martingale-valued measure, i.e. for each $A$ the process $\bigl \{ \tilde{N}(t,A) \bigr \}_{t\geq 0} $ is a martingale with respect to the natural filtration.
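The Poisson law of $N^X(t,A)$ for a set $A$ bounded below can be illustrated by simulation. A sketch with illustrative parameters, for a compound Poisson process with $\mathrm{Exp}(1)$ jump sizes and $A = \{x > 1\}$:

```python
import numpy as np

# Jumps of a compound Poisson process with rate lam and Exp(1) jump sizes;
# Levy measure nu(dx) = lam * e^{-x} dx on (0, inf).  A = {x > 1} is bounded
# below, so N(t, A) should be Poisson-distributed with mean t * nu(A).
rng = np.random.default_rng(2)
lam, t, a = 3.0, 2.0, 1.0

counts = np.empty(50000)
for i in range(50000):
    jumps = rng.exponential(1.0, size=rng.poisson(lam * t))
    counts[i] = np.sum(jumps > a)        # N(t, A) for this path

nu_A = lam * np.exp(-a)                  # nu(A) = lam * P(Exp(1) > a)
# counts.mean() ~ t*nu_A, and counts.var() ~ t*nu_A (Poisson law);
# consequently the compensated count N(t,A) - t*nu(A) has mean ~ 0.
```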
\noindent
Now we can define the integration with respect to a random measure:
\begin{Definition} \label{Poisson_int}
Let $N$ be the Poisson random measure associated to a Lévy process $\{X_t\}_{t \geq 0}$, and let $f:\R^n \to \R^n$ be a Borel-measurable
function. For any $A$ bounded below, $\omega \in \Omega$ and $t\geq0$, we define the \textbf{Poisson integral} of $f$ as
\begin{equation}
\int_A f(x) N(t,dx)(\omega) = \sum_{x\in A} f(x) N(t,\{x\})(\omega).
\end{equation}
\end{Definition}
Since $N(t,\{x\}) \neq 0 \Leftrightarrow \Delta X_s=x$ for at least one $s\in [0,t]$, we have
\begin{equation}
\int_A f(x) N(t,dx)(\omega) = \sum_{0 \leq s \leq t} f(\Delta X_s) \mathbbm{1}_A(\Delta X_s).
\end{equation}
The Poisson integral has the following important properties:
\begin{Theorem}
For $t\geq 0$ and for any $A$ bounded below, the random variable $\int_A f(x) N(t,dx)$ has the characteristic function
\begin{equation}
\E \left[ \exp \left( i u \int_A f(x) N(t,dx) \right) \right] =
\exp \left( t \int_{\R^n} [e^{iux}-1] \mu_{A,f}(dx) \right),
\end{equation}
where $\mu_{A,f}(B) = \mu(A \cap f^{-1}(B))$ for each $B \in \mathcal{B}(\R^n)$.
\end{Theorem}
This is Theorem 2.3.7 in \cite{Applebaum}. By differentiation (see (\ref{moments})) we can derive:
\begin{equation}\label{Exp_poiss}
\E \left[ \int_A f(x) N(t,dx) \right] = t \int_A f(x) \mu(dx) \quad \mbox{ for } \quad f \in L^1(A,\mu_A),
\end{equation}
\begin{equation}
Var \left[ \biggr|\int_A f(x) N(t,dx)\biggr|\right] = t \int_A |f(x)|^2 \mu(dx) \quad \mbox{ for } \quad f \in L^2(A,\mu_A).
\end{equation}
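Property (\ref{Exp_poiss}) lends itself to a direct Monte Carlo check. A sketch with the illustrative choices $f(x) = x^2$, $A = \{x>1\}$ and $\nu(dx) = \lambda e^{-x}dx$, for which $t\int_A f \, d\nu = t\lambda \, 5/e$:

```python
import numpy as np

# Monte Carlo check of E[ int_A f(x) N(t,dx) ] = t * int_A f(x) nu(dx)
# for nu(dx) = lam * e^{-x} dx on (0, inf), f(x) = x^2 and A = {x > 1}.
rng = np.random.default_rng(3)
lam, t = 3.0, 1.0

totals = np.empty(40000)
for i in range(40000):
    jumps = rng.exponential(1.0, size=rng.poisson(lam * t))
    totals[i] = np.sum(jumps[jumps > 1.0] ** 2)   # sum of f over the jumps in A

# t * int_1^inf x^2 * lam * e^{-x} dx = t * lam * 5/e  (by direct integration)
exact = t * lam * 5.0 / np.e
```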
\noindent
For $f \in L^1(A,\mu_A)$, we can also define the \textbf{compensated Poisson integral}
\begin{equation}
\int_A f(x) \tilde{N}(t,dx) := \int_A f(x) N(t,dx) - t \int_A f(x) \mu(dx).
\end{equation}
The random variable $\int_A f(x) \tilde{N}(t,dx)$ has characteristic function
\begin{equation}
\E \left[ \exp \left( i(u,\int_A f(x) \tilde{N}(t,dx)) \right) \right] =
\exp \left( t \int_{\R^n} \bigl[ e^{i(u,x)}-1-i(u,x) \bigr] \mu_{A,f}(dx) \right),
\end{equation}
where $\mu_{A,f}$ is defined as above. \\
The process $\{\int_A f(x) \tilde N(t,dx)\}_{t \ge 0}$ is a martingale. For $f \in L^2(A,\mu_A)$, we have
\begin{equation}
Var \left[ \biggr|\int_A f(x) \tilde N(t,dx)\biggr|\right] = t \int_A |f(x)|^2 \mu(dx).
\end{equation}
\begin{Theorem}
The intensity measure $\mu$ is a Lévy measure.
\end{Theorem}
See Corollary 2.4.12 in \cite{Applebaum}. From here on, we always denote the Lévy measure by the symbol $\nu$.
We can further define:
\begin{equation}
\int_{|x|<1} f(x) \tilde N(t,dx) := \lim_{\epsilon \to 0} \int_{\epsilon < |x| < 1} f(x) \tilde N(t,dx),
\end{equation}
which represents the compensated sum of the small jumps.
\subsection{Lévy-It\={o} decomposition}
The following fundamental theorem decomposes a general Lévy process into the superposition
of independent processes: a drift term, a Brownian motion, a Poisson integral of the ``big jumps'' and a compensated Poisson integral of the ``small jumps''.
\begin{Theorem}
Given a Lévy process $\{X_t\}_{t \ge 0}$ , there exist $b\in \R^n$, a Brownian motion $W^A$ with diffusion matrix $A$, and an
independent Poisson random measure $N$ on $\R^+ \times \R^n$ such that
\begin{equation}\label{Levy_Ito}
X_t = bt + W^A_t + \int_{|x|<1} x \tilde{N}(t,dx) + \int_{|x|\geq1} x N(t,dx).
\end{equation}
This is called the \textbf{Lévy-It\={o} decomposition}.
\end{Theorem}
For a proof the reader can look at Theorem 2.4.16 in \cite{Applebaum}.\\
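For a finite-activity process, the decomposition (\ref{Levy_Ito}) suggests a straightforward simulation scheme: sum the drift, Brownian and compound Poisson increments on a time grid. A sketch with hypothetical parameters; since the Lévy measure here is finite, the compensated small-jump integral can be absorbed into the drift:

```python
import numpy as np

def simulate_levy_ito(b, sigma, lam, sig_j, T, n_steps, rng):
    """Path of a finite-activity Levy process via its Levy-Ito decomposition:
    drift + Brownian part + compound Poisson jumps with N(0, sig_j^2) sizes."""
    dt = T / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), n_steps)        # Brownian increments
    n_jumps = rng.poisson(lam * dt, n_steps)          # jumps per time step
    dJ = np.array([rng.normal(0.0, sig_j, k).sum() for k in n_jumps])
    dX = b * dt + sigma * dW + dJ
    return np.concatenate(([0.0], np.cumsum(dX)))

rng = np.random.default_rng(4)
path = simulate_levy_ito(b=0.1, sigma=0.2, lam=5.0, sig_j=0.3, T=1.0,
                         n_steps=1000, rng=rng)
```

With zero-mean jumps, the terminal value has mean $bT$ and variance $(\sigma^2 + \lambda \sigma_J^2)T$, which gives a quick sanity check on the scheme.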
A lot of information on the features of a Lévy process can be derived from the integrability conditions of its Lévy measure.
The next theorem shows that finiteness of moments depends only on the frequency of large jumps.
\begin{Theorem} \label{assumptionM}
Let $\{X_t\}_{t \geq 0}$ be a Lévy process with Lévy measure $\nu$ and let $p > 0$. Then
$\{X_t\}_{t \geq 0}$ has finite $p$-th moment, i.e.
$\E[|X_t|^p]<\infty$ for all $t\geq0$, if and only if $\int_{|x| \geq 1} |x|^p \nu(dx) <\infty$.
\end{Theorem}
For a proof we refer to \cite{Applebaum}, Theorem 2.5.2.
In \cite{Sato} a stronger result is proved (Theorem 25.3).
\begin{Theorem}\label{g-moment}
Let $\{X_t\}_{t \geq 0}$ be a Lévy process with Lévy measure $\nu$.
Let $g$ be a non-negative measurable function on $\R^n$ satisfying the sub-multiplicative
property\footnote{A function $g$ is said to be sub-multiplicative if there exists $K>0$ such that $g(x+y) \leq K g(x) g(y)$ for all $x,y$.}.
Then $\{X_t\}_{t \geq 0}$ has finite \emph{$g$-moment}, i.e. $\E[g(X_t)]<\infty$ for all $t\geq0$, if and only if $\int_{|x| > 1} g(x) \nu(dx) <\infty$.
\end{Theorem}
Theorem \ref{assumptionM} is a special case of Theorem \ref{g-moment}, since $g(x) = \max\{ |x|^p,1 \}$ is sub-multiplicative.
If we consider the sub-multiplicative function $g(x) = e^{(p,x)}$ with $p\in \R^n$,
it follows that
$\{X_t\}_{t \geq 0}$ has finite \textbf{exponential moment}, i.e. for all $t\geq0$:
\begin{equation}\label{exp-moment}
\E \bigl[ e^{(p,X_t)} \bigr]<\infty \quad \Leftrightarrow \quad
\int_{|x| \geq 1} e^{(p,x)} \nu(dx) <\infty.
\end{equation}
See \cite{Sato}, Theorem 25.17.
Most of the Lévy processes used in finance have finite moments.
For practical reasons, it makes sense to assume finite mean and variance of the price process.
In this thesis we will model (see Section \ref{Section_ELM}) the 1-dimensional dynamics of the prices with the exponential of a Lévy process,
i.e. $S_t = S_0 e^{X_t}$. Let us introduce the important assumption:
\begin{center}
\begin{riquadro}{12cm}
Assumption \textbf{EM}\label{AssumptionEM}:\\
In this thesis we consider only 1-dimensional Lévy processes with finite second exponential moment.\\
According to (\ref{exp-moment}) with $p=2$, it follows that:
$$ \E\bigl[ S_t^2 \bigr] < \infty \quad \Leftrightarrow \quad \int_{|x| \geq 1} e^{2x} \nu(dx) <\infty $$
\end{riquadro}
\end{center}
The existence of the second exponential moment implies that $\{X_t\}_{t \geq 0}$ has finite $p$-th moment for all $p \in \N$.
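For a Merton-type compound Poisson process with Gaussian jumps, Assumption EM is always satisfied, since the Gaussian tails of $\nu$ make $\int e^{2x}\nu(dx)$ finite. A Monte Carlo sketch with illustrative parameters (zero drift and diffusion) checking the resulting closed-form second exponential moment:

```python
import numpy as np

# Compound Poisson with Gaussian N(mu_j, sig_j^2) jumps: int e^{2x} nu(dx)
# is finite, and for zero drift/diffusion
#   E[e^{2 X_t}] = exp( t*lam*(E[e^{2J}] - 1) ),  E[e^{2J}] = e^{2*mu_j + 2*sig_j^2}.
rng = np.random.default_rng(5)
lam, mu_j, sig_j, t = 2.0, 0.05, 0.2, 1.0

n = rng.poisson(lam * t, size=200000)
X_t = np.array([rng.normal(mu_j, sig_j, k).sum() for k in n])

exact = np.exp(t * lam * (np.exp(2 * mu_j + 2 * sig_j**2) - 1.0))
mc = np.exp(2 * X_t).mean()      # Monte Carlo estimate of E[e^{2 X_t}]
```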
\vspace{1.5em}
If we assume that $\{X_t\}_{t \geq 0}$ has finite first moment, we can simplify the Lévy-It\={o} decomposition by adding and subtracting the finite term $t \int_{|x| \geq 1} x \, \nu(dx)$
in (\ref{Levy_Ito}). The jump terms then combine into a single compensated integral, which is a martingale, and the new drift becomes $b' = b+\int_{|x| \geq 1} x \nu(dx)$.\\
The new decomposition has the form:
\begin{equation}\label{Levy_Ito2}
X_t = b't + W^A_t + \int_{\R^n} x \tilde{N}(t,dx).
\end{equation}
The Lévy-Khintchine formula (\ref{Levy_Kint}) becomes
\begin{equation}\label{Levy_Kint2}
\phi_{X_t}(u) = \exp \left[ t \left( i(u,b') - \frac{1}{2}(u,Au) + \int_{\R^n}
\bigl( e^{i(u,x)} -1 -i(u,x) \bigr) \nu(dx) \right) \right].
\end{equation}
Because in (\ref{Levy_Ito2}) the only non-martingale term is the drift, we see that $b' = \E[ X_{1} ]$.
\section{Infinitesimal Generator and stochastic calculus}\label{Infinitesimal_generator_stoch_calc}
Lévy processes belong to the large family of Markov processes. Therefore, all the general properties of Markov processes apply to
Lévy processes as well.
In this section,
we do not go too deep into the theory of semigroups, but only present the fundamental definitions and
theorems necessary to define the infinitesimal generator of a Lévy process.
We also present the It\={o} formula for any Lévy stochastic integral, and the sufficient conditions for the existence of solutions of the SDEs
considered in this chapter.
\subsection{Infinitesimal generator of a Markov process}
Let us denote with $B_b(\R^n)$
the linear space of all bounded Borel measurable functions $f: \R^n \to \R$.
Let $C_b(\R^n)$ be the subspace of $B_b(\R^n)$ containing continuous functions, and
$C_0(\R^n)$ be the subspace of $C_b(\R^n)$ containing continuous functions such that $\lim_{|x|\to \infty} f(x) = 0$.
The spaces introduced above are all Banach spaces under the norm $||f|| = \sup\{ |f(x)| : x\in \R^n \}$.
We also denote with $C_0^{k}(\R^n)$ the set of $f \in C_0(\R^n)$ such that $f$ is $k$ times differentiable and
the partial derivatives of $f$ of order $\leq k$ belong to $C_0(\R^n)$. \\
Let us also recall some definitions of operators in a Banach space $B$.
A linear operator $\LL$ is a mapping from a linear subspace $D_{\LL}$ of $B$ into $B$ such that
$$ \LL ( af + bg ) = a \LL f + b \LL g \quad f,g \in D_{\LL} \quad a,b \in \R. $$
The set $D_{\LL}$ is the domain of $\LL$. A linear operator is called bounded if $D_{\LL} = B$ and $||\LL|| := \sup_{||f||\leq 1} ||\LL f||$ is finite;
otherwise, it is called unbounded.
\begin{Definition}
Let $\{X_t\}_{t \ge 0}$ be an adapted process on the probability space $(\Omega,\mathcal{F},\PP)$ equipped with a filtration $\{ \mathcal{F}_t\}_{t\geq0}$.
We say that $\{X_t\}_{t \ge 0}$ is a \textbf{Markov process} if for all $f\in B_b(\R^n)$
and $0\leq s \leq t < \infty$
\begin{equation} \label{Markov_prop}
\E \bigl[ f(X_t) \big| \mathcal{F}_s \bigr] = \E \bigl[ f(X_t) \big| X_s \bigr].
\end{equation}
\end{Definition}
The property (\ref{Markov_prop}) is called \textbf{Markov property}.
We can define the \textbf{stochastic evolution} of a Markov process as
\begin{equation}\label{stoch_evolution}
(T_{s,t}f)(x) = \E[f(X_t) |X_s = x],
\end{equation}
for every $f \in B_b(\R^n)$ and $0\leq s \leq t < \infty$.
The family of operators $T_{s,t} : B_b(\R^n) \to B_b(\R^n)$ satisfies
\begin{enumerate}
\item $T_{s,t}$ is linear for each $0\leq s \leq t < \infty$.
\item $ T_{s,s} = I $ for each $s \geq 0$.
\item $T_{r,s} T_{s,t} = T_{r,t}$ for each $0\leq r \leq s \leq t < \infty$
\item $f \geq 0 \Rightarrow T_{s,t}f \geq 0$.
\item $T_{s,t}$ is a contraction, i.e. $||T_{s,t}|| \leq 1$ for each $0\leq s \leq t < \infty$.
\item $T_{s,t} (1) = 1$.
\end{enumerate}
See Theorem 3.1.2 in \cite{Applebaum}. An immediate consequence of condition 5 is that $T_{s,t}$ is a bounded operator.
\begin{Definition}\label{trans_prob}
We can define the \textbf{transition probability} as the mapping $p_{s,t}(x,B) : \R^n \times \mathcal{B}(\R^n) \to [0,1]$,
with $0\leq s \leq t < \infty$ as:
\begin{equation}
p_{s,t}(x,B) := T_{s,t} \mathbbm{1}_B(x) = P(X_t \in B | X_s=x).
\end{equation}
\end{Definition}
It is related to the stochastic evolution operator as follows:
\begin{equation}\label{evolution}
(T_{s,t}f)(x) = \int_{\R^n} f(y) p_{s,t}(x,dy).
\end{equation}
The transition probabilities satisfy the properties
\begin{enumerate}
\item The maps $x \to p_{s,t}(x,A)$ are measurable for each $A\in \mathcal{B}(\R^n)$.
\item $p_{s,t}(x,\cdot)$ is a probability measure on $\mathcal{B}(\R^n)$ for each $x \in \R^n$.
\item $p_{s,s}(x,B) = \mathbbm{1}_B(x)$ for $s \geq 0$.
\item For $0\leq r \leq s \leq t$ it satisfies the \textbf{Chapman-Kolmogorov} equation
\begin{equation}
p_{r,t}(x,B) = \int_{\R^n} p_{r,s}(x,dy) p_{s,t}(y,B).
\end{equation}
\end{enumerate}
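The Chapman-Kolmogorov equation can be verified numerically for Brownian motion, whose transition densities are Gaussian, by discretizing the intermediate integral on a grid. A sketch with illustrative times $r < s < t$ and illustrative points $x_0, z$:

```python
import numpy as np

def bm_density(x, y, tau):
    # Transition density of standard Brownian motion: p_{s,s+tau}(x,dy)/dy = N(y; x, tau)
    return np.exp(-(y - x)**2 / (2.0 * tau)) / np.sqrt(2.0 * np.pi * tau)

y = np.linspace(-10.0, 10.0, 4001)
dy = y[1] - y[0]
x0, z = 0.0, 0.7
r, s, t = 0.0, 0.4, 1.0

# Chapman-Kolmogorov: p_{r,t}(x0, z) = int p_{r,s}(x0, y) p_{s,t}(y, z) dy,
# here approximated by a Riemann sum over the grid y
lhs = bm_density(x0, z, t - r)
rhs = np.sum(bm_density(x0, y, s - r) * bm_density(y, z, t - s)) * dy
```

Both sides equal the $\mathcal{N}(0, t-r)$ density at $z$; the quadrature error is negligible since the integrand decays rapidly within the truncated grid.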
\begin{Definition}\label{time-homogeneous}
We say that the transition probability is \textbf{time homogeneous} if
\begin{equation}
p_{s,t}(x,B) = p_{0,t-s}(x,B).
\end{equation}
\end{Definition}
\begin{Definition}
We say that the transition probability is \textbf{translation invariant} if
\begin{equation}
p_{s,t}(x,B) = p_{s,t}(0,B-x),
\end{equation}
where $B-x = \{y-x : y\in B \}$.
\end{Definition}
\begin{Theorem}
Every Lévy process is a time homogeneous and translation invariant Markov process.
\end{Theorem}
For a proof of this theorem, we refer to Theorems (10.4) and (10.5) in \cite{Sato}.\\
\noindent
A Markov process is said to be time homogeneous if the associated stochastic evolution operator satisfies $T_{s,t} = T_{0,t-s}$ for all $0 \leq s \leq t < \infty$.
This is equivalent to requiring that the transition probabilities are time homogeneous
(as can be verified by using Definition \ref{time-homogeneous} and (\ref{evolution})).
We write $T_t$ for the operator $T_{0,t}$. For a time homogeneous Markov process, condition 3 for the stochastic evolution operators (\ref{stoch_evolution}) becomes:
\begin{equation}\label{semigroup}
T_{s+t} = T_s T_t \quad \forall s,t \geq 0.
\end{equation}
Any family of linear operators on a Banach space that satisfies (\ref{semigroup}) is called a \textbf{semigroup}.
\begin{Definition}
The semigroup $\{T_t\}_{t\geq0}$ associated with a time homogeneous Markov process is called a \textbf{Feller semigroup} if:
\begin{enumerate}
\item $T_t : C_0(\R^n) \to C_0(\R^n) $ for all $t \geq 0$,
\item The map $t \to T_t $ with $t \geq 0$ is strongly continuous at 0, i.e. $$\lim_{t \downarrow 0} ||T_t f - f|| = 0.$$
\end{enumerate}
\end{Definition}
The homogeneous Markov process associated with the Feller semigroup is called \textbf{Feller process}.
In the definition of a Feller semigroup we used the space $C_0(\R^n)$, although some authors prefer to use $C_b(\R^n)$.
The space $C_0(\R^n)$ has nicer analytical properties than $C_b(\R^n)$ and allows one to prove important probabilistic theorems.
In particular, when replacing $C_0(\R^n)$ by $C_b(\R^n)$, condition 2 above may fail.
\begin{Definition}\label{Infinitesimal_generator_def}
The \textbf{infinitesimal generator} $\LL$ of the Feller semigroup $T_t$ is defined by
\begin{equation}
\LL f = \lim_{t \downarrow 0} \frac{T_t f - f}{t}.
\end{equation}
The domain $D_{\LL}$ of $\LL$ is the subspace of functions $f \in C_0(\R^n)$ for which the limit above exists.
\end{Definition}
In general $\LL$ is a linear (possibly unbounded) operator.
Among the many properties of the infinitesimal generator, it is possible to prove (see the Hille-Yosida theorem, Theorem 31.3 in \cite{Sato})
that $\LL$ is closed and that $D_{\LL}$ is dense in $C_0(\R^n)$.
For more information we refer to Chapter 1 of \cite{EthierKurtz}.
The following theorem gives an explicit form for the infinitesimal generator
of a general Lévy process.
\begin{Theorem}\label{Inf_gen_theorem}
Let $\{X_t\}_{t \ge 0}$ be a Lévy process with Lévy triplet $(b,A,\nu)$. Let $T_t$ be the associated Feller-semigroup
with generator $\LL$. For each $f\in C_0^{2}(\R^n)$, $t\geq0$, $x\in \R^n$,
$\LL$ has the form:
\begin{align}\label{genLevy}
(\LL f)(x) &= \sum_{j=1}^n b_j \frac{\partial f}{\partial x_j}(x) +
\frac{1}{2} \sum_{i,j=1}^n A_{i,j} \frac{\partial^2 f}{\partial x_i \partial x_j}(x)\\
& + \int_{\R^n} \left( f(x+y) - f(x) - \sum_{j=1}^n y_j \frac{\partial f}{\partial x_j}(x)
\mathbbm{1}_{\{ |y|<1 \}}(y) \right) \nu(dy). \nonumber
\end{align}
\end{Theorem}
A proof of this theorem can be found in \cite{Sato} (Theorem 31.5).\\
\noindent
In the more general framework of Section \ref{Optimal_control_framework}, we will show that for processes with finite second moment, the operator
$\LL$ is well defined also for functions that do not vanish at infinity, but have polynomial growth (see Definition \ref{Cp}) of second order.
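For standard Brownian motion, (\ref{genLevy}) reduces to $(\LL f)(x) = \frac{1}{2} f''(x)$, and the limit in Definition \ref{Infinitesimal_generator_def} can be checked numerically at a small $t$. A sketch with the illustrative test function $f(x)=e^{-x^2} \in C_0^2(\R)$, computing $(T_t f)(x) = \E[f(x+W_t)]$ by quadrature:

```python
import numpy as np

# For standard BM the generator is (L f)(x) = f''(x)/2; check (T_t f - f)/t
# against (1/2) f'' at a small t, for the test function f(x) = e^{-x^2}.
f = lambda x: np.exp(-x**2)
d2f = lambda x: (4.0 * x**2 - 2.0) * np.exp(-x**2)   # exact second derivative

def T_t(f, x, t, n=20001):
    # (T_t f)(x) = E[f(x + W_t)], by Riemann-sum quadrature over the N(0,t) law
    w = np.linspace(-10.0, 10.0, n)
    dens = np.exp(-w**2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)
    return np.sum(f(x + w) * dens) * (w[1] - w[0])

x, t = 0.3, 1e-4
generator_approx = (T_t(f, x, t) - f(x)) / t
generator_exact = 0.5 * d2f(x)
```

The difference between the two values is $O(t)$, coming from the next term of the Taylor expansion of $T_t f$.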
\subsection{The It\={o} formula}
Following Section 4.2.2 of \cite{Applebaum}, we call $\mathcal{P}_2(T,E)$ with $E \in \mathcal{B}(\R^n)$,
the set of all functions $f:[0,T]\times E \times \Omega \to \R$ satisfying the two conditions:
\begin{enumerate}
\item Predictable\footnote{Given the probability space $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\in [0,T]},\PP)$ and $E \in \mathcal{B}(\R^n)$.
A function $f:[0,T]\times E \times \Omega \to \R$
is said to be \textbf{predictable}, if for each $0 \leq t \leq T$ the mapping $(x,\omega) \to f(t,x,\omega)$ is $\mathcal{B}(E)\otimes \mathcal{F}_t$-measurable,
and for each $x\in E$ and $\omega \in \Omega$ the mapping $t \to f(t,x,\omega)$ is left-continuous.\\
If $f$ is predictable, the process $t \to f(t,x,\cdot)$ for each $x \in E$, is adapted.}.
\item $\PP \biggl( \int_0^T \int_E |f(t,x)|^2 \nu(dx) dt <\infty \biggr) = 1 $.
\end{enumerate}
An $\R^n$-valued stochastic process $\{Y_t\}_{t \ge 0}$ is a \textbf{Lévy stochastic integral} if it can be written as a superposition of
an ordinary integral, an It\={o} integral, a Poisson integral and a compensated Poisson integral.
\begin{align} \label{Levy_int}
Y^i_t &= Y^i_0 + \int_0^{t} G^i_s ds + \int_0^{t} F^i_s dW(s)\\ \nonumber
&+ \int_0^{t} \int_{|x|<1} H^i(s,x) \tilde N (ds,dx)\\ \nonumber
&+ \int_0^{t} \int_{|x|\geq 1} K^i(s,x) N(ds,dx),
\end{align}
for $1 \leq i \leq n$ and $t>0$, where $Y_0$ is an $\mathcal{F}_0$-measurable random variable.
The expression (\ref{Levy_int}) has the differential form
\begin{align} \label{Levy_diff}
dY_t &= G_t dt + F_t dW_t \\ \nonumber
&+ \int_{|x|<1} H(t,x) \tilde N (dt,dx) + \int_{|x|\geq 1} K(t,x) N(dt,dx),
\end{align}
where we dropped the indices and used the convention that the integrals are taken componentwise.
In order for (\ref{Levy_int}) to be well defined, for each $1 \leq i \leq n$ the processes $|G^i_s|^{1/2}$ and $F^i_s$ must belong to $\mathcal{P}_2 \bigl( T,\{0\} \bigr)$,
the function $H^i(s,x)$ must belong to $\mathcal{P}_2 \bigl( T, \{ |x|<1 \} \bigr)$,
and $K^i(s,x)$ must be predictable.
Let us now introduce the most important formula in stochastic calculus: \textbf{It\={o}'s formula}.
\begin{Theorem}
If $\{Y_t\}_{t \ge 0}$ is the Lévy stochastic integral (\ref{Levy_int}), for each $f \in C^2(\R^n)$ we have
\begin{align} \label{Ito_form}
df(Y_t) &= \sum_{j=1}^n \frac{\partial f}{\partial y_j}(Y_{t^-}) G^j_t dt + \sum_{j=1}^n \frac{\partial f}{\partial y_j}(Y_{t^-}) F^j_t dW_t \\ \nonumber
&+ \frac{1}{2} \sum_{j=1}^n \frac{\partial^2 f}{\partial y_j^2}(Y_{t^-}) (F^j_t)^2 dt \\ \nonumber
&+ \int_{|x|\geq 1} \bigl[ f\bigl( Y_{t^-} + K(t,x) \bigr) - f( Y_{t^-} ) \bigr] N(dt,dx) \\ \nonumber
&+ \int_{|x|< 1} \bigl[ f\bigl( Y_{t^-} + H(t,x) \bigr) - f(Y_{t^-}) \bigr] \tilde N(dt,dx) \\ \nonumber
&+ \int_{|x|< 1} \bigl[ f\bigl( Y_{t^-} + H(t,x) \bigr) - f(Y_{t^-}) - \sum_{j=1}^n \frac{\partial f}{\partial y_j}(Y_{t^-}) H^j(t,x) \bigr] \nu(dx)dt. \nonumber
\end{align}
\end{Theorem}
For a complete proof see \cite{Applebaum} Theorem 4.4.7.
The terms in the first two lines are the same as in
the diffusion case. The other terms are due to the discontinuous part of the process.
Let us also introduce the \textbf{It\={o} product rule}.
\begin{Theorem}
If $Y^1_t$ and $Y^2_t$ are $\R$-valued stochastic integrals of the form (\ref{Levy_int}), then for all $t\geq0$, with probability 1 we have
\begin{equation}\label{Ito_product}
d \bigl( Y^1_t \cdot Y^2_t \bigr) = Y^1_{t^-}\,dY^2_t + Y^2_{t^-}\,dY^1_t + d\bigl[ Y^1_t,Y^2_t \bigr],
\end{equation}
where the quadratic variation term is
\begin{align*}
d\bigl[ Y^1_t, Y^2_t \bigr] =& \; F^1_t F^2_t dt \\
&+ \int_{|x|<1} H^1(t,x) H^2(t,x) N (dt,dx) \\
&+ \int_{|x|\geq 1} K^1(t,x) K^2(t,x) N(dt,dx).
\end{align*}
\end{Theorem}
The proof of this theorem can be obtained by a direct application of It\={o}'s formula (\ref{Ito_form}) to the product of $Y^1_t$ and $Y^2_t$.
See Theorem 4.4.13 of \cite{Applebaum}.
The next theorem establishes the useful \textbf{Dynkin formula} for Lévy processes:
\begin{Theorem}\label{Dynkin_formula}
Let $\{X_t\}_{t \ge 0}$ be a Lévy process, and let $f\in C_0^2(\R^n)$. Let $\tau$ be a stopping time\footnote{A stopping time is a random
variable $\tau : \Omega \to \R_+$ such that $\{\omega\in\Omega : \tau(\omega) \leq t\} \in \mathcal{F}_t$ for all $t \geq 0$.} such that
$\E_x[\tau] < \infty$. Then
\begin{equation}\label{Dynkin_theorem}
\E_x[f(X_{\tau})] = f(x) +\E_x\left[ \int_0^{\tau} \LL f(X_{s})ds \right],
\end{equation}
where $\LL$ is the infinitesimal generator as in eq. (\ref{genLevy}).
\end{Theorem}
\begin{proof}
This result comes by applying It\={o}'s formula (\ref{Ito_form}) to $f(X_s)$, integrating in $[0,\tau]$ and taking
expectation conditioned by $X_0=x$.
\end{proof}
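As an elementary check of (\ref{Dynkin_theorem}), consider a standard Brownian motion started at $0$ and stopped at the exit time of $(-1,1)$, with $f(x)=x^2$ (not in $C_0^2(\R)$, but $f$ can be modified outside a compact set since the stopped process is bounded). Then $\LL f = \frac{1}{2}f'' = 1$ and the formula reduces to $\E_0[X_\tau^2] = \E_0[\tau]$. A Python sketch (step size and path count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n = 1e-3, 4000
x = np.zeros(n)                    # all paths start at X_0 = 0
tau = np.zeros(n)                  # elapsed time until exit from (-1, 1)
active = np.ones(n, dtype=bool)

while active.any():
    m = int(active.sum())
    x[active] += np.sqrt(dt) * rng.standard_normal(m)
    tau[active] += dt
    active &= np.abs(x) < 1.0      # freeze a path once it has left (-1, 1)

# Dynkin: E[f(X_tau)] = f(0) + E[ int_0^tau (1/2) f''(X_s) ds ];
# with f(x) = x^2 this reads E[X_tau^2] = E[tau]  (both close to 1).
lhs, rhs = (x**2).mean(), tau.mean()
print(lhs, rhs)
```

The two sample averages agree up to Monte Carlo error; for this discrete scheme $X_k^2 - k\,dt$ is itself a martingale, so the identity is exact in expectation.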
\subsection{Existence and uniqueness}\label{existence_uniqueness}
Let us consider, for simplicity, a time-homogeneous SDE like:
\begin{align} \label{SDE}
dY_t &= b(Y_{t^-} ) dt + \sigma(Y_{t^-}) dW_t\\ \nonumber
&+ \int_{|x|<c} F(Y_{t^-},x) \tilde N (dt,dx) + \int_{|x|\geq c} G(Y_{t^-},x) N(dt,dx),
\end{align}
with $\{W_t\}_{t\geq 0}$ a $d$-dimensional Brownian motion.
The functions $b:\R^n \to \R^n$, $\sigma:\R^n \to \R^{n\times d}$, $F:\R^n\times \R^n \to \R^n$ and $G:\R^n \times \R^n \to \R^n$ are measurable.
The constant $c \in [0,\infty]$ gives us the freedom to specify the threshold separating small and big jumps.
A common choice is $c=1$, as we did for the differential form of a Lévy-type stochastic integral (\ref{Levy_diff}) and
the Lévy-It\={o} decomposition (\ref{Levy_Ito}).
If we want to put both large and small jumps in the same integral, we choose $c = \infty$ or $c=0$.
Let us choose $c=\infty$ and write the SDE in the following form:
\begin{align} \label{modSDE}
dY_t &= b(Y_{t^-}) dt + \sigma(Y_{t^-}) dW_t\\ \nonumber
&+ \int_{\R^n} F(Y_{t^-},x) \tilde N (dt,dx).
\end{align}
Let us introduce two conditions:
\begin{itemize}
\item[(C1)] \textbf{Lipschitz condition}. There exists $K_1 >0$ such that $\forall y_1,y_2 \in \R^n$,
\begin{align}\label{Lipschitz}
&|b(y_1) - b(y_2)| + || \sigma(y_1) - \sigma(y_2) || \\
& + \int_{\R^n} |F(y_1,x)-F(y_2,x)| \nu(dx) \; \leq \; K_1 |y_1-y_2|. \nonumber % \label{Lipschitz2}
\end{align}
\item[(C2)] \textbf{Linear growth condition}. There exists $K_2>0$ such that $\forall y \in \R^n$,
\begin{align}\label{Growth}
|b(y)|^2 + ||\sigma(y)||^2
+ \int_{\R^n} |F(y,x)|^2 \nu(dx) \; \leq \; K_2 (1+|y|^2), % \label{Growth2}
\end{align}
\end{itemize}
where for an $(n\times d)$ matrix the (Frobenius) norm is defined as
\begin{equation}
|| \sigma ||^2 = \sum_{i=1}^n \sum_{j=1}^d [\sigma_{i,j}]^2 .
\end{equation}
If $\nu$ is finite, then condition \textbf{C2} is a consequence of \textbf{C1}. If it is possible to write $F(y,x) = H(y)\rho(x)$, with $H$ Lipschitz and
$\rho$ satisfying:
\begin{equation}\label{rho}
\int_{\R^n} |\rho(x)|^2 \nu(dx) < \infty,
\end{equation}
then again the growth condition is a consequence of the Lipschitz condition.
\begin{Theorem}
Given the conditions \textbf{C1}, \textbf{C2}, there exists a unique strong solution $Y_t$ of (\ref{modSDE}) with initial condition $Y_{0}$.
The process $\{Y_t\}_{t\geq 0}$ is càdlàg and adapted to $\{\mathcal{F}_t\}_{t\geq 0}$.
\end{Theorem}
A proof of existence and uniqueness based on Picard iteration can be found in \cite{Applebaum} (Theorem 6.2.3). For more information we refer to Chapter 3.2 of
\cite{Skorohod} (Theorem 3.4), where the time inhomogeneous case is considered.
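A strong solution of (\ref{modSDE}) can be approximated with an Euler--Maruyama scheme. The sketch below assumes a finite-activity Lévy measure $\nu = \lambda \, \mathcal{N}(\alpha,\xi^2)$, a jump coefficient $F(y,x)=x$ and illustrative coefficients; since the jump integral is compensated and the drift $b(y)=-\frac{1}{2}y$ is linear with $Y_0=0$, the exact mean $\E[Y_T]$ is $0$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative coefficients for  dY = b dt + sigma dW + int F dN~ :
b     = lambda y: -0.5 * y          # Lipschitz drift
sigma = lambda y: 0.2               # constant diffusion

# Finite-activity Lévy measure nu = lam * N(alpha, xi^2) (an assumption);
# the jump coefficient is F(y, x) = x, so the jump increment is the sum of
# the jump marks, and the compensator is int F(y,x) nu(dx) = lam * alpha.
lam, alpha, xi = 2.0, 0.1, 0.3
compensator = lam * alpha

T, nsteps, npaths = 1.0, 400, 50_000
dt = T / nsteps
y = np.zeros(npaths)                # Y_0 = 0

for _ in range(nsteps):
    dn = rng.poisson(lam * dt, npaths)                 # jump counts on (t, t+dt]
    dj = rng.normal(dn * alpha, np.sqrt(dn) * xi)      # sum of dn normal jumps
    y += (b(y) * dt
          + sigma(y) * np.sqrt(dt) * rng.standard_normal(npaths)
          + dj - compensator * dt)                     # compensated jump term

print(y.mean())   # close to the exact mean E[Y_T] = 0
```

For an infinite-activity measure the small jumps cannot be simulated exactly and would require truncation or a Gaussian approximation; the scheme above is only a sketch for the finite-activity case.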
\begin{Theorem}
Let us consider the Feller semigroup associated with the Feller process described by the SDE (\ref{SDE}).
For each $f\in C_0^{2}(\R^n)$, $t\geq0$, $x\in \R^n$,
the infinitesimal generator $\LL$ has the form:
\begin{align} \label{gen_jumpdiff}
(\LL f)(x) &= \sum_{j=1}^n b_j(x) \frac{\partial f}{\partial x_j}(x) +
\frac{1}{2} \sum_{i,j=1}^n A_{i,j}(x) \frac{\partial^2 f}{\partial x_i \partial x_j}(x)\\ \nonumber
& + \int_{|y|<c} \left( f\bigl(x+F(x,y)\bigr) - f(x) - \sum_{j=1}^n F^j(x,y) \frac{\partial f}{\partial x_j}(x) \right) \nu(dy) \\ \nonumber
& + \int_{|y| \geq c} \biggl( f\bigl(x+G(x,y)\bigr) - f(x) \biggr) \nu(dy). \nonumber
\end{align}
with $A(x) = \sigma(x) \sigma^T(x)$.
\end{Theorem}
The proof can be obtained by an application of the It\={o} formula; see Theorem 6.7.4 of \cite{Applebaum}, where the author also shows that $C_0^{2}(\R^n) \subseteq D_{\LL}$.
\section{Exponential Lévy models}\label{Section_ELM}
Finally we are able to generalize equation (\ref{GBM}) for the process of the log-returns\footnote{
It is also possible to generalize the differential equation (\ref{GBM_sde}). The modified SDE is called a \emph{geometric Lévy process}
and its solution is the \emph{Doléans-Dade exponential} of a Lévy process. The two approaches are equivalent (see Proposition 8.22 in \cite{Cont}).
In this thesis we choose to use exponential Lévy models.}.
We write:
\begin{equation}
\log \left( \frac{S_t}{S_0} \right) = X_t ,
\end{equation}
where $X_t$ is a one dimensional Lévy process with triplet $(b,\sigma^2,\nu)$.
The name \textbf{exponential Lévy model} comes from the expression written as:
\begin{equation}\label{ELM}
S_t = S_0 e^{X_t} .
\end{equation}
\subsection{Exponential Lévy SDE}\label{Section_ELM33}
In order to obtain an SDE for the process (\ref{ELM}), we apply the It\={o} formula (\ref{Ito_form}) to
the Lévy-It\={o} decomposition (\ref{Levy_Ito}) of $X_t$, written in differential form as in (\ref{Levy_diff}).
\begin{align*}
d S_t \; &= S_0 e^{X_{t^-}} b dt \; + \; S_0 e^{X_{t^-}} \sigma dW_t \; + \; \frac{1}{2}S_0 e^{X_{t^-}}\sigma^2 dt \\ \nonumber
&+ \int_{|x|\geq 1} (S_0 e^{X_{t^-}+x} - S_0 e^{X_{t^-}}) N(dt,dx) \\ \nonumber
&+ \int_{|x|< 1} (S_0 e^{X_{t^-}+x} - S_0 e^{X_{t^-}}) \tilde N(dt,dx) \\ \nonumber
&+ \int_{|x|< 1} (S_0 e^{X_{t^-}+x} - S_0 e^{X_{t^-}} - x S_0 e^{X_{t^-}}) \nu(dx) dt. \nonumber
\end{align*}
After some substitutions we see that the resulting equation is, as expected, a generalization of equation (\ref{GBM_sde}).
\begin{align}
\frac{d S_t}{S_{t^-}} \; &= (b + \frac{1}{2}\sigma^2 ) dt + \sigma dW_t \\ \nonumber
&+ \int_{|x|< 1} ( e^{x} - x - 1) \nu(dx) dt \\ \nonumber
&+ \int_{|x|\geq 1} (e^{x} - 1) N(dt,dx) + \int_{|x|< 1} (e^{x} - 1) \tilde N(dt,dx). \nonumber
\end{align}
Thanks to the assumption \textbf{EM} (Section \ref{AssumptionEM}) we can simplify this equation.
First we look at the integrability conditions:
\begin{itemize}
\item $\int_{|x|\geq 1} e^{x} \nu(dx) < \infty$ by \textbf{EM}.
\item $\int_{|x|\geq 1} 1\; \nu(dx) < \infty$ by Definition \ref{Levy_measure}.
\end{itemize}
We can add and subtract $\pm \int_{|x|\geq 1} ( e^{x} - 1) \nu(dx) dt $ and obtain the final form
\begin{align} \label{exp_sde}
\frac{d S_t}{S_{t^-}} \; &= \left(b + \frac{1}{2}\sigma^2 + \int_{\R} ( e^{x} - 1 -x\mathbbm{1}_{|x|<1}) \nu(dx) \right) dt \\ \nonumber
&+ \sigma dW_t \; + \int_{\R} (e^{x} - 1) \tilde N(dt,dx). \nonumber
\end{align}
If we set
\begin{equation}\label{mu}
\mu := b + \frac{1}{2}\sigma^2 + \int_{\R} ( e^{x} - 1 -x\mathbbm{1}_{|x|<1}) \nu(dx)
\end{equation}
we have an SDE of type (\ref{modSDE}).
\begin{equation}\label{exp_sde2}
d S_t = \; \mu S_{t^-} dt + \sigma S_{t^-} dW_t \; + \int_{\R} S_{t^-} (e^{x} - 1) \tilde N(dt,dx).
\end{equation}
The same equation can be derived quickly by considering the Lévy-It\={o} form (\ref{Levy_Ito2}) for $X_t$:
\begin{align*}
d S_t \; =& \; S_0 e^{X_{t^-}} \biggl( b + \int_{|x|\geq 1}x \nu(dx) \biggr) dt \; + \; S_0 e^{X_{t^-}} \sigma dW_t \; + \; \frac{1}{2}S_0 e^{X_{t^-}}\sigma^2 dt \\ \nonumber
&+ \int_{\R} (S_0 e^{X_{t^-}+x} - S_0 e^{X_{t^-}}) \tilde N(dt,dx) + \int_{\R} (S_0 e^{X_{t^-}+x} - S_0 e^{X_{t^-}} - x S_0 e^{X_{t^-}}) \nu(dx) dt \\ \nonumber
=& \; S_{t^-} \biggl[ \mu dt + \sigma dW_t \; + \int_{\R} (e^{x} - 1) \tilde N(dt,dx) \biggr].\\
\end{align*}
It is easy to check that the coefficients of this equation satisfy the conditions (\ref{Lipschitz}) and (\ref{Growth}).
For this purpose let us define $\rho(x) = e^x-1$
and verify that it satisfies the integrability condition (\ref{rho}).
We write $$ \int_{\R} (e^x-1)^2 \nu(dx) = \int_{|x|\geq1} (e^x-1)^2 \nu(dx) + \int_{|x| < 1} (e^x-1)^2 \nu(dx).$$
\begin{itemize}
\item For $|x|\geq 1$ the three integrals $\int_{|x|\geq1} e^{2x} \nu(dx)$ , $\int_{|x|\geq1} (-2e^x) \nu(dx)$ ,
$\int_{|x|\geq1} \nu(dx)$ are finite by assumption \textbf{EM} and by (\ref{Levy_m}).
\item For $|x| < 1$:
$$(e^x-1)^2 \; = \; x^2 \left(\frac{e^x-1}{x}\right)^2 \; < \; x^2 (e-1)^2. $$
The Lévy measure definition \ref{Levy_m} says that $ \int_{|x|<1} |x|^2 \nu(dx) < \infty $, so
$$ \int_{|x|<1} (e^x-1)^2 \nu(dx) \; < \; (e-1)^2 \int_{|x|<1} |x|^2 \nu(dx) \; < \; \infty. $$
\end{itemize}
So we have checked that the equation (\ref{exp_sde2}) admits a unique solution which is given by the exponential
Lévy process (\ref{ELM}).
\subsection{The Merton Model}\label{Merton_section}
The first jump-diffusion model for the log-prices is the \emph{Merton model}, presented in
\cite{Me76}. In the same paper the author also obtains a closed form solution for the price of a European vanilla option.
The Merton model describes the log-price evolution with a Lévy process with a nonzero diffusion
component and a finite activity jump process with normal distributed jumps.
\begin{equation}\label{MertonM}
X_t = \bar b t + \sigma W_t + \sum_{i=1}^{N_t} Y_i,
\end{equation}
where $\{N_t\}$ is a Poisson process counting the jumps of $X_t$ in $[0,t]$, and the i.i.d.\ random variables $Y_i \sim \mathcal{N}(\alpha, \xi^2)$ represent the jump sizes.
Using the Poisson integral notation (Def. \ref{Poisson_int}), the process
\begin{equation*}
X_t = \bar b t + \sigma W_t + \int_{\R} x N(t,dx)
\end{equation*}
corresponds to the Lévy-It\={o} decomposition (\ref{Levy_Ito}),
where we defined the drift
$\bar b := b - \int_{|x|<1} x \nu(dx)$.\\
The Lévy measure of a finite activity Lévy process can be factorized into the activity $\lambda$ of the Poisson process and
the pdf of the jump size:
\begin{align*}
\nu(dx) &= \lambda f_Y(dx) \\
&= \frac{\lambda}{\xi \sqrt{2\pi}} e^{- \frac{(x-\alpha)^2}{2\xi^2}} dx,
\end{align*}
so that $\int_{\R} \nu(dx) = \lambda$.\\
Since the term $\int_{|x|<1} x \nu(dx)$ is finite, the jump process has \textbf{finite variation}. However,
the Merton model has infinite variation due to the presence of the diffusion component.\\
The Lévy exponent has the following form:
\begin{equation}
\eta(u) = i\bar b u - \frac{1}{2} \sigma^2 u^2 + \lambda \biggl( e^{i\alpha u -\frac{\xi^2 u^2}{2} }-1 \biggr).
\end{equation}
\newline
Using the formula for the moments (\ref{moments}) we obtain:
\begin{align}\label{Merton_moments}
\E[X_t] &= t(\bar b+\lambda \alpha), \\ \nonumber
\mbox{Var}[X_t] &= t(\sigma^2 + \lambda \xi^2 + \lambda \alpha^2), \\ \nonumber
\mbox{Skew}[X_t] &= \frac{t\lambda (3\xi^2 \alpha + \alpha^3)}{\bigl(\mbox{Var}[X_t]\bigr)^{3/2}}, \\ \nonumber
\mbox{Kurt}[X_t] &= \frac{t \lambda (3\xi^4 +6\alpha^2\xi^2 +\alpha^4)}{\bigl(\mbox{Var}[X_t]\bigr)^2}. \nonumber
\end{align} \newline
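These formulas can be cross-checked by direct simulation of (\ref{MertonM}); the parameter values below are arbitrary, and Skew and Kurt are computed as the standardized third central moment and the excess kurtosis:

```python
import numpy as np

# Arbitrary Merton parameters: drift, diffusion, jump intensity and jump law.
b_bar, sigma, lam, alpha, xi = 0.05, 0.2, 1.0, -0.2, 0.3
t, n = 1.0, 2_000_000

rng = np.random.default_rng(7)
n_jumps = rng.poisson(lam * t, n)
jumps = rng.normal(n_jumps * alpha, np.sqrt(n_jumps) * xi)   # compound Poisson part
x = b_bar * t + sigma * np.sqrt(t) * rng.standard_normal(n) + jumps

c = x - x.mean()
var = (c**2).mean()
skew = (c**3).mean() / var**1.5          # standardized third central moment
kurt = (c**4).mean() / var**2 - 3.0      # excess kurtosis

# Theoretical values from the moment formulas above
var_th = t * (sigma**2 + lam * xi**2 + lam * alpha**2)
skew_th = t * lam * (3 * xi**2 * alpha + alpha**3) / var_th**1.5
kurt_th = t * lam * (3 * xi**4 + 6 * alpha**2 * xi**2 + alpha**4) / var_th**2
print(x.mean(), var, skew, kurt)
```
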
The stock price SDE (\ref{exp_sde2}) takes the following form\footnote{In the literature, the jump part is often written with the non-rigorous notation $(J-1)dN_t$, where $J$ is lognormally distributed
and $dN_t$ is the infinitesimal increment of the Poisson process.}:
\begin{equation}\label{Merton_sde}
\frac{d S_t}{S_{t-}} = \; \bar \mu dt + \sigma dW_t \; + \int_{\R} (e^{x} - 1) \tilde N(dt,dx),
\end{equation}
with
$$ \bar \mu := \bar b + \frac{1}{2}\sigma^2 + \int_{\R} (e^x - 1) \nu(dx). $$
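Since the solution of this SDE is $S_t = S_0 e^{X_t}$ and $\bar\mu = \frac{1}{t} \log \E[e^{X_t}]$, a quick sanity check is $\E[S_T] = S_0 e^{\bar\mu T}$; for the Merton measure $\int_{\R}(e^x-1)\nu(dx) = \lambda(e^{\alpha+\xi^2/2}-1)$. A Monte Carlo sketch with arbitrary parameters:

```python
import numpy as np

# Arbitrary Merton parameters
b_bar, sigma, lam, alpha, xi = 0.02, 0.2, 0.8, -0.1, 0.15
S0, T, n = 100.0, 1.0, 1_000_000

rng = np.random.default_rng(3)
n_jumps = rng.poisson(lam * T, n)
jumps = rng.normal(n_jumps * alpha, np.sqrt(n_jumps) * xi)
X_T = b_bar * T + sigma * np.sqrt(T) * rng.standard_normal(n) + jumps

m = lam * (np.exp(alpha + 0.5 * xi**2) - 1.0)   # int (e^x - 1) nu(dx) for Merton
mu_bar = b_bar + 0.5 * sigma**2 + m

S_T = S0 * np.exp(X_T)
print(S_T.mean(), S0 * np.exp(mu_bar * T))   # the two values are close
```
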
\newline
\subsection{The Variance Gamma process}\label{VG_section}
The \emph{variance gamma} (VG) process is a pure jump Lévy process with infinite activity.
The first presentation with applications in finance is due to \cite{MaSe90}.
The model presented in that paper is, however, a symmetric VG model:
a single additional parameter controls the kurtosis, while the skewness is not yet accounted for.\\
The non-symmetric VG process is described in \cite{MCC98}, where a closed form solution for European vanilla options is also presented.\\
The VG process is obtained by time changing a Brownian motion with drift. The new time variable is a process $\{T_t\}_{t\geq0}$ with
Gamma distributed increments, $T_t \sim \Gamma(\mu t,\kappa t)$,\footnote{Usually the Gamma distribution is
parametrized by positive shape and scale parameters, $X \sim \Gamma(\rho,\theta)$. The random variable $X_t \sim \Gamma(\rho t,\theta)$
has pdf
$f_{X_t}(x) = \frac{\theta^{-\rho t}}{\Gamma(\rho t)}x^{\rho t -1}e^{-\frac{x}{\theta}}$, with $\E[X_t]=\rho \theta t$
and $\mbox{Var}[X_t] = \rho \theta^2 t$. Here we use the parametrization of \cite{MCC98}, such that $\E[X_t]=\mu t$ and $\mbox{Var}
[X_t] = \kappa t$, i.e.\ $\theta=\frac{\kappa}{\mu}$, $\rho=\frac{\mu^2}{\kappa}$.} with density
\begin{equation}
f_{T_t}(x)= \frac{(\frac{\mu}{\kappa})^{\frac{\mu^2 t}{\kappa}}}{\Gamma(\frac{\mu^2 t}{\kappa})}x^{\frac{\mu^2 t}{\kappa} -1}
e^{-\frac{\mu x}{\kappa}}.
\end{equation}
The process $\{T_t\}_{t\geq0}$ is called a \textbf{subordinator}. In general, a subordinator is a one dimensional Lévy process that is
non-decreasing almost surely; it can therefore consistently play the role of a time variable.\\
The characteristic function of $T_t$ is:
\begin{equation}
\phi_{T_t}(u) = \biggl( \frac{1}{1-iu\frac{\kappa}{\mu}} \biggr)^{\frac{\mu^2 t}{\kappa}} .
\end{equation}
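The characteristic function can be checked against simulated increments, drawn with the shape--scale parametrization of the footnote above; the parameter values below are arbitrary:

```python
import numpy as np

mu, kappa, t, u, n = 2.0, 0.5, 1.0, 1.3, 1_000_000
rng = np.random.default_rng(5)

# T_t ~ Gamma(shape = mu^2 t / kappa, scale = kappa / mu)
samples = rng.gamma(shape=mu**2 * t / kappa, scale=kappa / mu, size=n)

phi_mc = np.exp(1j * u * samples).mean()                      # empirical E[e^{iuT_t}]
phi_th = (1.0 / (1.0 - 1j * u * kappa / mu))**(mu**2 * t / kappa)
print(abs(phi_mc - phi_th))   # small Monte Carlo error
```
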
The Lévy measure is:
\begin{equation}
\nu^{T_t}(dx) = \begin{cases}
\frac{\mu^2 e^{-\frac{\mu}{\kappa}x}}{\kappa x} dx, & \hspace{2em} \mbox{for } x>0,\\
0 & \hspace{2em} \mbox{otherwise.}
\end{cases}
\end{equation}
\newline
If we consider a Brownian motion with drift $X_t = \theta t + \bar\sigma W_t$ and substitute the time variable with the gamma subordinator
$T_t \sim \Gamma(t,\kappa t)$ ($\mu=1$),
we obtain the \textbf{variance gamma} process:
\begin{equation}\label{VG_process}
X_{T_t} = \theta T_t + \bar\sigma W_{T_t} .
\end{equation}
It depends on three parameters:
\begin{itemize}
\item $\bar\sigma$, the volatility of the Brownian motion
\item $\kappa$, the variance of the Gamma process
\item $\theta$, the drift of the Brownian motion
\end{itemize}
The VG is a process with \textbf{finite variation}. Every process with finite variation can be written as the difference of two increasing
processes. In this case the two increasing processes are Gamma processes:
\begin{equation}
X_t = Y^p_t - Y^n_t,
\end{equation}
with $Y^p_t \sim \Gamma(\mu_p t, \kappa_p t)$ and $Y^n_t \sim \Gamma(\mu_n t, \kappa_n t)$. For the specific relation between the parameters
$\mu_p,\kappa_p,\mu_n,\kappa_n$ and $\bar\sigma,\kappa,\theta$ we refer to \cite{MCC98}.\\
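A commonly used matching (stated here as an assumption we recall from the literature, not derived in the text) is $\mu_{p,n} = \frac{1}{2}\sqrt{\theta^2 + 2\bar\sigma^2/\kappa} \pm \frac{\theta}{2}$ and $\kappa_{p,n} = \mu_{p,n}^2 \kappa$. The following snippet checks that these values reproduce the mean $\theta t$ and the variance $(\bar\sigma^2+\theta^2\kappa)t$ of the VG process:

```python
import numpy as np

theta, sigma_bar, kappa, t = -0.1, 0.2, 0.3, 1.0   # arbitrary VG parameters

# Assumed parameter matching for X = Y^p - Y^n (cf. the cited reference):
half = 0.5 * np.sqrt(theta**2 + 2.0 * sigma_bar**2 / kappa)
mu_p, mu_n = half + 0.5 * theta, half - 0.5 * theta
kappa_p, kappa_n = mu_p**2 * kappa, mu_n**2 * kappa

mean = (mu_p - mu_n) * t            # E[X_t] of the difference of the two gammas
var = (kappa_p + kappa_n) * t       # Var[X_t] (independent components)
print(mean, var)                    # theta*t and (sigma_bar^2 + theta^2*kappa)*t
```
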
\newline
The probability density function of $X_t$ can be computed conditioning on the realization of $T_t$:
\begin{align}\label{VG_density}
f_{X_t}(x) &= \int_y f_{X_t,T_t}(x,y) dy = \int_y f_{X_t|T_t}(x|y) f_{T_t}(y) dy \\ \nonumber
&= \int_0^{\infty} \frac{1}{\bar\sigma \sqrt{2\pi y}} e^{-\frac{(x -\theta y)^2}{2\bar\sigma^2 y}}
\frac{y^{\frac{t}{\kappa} -1}}{\kappa^{\frac{t}{\kappa}} \Gamma(\frac{t}{\kappa})}
e^{-\frac{y}{\kappa}} \, dy \\ \nonumber
&= \frac{2 \exp(\frac{\theta x}{\bar\sigma^2})}{\kappa^{\frac{t}{\kappa}} \sqrt{2\pi}\bar\sigma \Gamma(\frac{t}{\kappa}) }
\biggl( \frac{x^2}{2\frac{\bar\sigma^2}{\kappa} + \theta^2} \biggr)^{\frac{t}{2\kappa}-\frac{1}{4}}
K_{\frac{t}{\kappa}-\frac{1}{2}}
\biggl( \frac{1}{\bar\sigma^2} \sqrt{x^2 \bigl(\frac{2\bar\sigma^2}{\kappa}+\theta^2 \bigr)} \biggr),
\end{align}
where the function $K$ is a modified Bessel function of the second kind (see \cite{MCC98} for the computations).\\
The characteristic function can be obtained by conditioning too:
\begin{align*}
\phi_{X_t}(u) &= \biggl( 1-i \kappa \bigl( u\theta +\frac{i}{2}\bar\sigma^2 u^2 \bigr) \biggr)^{-\frac{t}{\kappa}} \\
&= \biggl( 1-i\theta \kappa u + \frac{1}{2} \bar\sigma^2 \kappa u^2 \biggr)^{-\frac{t}{\kappa}}.
\end{align*}
\newline
The VG Lévy measure is
\begin{equation}\label{VG_measure}
\nu^{X_t}(dx) = \frac{e^{\frac{\theta x}{\bar\sigma^2}}}{\kappa|x|} \exp
\left( - \frac{\sqrt{\frac{2}{\kappa} + \frac{\theta^2}{\bar\sigma^2}}}{\bar\sigma} |x|\right) dx,
\end{equation}
and the Lévy exponent is
\begin{equation}
\eta(u) = -\frac{1}{\kappa} \log(1-i\theta \kappa u + \frac{1}{2} \bar\sigma^2 \kappa u^2).
\end{equation}
Using the formula for the moments (\ref{moments}) we obtain:
\begin{align}\label{VG_cumulants}
\E[X_t] &= t\theta, \\ \nonumber
\mbox{Var}[X_t] &= t(\bar\sigma^2 + \theta^2 \kappa), \\ \nonumber
\mbox{Skew}[X_t] &= \frac{t (2\theta^3\kappa^2 + 3 \bar\sigma^2 \theta \kappa)}{\bigl(\mbox{Var}[X_t]\bigr)^{3/2}}, \\ \nonumber
\mbox{Kurt}[X_t] &= \frac{t (3\bar\sigma^4 \kappa + 12\bar\sigma^2 \theta^2 \kappa^2 +6\theta^4\kappa^3)}{\bigl(\mbox{Var}[X_t]\bigr)^2}.\nonumber
\end{align}
\\
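The first two moments can be cross-checked by simulating the subordinated representation (\ref{VG_process}) with $\mu=1$; the parameter values below are arbitrary:

```python
import numpy as np

theta, sigma_bar, kappa, t, n = -0.1, 0.25, 0.4, 1.0, 2_000_000
rng = np.random.default_rng(11)

# Gamma subordinator with E[T_t] = t, Var[T_t] = kappa*t  (mu = 1)
G = rng.gamma(shape=t / kappa, scale=kappa, size=n)
X = theta * G + sigma_bar * np.sqrt(G) * rng.standard_normal(n)

mean, var = X.mean(), X.var()
print(mean, var)   # compare with theta*t and (sigma_bar^2 + theta^2*kappa)*t
```
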
The Lévy-It\={o} decomposition (\ref{Levy_Ito}) of any pure jump process with finite variation, i.e.\ with $\int_{|x|<1} |x| \nu(dx) < \infty$,
can be written as
\begin{equation}\label{Levy_Ito3}
X_t = \tilde b t + \int_{\R} x N(t,dx)
\end{equation}
with $\tilde b = b - \int_{|x|<1} x \nu(dx)$. We can apply the It\={o} formula to (\ref{ELM}) to obtain the
stock price SDE for a finite variation process:
\begin{align}
\frac{d S_t}{S_{t-}} &= \; \tilde b dt \; + \int_{\R} (e^{x} - 1) N(dt,dx) \\ \nonumber
&= \; \biggl( b + \int_{\R} \bigl( e^{x} -1 -x \mathbbm{1}_{|x|<1}(x) \bigr) \nu(dx) \biggr) dt \; + \int_{\R} (e^{x} - 1) \tilde N(dt,dx).
\end{align}
\\
Consider the process (\ref{VG_process}). We can take its expectation
$$\E[X_{T_t}] = \theta \E[T_t] + \bar\sigma \E[W_{T_t}] = \theta t,$$
which is equal to the expectation of (\ref{Levy_Ito3}). Using (\ref{Exp_poiss}) we obtain
\begin{align}
\E[X_t] &= \tilde b t + \E \biggl[ \int_{\R} x N(t,dx)\biggr] \\ \nonumber
&= t \biggl( \tilde b + \int_{\R} x \, \nu(dx) \biggr), \nonumber
\end{align}
and therefore $ \tilde b = \theta - \int_{\R} x \nu(dx) $.\\
We can compute the integral using the explicit formula (\ref{VG_measure}) for the Lévy measure.
Let us call $$A = \frac{\theta}{\bar\sigma^2} \hspace{2em} \mbox{and} \hspace{2em}
B=\frac{|\theta|}{\bar\sigma^2}\sqrt{1+\frac{2\bar\sigma^2}{\kappa \theta^2}}$$
with $A<B$, and solve the integral:
\begin{align*}
\int_{\R} \frac{x}{\kappa |x|} e^{Ax-B|x|} \, dx &= \int_{0}^{\infty} \frac{1}{\kappa} e^{(A-B)x} \, dx
- \int_{-\infty}^0 \frac{1}{\kappa} e^{(A+B)x} \, dx \\
&= \frac{1}{\kappa} \frac{2A}{B^2-A^2} \\
&= \theta.
\end{align*}
As expected, $\tilde b = 0$. \\
The Lévy-It\={o} decomposition for the VG process in (\ref{VG_process}) is simply
\begin{equation}
X_t = \int_{\R} x N(t,dx).
\end{equation}
All the information is contained in the Lévy measure (\ref{VG_measure}),
which completely describes the process. Even if the process has been created by Brownian
subordination, it has no diffusion component.
The L\'evy triplet is
\begin{equation}\label{VG_triplet}
\biggl( \int_{|x|<1} x \nu(dx), 0, \nu \biggr).
\end{equation}
The SDE for the stock price following an \textbf{exponential VG} is
\begin{equation}\label{VG_sde}
\frac{d S_t}{S_{t-}} = \; \int_{\R} (e^{x} - 1) N(dt,dx).
\end{equation}
\subsection{Infinitesimal Generator for exponential Lévy processes}
This section derives the infinitesimal generator for the stock price process under the \emph{Merton} and the \emph{Variance Gamma} models.\\
The stock price SDE (\ref{exp_sde2}) has the form (\ref{SDE}), with $c=\infty$. Using (\ref{gen_jumpdiff}), the corresponding infinitesimal generator is:
\begin{align}\label{inf_gen_exp_levy}
\LL^S f(s) =& \; \mu s \frac{\partial f(s)}{\partial s}
+ \frac{1}{2} \sigma^2 s^2 \frac{\partial^2 f(s)}{\partial s^2} \\ \nonumber
&+ \int_{\R} \biggl[ f(se^x) - f(s) - s(e^x-1)\frac{\partial f(s)}{\partial s} \biggr] \nu(dx).
\end{align}
Let us derive the infinitesimal generators for
the Merton and VG processes described by the SDEs (\ref{Merton_sde}) and (\ref{VG_sde}): \\
\begin{itemize}
\item \textbf{Merton model generator}:\\
\noindent
Under the representation with $c=\infty$, the one-dimensional generator $\LL^{M}$ has the form:
\begin{align}\label{Merton_gen}
(\LL^{M} f)(s) &= \; \bar \mu s \frac{\partial f(s)}{\partial s} + \frac{1}{2} \sigma^2 s^2 \frac{\partial^2 f(s)}{\partial s^2} \\ \nonumber
&+ \; \int_{\R} \bigl( f(s e^x) - f(s) - s(e^x-1)\frac{\partial f(s)}{\partial s} \bigr) \nu(dx),
\end{align}
where $\nu(dx)$ is the Merton Lévy measure, $$ \bar \mu = \bar b + \frac{1}{2}\sigma^2 + m, $$
and we introduced the parameter
\begin{equation}\label{parameter_m}
m := \int_{\R} ( e^{x} - 1 ) \nu(dx) = \lambda \biggl( e^{\alpha + \frac{1}{2} \xi^2} -1 \biggr).
\end{equation}
Under the equivalent representation with $c=0$, the generator is:
\begin{align}\label{Merton_gen2}
(\LL^{M} f)(s) &= \; \biggl( \bar b + \frac{1}{2}\sigma^2 \biggr)s \frac{\partial f(s)}{\partial s}
+ \frac{1}{2} \sigma^2 s^2 \frac{\partial^2 f(s)}{\partial s^2} \\ \nonumber
&+ \; \int_{\R} \bigl( f(se^x) - f(s) \bigr) \nu(dx).
\end{align}
\item \textbf{Variance Gamma generator}:\\
\noindent
Using the representation with $c = 0$, and the VG Lévy measure, the generator $\LL^{VG}$ is:
\begin{equation}\label{VG_gen}
(\LL^{VG} f)(s) = \int_{\R} \bigl( f(se^x) - f(s) \bigr) \nu(dx).
\end{equation}
\newline
Under $c = \infty$ we obtain the equivalent form:
\begin{equation}
(\LL^{VG} f)(s) = w s \frac{\partial f(s)}{\partial s}
+ \int_{\R} \biggl( f(s e^x) - f(s) - s(e^x-1)\frac{\partial f(s)}{\partial s} \biggr) \nu(dx),
\end{equation}
where we introduced the parameter
\begin{equation}\label{parameter_w}
w := \int_{\R} (e^x-1) \nu(dx) = - \frac{1}{\kappa} \log \left( 1-\theta \kappa -\frac{1}{2}\bar\sigma^2 \kappa \right).
\end{equation}
\end{itemize}
In order to calculate the integral, we use the following relation between the Lévy measure and the transition probability (\ref{trans_prob}) of the process:
\begin{equation}
\nu(dx) = \lim_{t\to 0} \frac{1}{t} p_{0,t}(0,dx).
\end{equation}
This relation is presented by \cite{Cont} in Chapter 3.6, and a proof can be found in Corollary 8.9 of \cite{Sato}. \\
Let us compute first the expected value of the exponential VG process
\begin{align*}
\E[ e^{X_t}] &= \phi_{X_t}(-i) = \exp \biggl( -\frac{t}{\kappa} \log(1-\theta \kappa -\frac{1}{2}\bar\sigma^2 \kappa ) \biggr)\\
&= e^{w t}.
\end{align*}
The integral becomes
\begin{align*}
\int_{\R} (e^x-1) \nu(dx) &= \int_{\R} (e^x-1) \lim_{t\to 0} \frac{1}{t} p_{0,t}(0,dx) \\
&= \lim_{t\to 0} \frac{1}{t} \E[ e^{X_t} - 1 ] \\
&= w.
\end{align*}
Note that, since the VG process has finite variation, the integrand satisfies $e^x-1 = x + \mathcal{O}(x^2)$ near zero and the integral is finite;
this allows the limit to be taken outside the integral.
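The closed form (\ref{parameter_w}) can also be cross-checked by numerical quadrature of $(e^x-1)$ against the VG Lévy density (\ref{VG_measure}). The grid below avoids $x=0$, where the integrand has a (bounded) jump; the parameters are arbitrary but must satisfy assumption \textbf{EM}:

```python
import numpy as np

theta, sigma_bar, kappa = -0.1, 0.2, 0.3

A = theta / sigma_bar**2
B = np.sqrt(2.0 / kappa + theta**2 / sigma_bar**2) / sigma_bar  # B > A + 1: EM holds

def trapz(y, x):
    """Composite trapezoidal rule."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Grid on each side of the origin (the integrand is bounded near 0).
xp = np.linspace(1e-9, 6.0, 400_000)
xn = -xp[::-1]
levy_density = lambda x: np.exp(A * x - B * np.abs(x)) / (kappa * np.abs(x))
integrand = lambda x: (np.exp(x) - 1.0) * levy_density(x)

w_numeric = trapz(integrand(xp), xp) + trapz(integrand(xn), xn)
w_closed = -np.log(1.0 - theta * kappa - 0.5 * sigma_bar**2 * kappa) / kappa
print(w_numeric, w_closed)
```
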
\section{Cumulants}\label{cumulant_sec}
The cumulant generating function $H_{X_t}(u)$ of $X_t$ is defined as the natural logarithm of its characteristic function
(see \cite{Cont}).
Using the Lévy-Khintchine representation for the characteristic function (\ref{Levy_Kint}), it is easy to find its relation with
the Lévy symbol:
\begin{align}