Sunday, August 14, 2016

From discrete martingale to continuous martingale

Recently I gave a tutorial on martingale theory and Markov chains to a classmate who had missed the exam, so I would like to write down a few things about martingale theory here.

[Conditional expectation]
To understand the theory of martingales, I believe conditional expectation is the most important notion. In my experience, I had wanted to learn some advanced topics in probability theory for a long time, but I found them very difficult to study without a good understanding of conditional expectation.

In fact, a sigma-algebra can be seen as a package of information, and the conditional expectation is the best approximation of a random variable given only part of that information. In image processing, the counterpart is multi-resolution analysis; in the language of functional analysis, it is the orthogonal projection onto a closed subspace of a Hilbert space. Existence for general integrable random variables then follows from the fact that L^2 is dense in L^1 on a probability space.
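To be concrete, for an integrable random variable X and a sub-sigma-algebra G of F, the conditional expectation E[X | G] is the (almost surely unique) G-measurable and integrable random variable such that

E[ E[X | G] 1_A ] = E[ X 1_A ]   for every event A in G.

When X is in L^2, E[X | G] is exactly the orthogonal projection of X onto the closed subspace L^2(G) of L^2(F).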

[Martingale: Its motivation and importance]
To define a martingale, we first need a filtration, i.e. an increasing family of sigma-algebras, a growing stream of information. A martingale is a process for which the best approximation of today's value, given yesterday's information, is just yesterday's value.
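In formulas: an adapted, integrable process (X_n) is a martingale with respect to the filtration (F_n) if

E[ X_n | F_{n-1} ] = X_{n-1}   for every n >= 1.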

Then comes Doob's theorem on martingale transforms, a discrete version of stochastic integration. It says that if we invest using only the old information, we cannot do better than a fair game: the expected gain stays equal to zero.
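A standard way to state this: if (X_n) is a martingale and (H_n) is a bounded predictable strategy (each H_n is F_{n-1}-measurable, i.e. decided using only the old information), then the martingale transform

(H · X)_n = sum_{k=1}^{n} H_k (X_k - X_{k-1})

is again a martingale; in particular E[ (H · X)_n ] = 0 for every n.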

More tricks can be found in martingale theory, especially when we add the role of stopping times. Perhaps we can even say that much of the development of martingale theory was aimed at answering questions about stopping, all flowing from the optional stopping theorem. So the professors always say: we know everything about a random process once we find a martingale in it.
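A typical form of the optional stopping theorem: if (X_n) is a martingale and T is a bounded stopping time (or, more generally, under suitable integrability conditions), then

E[ X_T ] = E[ X_0 ].

Many computations of exit probabilities and expected hitting times for random walks reduce to choosing a good martingale and applying this identity.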

Last but not least is the convergence of martingales, to me the most beautiful part of the theory, because it rests on nice inequalities such as Doob's maximal inequality and the upcrossing inequality. In general, if a martingale is bounded in L^1, it converges almost surely. Under the stronger condition of boundedness in L^p with p > 1, we also get convergence in L^p. But to get convergence in L^1, the right condition is uniform integrability. In addition, this links back to the multi-resolution picture: approximating the same variable at finer and finer levels of information produces a discrete martingale, and the convergence theorem tells us these approximations converge.
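As a reminder of the kind of inequality involved, Doob's L^p maximal inequality says that for a martingale (X_n) and p > 1,

E[ ( max_{k <= n} |X_k| )^p ] <= ( p/(p-1) )^p E[ |X_n|^p ],

and this estimate is the key step in upgrading the almost sure convergence of an L^p-bounded martingale to convergence in L^p.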

[Martingale: from discrete to continuous]
When we pass from discrete to continuous martingales, it becomes necessary to say something about the regularity of the random processes. Why? When people talk about random processes, they often say cadlag, from the French "continue à droite, limite à gauche" (right-continuous with left limits). This condition guarantees that the objects built from the martingale make sense and that the process is reasonably regular. In fact, we can modify the process on a set of probability zero to make it cadlag.
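To be precise about what modifying on a set of probability zero means: a process Y is a modification (or version) of X if

P( X_t = Y_t ) = 1   for every t >= 0.

The regularization theorem says that, when the filtration satisfies the usual conditions, every martingale admits a cadlag modification, and it is this version that we work with from then on.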

And only in this setting do all the convergence theorems, almost sure, in L^p, and in L^1, carry over. The technical part is just like the discrete version, but with more care needed when taking limits over all times, typically by working along a countable dense set of times such as the dyadic rationals.

In conclusion, we have given a short introduction to what a martingale is and why this tool is so useful. I admit that it is a little abstract, and we can only really grasp its meaning through concrete examples.

   
