Every year when the clocks change for daylight saving time, the days in Paris feel especially long, and the weather cooperates by jumping straight into summer mode. Starting from April, this is truly my favorite season of the year.
Time flies; it is already 2018.
The script written four years ago has reached its ending on schedule, yet the plot twists were not what I imagined back then. At first I only thought I should do a thesis; later I felt probability seemed more interesting than analysis; then I fell for stochastic geometry and set my heart on working in that direction, attending many talks, learning a great deal, and going to some conferences. Then the M2 year began, and I discovered that everything I had learned in the previous two or three years was all flashy moves with no real substance, which left me panicking.
For the next half year or so I went to lectures and took notes every day, then came home to review and work through the proofs. By now, given the idea of a proof, I can more or less fill in the details myself. My M2 exam results turned out well too.
But just at that point, my advisor told me that some directions were simply too crowded, and suggested I change topics.
At first I was a little disappointed, but perhaps it was through this process that I began to think more and more about what randomness really is, and I realized that probability is not a single direction: it has many, many different branches, and I ought to look at them all. Once I saw it this way, my mind opened up, and I stopped insisting on one particular topic.
And so I arrived at my current topic, which seems to be the sum of everything I have learned so far. Call it a happy accident.
It does mean I have to pick analysis and PDEs back up. No fear: I started out as a teaching assistant for real analysis and functional analysis.
Back to the question at the beginning: why do I love summer? Because the days are long, and there is plenty of time to do what I love. My mind drifts back to that summer of 2015, that state of having been held back for so long, wanting so badly to study mathematics: grinding through problems in PC every day until eleven at night, once even until two or three in the morning, then waking up and continuing, working through the exercises one by one...
I am writing this paragraph not for anything else, but to remind myself of that brave version of me. As a song lyric goes, "If I had known all this, would I still have gone?"
Of course I went. These years have been happy ones; I have watched the dream come true and myself grow stronger and stronger. I still want to see what I can evolve into next. Let's wait and see.
Summer is here. Time to go all out one more time.
(A resolution: every week, read at least one proof not so closely related to my main line of work, and write one blog post. I must keep up the appetite for learning and not turn into a fool who knows only one direction. After all, there is still a big question waiting in my heart!)
Thursday, April 26, 2018
Friday, April 6, 2018
A CLT for one type of Riemann integral of Brownian motion
This is a question from the exercises of "Local time and excursion", and I think it is very interesting.
We consider a measurable function $g$ and a Brownian motion $(B_t)_{t \geq 0}$, and we define the integral
$$A_t = \int_0^t g(B_s)\,ds,$$
that is, the integral of $g$ along the path. We suppose that $g$ is integrable, so that this formula makes sense. If $g$ is continuous this is obvious: although there is a random part, it is in fact a Riemann (or Lebesgue) integral of a continuous function. In the general case, we apply a very useful identity called the occupation time formula:
$$\int_0^t g(B_s)\,ds = \int_{\mathbb{R}} g(a) L^a_t(B)\,da$$
Then, since the map $a \mapsto L^a_t(B)$ is continuous and compactly supported (the path is bounded on $[0,t]$), it attains a maximum, so $A_t$ is well defined.
One more remark about this formula: a great advantage of the Lebesgue integral is the introduction of a measure, so that to compare two integrals we do not have to compare them pointwise; we can cut them into blocks. A Riemann-type integral along the path does not have this flexibility, but the local time rewrites it in the Lebesgue style. Stochastic integrals face the same problem; luckily we have Itô, Doob and BDG, so we can decompose such an integral as "one deterministic term + one random error".
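Before moving on, here is a small numerical sanity check of the occupation time formula. This is only a sketch: the function $g$, the step size, and the histogram bin count are my own choices, and the histogram plays the role of the occupation density.

```python
# Sanity check of the occupation time formula on one simulated path.
import numpy as np

rng = np.random.default_rng(0)
t, n = 1.0, 200_000
dt = t / n
# Discretized Brownian path started at 0.
B = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))))

def g(x):
    return np.exp(-x ** 2)   # an integrable (here even continuous) g

# Left-hand side: Riemann sum of g along the path.
lhs = g(B[:-1]).sum() * dt

# Right-hand side: histogram estimate of the occupation density
# a -> L^a_t, integrated against g.
counts, edges = np.histogram(B[:-1], bins=200)
L_hat = counts * dt / np.diff(edges)      # estimated local time profile
centers = 0.5 * (edges[:-1] + edges[1:])
rhs = (g(centers) * L_hat * np.diff(edges)).sum()

print(lhs, rhs)   # the two values should agree closely
```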
Our main theorem is that
$$\frac{1}{\sqrt{t}} A_t \Rightarrow \left(\int_{\mathbb{R}} g(a)\,da\right) |N|,$$
where $N \sim \mathcal{N}(0,1)$ and $\Rightarrow$ denotes convergence in law as $t \to \infty$.
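Here is a small Monte Carlo sketch of this statement: for a large fixed $t$, the empirical law of $\frac{1}{\sqrt{t}} A_t$ over many simulated paths should be close to that of $\left(\int g\right)|N|$. All parameters below are my own choices; with $g(x) = e^{-x^2}$ we have $\int g = \sqrt{\pi}$.

```python
# Monte Carlo check of the CLT for A_t / sqrt(t).
import numpy as np

rng = np.random.default_rng(1)
t, n_steps, n_paths = 200.0, 5_000, 1_000
dt = t / n_steps

def g(x):
    return np.exp(-x ** 2)   # integral over R equals sqrt(pi)

# Simulate the paths and the Riemann sums A_t = integral of g(B_s) ds.
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)
A_t = g(B).sum(axis=1) * dt

sample = A_t / np.sqrt(t)
target = np.sqrt(np.pi) * np.abs(rng.standard_normal(n_paths))

# The empirical quantiles of the two samples should roughly match;
# expect only rough agreement, since the convergence is slow.
for q in (0.25, 0.5, 0.75, 0.9):
    print(q, np.quantile(sample, q), np.quantile(target, q))
```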
Let us first see why this formula should be correct. One important observation is Lévy's theorem:
$$\left( L^0_t, |B_t| \right)_{t \geq 0} \overset{(\text{law})}{=} \left( S_t, S_t - B_t \right)_{t \geq 0},$$
where $S_t = \sup_{s \leq t} B_s$ is the running maximum. Combined with the reflection principle ($S_t \overset{(\text{law})}{=} |B_t|$), this shows that $\frac{1}{\sqrt{t}}L^0_t$ has the same law as $|N|$. As for the local time at another level: once that level is touched, it behaves like $L^0_t$.
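The scalar consequence can be seen in a quick simulation. Comparing $S_t$ with $|B_t|$ avoids having to estimate the local time; this is only a sketch with my own parameter choices, and the discrete-time maximum slightly underestimates the true $S_t$.

```python
# S_t and |B_t| should have the same law, namely sqrt(t)*|N|.
import numpy as np

rng = np.random.default_rng(2)
t, n_steps, n_paths = 1.0, 2_000, 5_000
dt = t / n_steps

B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)
S_t = B.max(axis=1)          # running maximum at time t
abs_B_t = np.abs(B[:, -1])   # |B_t|, exactly sqrt(t)*|N| in law

# The empirical quantiles of the two samples should nearly coincide.
for q in (0.25, 0.5, 0.75, 0.9):
    print(q, np.quantile(S_t, q), np.quantile(abs_B_t, q))
```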
However, here is the problem: convergence in law of a single random variable does not imply convergence in law of the whole random process. I make this phrase red to point out the danger. But we know that $a \mapsto L^a_t(B)$ is continuous, so what is the error term? Does this error disappear after the normalization by $\frac{1}{\sqrt{t}}$?
We have to go back to the analysis of the regularity of the local time. Using the Tanaka formula
$$L^a_t(B) = 2(B_t - a)^+ - 2(B_0 - a)^+ - 2\int_{0}^t \mathbb{1}_{\{B_s > a\}}\,dB_s,$$
we obtain that
$$L^a_t(B) - L^0_t(B) = \left[2(B_t - a)^+ - 2(B_0 - a)^+ \right] - \left[2 B_t^+ - 2 B_0^+ \right] + 2\int_{0}^t \mathbb{1}_{\{0 < B_s \leq a\}}\,dB_s.$$
We would like to say that $L^a_t(B) - L^0_t(B)$ is uniformly small. The difficulty is the last term, the stochastic integral. However, the Brownian motion rarely stays inside the small interval $(0, a]$, so this interval contributes very little to the integral (even to a stochastic one!). A powerful tool like the BDG inequality gives complete control of the moments of this random variable. Estimating the tail, we get that with large probability $1 - \epsilon$ the rescaled profile $\left(\frac{1}{\sqrt{t}} L^a_t\right)_a$ is uniformly close to $\frac{1}{\sqrt{t}} L^0_t$, which converges in law to $|N|$. We then write down the proof properly by a density argument, etc., and conclude.
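To make this quantitative, here is my back-of-the-envelope version of the estimate (constants not optimized). By BDG, for every $p \geq 1$,
$$\mathbb{E}\left[\left|\int_0^t \mathbb{1}_{\{0 < B_s \leq a\}}\,dB_s\right|^{2p}\right] \leq C_p\, \mathbb{E}\left[\left(\int_0^t \mathbb{1}_{\{0 < B_s \leq a\}}\,ds\right)^{p}\right] = C_p\, \mathbb{E}\left[\left(\int_0^a L^x_t\,dx\right)^{p}\right] \leq C_p\, a^p\, \mathbb{E}\left[\Big(\sup_x L^x_t\Big)^{p}\right] \leq C'_p\, a^p\, t^{p/2},$$
where the equality is the occupation time formula applied to $\mathbb{1}_{(0,a]}$, and the last step uses the scaling $\sup_x L^x_t \overset{(\text{law})}{=} \sqrt{t}\, \sup_x L^x_1$ together with the standard fact that $\sup_x L^x_1$ has moments of all orders. Since the Tanaka terms in the difference are deterministically bounded by a multiple of $a$, we get, say for $a \leq \sqrt{t}$,
$$\mathbb{E}\left[\left|\frac{L^a_t(B) - L^0_t(B)}{\sqrt{t}}\right|^{2p}\right] \leq C''_p \left(\frac{a}{\sqrt{t}}\right)^{p}.$$
In other words, the fluctuation of the local time across levels lives at scale $t^{1/4}$, strictly below the $\sqrt{t}$ normalization, which is exactly the uniform smallness claimed above.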
Finally, I have to say that once I came to the part about analyzing the size of a random variable, the training from the statistics course really helped a lot. That may be why we say probability and statistics always go together. (Yes, and analysis is also their good friend.)