This is one question from the exercise sheet on "Local time and excursions", but I think it is very interesting.
We consider a measurable function $g$ and a Brownian motion $(B_t)_{t \geq 0}$, and the integral defined as
$$A_t = \int_0^t g(B_s)ds$$
which means integration along the path. We suppose that $g$ is integrable, so that this formula makes sense. If $g$ is continuous this is obvious: although there is a random part, it is in fact a Riemann (or Lebesgue) integral of a continuous function. In the general case, we apply a very useful formula called the occupation time formula:
$$\int_0^t g(B_s)ds = \int g(a) L^a_t(B)da$$
Then, since $a \mapsto L^a_t(B)$ is continuous and compactly supported (it vanishes outside the range of the path up to time $t$), it attains a maximum, so $A_t$ is well defined for integrable $g$.
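As a quick sanity check (my own addition, not part of the exercise), here is a small Python sketch comparing the two sides of the occupation time formula on one simulated path; the local time $L^a_t$ is estimated by the occupation density, i.e. the time spent near level $a$ divided by the bin width. The test function $g$ is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)

# One Brownian path on [0, T] with n steps.
T, n = 1.0, 200_000
dt = T / n
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

def g(x):
    return np.exp(-x**2)        # an integrable test function

# Left-hand side: A_T = int_0^T g(B_s) ds, as a Riemann sum.
lhs = np.sum(g(B[:-1])) * dt

# Right-hand side: int g(a) L^a_T da, where L^a_T is estimated by the
# occupation density: time spent in each bin, divided by the bin width.
edges = np.linspace(B.min(), B.max(), 400)
occupation, _ = np.histogram(B[:-1], bins=edges)
widths = np.diff(edges)
L = occupation * dt / widths                 # approximate local time per bin
centers = 0.5 * (edges[:-1] + edges[1:])
rhs = np.sum(g(centers) * L * widths)

print(lhs, rhs)   # the two values agree up to discretization error
```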
One more remark on this formula: one great advantage of the Lebesgue integral is the introduction of a measure, so when we compare two integrals we do not have to compare them pointwise; we can cut them into blocks. A Riemann-style integral along the path lacks this structure, but the local time transforms it back into a Lebesgue-style integral. The stochastic integral faces the same problem; luckily we have Itô, Doob and BDG, so we can write it as "one deterministic term + one random error".
The main theorem is to prove that
$$\frac{1}{\sqrt{t}} A_t \Rightarrow \left(\int g(a)\,da\right) |N|$$
where $N \sim \mathcal{N}(0,1)$ and the convergence is in law (weak convergence).
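Before the proof, one can watch the theorem numerically. The following sketch (my own illustration; the function $g$, the horizon $T$ and all parameters are arbitrary choices) samples $\frac{1}{\sqrt{T}} A_T$ for one large fixed $T$ over many independent paths and compares its quantiles with those of $\left(\int g\right)|N|$:

```python
import numpy as np

rng = np.random.default_rng(1)

def g(x):
    return np.exp(-x**2)          # integrable test function, int g = sqrt(pi)

int_g = np.sqrt(np.pi)
T, n, n_paths = 200.0, 20_000, 1_000   # one large horizon T per sample
dt = T / n

samples = np.empty(n_paths)
for k in range(n_paths):
    B = np.cumsum(rng.normal(0.0, np.sqrt(dt), n))
    samples[k] = np.sum(g(B)) * dt / np.sqrt(T)   # (1/sqrt(T)) * A_T

# Quantiles of the limit law (int g) * |N|, N ~ N(0, 1).
limit = int_g * np.abs(rng.standard_normal(100_000))
print(np.quantile(samples, [0.25, 0.5, 0.75]))
print(np.quantile(limit,   [0.25, 0.5, 0.75]))
```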
Let us see why this formula should be correct. One important observation is Lévy's theorem:
$$\left( L^0_t, |B_t| \right)_{t \geq 0} \overset{(\text{law})}{=} \left( S_t, S_t - B_t \right)_{t \geq 0}$$
where $S_t = \sup_{s \leq t} B_s$. So $\frac{1}{\sqrt{t}} L^0_t$ has the same law as $\frac{1}{\sqrt{t}} S_t$, which by the reflection principle has the same law as $\frac{1}{\sqrt{t}} |B_t|$, i.e. $|N|$. As for the local time at another level $a$: once the level is hit, $L^a$ behaves like a local time at $0$ restarted from the hitting time, by the strong Markov property.
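One can also see Lévy's identity in simulation (again a sketch of my own; the window width `eps` is an arbitrary choice): estimate $L^0_T$ by the normalized time spent in a small window $(-\epsilon, \epsilon)$ and compare its distribution with that of the running maximum $S_T$:

```python
import numpy as np

rng = np.random.default_rng(2)

T, n, n_paths, eps = 1.0, 100_000, 1_000, 0.05
dt = T / n

L0 = np.empty(n_paths)   # estimates of the local time at level 0
S = np.empty(n_paths)    # running maxima S_T
for k in range(n_paths):
    B = np.cumsum(rng.normal(0.0, np.sqrt(dt), n))
    # L^0_T ~ (time spent in (-eps, eps)) / (2 * eps)
    L0[k] = np.sum(np.abs(B) < eps) * dt / (2 * eps)
    S[k] = max(B.max(), 0.0)   # include B_0 = 0 in the supremum

# By Levy's theorem both should be distributed as |N(0, T)|, here T = 1.
print(np.quantile(L0, [0.25, 0.5, 0.75]))
print(np.quantile(S,  [0.25, 0.5, 0.75]))
```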
However, the problem is this: convergence in law of a single random variable does not imply convergence in law of the random process. I would put this phrase in red to point out the danger. But we know that $a \mapsto L^a_t(B)$ is also continuous, so what is the error term? Does this error disappear after the normalization by $\frac{1}{\sqrt{t}}$?
We have to go back to the analysis of the regularity of the local time. Using the Tanaka formula
$$L^a_t(B) = 2(B_t - a)^+ - 2(B_0 - a)^+ - 2\int_{0}^t \mathbb{1}_{\{B_s > a\}} \, dB_s$$
we obtain, for $a > 0$ (the case $a < 0$ is symmetric),
$$L^a_t(B) - L^0_t(B) = \left[2(B_t - a)^+ - 2(B_0 - a)^+ \right] - \left[2 B_t^+ - 2 B_0^+\right] + 2\int_{0}^t \mathbb{1}_{\{0 < B_s \leq a\}} \, dB_s$$
We would like to say that $L^a_t(B) - L^0_t(B)$ is uniformly small. The difficulty is the last term, the stochastic integral. However, as $s$ grows it becomes rare for the Brownian motion to stay in the small interval $(0, a]$, so this set contributes very little to the integral (even a stochastic one!). A powerful tool like the BDG inequality gives full control of the moments of this random variable. We estimate the tail, so that with probability at least $1 - \epsilon$ the rescaled process converges uniformly to the $|N|$ limit. We then write the proof down properly, with a density argument etc., and conclude.
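To make the key estimate concrete, here is the moment bound I have in mind for fixed $a > 0$ (a sketch of my own, constants left implicit, not the exact computation of the exercise). By BDG and the occupation time formula applied to $g = \mathbb{1}_{(0,a]}$,
$$\mathbb{E}\left[\sup_{s \leq t}\left|\int_0^s \mathbb{1}_{\{0 < B_u \leq a\}} \, dB_u\right|^{2p}\right] \leq C_p \, \mathbb{E}\left[\left(\int_0^a L^b_t \, db\right)^{p}\right] \leq C_p \, a^p \, \mathbb{E}\left[\left(\sup_{b} L^b_t\right)^{p}\right].$$
Since Brownian scaling gives $\sup_b L^b_t \overset{(\text{law})}{=} \sqrt{t} \, \sup_b L^b_1$, whose moments are finite, the $L^{2p}$ norm of the stochastic integral is of order $a^{1/2} t^{1/4}$; after dividing by $\sqrt{t}$ it is $O(a^{1/2} t^{-1/4})$, which indeed vanishes as $t \to \infty$.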
Finally, I have to say that once I came to the part about estimating the size of a random variable, the training from the statistics course really helped a lot. That may be why we say probability and statistics always go together. (Yes, and analysis is also their good friend.)