We often say that almost sure convergence, convergence in probability and convergence in law cannot be mixed: the first implies the second, which implies the third, but not conversely. In some situations, however, the three notions coincide. One very famous situation is Lévy's equivalence theorem: let $(X_m)_{m \geq 1}$ be a sequence of independent random variables and define $S_n = \sum_{m=1}^n X_m$; then the convergence in law of $S_n$ implies its almost sure convergence!
This seems a little incredible, and such an amazing theorem does not often appear in a standard textbook, maybe because such an easily stated theorem sometimes requires a technical proof. Here, I give a personal argument using several big theorems.
The key idea is to use Kolmogorov's three-series theorem, an equivalent criterion for the almost sure convergence of $S_n$. Fix $A > 0$ and define $Y_m = X_m \mathbf{1}_{|X_m| \leq A}$; the three conditions are
$$
\sum_{m=1}^{\infty} \mathbb{P}[|X_m|>A] < \infty, \quad \sum_{m=1}^{\infty}\mathbb{E}[Y_m]\text{ converges }, \quad \sum_{m=1}^{\infty}\mathbf{Var}[Y_m] < \infty.
$$
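As a quick sanity check of the criterion (a standard example, not part of the argument below): take $X_m = \varepsilon_m / m$ with i.i.d. signs $\mathbb{P}[\varepsilon_m = \pm 1] = 1/2$. With $A = 1$ we have $Y_m = X_m$, and
$$
\sum_{m=1}^{\infty} \mathbb{P}[|X_m|>1] = 0, \quad \sum_{m=1}^{\infty}\mathbb{E}[Y_m] = 0, \quad \sum_{m=1}^{\infty}\mathbf{Var}[Y_m] = \sum_{m=1}^{\infty}\frac{1}{m^2} < \infty,
$$
so the random harmonic series $\sum_m \varepsilon_m/m$ converges almost surely.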
For the first series, we put the random variables into the Skorokhod representation theorem, realizing the convergence in law as an almost sure convergence, and argue that this condition is necessary: otherwise the fluctuations $|X_m| > A$ would happen too often for $S_n$ to settle down.
For the third one, we argue by contradiction and plug into the Lindeberg-Feller central limit theorem: if $\sum_{m=1}^{\infty}\mathbf{Var}[Y_m]$ were infinite, then (the Lindeberg condition holds, since the $Y_m$ are bounded by $A$ while the normalization diverges)
$$
\frac{\sum_{m=1}^n \left(Y_m - \mathbb{E}[Y_m]\right)}{\sqrt{\sum_{m=1}^{n}\mathbf{Var}[Y_m]}} \Rightarrow \mathcal{N}(0,1),
$$
and we know $\sum_{m=1}^{n} Y_m$ also converges in law to some random variable. The normalization is then too strong: dividing a sequence that converges in law by a diverging normalization gives $\frac{\sum_{m=1}^n Y_m}{\sqrt{\sum_{m=1}^{n}\mathbf{Var}[Y_m]}} \Rightarrow 0$, so the deterministic sequence $-\sum_{m=1}^n \mathbb{E}[Y_m] \big/ \sqrt{\sum_{m=1}^{n}\mathbf{Var}[Y_m]}$ would have to converge in law to $\mathcal{N}(0,1)$, which is a contradiction.
Finally, to prove that the series of expectations converges, we again argue by contradiction. We go back to the very basic definition and test against a function. We have the intuition that $\sum_{m=1}^{\infty}X_m \simeq \sum_{m=1}^{\infty}Y_m$, which would escape to infinity, because its drift is infinite while its variance stays finite. This is the situation of vague convergence, with mass escaping to infinity, contradicting the convergence in law.
Having verified the three conditions, we conclude the theorem.
Saturday, December 29, 2018
Wednesday, September 12, 2018
Analysis and PDE : Basic facts about evolution equations
Evolution equations are an important topic in mathematics. Today, I read some of the theoretical foundations of this subject, for the model equation
$$
\partial_t f = \Delta f(x) + a(x) \cdot \nabla f(x) + c(x) f(x) + \int_{\mathbb{R}^d} b(y,x) f(y) dy
$$
with the conditions $a \in W^{1,\infty}$, $c \in L^{\infty}$, $b \in L^2(\mathbb{R}^d \times \mathbb{R}^d)$.
Although there are many other equations (possibly degenerate ones), this model already covers the equations of diffusion, branching, transport and also mean-field, so it is quite general. Of course, we will discuss existence and uniqueness for this equation.
The main framework for studying evolution equations is to design a Hilbert space $H$ and a Banach subspace $V \subset H$; the advantage is that the duality of the Hilbert space gives the triple
$$
V \subset H = H' \subset V' .
$$
so we can talk about weak solutions of the equation. We use $|\cdot|$ and $\|\cdot\|$ for the norms of $H$ and $V$ respectively; for the equation above, a typical choice is $H = L^2(\mathbb{R}^d)$ and $V = H^1(\mathbb{R}^d)$.
One proposes a good condition, similar to that of Lax-Milgram. If we denote by $L$ the generator, then we require that
(1) $L : V \rightarrow V'$ is bounded;
(2) $L$ is coercive up to a dissipative term: $\langle Lg, g \rangle \leq - \alpha \|g\|^2 + b|g|^2$ for every $g \in V$.
These conditions give an inequality
$$
|g(T)|^2_H + 2 \alpha \int_0^T \|g(s)\|_V^2 ds \leq e^{2bT}|g_0|^2
$$
valid at first for more regular $g$, and then extended to general functions by approximation. This is the energy inequality; it says a lot of things, most importantly the uniqueness, and it guarantees that the solution always stays in the space $g \in L^{\infty}(0, T; H) \cap L^2(0,T;V) \cap H^1(0, T; V')$.
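To see where the energy inequality comes from, here is the formal computation for a smooth solution of $\partial_t g = Lg$ (assuming $b \geq 0$):
$$
\frac{d}{dt}|g(t)|^2 = 2\langle Lg(t), g(t)\rangle \leq -2\alpha\|g(t)\|^2 + 2b|g(t)|^2, \quad \text{so} \quad \frac{d}{dt}\left(e^{-2bt}|g(t)|^2\right) \leq -2\alpha e^{-2bt}\|g(t)\|^2 ;
$$
integrating from $0$ to $T$, multiplying by $e^{2bT}$ and using $e^{2b(T-s)} \geq 1$ gives the inequality above.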
(One remark : the dissipative part plays the role of the Cauchy-Lipschitz condition.)
Later, we use the embedding theorem $L^2(0,T;V) \cap H^1(0, T; V') \subset C([0,T];H)$. This makes rigorous the formal manipulations with $g'$, so we can compute as if the solution were regular.
The existence theory is a variant of the Lax-Milgram theory. When we discretize the problem in time, it becomes an elliptic equation at each small discrete step. Then we extract a weakly convergent subsequence to get a solution of the equation.
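As a toy illustration of this time discretization, here is a minimal sketch of implicit Euler for the pure diffusion case $L = \Delta$ in one dimension (the grid parameters are arbitrary illustrative choices):

```python
import numpy as np

# Implicit Euler for dg/dt = Lg with L = d^2/dx^2 on (0, 1),
# zero Dirichlet boundary. Each time step solves the elliptic
# problem (I - dt * L) g_new = g_old, as in the abstract scheme.
n, dt, steps = 100, 1e-3, 50
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

# Tridiagonal discrete Laplacian (values outside the grid are taken as 0)
L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2
A = np.eye(n) - dt * L               # the elliptic operator of one step

g = np.exp(-100.0 * (x - 0.5)**2)    # initial data: a Gaussian bump
for _ in range(steps):
    g = np.linalg.solve(A, g)        # one elliptic solve per step
```

Each iteration is exactly the "variant elliptic equation" of the abstract argument, and the compactness step corresponds to letting the step size tend to zero.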
Saturday, August 18, 2018
A very non-rigorous introduction to differential geometry
During the travel time on the train, I reviewed the content of differential geometry - a topic that I have learned many times and long ago, but have so little of at hand. I believe that I have learned it at least three times : first in Hong Kong, second at Fudan and third at Polytechnique. Some time later, I finally understood its main idea, so in this short blog post, I attempt to give a very non-rigorous introduction.
The first step is sometimes the most difficult one : to properly define the object, a manifold $\mathcal{M}$. Generally speaking, it is something locally similar to $\mathbb{R}^d$. But what does it look like? The standard definition of a manifold uses the language of local coordinates and atlases. A vivid visualization is an RPG character running around in a game : locally, we have to draw the map as a flat chart. We require the $C^k$ condition so that the local bijections are nice and we can always do approximations properly. Then, the compatibility condition glues all the local charts together and defines a good manifold.
Mappings between manifolds are used to define equivalence classes. The notation is complex but the idea is simple : when talking about the properties of squares, circles etc., we never single out one specific square or circle, since they all have the same properties. For a manifold, we should tell the same story. That's why we invent the idea of mappings between different manifolds.
For $p \in \mathcal{M}$, its local increments have $d$ directions induced by the local chart, so it naturally carries a tangent plane $T_p\mathcal{M}$. Letting the base point $p$ vary, these planes assemble into the tangent bundle $T\mathcal{M}$. But until now, we have said nothing about distance and volume on the manifold. We have to add a metric $g$ to the manifold $(\mathcal{M}, g)$, which is a symmetric positive-definite matrix in coordinates, i.e. a $(0,2)$-tensor $g = g_{ij}\,dx^i \otimes dx^j$. Then the length of a curve $\gamma : [a,b] \rightarrow \mathcal{M}$ can be written as
$$
L = \int_a^b \sqrt{g(\frac{d\gamma}{dt}, \frac{d\gamma}{dt})} dt,
$$
and the volume can be written as
$$
\int f d\mathcal{M} = \int_{\mathbb{R}^d} f \sqrt{\det(g)}\, dx_1 dx_2 \cdots dx_d.
$$
We can check that all these definitions do not depend on the choice of local coordinates.
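A worked example (the standard computation for the round sphere $S^2$): in spherical coordinates $g = d\theta^2 + \sin^2\theta \, d\varphi^2$, so the matrix of $g$ is diagonal with entries $1$ and $\sin^2\theta$, and the volume formula gives
$$
\int_{S^2} 1 \, d\mathcal{M} = \int_0^{2\pi}\int_0^{\pi} \sqrt{\det(g)}\, d\theta \, d\varphi = \int_0^{2\pi}\int_0^{\pi} \sin\theta \, d\theta \, d\varphi = 4\pi,
$$
recovering the usual area of the unit sphere.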
All these definitions make the manifold similar to $\mathbb{R}^d$, except for something more profound : curvature. It starts from vector fields $X$, whose composition is generally not commutative, so we have the Lie bracket $[X, Y] = XY - YX$. A more complicated idea is the covariant derivative (connection), written $\nabla_{X}Y$. From these, one develops the notions of curvature, Ricci curvature and scalar curvature. I skip this part since it may be more useful for experts in geometry.
In the last part of the lecture notes, this language is used to study the theory of relativity. For example, the main idea is to find the possible $g$ that make the physical model work, via some minimal action principle. Interested readers can check the lecture notes of MAT568 by Jeremie Szeftel.
Friday, July 20, 2018
Circle packing and its applications
One topic of this year's Saint-Flour lectures is circle packing, a very nice theory bridging complex analysis, probability and graph theory. I will record some main results from the two weeks of lectures.
The main idea is that the circle packing provides a new viewpoint on a graph. The radii of the disks give more information than the graph distance alone, and this geometric information matches well with the analysis of resistance. If the circle packing lies in the plane, the effective resistance has a logarithmic growth.
The proof of this theorem is tricky and depends mainly on a very nice estimate called the "magic" lemma. Personally, I believe that this lemma has something to do with the maximal inequality and the Whitney covering.
A final remark : this is a nice proof, but how does one gather all the necessary results from probability, graph theory and complex analysis ? This is a nice question, and the connection between these three areas could be more profound in every sense.
Drawing circle packings : finite and infinite
A circle packing is a collection of disks $P$ representing a planar graph $G = (V, E)$ : the center of each disk represents a vertex of the graph, and two disks are tangent if and only if the two vertices are connected.
From a circle packing, we can easily draw a graph; but can we construct a circle packing from any planar graph ? This is the first question. It can be answered in two steps : in the finite case, there is an algorithm that solves it by minimizing an energy depending only on the radii, and for given radii we can draw the graph. Moreover, this drawing is unique up to Mobius transformations if we add a point at infinity to make the outer face a triangle.
In the infinite case, the question is more complicated. However, if we assume the bounded degree condition, it is a direct consequence of compactness.
Reversible Markov chain and electric network
The main question of the topic is to study the recurrence of a reversible Markov chain. In fact, we can always realize it as an electric network by assigning conductances to the edges. The recurrence of the simple random walk on this infinite graph (Markov chain) is equivalent to $R_{eff}(x \leftrightarrow \infty) = \infty$. Very intuitively, this means that the effective resistance to infinity is infinite, so the walk cannot escape to infinity freely.
Two useful lemmas are those of Dirichlet and Thomson : the voltage is the harmonic function which minimizes the energy, and the current is the unit flow which minimizes the energy.
Via the electric network, the problem of recurrence is transformed into a problem of analysis on the graph.
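A classical example of this transformation (the Nash-Williams criterion, standard material rather than a result of the lectures): if $(\Pi_n)_{n \geq 1}$ are disjoint edge cutsets separating $x$ from infinity, then
$$
R_{eff}(x \leftrightarrow \infty) \geq \sum_{n} \Big( \sum_{e \in \Pi_n} c(e) \Big)^{-1}.
$$
For $\mathbb{Z}^2$ with unit conductances, taking $\Pi_n$ to be the edges leaving the box of radius $n$ (of cardinality of order $n$) gives $R_{eff} \gtrsim \sum_n \frac{1}{n} = \infty$ : the walk is recurrent, and we see exactly the logarithmic growth of the effective resistance mentioned above.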
He-Schramm theorem : Classification by circle packing
However, it is not easy to analyze the effective resistance directly. The He-Schramm theorem gives a classification of infinite graphs of bounded degree : hyperbolic ones, which are circle packed in the unit disk and transient; or parabolic ones, which are circle packed in the plane and recurrent.
Benjamini-Schramm theorem
Our final object is the UIPT, which is a random object for which the bounded degree condition fails. To get past this, we call on the more powerful Benjamini-Schramm theorem. It says that if the object studied is the local limit of finite planar graphs with uniformly chosen roots and bounded degree, then it is recurrent.
Recurrence of UIPT
The final step, proving the recurrence of the UIPT, requires all this preparation. Moreover, we apply some surgery, replacing the big-degree vertices by trees. This modification makes the Benjamini-Schramm theorem applicable, since it reduces the degrees of the vertices. Some quantitative analysis is also needed to calibrate the increment of the effective resistance; for example, we need to know that the distribution of the degree has an exponential tail.
Tuesday, June 12, 2018
Analysis and PDE : Sobolev spaces - integer and fractional type and Sobolev inequalities
The Sobolev space $W^{k,p}(\Omega)$ may be one of the most important tools in the domain of PDE. Its interest comes from two sides : from the viewpoint of mathematics, it engages a lot of nice techniques from functional and harmonic analysis; from the viewpoint of physics, something like
$$\int_{\Omega} |\nabla u(x)|^2 dx < \infty$$
can be interpreted as a finite energy. Although researchers use the embedding theorems every day, it is sometimes not easy to extract more information from the constants, especially for the fractional-order space $W^{s,p}(\Omega)$. The most direct way to illustrate this delicate usage may be a note like this one. Here, however, we concentrate on the inequalities of Sobolev and Poincare type instead of those of Holder type.
$W^{k,p}(\Omega), W_0^{k,p}(\Omega), W^{k,p}(\mathbb{R}^d)$
The definition of $W^{k,p}(\Omega)$ just says that all weak derivatives up to order $k$ belong to the space $L^p(\Omega)$. However, since $C^{\infty}_0(\Omega)$ is not always dense in this space, we define another space, $W_0^{k,p}(\Omega)$, as its closure under the same norm. There is no such problem for the space $W^{k,p}(\mathbb{R}^d)$. The trickier problem is that $W^{k,p}(\Omega)$ is not a subspace of $W^{k,p}(\mathbb{R}^d)$ ! Luckily, we have an extension theorem defining $Ext(u)$ such that $$\Vert Ext(u) \Vert_{W^{k,p}(\mathbb{R}^d)} \leq C(d,p,k, \Omega)\Vert u \Vert_{W^{k,p}(\Omega)}$$
But this theorem uses properties of $\Omega$ (a Lipschitz boundary, for example). This gives a very intuitive rule of thumb : estimates about $W_0^{k,p}(\Omega)$ and $W^{k,p}(\mathbb{R}^d)$ do not use properties of $\Omega$, while estimates about $W^{k,p}(\Omega)$ do.
$W^{k,p}(\mathbb{R}^d)$ and the Sobolev-Poincare inequalities
For $W_0^{k,p}(\Omega)$ and $W^{k,p}(\mathbb{R}^d)$, two useful inequalities are available. The first is the Sobolev inequality $$\Vert u\Vert_{L^{p^*}(\mathbb{R}^d)} \leq C(d,p,k) \Vert \nabla^{k}u\Vert_{L^{p}(\mathbb{R}^d)}$$
where $p^*$ is the critical exponent $p^* = \frac{dp}{d - pk}$.
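The exponent is forced by scaling (a standard check) : applying the inequality to $u_\lambda(x) = u(\lambda x)$ gives
$$
\Vert u_\lambda \Vert_{L^{p^*}} = \lambda^{-d/p^*} \Vert u \Vert_{L^{p^*}}, \qquad \Vert \nabla^k u_\lambda \Vert_{L^{p}} = \lambda^{k - d/p} \Vert \nabla^k u \Vert_{L^{p}},
$$
so the inequality can hold for all $\lambda > 0$ only if $-d/p^* = k - d/p$, that is $p^* = \frac{dp}{d - pk}$.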
Another important inequality is the Poincare inequality $$\Vert u\Vert_{L^{p}(\mathbb{R}^d)} \leq C(d,p,k)\, d(\Omega) \Vert \nabla u\Vert_{L^{p}(\mathbb{R}^d)}$$
while this time it gives one factor of $d(\Omega)$, the diameter of the domain. The proof starts from the dense subspace $C^{\infty}_0(\Omega)$ and uses an integral representation. It also works for inequalities of the type involving $u - (u)_{\Omega}$.
The interest of Poincare is that when we define the weighted norm
$$\Vert u\Vert_{\underline{W}^{k,p}(\Omega)} = \sum_{|\beta| \leq k}|\Omega|^{\frac{|\beta| - k}{d}}\Vert \partial^{\beta} u \Vert_{L^p(\Omega)}$$
we can just consider the last term :
$$\Vert u\Vert_{\underline{W}^{k,p}(\Omega)} \simeq \Vert \nabla^{k}u \Vert_{\underline{L}^{p}(\Omega)}$$
However, when we use the Sobolev inequality, we pick up an extra power of the size of the domain.
$W^{s,p}(\Omega), W_0^{s,p}(\Omega), W^{s,p}(\mathbb{R}^d)$
Next, we consider the Gagliardo semi-norm $$[u]^p_{W^{s,p}(\Omega)} = \iint_{\Omega \times \Omega} \frac{|u(x) - u(y)|^p}{|x-y|^{d + sp}}\,dx\, dy$$
It is defined by analogy in the different contexts. However, it is not a norm : we observe that it does not change when a constant is added. So when we want a norm, we have to add the $L^p$ norm. One remark is that this semi-norm only takes into account the small-scale fluctuations. Two easy inequalities are
$$0 < s \leq s' < 1, [u]_{W^{s,p}(\Omega)} \leq (d(\Omega))^{s'-s}[u]_{W^{s',p}(\Omega)}$$
$$0 < s < 1, [u]_{W^{s,p}(\Omega)} \leq C(d,p,s)(d(\Omega))^{1-s}\Vert \nabla u \Vert_{L^{p}(\Omega)}$$
They hold for $u \in W_0^{s,p}(\Omega)$ or $W^{s,p}(\mathbb{R}^d)$; otherwise we have to add a constant $C(\Omega)$. These two inequalities share the spirit of the Poincare inequality : we trade a derivative for a factor of the diameter. However, to get a better description of the weighted norms, we have to think about other types of inequalities. After all, we see that passing between the integer order and the fractional order is always a little messy, although we believe that they share similar properties (when defined properly).
$W^{s,p}(\mathbb{R}^d) \hookrightarrow L^{p*}(\mathbb{R}^d)$
This result looks like the Sobolev inequality : for $p^* = \frac{dp}{d - ps}$ and $u \in W^{s,p}(\mathbb{R}^d)$, we have $$\Vert u\Vert_{L^{p^*}(\mathbb{R}^d)} \leq C(d,p,s) [u]_{W^{s,p}(\mathbb{R}^d)}$$
however, its proof is very tricky and uses techniques like dyadic decomposition. Once it holds, we can again take the highest derivative as the main part of the weighted norm :
$$\Vert u\Vert_{\underline{W}^{s,p}(\Omega)} \simeq [u]_{\underline{W}^{s,p}(\Omega)}$$
$W^{1,p}(\mathbb{R}^d) \hookrightarrow W^{s,p*}(\mathbb{R}^d)$
The final theorem, where we omit the context since it is always the same, is that for $0 < s < 1$ and $p^* = \frac{dp}{d + (s-1)p} \Leftrightarrow s - \frac{d}{p^*} = 1 - \frac{d}{p}$, we have
$$ [u]_{W^{s,p*}(\mathbb{R^d})} \leq C(d,p,s)\Vert \nabla u\Vert_{L^{p}(\mathbb{R^d})}$$
To prove it, one important tool is the Hardy-Littlewood-Sobolev inequality. This completes the relations between the fractional- and integer-order Sobolev spaces.
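For reference, the Hardy-Littlewood-Sobolev inequality used here states (a classical fact) that the Riesz potential $I_\alpha f(x) = \int_{\mathbb{R}^d} \frac{f(y)}{|x-y|^{d-\alpha}}\, dy$ satisfies
$$
\Vert I_\alpha f \Vert_{L^{q}(\mathbb{R}^d)} \leq C(d, \alpha, p) \Vert f \Vert_{L^{p}(\mathbb{R}^d)}, \qquad 0 < \alpha < d, \quad 1 < p < q < \infty, \quad \frac{1}{q} = \frac{1}{p} - \frac{\alpha}{d}.
$$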
Conclusion
We have obtained a lot of inequalities. How to remember them ? OK, we only have to keep one ingredient in mind. The weighted norms eliminate the constants in the Poincare and Holder inequalities, so we can use the highest derivative and any $p$ that we like. However, if we compare weighted norms of different orders, the effect of the scale always appears.
Monday, May 21, 2018
Analysis and PDE : Harmonic function
$\Delta u = 0$ may be one of the most important equations in PDE, since it appears many times in different contexts and has nice properties : we should say that its beautiful properties imply interesting results in physics and in nature. Here, we recall some basic properties and proof strategies of this topic.
Classical solution
Although existence should be considered the first property necessary for studying the object, maybe it is nicer to give some interesting properties first. We suppose that the solution is of class $C^2$; then one important formula is the Stokes (divergence) formula
$$\int_{\partial \Omega} F \cdot \nu \, d\sigma = \int_{\Omega} \nabla \cdot F(x)\, dx$$
and its componentwise corollary $$\int_{\partial \Omega} u \nu \, d\sigma = \int_{\Omega} \nabla u(x)\, dx .$$
Applying this to $\nabla u$, we obtain that on a domain where $u$ is harmonic,
$$\int_{\partial \Omega} \nabla u(x) \cdot \nu d\sigma = \int_{\Omega} \Delta u(x) dx = 0 $$
Furthermore, using the Green formula, we obtain the mean value principle :
$$u(x) = \frac{1}{|\partial B_R(x)|}\int_{\partial B_R(x) }u(y)\, d\sigma(y)$$
One remarkable result is that this property improves the regularity to $C^{\infty}$, and it also implies that the function is harmonic (the proof is similar, via the convolution below). Indeed, if we differentiate in the radius, we get
$$0 = \frac{d}{dr} \int_{\partial B_1(0) }u(x + ry)\, d\sigma(y) = \int_{\partial B_1(0) }\nabla u(x + ry) \cdot y \,d\sigma(y) = r^{1-d} \int_{ B_r(x) } \Delta u(y)\, dy$$
A second important property is the Liouville property. It says that a bounded harmonic function is trivial : it is constant. The proof uses the fact that all the derivatives are also harmonic; applying the mean value principle to $\partial_i u$ on a large ball and integrating by parts, we control its size :
$$|\partial_i u| \leq \frac{1}{\omega_d R^d} \int_{\partial B_R} |u| d\sigma \leq \frac{d}{R}\sup|u| \rightarrow 0 \quad (R \rightarrow \infty).$$
A third important property is the maximum principle. The idea is simple : the maximum and the minimum of the function can only be attained at the boundary. Using the maximum principle, we prove the uniqueness for the Dirichlet problem : the difference of two solutions is harmonic with zero boundary value, hence identically zero.
A weak solution is also a classical solution
One very famous theorem of Weyl states that every weak solution, that is every $u$ with
$$\int_{\Omega} u(x) \Delta \phi(x) dx = 0$$
for all test functions $\phi \in C_c^{\infty}(\Omega)$, is also a strong solution of class $C^{\infty}$. The proof is very classical : we mollify, $u_{\epsilon} = u \ast \psi_{\epsilon}$. Then we prove that this function is harmonic using the weak relation. Finally, we pass to the limit in the mean value principle :
$$u_{\epsilon}(x) = \frac{1}{| B_R(x)|}\int_{ B_R(x) }u_{\epsilon}(y) dy \rightarrow u(x) = \frac{1}{| B_R(x)|}\int_{ B_R(x) }u(y) dy$$
Since the mean value property does not require regularity, we conclude that $u$ is a strong solution.
Existence
Finally, we come back to the problem of existence. There are two ways to prove it, and both work in a more general framework.
The first one is the Lax-Milgram theorem. It treats the problem as inverting an operator in some function space :
$$a(u, v) = L(v)$$
The second one is the variational formulation : we treat the solution as a minimizer of
$$J(u) = \frac{1}{2}\int_{\Omega} |\nabla u|^2 dx - \int_{\Omega} f u dx$$
One can also deduce one characterization from the other.
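To see the equivalence in one direction, here is the formal first-variation computation : if $u$ minimizes $J$, then for every test function $v$,
$$
0 = \frac{d}{dt}\Big|_{t=0} J(u + tv) = \int_{\Omega} \nabla u \cdot \nabla v \, dx - \int_{\Omega} f v \, dx,
$$
which is exactly the weak formulation $a(u,v) = L(v)$ with $a(u,v) = \int_{\Omega} \nabla u \cdot \nabla v\, dx$ and $L(v) = \int_{\Omega} f v\, dx$.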
Wednesday, May 16, 2018
Analysis and PDE : Covering and decomposition lemmas - Vitali, Calderon-Zygmund, Whitney
Some a priori estimates are important in PDE; they come from functional inequalities, and some of those come from harmonic analysis and require combinatorial covering lemmas. This is a subject that I learned several years ago from Prof. Hongquan LI when I was at Fudan. Today I spent a whole day reviewing them.
Vitali covering
When we study the Hardy-Littlewood maximal function
$$\mathcal{M} f(x) = \sup_{r > 0} \frac{1}{|B_r(x)|} \int_{B_r(x)} |f(y)| dy$$
We would like to prove that this operator is of strong type $(p,p)$. The strategy is to prove that it is of weak type $(1,1)$ and of strong type $(\infty, \infty)$, and then to use the Marcinkiewicz interpolation theorem. However, the weak $(1,1)$ bound is not obvious. One tool used in the proof is the so-called Vitali covering lemma. It says that from a covering $\{B_i\}_{i \geq 0}$ we can extract a disjoint sub-family $\{B_{k_j}\}_{j \geq 0}$ such that
$$\bigcup_{j=1}^{\infty} B_{k_j} \subset \bigcup_{i=1}^{\infty} B_i \subset \bigcup_{j=1}^{\infty} 3B_{k_j}$$
This nice structure turns sub-additivity into additivity and is very useful in the proof.
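Here is how the lemma gives the weak $(1,1)$ bound (the standard computation) : cover the level set $\{\mathcal{M}f > \lambda\}$ by balls on which the average of $|f|$ exceeds $\lambda$, extract the disjoint family, and write
$$
|\{\mathcal{M}f > \lambda\}| \leq \sum_j |3B_{k_j}| = 3^d \sum_j |B_{k_j}| \leq \frac{3^d}{\lambda} \sum_j \int_{B_{k_j}} |f(y)|\, dy \leq \frac{3^d}{\lambda} \Vert f \Vert_{L^1}.
$$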
Calderon-Zygmund decomposition
The Calderon-Zygmund decomposition can be seen as a dyadic version of the maximal inequality. We start with the system of dyadic cubes
$$\mathcal{D} = \bigcup_{k \in \mathbb{Z}} \mathcal{D}_k$$
where $\mathcal{D}_k$ is the collection of cubes of side length $2^{-k}$. Then any point $x$ belongs to exactly one cube of each generation $\mathcal{D}_k$. And any two dyadic cubes are either disjoint or one is contained in the other - they cannot have a non-trivial intersection and difference at the same time. This decomposition is used to study singular integrals, but at first we use it to build a dyadic cubic version of the maximal function, defined as
$$\mathcal{M}_Q f(x) = \sup_{x \in Q \in \mathcal{D}} \frac{1}{|Q|} \int_{Q} f(y) dy$$
The first fact is that any open set in $\mathbb{R}^d$ can be written as a disjoint union of dyadic cubes. The idea is simple : we give one cube to each point $x$ such that $x \in Q \subset \Omega$, then we keep only the maximal cubes, merging those contained in another.
We apply this idea to the open set
$$A_{\lambda} = \{x \mid \mathcal{M}_Q f(x) > \lambda \}$$
and for each point, we choose the largest admissible cube $Q$. This works once we suppose $f \in L^1(\mathbb{R}^d)$, since the average over an arbitrarily large cube then tends to zero. This gives a perfect partition of the set :
$$A_{\lambda} = \bigsqcup_{i=1}^{\infty}Q_i$$
Moreover, the average over each cube is at most $2^d \lambda$ when $f$ is positive, by the stopping-time property, and $f(x) \leq \lambda$ a.e. outside $A_{\lambda}$, by the Lebesgue differentiation theorem.
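As a toy illustration of this stopping-time construction, here is a minimal one-dimensional discrete sketch (the array plays the role of the function, and the initial interval is assumed to have average at most $\lambda$) :

```python
def cz_decompose(f, lam, lo, hi, out):
    """Calderon-Zygmund stopping rule on the dyadic interval [lo, hi).

    f   : list of nonnegative values (a discrete 'function'),
          whose average over the initial interval is <= lam.
    out : collects the maximal dyadic intervals where the average
          first exceeds lam, together with that average.
    """
    avg = sum(f[lo:hi]) / (hi - lo)
    if avg > lam:
        # stopping time: the parent interval had average <= lam, and
        # halving an interval at most doubles the average, so here
        # lam < avg <= 2 * lam (the factor 2^d with d = 1).
        out.append((lo, hi, avg))
        return
    if hi - lo == 1:
        return
    mid = (lo + hi) // 2
    cz_decompose(f, lam, lo, mid, out)
    cz_decompose(f, lam, mid, hi, out)

bad = []
cz_decompose([0, 1, 8, 1, 0, 0, 2, 0], 2.0, 0, 8, bad)
print(bad)   # the selected intervals cover the bad set, with controlled averages
```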
Whitney decomposition
Finally, we present the Whitney decomposition, a stronger version. It is also a dyadic decomposition, but with two more properties :
1. The distance of each cube from $\Omega^c$ is comparable to its side length.
2. If two cubes are neighbors, their side lengths are comparable.
So this decomposition looks more balanced.
Its construction is a little tricky : it applies a "level by level peeling", covering each level of the distance to the boundary by cubes of comparable size, and then merging. This keeps the decomposition always comparable to that distance.
Friday, May 11, 2018
Analysis and PDE : Caccioppoli, Morrey, Holder and ergodicity of the heat equation
Since I will do my PhD in SPDE, I have to recover my once very solid capacity in analysis and PDE. Maybe the best way is to record some nice estimates that I have met in exercises and articles.
Functional inequality Morrey
We start from some functional inequalities, the most classical but powerful tools of all analysis. We state the Morrey inequality. Generally speaking, if an $L^2(\Omega)$ function on the domain $\Omega$ has all its oscillations satisfying, for every $x \in \Omega$ with $B_r(x) \subset \Omega$,$$
\frac{1}{|B_r(x)|} \int_{B_r(x)} |f(y) - (f)_{ B_r(x) } |^2 dy \leq M^2 r^{2 \alpha}
$$
then this function has an a.e. modification of class $C^{0, \alpha}$ (Holder). A very easy corollary replaces the oscillation by a gradient bound, via the Poincare inequality. The idea of the proof comes from the Lebesgue differentiation theorem :
$$\lim_{r \rightarrow 0} (f)_{ B_r(x) } = f(x) \quad \text{a.e.},$$
so it suffices to pass all the estimates to the functions $(f)_{ B_r(x) }$ and use the "three difference trick" to conclude.
One Holder interpolation
As the Sobolev embedding tells us, roughly speaking, the Holder space is a little better than all the $L^{p}$ spaces. Inspired by the interpolation theorem, we would like to obtain a Holder interpolation. One version is$$
\|f\|_{L^{\infty}(B_r)} \leq C \|f\|^{\frac{2\alpha}{d+2\alpha}}_{\underline{L}^{2}(B_r)}\left(r^{\alpha}[f]_{C^{0, \alpha}(B_r)}\right)^{\frac{d}{d+2\alpha}}
$$
This tells us that Holder regularity plus $L^2$ implies $L^{\infty}$. It may be seen as one part of the Sobolev embedding, but we recall the proof a little. The idea is not difficult but clever : we find a small ball on which the function is at least half of its maximum, and then compare with its $L^2$ norm. Thanks to the Holder regularity, the radius of the ball cannot be too small, and we get the result.
Caccioppoli inequality
Then we come to the elliptic equation :$$- \nabla \cdot (a(x) \nabla u(x)) = h$$
Regularity is one of the heart questions of this research, and one estimate used many, many times is the Caccioppoli inequality
$$\|\nabla u\|_{\underline{L}^{2}(B_{r/2})} \leq C \left( \frac{1}{r} \|u - (u)_{ B_r(x) }\|_{\underline{L}^{2}(B_r)} + \|h\|_{\underline{H}^{-1}(B_r)}\right)$$
The interpretation is very natural : this bound does not require regularity of the coefficients, but it slightly enlarges the domain. In fact, it tells us that the solution is more regular in the interior, while near the boundary it may have small spikes.
Functional inequality Nash
Finally, we come to study the behavior of the heat equation. We know that it generally decreases, and one useful functional inequality is the Nash inequality : for all $f \in L^1(\mathbb{R}^d) \cap H^1(\mathbb{R}^d)$ we have$$
\|f\|^{1+2/d}_{L^2} \leq \|f\|^{2/d}_{L^1} \|\nabla f\|_{L^2}
$$For the solution of the heat equation, this gives easily the bounds $L^1 \rightarrow L^\infty$ and $L^1 \rightarrow L^1$. By the decrease of the heat flow
$$
\frac{d}{dt}\|u(t, \cdot)\|^2_{L^2} = - \|\nabla u(t, \cdot)\|^2_{L^2}
$$
and Nash inequality we obtain
$$\|u(t, \cdot)\|_{L^2} \leq C t^{-d/4}\|u(0, \cdot)\|_{L^1}$$
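The derivation is a short ODE argument (using that the heat flow is an $L^1$ contraction, so $\|u(t, \cdot)\|_{L^1} \leq \|u(0, \cdot)\|_{L^1}$) : combining the Nash inequality with the energy identity above gives
$$
\frac{d}{dt}\|u(t, \cdot)\|^2_{L^2} \leq - \frac{\|u(t, \cdot)\|^{2 + 4/d}_{L^2}}{\|u(0, \cdot)\|^{4/d}_{L^1}},
$$
and integrating this differential inequality yields the decay $\|u(t, \cdot)\|_{L^2} \leq C t^{-d/4}\|u(0, \cdot)\|_{L^1}$.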
Interpolation then gives
$$\|u(t, \cdot)\|_{L^p} \leq C t^{-\frac{d}{2}(1-1/p)}\|u(0, \cdot)\|_{L^1}$$
Ergodicity of the heat equation
Finally, we would like to study the ergodic property. We suppose that the initial data $u(0, \cdot)$ is $\mathbb{Z}^d$-periodic. In this case, we have, for all $t > 1$,$$
\|u(t, \cdot) - (u)_{\Box}\|_{L^{\infty}(\Box)} \leq \exp(-ct)
$$
The idea combines the $L^2$ estimate and the Holder estimate : the former is a result of the decrease of the heat flow, and the latter a result of convolution. We remark that for small times we cannot expect better, since the averaging acts at range $\sqrt{t}$ and cannot regularize faster.
Thursday, May 3, 2018
One not-so-simple question in probability
The sum of i.i.d. random variables is one of the most classical topics in probability. However, there are always questions that are not so easy, even in a first-year course.
Question : Let $(X_n)$ be a sequence of independent random variables and $S_n = \sum_{i=1}^{n}X_i$. We would like to study the behavior of $\limsup S_n, \liminf S_n$ in the following two situations :
(1)$$\begin{eqnarray}\mathbb{P}[X_n = n] &=& \frac{1}{n+1} \\ \mathbb{P}[X_n = -1] &=& \frac{n}{n+1}\end{eqnarray}$$
(2)$$\begin{eqnarray}\mathbb{P}[X_n = n^2] &=& \frac{1}{n^2+1} \\ \mathbb{P}[X_n = -1] &=& \frac{n^2}{n^2+1}\end{eqnarray}$$
In other words, we would like to study the behavior of a random walk.
Even with some advanced tools, we may make mistakes in this question. Here, I give two false proofs that I have thought of.
False proof (1) : OK, the sum $S_n$ is a martingale, so we apply the martingale representation and embed this martingale into a Brownian motion by the Dubins-Schwarz theorem. (It is false since the process is discrete, and we may be choosing some specific moments of the continuous process.)
False proof (2) : We apply the optional stopping theorem with the stopping time $T = T_a \wedge T_{-b}$ and study it as the gambler's ruin. (This time the error in the proof is harder to find. In fact, since the positive jumps become bigger and bigger, even the stopped martingale $S_{n \wedge T}$ is not uniformly integrable. This is not the same situation as the simple random walk or a continuous martingale.)
Let's give a correct proof :
Question (2) is not so difficult. We apply the Borel-Cantelli lemma :
$$\sum_{n=1}^{\infty} \mathbb{P}[X_n = n^2] = \sum_{n=1}^{\infty} \frac{1}{n^2+1} < \infty$$
this means that the positive jumps appear only finitely many times almost surely, so after some finite time the walk only makes $-1$ steps, and of course $\lim_{n \rightarrow \infty} S_n = - \infty$.
Question (1) has an easy part. We apply the second Borel-Cantelli lemma (using independence) :
$$\sum_{n=1}^{\infty} \mathbb{P}[X_n = n] = \sum_{n=1}^{\infty} \frac{1}{n+1} = \infty$$
so the positive jumps happen infinitely often. Moreover, once a positive jump occurs at time $n$, it gains $n$, which covers all the negative jumps before this step (at most $n-1$ in total). So, as the positive jumps accumulate, we conclude that
$$\limsup_{n \rightarrow \infty} S_n = \infty$$
For the hard part, we apply the optional stopping theorem to another submartingale, $\exp(- \lambda S_{n \wedge T})$ with $\lambda > 0$ (a submartingale by Jensen, since $\mathbb{E}[X_n] = 0$). This time, a very big positive jump does not matter : the negative jumps have size $1$, so the stopped walk never goes below $-b$ and this quantity is smaller than $e^{\lambda b}$. By the optional stopping theorem, we get
$$(1 - \mathbb{P}[T_{-b} < T_a])e^{-\lambda a} + \mathbb{P}[T_{-b} < T_a] e^{\lambda b} \geq 1 \Rightarrow \mathbb{P}[T_{-b} < T_a] \geq \frac{1- e^{-\lambda a}}{e^{\lambda b} - e^{- \lambda a}}$$
We take $a \rightarrow \infty$ to get $\mathbb{P}[T_{-b} < \infty] \geq e^{-\lambda b}$, then $\lambda \rightarrow 0$ to get $\mathbb{P}[T_{-b} < \infty] = 1$. Since $b$ is arbitrary, $\liminf_{n \rightarrow \infty} S_n = -\infty$.
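A quick simulation makes the answer to question (1) visible (a minimal sketch; the horizon $N$ and the seed are arbitrary choices) :

```python
import numpy as np

# Question (1): P[X_n = n] = 1/(n+1), P[X_n = -1] = n/(n+1).
# We expect limsup S_n = +infinity and liminf S_n = -infinity.
rng = np.random.default_rng(1)
N = 10**6
n = np.arange(1, N + 1)
X = np.where(rng.random(N) < 1.0 / (n + 1), n, -1)  # the jumps X_n
S = np.cumsum(X)
print(S.max(), S.min())  # both records keep growing as N increases
```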
Thursday, April 26, 2018
The days when summer arrives again
Every year when daylight saving time kicks in, the days in Paris become especially long, and the temperature cooperates by going straight into a summer rhythm starting from April. It truly is my favorite season of the year.
Time flies; it is 2018 now.
The script written four years ago has reached its ending as promised, but the twists along the way were never quite what I imagined back then. At first I simply thought I should do a thesis; later I felt probability was more interesting than analysis; then I fell for random geometry and dreamed of working in that direction, attending many talks, learning a lot of material, and going to some conferences. Then the M2 year started, and I found that what I had learned in the previous two or three years was all flashy moves with no real substance, and I panicked.
In the following half year, I took notes in class every day and came home to review and work through the proofs; by now, given an idea, I can more or less fill in the rest myself. In the end, my M2 exam results were quite good too.
But at that moment, my advisor told me that some directions were too crowded and suggested that I change direction.
At first I was a little disappointed, but perhaps it was through this process that I thought more and more about what randomness really is, and realized that probability is not just one direction: it has many, many different branches, and I should look at all of them. Once I saw this, my mind seemed to open up, and I no longer insisted on doing one particular topic.
And so I arrived at my current topic, which seems to be the sum of everything I have learned before; call it an unintended but happy accident.
It just means I have to pick analysis and PDE back up. No fear: back then I started out as a teaching assistant in real analysis and functional analysis.
Back to the topic from the beginning: why do I like summer? Because the days are long and there is plenty of time to do the things I love. In a daze I think back to that summer of 2015, that long-suppressed state of desperately wanting to learn mathematics: doing problems at PC every day until 11 pm after dark, once even until two or three in the morning, then waking up and continuing, working through the exercise sets one by one...
I am not writing this paragraph for anything else, just to remind myself of that brave version of me. As a song lyric asks, "If I had known all this, would I have gone?"
Of course I went. These years have been quite happy; I have watched the dream come true and myself keep growing stronger, and I still want to see what I can evolve into. Let's wait and see.
Summer is here; seize the time and go crazy one more time.
(Setting a goal : every week, read at least one proof not so related to the main line and update the blog once. I must keep up some momentum for learning and not become a fool who knows only one direction. After all, there is still a big question in my heart !)
Friday, April 6, 2018
A CLT for one type of Riemann integral of Brownian motion
This is a question from an exercise in "Local times and excursions", but I think it is very interesting.
We consider a measurable function $g$, a Brownian motion $(B_t)_{t \geq 0}$, and the integral defined as
$$A_t = \int_0^t g(B_s)ds$$
which means integration along the path. We suppose that $g$ is integrable, so that this formula makes sense. Well, if $g$ is continuous this is obvious : although there is a random part, it is in fact a Riemann (or Lebesgue) integral of a continuous function. In the general case, we apply a very useful formula called the occupation time formula
$$\int_0^t g(B_s)ds = \int g(a) L^a_t(B)da$$
Then, since $a \mapsto L^a_t(B)$ is continuous and vanishes at infinity, it has a maximum, so $A_t$ is well defined.
One more remark on this formula : one large advantage of the Lebesgue integral is the introduction of the measure, so when we compare two integrals, we do not have to compare them point-wise, but can cut them into blocks. A Riemann-type integral does not behave as well, but the local time transforms it back into the Lebesgue style. The stochastic integral faces the same problem; luckily we have Ito, Doob and BDG, so we can treat it as "one deterministic term + one random error".
Our main theorem states that
$$\frac{1}{\sqrt{t}} A_t \Rightarrow \left(\int g(a)\,da\right) |N|$$
where $N \sim \mathcal{N}(0,1)$, the convergence being in law.
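Before the proof, a quick Monte Carlo sanity check of the statement (a minimal sketch; $g$ is taken as the indicator of $[0,1]$, so $\int g = 1$, and all simulation parameters are arbitrary choices) :

```python
import numpy as np

# Check  A_t / sqrt(t)  =>  (int g) * |N|  for g = 1_{[0,1]}.
rng = np.random.default_rng(0)
t, dt, n_paths = 100.0, 0.01, 500
n_steps = int(t / dt)

dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(dB, axis=1)                          # Brownian paths B_s
A_t = dt * ((B >= 0.0) & (B <= 1.0)).sum(axis=1)   # Riemann sum of g(B_s)

print((A_t / np.sqrt(t)).mean())   # close to E|N| = sqrt(2/pi) ~ 0.80
```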
Let us remark why this formula should be correct. One important observation is Levy's theorem that
$$\left( L^0_t, |B_t| \right)_{t \geq 0} \overset{(\text{law})}{=} \left( S_t, S_t - B_t \right)_{t \geq 0}$$
So $\frac{1}{\sqrt{t}}L^0_t$ obviously has the same law as $|N|$. For the local time at another level, once that level is touched, it behaves like $L^0_t$.
However, the problem is this : convergence in law of a single random variable does not imply convergence in law of the random process. I emphasize this phrase to point out the danger. But we know that $a \rightarrow L^a_t(B)$ is also continuous, so what is the error term ? Does this error disappear after the normalization by $\frac{1}{\sqrt{t}}$ ?
We have to go back to the analysis of the regularity of the local time. Using the Tanaka formula
$$L^a_t(B) = 2(B_t - a)^ + - 2(B_0 - a)^ + - 2\int_{0}^t \mathbb{1}_{\{B_s > a\}} dB_s$$
we obtain that
$$L^a_t(B) - L^0_t(B) = \left[2(B_t - a)^ + - 2(B_0 - a)^ + \right] - \left[2(B_t )^ + - 2(B_0 )^ +\right] + 2\int_{0}^t \mathbb{1}_{\{0 < B_s \leq a\}} dB_s$$
We would like to say that $L^a_t(B) - L^0_t(B)$ is uniformly small. The difficulty is the last term, the stochastic integral. However, we see that as $s$ grows, it is very rare for the Brownian motion to stay in the interval, so it contributes very little to the integral (even a stochastic one !). A powerful tool like the BDG inequality controls all the moments of this random variable. We estimate the tail, so that with large probability $1 - \epsilon$ the process converges uniformly to an $|N|$ process. We write down the proof properly, by a density argument etc., and conclude.
Finally, I have to say that once I come to the part about analyzing the size of a random variable, the training from the statistics course really helps a lot. That may be why we say probability and statistics always go together. (Yes, and analysis is also their good friend.)
Friday, March 16, 2018
Can a Gaussian process be differentiable ?
As the title says, this is a very good question. Yesterday a friend asked me this question, and at first I wanted to say it is not possible, for several reasons.
1. The simplest model is Brownian motion, which is not of class $C^1$.
2. The Dubins-Schwarz theorem tells us that a continuous local martingale is almost a Brownian motion, after a time change.
3. Yes, we speak of the white-noise derivative, but it is always in a weak sense, or it means a stochastic integral.
If we take the "derivative" in some other, non-standard sense, it is always possible. But finally, we found an exercise telling us that it is possible when the covariance $K(s,t)$ is of class $C^2$ - and in fact we had done this exercise before.
The argument is as follows :
1. $\left\{\frac{G_{t+\delta} - G_t}{\delta}\right\}_{t \geq 0}$ is a Gaussian process, by the linearity of the Gaussian space.
2. We check that it is a Cauchy family in $L^2$ as $\delta \rightarrow 0$. Hence it admits a limit, which is Gaussian.
3. The limit has covariance $\partial_s \partial_t K(s,t)$.
So we see that it admits a limit in the $L^2$ sense. This is entirely a property of Gaussian spaces, which are closed under $L^2$ limits.
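The covariance computation behind steps 2 and 3 is just a second-order difference quotient (for a centered process) :
$$
\mathbb{E}\left[\frac{G_{t+\delta} - G_t}{\delta} \cdot \frac{G_{s+\varepsilon} - G_s}{\varepsilon}\right] = \frac{K(t+\delta, s+\varepsilon) - K(t+\delta, s) - K(t, s+\varepsilon) + K(t, s)}{\delta \varepsilon} \xrightarrow[\delta, \varepsilon \rightarrow 0]{} \partial_t \partial_s K(t,s),
$$
and the $C^2$ assumption on $K$ is exactly what makes this limit exist; the same computation gives the Cauchy property in $L^2$.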
But why do we tend to mix this case up with martingales ? Well, a martingale is a process whose increments are uncorrelated with the past. But here, the process really has memory, and the PAIS case (a process with stationary independent increments) cannot have a derivative. So, even if random processes are often fractal, they can be regular.