Tuesday, February 28, 2017

SLE (1) : A magical random evolution

When I came to France, I spent a long time thinking about my future and my research field while I was at language school. One day, I read an introductory article which stated the theorem:

"The Hausdorff dimension of the frontier of Brownian motion is  $\frac{4}{3}$"



I knew the definition of Hausdorff dimension. It is a dimension that measures fractal objects: a good definition, but very hard to compute. Usually, mathematicians give an upper bound and a lower bound but no exact value. How do we reach the exact value here?

Some further searching taught me a name - Schramm-Loewner Evolution, a magical random evolution that relates many different models in mathematics and physics, especially those with fractal structure.

"Yes, it is the maths I want." I told myself and I begins the journey to understand it.

What is SLE

In short, SLE gives us a general method to define a growing random set $K_t$, which can be considered as the scaling limit of various other random models, such as the interface of the Ising model, the frontier of Brownian motion, and the scaling limit of the uniform spanning tree, etc.

More surprisingly, the description of this random compact set depends only on one equation - yes, equations are the most successful tool mathematicians and physicists have ever had for studying natural phenomena. Moreover, SLE ties complex analysis and stochastic analysis together, so it takes advantage of many theorems from both fields.

But we have to say that there remain many open problems, since nature is complicated and the physical or biological models are specific and difficult enough that we have to spend a long time understanding them.

How to define a growing compact set

Classical complex analysis studies conformal mappings, and the Riemann mapping theorem tells us that there is a unique such mapping between two simply connected domains once we fix the image of one point and the distortion at that point. We denote by $\mathbb{H}$ the upper half-plane. For a compact set $K$ such that $\mathbb{H} \backslash K$ is simply connected, we have a conformal mapping

$$
\Phi : \mathbb{H} \backslash K \rightarrow \mathbb{H}
$$

We apply a linear transformation (the hydrodynamic normalisation) so that $\Phi$ has an analytic expansion near infinity:
$$
\Phi(z) = z + \frac{2a(K)}{z} + O\left(\frac{1}{z^2}\right)
$$
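
As a concrete example (standard, added here for illustration): take the vertical slit $K = [0, i]$. Then $\Phi(z) = \sqrt{z^2 + 1}$ maps $\mathbb{H} \backslash K$ conformally onto $\mathbb{H}$, and near infinity
$$
\sqrt{z^2 + 1} = z\left(1 + \frac{1}{z^2}\right)^{1/2} = z + \frac{1}{2z} + O\left(\frac{1}{z^3}\right),
$$
so $2a(K) = \frac{1}{2}$, that is $a([0, i]) = \frac{1}{4}$.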

We remark that this mapping $\Phi$ exists (one can see this using the Schwarz reflection principle) and is the unique one such that
$$
| \Phi(z) - z | \rightarrow 0 \text{ as } z \rightarrow \infty
$$

Here, $a(K)$ is called the capacity of $K$, since it measures how big $K$ is. An interesting property: if we run a 2-D Brownian motion $Z$ starting from $Z_0 = iy$ and let $\tau$ be the exit time of $\mathbb{H} \backslash K$, then
$$
2a(K) = \lim_{y \rightarrow + \infty} y\,\mathbb{E}[\operatorname{Im}(Z_{\tau})]
$$
In particular, the capacity is a non-negative real number.

These maps enjoy many properties; for example, composing maps adds their capacities, and capacity has a scaling property:
$$
\begin{eqnarray*}
a(\Phi_1 \circ \Phi_2) &=& a(\Phi_1) + a(\Phi_2)\\
a(\lambda K) &=& \lambda^2 a(K)\\
\end{eqnarray*}
$$
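
As a quick check (continuing the slit example above), the normalized map for the slit $[0, i\lambda]$ is $\Phi_{\lambda}(z) = \sqrt{z^2 + \lambda^2} = z + \frac{\lambda^2}{2z} + O\left(\frac{1}{z^3}\right)$, so $a([0, i\lambda]) = \frac{\lambda^2}{4} = \lambda^2\, a([0, i])$, exactly as the scaling rule predicts.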

We would like this map to become dynamical - that is to say, we would like to define a family of mappings $g_t$, each one being the normalized conformal map
$$
\mathbb{H} \backslash K_t \rightarrow \mathbb{H}
$$
Obviously, this time, the family of compact sets $K_t$ must satisfy some conditions: $K_t$ should grow locally, be increasing in $t$, and carry the good parametrization $a(K_t) = t$. One can prove that, in this case, the growing compact set is characterized by an ODE - the Loewner equation:
$$
\partial_t g_t(z) = \frac{2}{g_t(z) - U_t}
$$
The ODE is well defined as long as there is no singularity. We call $U_t$ the driving function, and at the end of the lifetime $\tau = \tau(z)$ of a point $z$ we have
$$
g_{\tau}(z) = U_{\tau}
$$
otherwise, we can always extend the solution further.
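
A minimal worked example (the standard one, added for concreteness): take the constant driving function $U_t \equiv 0$. Then $\partial_t g_t(z) = 2 / g_t(z)$ with $g_0(z) = z$ solves to
$$
g_t(z) = \sqrt{z^2 + 4t},
$$
and a point $z = iy$ on the imaginary axis hits the singularity by time $t$ exactly when $y \leq 2\sqrt{t}$. Hence $K_t$ is the vertical slit $(0, 2i\sqrt{t}]$, whose capacity is indeed $a(K_t) = (2\sqrt{t})^2 / 4 = t$ by the slit computation above.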

In fact, we can treat $g_t(z)$ in two ways. First, fix $z$: then $t \mapsto g_t(z)$ is the solution of an ODE and we can compute its value at time $t$. Second, fix $t$: then $z \mapsto g_t(z)$ is a conformal mapping. Generally, the first viewpoint is easier for computing values, but if we want to recover the set $K_t$, the second is more intuitive: $K_t$ consists exactly of the points $z$ whose solution is not defined up to time $t$. Formally,
$$
K_t = \mathbb{H} \backslash g_t^{-1}(\mathbb{H})
$$
We also have the analytic expansion
$$
g_t(z) = z + \frac{2t}{z} + O\left(\frac{1}{z^2}\right)
$$

Connections between harmonic functions and BM have been known for a long time; for instance, the law of BM is preserved (up to time change) under conformal mapping. The connection between this equation and probability theory comes from making the driving function $U_t$ a random process like BM. We will state this in the next section.


Chordal SLE

We first study the kind of SLE which starts at 0 and lives in the upper half-plane $\mathbb{H}$, aiming at infinity. We define $SLE_{\kappa}$ as follows.

$$
\begin{eqnarray*}
\partial_t g_t(z) &=& \frac{2}{g_t(z) - U_t}\\
g_0(z) &=& z\\
U_t &=& \sqrt{\kappa}B_t
\end{eqnarray*}
$$
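
This definition can be simulated directly. Below is a minimal numerical sketch (my own illustration; the function `sle_trace` and its parameters are hypothetical names, and the scheme is the usual piecewise-constant discretization of the driving function): on each small time interval the Loewner flow is an explicit slit map, and the tip of the trace $\gamma(t_k) = g_{t_k}^{-1}(U_{t_k})$ is recovered by composing the inverse incremental maps.

```python
import numpy as np

def sle_trace(kappa, n_steps=500, T=1.0, seed=0):
    """Approximate sample of the chordal SLE_kappa trace on [0, T].

    The driving function U_t = sqrt(kappa) * B_t is sampled on a grid and
    treated as constant on each step, so the one-step Loewner flow is
    g(z) = u + sqrt((z - u)^2 + 4*dt).  The tip gamma(t_k) is obtained by
    composing the inverse one-step maps, newest increment first (O(n^2)).
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    steps = np.sqrt(kappa * dt) * rng.standard_normal(n_steps)
    U = np.concatenate([[0.0], np.cumsum(steps)])

    def inv_step(w, u):
        # Inverse one-step map f(w) = u + sqrt((w - u)^2 - 4*dt).  Writing
        # the root as sqrt(w - u - r) * sqrt(w - u + r) with r = 2*sqrt(dt)
        # selects the branch mapping the upper half-plane to itself.
        r = 2.0 * np.sqrt(dt)
        return u + np.sqrt(w - u - r) * np.sqrt(w - u + r)

    trace = np.zeros(n_steps + 1, dtype=complex)
    for k in range(1, n_steps + 1):
        z = complex(U[k])
        for j in range(k, 0, -1):
            z = inv_step(z, U[j - 1])
            z = complex(z.real, max(z.imag, 0.0))  # guard against rounding
        trace[k] = z
    return trace

# For kappa <= 4 the sampled points should trace out a simple-looking curve.
gamma = sle_trace(kappa=8.0 / 3.0)
```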

Generally, what makes the difference is that the driving function is a Brownian motion, and Brownian motion is universal in a certain sense. We now list the most basic properties that characterize chordal $SLE_{\kappa}$. Recall that the random object here is the compact set $K_t$; the maps $g_t$ only serve to help us understand this random compact set.

  1. Domain Markov property. Let $T$ be a stopping time; then
    $$
    \begin{eqnarray*}
    g_T(K_{T+t} \backslash K_T) - U_T & \perp & \mathcal{F}_T\\
    g_T(K_{T+t} \backslash K_T) - U_T & \sim^{d} & K_t\\
    \end{eqnarray*}
    $$
  2.  Scaling invariance.
    $$
    \frac{1}{\sqrt{\lambda}}K_{\lambda t} \sim^{d} K_t
    $$
  3.  Symmetry.
    $$
    -K_t \sim^{d} K_t
    $$

Compare these three properties with the basic properties of Brownian motion: they are exactly parallel. That is why researchers now regard SLE as a fundamental random object in dimension 2, on the same footing as Brownian motion.

However, one would like to know why the driving function must be a Brownian motion. In fact, many models in statistical physics are expected to satisfy a conformal Markov property:
$$
\text{ Conformal Markov property } = \text{ Domain Markov property } + \text{ Scaling invariance }
$$
Translated back to the driving function, this says that $U_t$ must have stationary, independent increments and be scaling invariant, so the only choice is a multiple of Brownian motion, as sketched below.
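
To spell out the characterization (a standard argument, added for completeness): the domain Markov property translates into the statement that $(U_{t+s} - U_t)_{s \geq 0}$ is independent of $\mathcal{F}_t$ and has the same law as $(U_s)_{s \geq 0}$; scaling invariance gives $\lambda^{-1} U_{\lambda^2 t} \sim^{d} U_t$; symmetry gives $-U_t \sim^{d} U_t$. A continuous process with stationary independent increments must be of the form $\mu t + \sigma B_t$, and symmetry forces $\mu = 0$, leaving $U_t = \sqrt{\kappa} B_t$.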

Phase transition

The phase transition is a very interesting topic in chordal $SLE_{\kappa}$. A baby version is to ask how $K_t$ eats the real axis. The theorem is:
  • If $\kappa \leq 4$, a.s. $\left(\bigcup_{t \geq 0} K_t\right) \cap \mathbb{R} = \{0\}$
  • If $\kappa > 4$, a.s. $\mathbb{R} \subset \bigcup_{t \geq 0} K_t$
The main idea is to write $X_t = \frac{g_t(1) - U_t}{\sqrt{\kappa}}$; the problem then transforms into a problem about Bessel processes, and we get the desired result.
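
To spell out the reduction (a standard computation, filling in the details): by the Loewner equation and the definition of $U_t$,
$$
dX_t = \frac{2/\kappa}{X_t}\, dt - dB_t,
$$
so $X_t$ is a Bessel process of dimension $d = 1 + \frac{4}{\kappa}$. A Bessel process hits $0$ almost surely if and only if $d < 2$, that is $\kappa > 4$, and $X_t = 0$ means exactly that the point $1$ is swallowed by $K_t$.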

The proof that $SLE_{\kappa}$ is generated by a curve is more difficult, but if we admit this property, then by the Markov property $g_t(K_{t+s} \backslash K_t) - U_t \sim^{d} \tilde{K}_s$: when $\kappa \leq 4$, the image of the future of the curve under $g_t(\cdot) - U_t$ never touches the real axis, which means that the curve never hits its past; hence it is a simple curve. This is a very interesting result.

Wednesday, February 1, 2017

MMB (1) : Large deviation

This term, I am taking two M2 courses in Paris, at Orsay and at Polytechnique respectively, in order to enrich my knowledge of probability. Today, Pascal talked about large deviation theory.

We know the central limit theorem: for $\{X_i\}$ i.i.d. with finite variance,
$$
\sqrt{n} (\bar{X}_n - \mathbb{E}[X] ) \Rightarrow \mathcal{N}(0, \operatorname{Var}(X))
$$
However, this only describes fluctuations on the scale of the standard deviation; what happens to the distribution far from the mean?

This requires large deviation estimates, a basic tool used everywhere in probability and statistics. For probabilists, large deviations give the probability of atypical events and sometimes estimates of correlation functions. For statisticians, they give non-asymptotic confidence intervals. In a word, it is an indispensable tool.

Chernoff's inequality

We start from Markov's inequality applied to the exponential: for $x > \mathbb{E}[X]$ and any $\theta > 0$,
$$
\mathbb{P}\left[S_n / n > x\right] = \mathbb{P}[e^{\theta S_n} > e^{n \theta x}] \leq e^{-n\theta x}\, \mathbb{E}[e^{\theta S_n}] = e^{-n(\theta x - \phi(\theta))}
$$
where independence gives $\mathbb{E}[e^{\theta S_n}] = \mathbb{E}[e^{\theta X}]^n$. Optimizing over $\theta$, we get
$$
\mathbb{P}\left[S_n / n > x\right] \leq e^{- n I(x)}
$$
where we define
$$\begin{eqnarray}
\phi(\theta) &=& \log \mathbb{E}[e^{\theta X}] \\
I(x) &=& \sup_{\theta \in \mathbb{R}} \left( \theta x - \phi(\theta) \right)
\end{eqnarray}$$
From this simple inequality, a great deal of technique has been developed.
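
As a quick numerical illustration (my own toy example, not from the course; the names are mine), we can compute $I(x)$ for fair Bernoulli variables by numerical optimization and compare the Chernoff bound with a Monte Carlo estimate of the tail:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
n, x = 200, 0.6  # sample size, threshold above the mean 1/2

def phi(theta):
    """Log-moment generating function of X ~ Bernoulli(1/2)."""
    return np.log(0.5 * (1.0 + np.exp(theta)))

# Rate function I(x) = sup_theta (theta * x - phi(theta)), found numerically.
res = minimize_scalar(lambda t: -(t * x - phi(t)), bounds=(0.0, 50.0),
                      method="bounded")
I_x = -res.fun

means = rng.integers(0, 2, size=(100_000, n), dtype=np.int8).mean(axis=1)
print(f"empirical P[S_n/n > x] : {(means > x).mean():.2e}")   # about 2e-3
print(f"Chernoff bound e^-nI(x): {np.exp(-n * I_x):.2e}")     # about 2e-2
```

The bound is not sharp at fixed $n$, but it already captures the right exponential order.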


Hoeffding's inequality

We can give a better estimate when $X$ is bounded in $[a,b]$:
$$
\mathbb{P}\left[S_n / n - \mathbb{E}[X] > x\right] \leq \exp\left(- \frac{2 n x^2}{(b-a)^2}\right)
$$
The idea is to expand the function $\phi$ of the centered variable around 0 and bound it by a quadratic; see the computation below.
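
Explicitly (filling in the optimization step left implicit above): Hoeffding's lemma bounds the log-moment generating function of the centered variable by $\phi(\theta) \leq \frac{\theta^2 (b-a)^2}{8}$, hence
$$
I(x) \geq \sup_{\theta > 0}\left( \theta x - \frac{\theta^2 (b-a)^2}{8} \right) = \frac{2x^2}{(b-a)^2},
$$
the supremum being attained at $\theta = \frac{4x}{(b-a)^2}$.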

Generalized versions are also possible. For example, we can consider not just one function but a family of functions - in other words, a dictionary. The more general theorems depend largely on covering, or approximation, arguments.

An inequality for Gaussians

However, a big obstacle with Hoeffding's inequality is the boundedness condition. How do we treat unbounded variables, for example Gaussian ones, which form a large and important class?

One idea is an entropy-type inequality. The entropy of a function under a measure $\mu$ is defined as
$$ \operatorname{Ent}_{\mu}(f)  = \mathbb{E}_{\mu}[f \log f] - \mathbb{E}_{\mu}[f] \log \mathbb{E}_{\mu}[f]$$
In some cases, the measure $\mu$ satisfies a log-Sobolev inequality:
$$ \operatorname{Ent}_{\mu}(f^2) \leq C_{\mu} \mathbb{E}_{\mu}(|\nabla f|^2) $$
In this case we can deduce, for any Lipschitz function $f$ and $Y \sim \mu$, the concentration inequality
$$ \mathbb{P}(|f(Y) - \mathbb{E}[f(Y)]| > \epsilon) \leq 2 \exp{\left(-\frac{\epsilon^2}{C_{\mu}|f|^2_{Lip}}\right)}$$
This is an inequality of large deviation type. A natural question is whether a given measure satisfies the log-Sobolev inequality. This requires some analysis; the answer for the Gaussian is positive, but the general case is a whole branch of research.
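
For the record (a classical fact, added here for reference): Gross's theorem says that the standard Gaussian measure satisfies the log-Sobolev inequality with $C_{\mu} = 2$ in the normalisation above, so the bound becomes $2 \exp(-\epsilon^2 / (2 |f|^2_{Lip}))$, the classical Gaussian concentration inequality.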

Cramér's theorem

A more general principle is Cramér's theorem, which gives not only an upper bound but also a lower bound for the distribution: for a Borel set $\Gamma$, with $\Gamma^{\circ}$ its interior and $\bar{\Gamma}$ its closure,
$$\begin{eqnarray}
-\inf_{x \in \Gamma^{\circ}}I(x)
&\leq& \liminf_{n \rightarrow \infty} \frac{1}{n} \log \mathbb{P}\left[S_n / n \in \Gamma \right] \\
&\leq& \limsup_{n \rightarrow \infty} \frac{1}{n} \log \mathbb{P}\left[S_n / n \in \Gamma \right]
\leq -\inf_{x \in \bar{\Gamma}}I(x)
\end{eqnarray}$$

In the course, we analysed in detail some properties of the functions $\phi(\theta)$ and $I(x)$, such as their convexity, zeros and monotonicity. The upper bound is proved like the Chernoff upper bound, while the lower bound uses a change of probability measure.
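
To record the idea of the change of probability (a standard step, sketched here): one tilts the law of $X$ to
$$
d\mathbb{P}_{\theta}(x) = e^{\theta x - \phi(\theta)}\, d\mathbb{P}(x),
$$
choosing $\theta$ such that $\mathbb{E}_{\theta}[X] = \phi'(\theta) = x$. Under $\mathbb{P}_{\theta}$ the event $\{S_n / n \approx x\}$ becomes typical, and the Radon-Nikodym factor contributes precisely the cost $e^{-n(\theta x - \phi(\theta))} = e^{-n I(x)}$ up to sub-exponential corrections.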