Thursday, February 14, 2019

Spin glass (1) : Basic definitions, some basic facts about the Ising model

(I took this course in 2019, but I revised these notes in 2020 because I wanted to understand the course better.)

This semester, I am taking a course about spin glasses. The model is very classical. A configuration is $\{\sigma_i\}_{1 \leq i \leq N} \in \{1,-1\}^{N}$, and the Hamiltonian is
$$H(\sigma) = - \sum_{1 \leq i,j \leq N} J_{i,j} \sigma_i \sigma_j ,$$
then every configuration carries the Gibbs measure $\frac{1}{Z_N} \exp (-\beta H(\sigma))$, where $Z_N$ is the partition function.
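As a sanity check, the Gibbs measure can be computed exactly for a very small system by brute-force enumeration. The following Python sketch illustrates the definitions above (the couplings $J_{i,j}$ and the value of $\beta$ are arbitrary toy choices, not from the course):

```python
import itertools
import math

def hamiltonian(sigma, J):
    """H(sigma) = -sum_{i,j} J[i][j] * sigma_i * sigma_j."""
    n = len(sigma)
    return -sum(J[i][j] * sigma[i] * sigma[j]
                for i in range(n) for j in range(n))

def gibbs_measure(J, beta):
    """Exact Gibbs weights by enumerating all of {-1,1}^N."""
    n = len(J)
    configs = list(itertools.product([-1, 1], repeat=n))
    weights = [math.exp(-beta * hamiltonian(s, J)) for s in configs]
    Z = sum(weights)                      # partition function Z_N
    return configs, [w / Z for w in weights], Z

# Toy example: N = 3 spins, ferromagnetic couplings J_{i,j} = 1 for i != j.
J = [[0.0 if i == j else 1.0 for j in range(3)] for i in range(3)]
configs, probs, Z = gibbs_measure(J, beta=0.5)
assert abs(sum(probs) - 1.0) < 1e-12      # Gibbs weights sum to 1
```

With ferromagnetic couplings, the two aligned configurations $(\pm 1, \pm 1, \pm 1)$ have the lowest energy and therefore the largest Gibbs weight, which the enumeration confirms.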

Despite the simple description, this spin model has been at the center of research for a long time because of its rich behavior. Several famous variants arise from the choice of the couplings: the spin glass, where $J_{i,j}$ is random; the Ising model, where $J_{i,j} = 1$ for nearest neighbors and $0$ for long-range pairs; and the mean-field case, where $J_{i,j}$ is the same constant for every pair.

Ising model and its free energy
The Ising model is a famous model in statistical physics. We define the Hamiltonian with boundary condition $\sharp$ and exterior magnetic field $h$ on the domain $\Lambda$
$$H_{\Lambda}^{\sharp}(\beta, h, \sigma) = - \beta \sum_{\{i,j\} \cap \Lambda \neq \emptyset, \, i \sim j}  \sigma_i \sigma_j - h \sum_{i \in \Lambda} \sigma_i,$$
and then we define the partition function as
$$Z^{\sharp}_{\Lambda}(\beta, h) = \sum_{\sigma} \exp\left( - H_{\Lambda}^{\sharp}(\beta, h, \sigma) \right).$$
This defines a Gibbs measure, which we always denote by $\langle \cdot \rangle^{\sharp}_{\Lambda, \beta, h}$. We also define the normalized free energy as
$$F^{\sharp}_{\Lambda}(\beta, h) = \frac{1}{|\Lambda|} \log Z^{\sharp}_{\Lambda}(\beta, h),$$
since it plays an important role in physics. By Hölder's inequality, we know that $$(\beta, h) \mapsto F^{\sharp}_{\Lambda}(\beta, h) \text{ is convex}.$$ Then, letting $\square_n = \left(-\frac{3^n}{2}, \frac{3^n}{2}\right)^d$, a renormalization argument shows that
$$F^{\sharp}_{\square_{n+1}}(\beta, h) \leq F^{\sharp}_{\square_n}(\beta, h) + O(3^{-n});$$
moreover, since different boundary conditions contribute only a small perturbation, this implies that as the volume grows to infinity, the normalized free energy has a limit
$$F(\beta, h) = \lim_{\Lambda \rightarrow \infty}F^{\sharp}_{\Lambda}(\beta, h),$$
and it does not depend on the choice of the boundary condition.
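The renormalization proof is not reproduced here, but the convergence itself can be watched numerically in the simplest case: for the one-dimensional chain the limit is explicit via the transfer matrix, $F(\beta, h) = \log \lambda_{\max}$. A small Python sketch, using the same convention as above that $\beta$ is absorbed into the Hamiltonian (the parameter values are arbitrary illustrative choices):

```python
import itertools, math

def log_Z(n, beta, h):
    """log partition function of an open 1D chain of n spins, with the
    convention Z = sum_sigma exp(beta*sum sigma_i sigma_{i+1} + h*sum sigma_i)."""
    total = 0.0
    for s in itertools.product([-1, 1], repeat=n):
        e = beta * sum(s[i] * s[i + 1] for i in range(n - 1)) + h * sum(s)
        total += math.exp(e)
    return math.log(total)

def F(n, beta, h):
    return log_Z(n, beta, h) / n          # normalized free energy

def F_limit(beta, h):
    """Exact limit: log of the largest transfer-matrix eigenvalue."""
    lam = math.exp(beta) * math.cosh(h) + math.sqrt(
        math.exp(2 * beta) * math.sinh(h) ** 2 + math.exp(-2 * beta))
    return math.log(lam)

beta, h = 0.8, 0.3
errs = [abs(F(n, beta, h) - F_limit(beta, h)) for n in (4, 8, 16)]
# the error shrinks like O(1/n), matching the boundary-term estimate
```

Doubling the chain length roughly halves the error, which is exactly the $O(|\partial \Lambda|/|\Lambda|)$ boundary effect discussed above.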

Magnetization
The magnetization is an important quantity, defined as $m_{\Lambda}(\sigma) = \frac{1}{|\Lambda|}\sum_{i \in \Lambda} \sigma_i$, with average magnetization $\bar{m}^{\sharp}_{\Lambda} = \langle m_{\Lambda}\rangle^{\sharp}_{\Lambda, \beta, h}$. One important observation is that
$$\bar{m}^{\sharp}_{\Lambda}(\beta, h) =  \partial_h F^{\sharp}_{\Lambda}(\beta, h).$$
Thus a natural question is whether we can define $\bar{m} = \lim_{\Lambda \rightarrow \infty} \bar{m}^{\sharp}_{\Lambda}$ and whether it also holds that
$$\boxed{\bar{m}(\beta, h) =  \partial_h F(\beta, h)}.$$
This equation is the key to the topic. Naturally, to make sense of this equation, the right-hand side should be well-defined. The basic information is that $F$ is convex, so it admits left and right derivatives; hence the least condition for the equation to make sense is differentiability at the point. As $\bar{m}^{\sharp}_{\Lambda}(\beta, h) =  \partial_h F^{\sharp}_{\Lambda}(\beta, h)$ is bounded, it admits a limit $C$ along a subsequence. Using the convexity
$$F^{\sharp}_{\Lambda}(\beta, h') \geq  F^{\sharp}_{\Lambda}(\beta, h) + (h' - h)\partial_h F^{\sharp}_{\Lambda}(\beta, h),$$
we take the limit to have 
$$F(\beta, h') \geq  F(\beta, h) + (h' - h)C,$$
so by convex analysis, if $F(\beta, \cdot)$ is differentiable at $h$, then $C$ has no choice but to equal $\partial_h F(\beta, h)$. The points where $F(\beta,\cdot)$ is not differentiable are called phase transitions.
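The finite-volume identity $\bar{m}^{\sharp}_{\Lambda} = \partial_h F^{\sharp}_{\Lambda}$ is easy to verify numerically by exact enumeration on a small 1D chain (the chain geometry, its size, and the parameter values are arbitrary illustrative choices):

```python
import itertools, math

def exact_stats(n, beta, h):
    """Partition function and average magnetization of an open 1D Ising
    chain, by enumeration (weight exp(beta*sum s_i s_{i+1} + h*sum s_i))."""
    Z, mag = 0.0, 0.0
    for s in itertools.product([-1, 1], repeat=n):
        w = math.exp(beta * sum(s[i] * s[i + 1] for i in range(n - 1))
                     + h * sum(s))
        Z += w
        mag += w * sum(s) / n
    return Z, mag / Z

n, beta, h, eps = 8, 0.7, 0.2, 1e-5
Z_plus, _ = exact_stats(n, beta, h + eps)
Z_minus, _ = exact_stats(n, beta, h - eps)
# central finite difference of F = (1/n) log Z with respect to h
dF_dh = (math.log(Z_plus) - math.log(Z_minus)) / (2 * eps * n)
_, m_bar = exact_stats(n, beta, h)
assert abs(dF_dh - m_bar) < 1e-8          # m_bar = dF/dh at finite volume
```

At finite volume $F^{\sharp}_{\Lambda}$ is smooth in $h$, so the finite difference matches the magnetization to high precision; non-differentiability can only appear in the infinite-volume limit.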


Limit measure and phase transition
We can continue to study other descriptions of the phase transition. From the viewpoint of probability measures, it is natural to ask whether $\langle \cdot \rangle_{\Lambda, \beta, h}^+$ and $\langle \cdot \rangle_{\Lambda, \beta, h}^-$ admit a limit measure. This is true, and the proof relies on the FKG inequality. It suffices to study the problem for increasing functions, and we use the domain Markov property of the model: for $V \subset U \subset \mathbb{Z}^d$ and $f$ increasing,
$$\langle f \rangle_{V, \beta, h}^+ = \langle f  \mid \sigma_i = 1, i \in U \backslash V \rangle_{U, \beta, h}^+ = \frac{ \langle f  \mathbf{1}_{\left\{\sigma_i = 1, i \in U \backslash V\right\}} \rangle_{U, \beta, h}^+}{\langle  \mathbf{1}_{\left\{\sigma_i = 1, i \in U \backslash V\right\}} \rangle_{U, \beta, h}^+}  \geq  \langle f  \rangle_{U, \beta, h}^+,$$
where the last inequality is FKG, since the indicator $\mathbf{1}_{\left\{\sigma_i = 1, i \in U \backslash V\right\}}$ is itself an increasing function.
This proves that the measures are decreasing in the volume (in the sense of stochastic domination), so the limit is well-defined and the limit measure is translation invariant. Then we can define
$$\bar{m}^+(\beta,h) =  \langle \sigma_0\rangle_{\beta, h}^+ , \qquad \bar{m}^-(\beta,h) =  \langle \sigma_0\rangle_{\beta, h}^-, $$
and one can prove further
$$\bar{m}^+(\beta,h) = \partial_h^+ F(\beta, h),   \qquad \bar{m}^-(\beta,h) = \partial_h^- F(\beta, h).$$
These two equations build a bridge between the analytic object (the one-sided derivatives) and the probabilistic object (the magnetization under different boundary conditions). Finally, the phase transition can also be described as $\langle \sigma_0\rangle_{\beta, h}^+ \neq \langle \sigma_0\rangle_{\beta, h}^-$; at $h = 0$ this is equivalent to a non-null magnetization, because spin-flip symmetry naturally gives $\bar{m}^+(\beta,0)  = - \bar{m}^-(\beta,0)$.
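The ordering $\langle \sigma_0 \rangle^+_{\Lambda} \geq \langle \sigma_0 \rangle^-_{\Lambda}$ and the spin-flip symmetry at $h = 0$ can both be checked by exact enumeration on a tiny box. The sketch below freezes the boundary ring of a $5 \times 5$ box and enumerates the $3 \times 3$ interior (sizes and $\beta$ are arbitrary choices for illustration):

```python
import itertools, math

def avg_center_spin(beta, h, boundary):
    """<sigma_0> on a 3x3 interior box, with all spins of the surrounding
    5x5 ring frozen to `boundary` (+1 or -1); exact enumeration."""
    interior = [(x, y) for x in range(1, 4) for y in range(1, 4)]
    idx = {site: k for k, site in enumerate(interior)}
    Z, num = 0.0, 0.0
    for s in itertools.product([-1, 1], repeat=9):
        def spin(x, y):
            return s[idx[(x, y)]] if (x, y) in idx else boundary
        energy = 0.0
        # sum over bonds with at least one endpoint in the interior
        for (x, y) in interior:
            for (dx, dy) in ((1, 0), (0, 1), (-1, 0), (0, -1)):
                nx, ny = x + dx, y + dy
                if (nx, ny) in idx and (nx, ny) < (x, y):
                    continue              # count interior-interior bonds once
                energy += spin(x, y) * spin(nx, ny)
        w = math.exp(beta * energy + h * sum(s))
        Z += w
        num += w * spin(2, 2)             # sigma_0 = center spin
    return num / Z

beta = 0.6
m_plus = avg_center_spin(beta, 0.0, +1)
m_minus = avg_center_spin(beta, 0.0, -1)
assert m_plus >= m_minus                  # monotonicity in the boundary condition
assert abs(m_plus + m_minus) < 1e-9       # spin-flip symmetry at h = 0
```

Of course, at finite volume $\langle \sigma_0 \rangle^+ > 0 > \langle \sigma_0 \rangle^-$ always; the real question of phase transition is whether this gap survives the infinite-volume limit.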

Thursday, January 17, 2019

Head run game

The head run game may be a research topic in mathematics, but I think it is better known because it appears in many exam and interview questions. Today a friend asked me one such question, and its answer is very surprising.

We are given a sequence of random variables $(X_i)_{i \geq 1}$, i.i.d. of Bernoulli law with parameter $p$, and we denote by $(\Omega, \mathcal{F}, \mathbb{P})$ the probability space. Alice and Bob play a game : if the pattern $(1, 1, 0)$ appears before $(1,0,1)$, then Alice wins; otherwise Bob wins. So, what is the probability that Alice wins the game ?

If it is the first time one meets this question, it may be difficult. If one has met similar problems, one may define the stopping times $\tau_{110}$ and $\tau_{101}$ at which the two patterns first appear; in the case $p = 1/2$, a well-known result is that $\mathbb{E}[\tau_{110}] = 8$ and $\mathbb{E}[\tau_{101}] = 10$, so one may guess that $(1,1,0)$ tends to appear first.

Here that guess happens to be right, but expected waiting times alone cannot decide such races in general (Penney's game is famously non-transitive), so we compute the probability directly. A more amazing result is that, for any $p \in (0,1)$, Alice always has the bigger probability to win the game ! One direct method is to decompose the probability as the sum
$$\mathbb{P}[\tau_{110} < \tau_{101}] = p_{00} + p_{01} + p_{10} + p_{11},$$ where for example $$p_{00} = \mathbb{P}[\tau_{110} < \tau_{101}, X_1 = 0, X_2 = 0].$$ Then, we use the fact that the event is only locally correlated: by the Markov property, when we condition on all the history, we only have to keep the last two symbols. This gives a system of recursive equations
$$  \left[ {\begin{array}{c} p_{00} \\ p_{01} \\ p_{10} \\ p_{11}  \end{array}} \right] = \left[ {\begin{array}{cccc} 1-p& 1-p& 0& 0\\  0& 0& 1-p& 1-p\\ p& 0& 0& 0\\ 0& 0& 0& p\\ \end{array}} \right]   \left[ {\begin{array}{c} p_{00} \\ p_{01} \\ p_{10} \\ p_{11}  \end{array}} \right]  +  \left[ {\begin{array}{c} 0 \\ 0 \\ 0\\ p^2(1-p)  \end{array}} \right].$$
Thus, we can solve the system and obtain
$$\mathbb{P}[\tau_{110} < \tau_{101}] = \frac{1}{2-p}.$$
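The $4 \times 4$ affine system above can be solved by hand via back-substitution; the following sketch does exactly that and checks the answer against the closed formula $1/(2-p)$:

```python
def win_probability(p):
    """Solve the 4x4 affine system for (p00, p01, p10, p11)
    by back-substitution; returns P[tau_110 < tau_101]."""
    q = 1 - p
    # p11 = p*p11 + p^2*(1-p)          =>  p11 = p^2
    p11 = p * p
    # p00 = q*p00 + q*p01              =>  p00 = (q/p)*p01
    # p01 = q*p10 + q*p11 = q*p*p00 + q*p11
    #     = q^2*p01 + q*p11            =>  p01 = q*p11/(1 - q^2)
    p01 = q * p11 / (1 - q * q)
    p00 = (q / p) * p01
    p10 = p * p00
    return p00 + p01 + p10 + p11

for p in (0.1, 0.5, 0.9):
    assert abs(win_probability(p) - 1 / (2 - p)) < 1e-12
```

For the fair coin $p = 1/2$ this gives $2/3$: Alice wins twice as often as Bob.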

From the formula, we see that Alice always has the bigger probability to win. (And this is how I found this amazing result.) But is there a more natural explanation of why Alice wins ? In fact, both Alice and Bob have to wait for a $1$. Once it appears, the following two symbols give $4$ possible configurations $(0,0), (0,1), (1,0), (1,1)$, and the two decisive ones have the same probability: $(1,0)$ completes $(1,1,0)$ for Alice, while $(0,1)$ completes $(1,0,1)$ for Bob. If $(0,0)$ happens, they restart the game. But if $(1,1)$ happens, Alice wins with probability one: after two consecutive $1$s, the first $0$ to appear completes $(1,1,0)$, and Bob can never complete $(1,0,1)$ before that. This extra case explains Alice's advantage.
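One can also confirm the formula by direct Monte Carlo simulation of the race (the trial count and seed are arbitrary choices):

```python
import random

def simulate_race(p, trials=100_000, seed=0):
    """Monte Carlo estimate of P[110 appears before 101] for Bernoulli(p) bits."""
    rng = random.Random(seed)
    alice_wins = 0
    for _ in range(trials):
        last2 = (None, None)
        while True:
            x = 1 if rng.random() < p else 0
            hist = (last2[0], last2[1], x)
            if hist == (1, 1, 0):         # Alice's pattern appears first
                alice_wins += 1
                break
            if hist == (1, 0, 1):         # Bob's pattern appears first
                break
            last2 = (last2[1], x)
    return alice_wins / trials

est = simulate_race(0.5)
# exact value is 1/(2 - 0.5) = 2/3; the estimate should fall close to it
assert abs(est - 2 / 3) < 0.01
```

The simulation tracks only the last two symbols, which is precisely the Markov reduction used in the recursive system above.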