Thursday, July 30, 2020

The variational argument is everywhere

Today I was asked a high-school level question: given $1 \geq p_1 \geq p_2 \geq \cdots \geq p_n \geq 0$, find a subset $S = \{p_{\alpha_1}, \ldots, p_{\alpha_{|S|}}\}$ of these numbers that maximizes the quantity
$$F(S) = \sum_{i=1}^{|S|}\frac{p_{\alpha_i}}{1 - p_{\alpha_i}} \left( \prod_{j=1}^{|S|}(1-p_{\alpha_j})\right).$$
A naive search over all subsets is of course exponentially expensive (there are $2^n$ of them to examine). I want to point out that a simple variational argument, though elementary, is very useful here. By comparing $F(S)$ with what one gets after adding one element, deleting one element, or replacing one element, one quickly sees that the optimal subset is a prefix described by a threshold $\tau$:
$$S := \{p_i\}_{1 \leq i \leq \tau}, \qquad \tau := \min \left\{k \in \mathbb{N}: \sum_{i=1}^{k} \frac{p_i}{1-p_i} \geq 1\right\}.$$
Thus the complexity is reduced to $O(n)$. So let us always keep the variational argument in mind.
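To make this concrete, here is a small Python sketch of the threshold rule together with a brute-force check on small inputs; the function names are mine, and the code assumes every $p_i < 1$ to avoid division by zero.

```python
# Minimal sketch of the threshold rule above (assumes all p_i < 1).
from itertools import combinations

def F(subset):
    """F(S) = sum_i p_i/(1-p_i) * prod_j (1 - p_j) over the chosen subset."""
    prod = 1.0
    for p in subset:
        prod *= 1.0 - p
    return prod * sum(p / (1.0 - p) for p in subset)

def best_subset(ps):
    """Sort decreasingly and keep a prefix until sum p_i/(1-p_i) reaches 1."""
    ps = sorted(ps, reverse=True)
    chosen, odds = [], 0.0
    for p in ps:
        if odds >= 1.0:
            break
        chosen.append(p)
        odds += p / (1.0 - p)
    return chosen

def brute_force(ps):
    """Exhaustive search over all non-empty subsets, for small n only."""
    return max((s for k in range(1, len(ps) + 1)
                for s in combinations(ps, k)), key=F)

ps = [0.9, 0.6, 0.3, 0.2, 0.05]
print(F(best_subset(ps)), F(brute_force(ps)))   # the two values should agree
```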

Sunday, July 5, 2020

Optimization with random constraints

This week I heard a very nice talk by my classmate Huajie Qian. The problem is to minimize $c \cdot x$ over $x \in \mathbb{R}^d$ under the constraint $h(x, \zeta) \leq 0$, where $\zeta$ is a random variable. A natural method is the following: given data $\{\zeta_i\}_{1 \leq i \leq n}$, solve the problem with all $n$ sampled constraints $h(x, \zeta_i) \leq 0$ enforced. If $\hat{x}_n$ denotes the resulting minimizer, this gives the nice estimate
$$\mathbb P_{\{\zeta_i\}_{1 \leq i \leq n}}\left[\mathbb P_{\zeta}\big(h(\hat{x}_n,\zeta) \leq 0\big) \leq 1- \alpha\right] \leq {n \choose d}(1-\alpha)^{n-d},$$
which is a very nice result.
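As an illustration, here is a small Python sketch of this sampled-constraint approach for a toy linear constraint family $h(x,\zeta) = 1 - \zeta \cdot x$ (my own choice, not from the talk), solved with scipy.optimize.linprog; it also counts how many sampled constraints are active at the solution, which connects to the lemma below that only $d$ constraints really matter.

```python
# Sampled-constraint sketch for min c.x s.t. h(x, zeta) <= 0, with
# h(x, zeta) = 1 - zeta . x (a toy constraint family, chosen for illustration).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
d, n = 3, 200                                  # dimension of x, number of samples
c = np.ones(d)                                 # objective direction
zetas = rng.uniform(0.5, 1.5, size=(n, d))     # sampled scenarios zeta_1, ..., zeta_n

# Enforce zeta_i . x >= 1 for every sample, i.e. -zeta_i . x <= -1.
res = linprog(c, A_ub=-zetas, b_ub=-np.ones(n), bounds=[(0, None)] * d)
x_hat = res.x

# Violation probability of x_hat, estimated on fresh samples.
fresh = rng.uniform(0.5, 1.5, size=(100_000, d))
violation = np.mean(fresh @ x_hat < 1.0)

# Number of sampled constraints active at the optimum
# (generically at most d of them bind).
active = int(np.sum(np.isclose(zetas @ x_hat, 1.0)))
print(x_hat, violation, active)
```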

The key to proving the result above is a very nice lemma: in fact, among the $n$ sampled constraints one can find $d$ of them that already determine the solution of the problem. That is to say, only $d$ constraints matter.

To prove it, we need two theorems: Radon's theorem and Helly's theorem. The first says that in $\mathbb R^d$, any $(d+2)$ points can be partitioned into two sets $I_1, I_2$ whose convex hulls have non-empty intersection. The second says that for convex sets $\{C_i\}_{1 \leq i \leq m}$ in $\mathbb R^d$ with $m \geq d+1$, if every $(d+1)$ of them have a common point, then $\cap_{i \leq m} C_i$ is non-empty.

Using the two theorems, we argue by contradiction: suppose that every choice of $d$ constraints gives a strictly better minimizer. We then project onto the hyperplane given by the direction $c$ and apply Helly's theorem there to reach a contradiction.

Saturday, July 4, 2020

Maximal correlation

This is a question about correlation. Let $X,Y$ be two standard Gaussian random variables $\mathcal{N}(0,1)$ with correlation $\beta$; prove that the best constant in the Cauchy–Schwarz-type inequality is $\vert\beta\vert$:
$$Cov(f(X), g(Y)) \leq \vert \beta \vert \sqrt{Var(f(X)) Var(g(Y))}.$$

In fact, one can define the maximal correlation of two random variables as the best constant above, and of course it is at least $\vert\beta\vert$ (take $f$ and $g$ to be the identity). Let us remark how to prove the inequality above quickly. We use the expansion in Hermite polynomials: for the (probabilists') Hermite polynomials $H_n$ we have
$$\mathbb{E}[H_n(X) H_m(Y)] = \delta_{n,m}\, n! \left(\mathbb{E}[XY]\right)^n = \delta_{n,m}\, n!\, \beta^n.$$
A centered $L^2$ function has zero projection on $H_0$, so we may assume $f$ and $g$ are centered and expand them on $\{H_n\}_{n \geq 1}$. Then we have
$$\mathbb{E}[f(X)g(Y)] = \sum_{n=1}^{\infty}\frac{\langle f, H_n \rangle \langle g, H_n \rangle}{n!} \beta^n \leq \vert \beta \vert \sqrt{\sum_{n=1}^{\infty}\frac{\langle f, H_n \rangle^2}{n!} } \sqrt{\sum_{n=1}^{\infty}\frac{\langle g, H_n \rangle^2}{n!} } = \vert \beta \vert \sqrt{Var(f(X)) Var(g(Y))},$$
where the middle step uses Cauchy–Schwarz together with $\vert\beta\vert^n \leq \vert\beta\vert$ for $n \geq 1$, and the last step is Parseval's identity for centered functions.
This concludes the proof.
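As a sanity check, here is a tiny Monte Carlo experiment in Python; the functions $f$ and $g$ below are arbitrary choices of mine, made only to test the inequality numerically.

```python
# Monte Carlo check of Cov(f(X), g(Y)) <= |beta| sqrt(Var f(X) Var g(Y));
# the test functions f, g are arbitrary choices for illustration.
import numpy as np

rng = np.random.default_rng(1)
beta, N = -0.6, 10**6

X = rng.standard_normal(N)
Y = beta * X + np.sqrt(1 - beta**2) * rng.standard_normal(N)   # Corr(X, Y) = beta

f = lambda x: np.sin(x) + x**3
g = lambda y: np.abs(y) - 1.0

lhs = np.cov(f(X), g(Y))[0, 1]
rhs = abs(beta) * np.sqrt(f(X).var() * g(Y).var())
print(lhs, rhs, lhs <= rhs)   # the inequality should hold up to sampling error
```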