Sunday, July 5, 2020

Optimization with random constraints

This week I heard a very nice talk by my classmate Huajie Qian. The question can be stated as minimizing $c \cdot x$ over $x \in \mathbb R^d$ under the constraint $h(x, \zeta) \leq 0$, where $\zeta$ is a random variable. A very natural method is the following: given data $\{\zeta_i\}_{1 \leq i \leq n}$, solve the problem with just these $n$ sampled constraints enforced, and call the minimizer $\hat x_n$. This yields the nice estimate
$$\mathbb P_{\{\zeta_i\}_{1 \leq i \leq n}}\left[\mathbb P_{\zeta}\big(h(\hat x_n,\zeta) \leq 0\big) \leq 1- \alpha\right] \leq {n \choose d}(1-\alpha)^{n-d},$$
that is, with high probability in the sample, the solution $\hat x_n$ violates a fresh random constraint with probability at most $\alpha$.
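To make this concrete, here is a minimal sketch of this scenario approach, under assumptions not taken from the talk: linear constraints $h(x,\zeta) = \zeta \cdot x - 1 \leq 0$, Gaussian data, and a box to keep the LP bounded (all names like `zetas`, `x_star` are illustrative).

```python
# A minimal sketch of the scenario approach, assuming (not from the talk)
# linear constraints h(x, zeta) = zeta . x - 1 <= 0 and Gaussian zeta;
# a box constraint keeps the LP bounded.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
d, n = 2, 200                       # dimension of x, number of scenarios
c = np.array([-1.0, -1.0])          # objective: minimize c . x

zetas = rng.normal(loc=1.0, scale=0.2, size=(n, d))   # data {zeta_i}

# Enforce zeta_i . x <= 1 for every sampled scenario.
res = linprog(c, A_ub=zetas, b_ub=np.ones(n), bounds=[(-5, 5)] * d)
x_star = res.x

# Estimate the violation probability of x_star on fresh samples.
fresh = rng.normal(loc=1.0, scale=0.2, size=(10_000, d))
print(x_star, np.mean(fresh @ x_star > 1.0))
```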

The key to proving the result above is a very nice lemma: in fact, one can always find $d$ constraints among the $n$ that already determine the solution. That is to say, only $d$ of the constraints matter (the so-called support constraints).
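The lemma can be checked numerically. Continuing the sketch above (the helper `optimum` is mine, and a nondegenerate problem is assumed), remove each scenario in turn and count those whose removal strictly improves the optimum; there should be at most $d$ of them:

```python
# Brute-force illustration of the lemma, continuing the sketch above:
# a scenario is a support constraint if deleting it strictly improves
# the optimum; nondegeneracy is assumed so that there are at most d.
def optimum(A):
    return linprog(c, A_ub=A, b_ub=np.ones(len(A)), bounds=[(-5, 5)] * d).fun

full_value = optimum(zetas)
support = [i for i in range(n)
           if optimum(np.delete(zetas, i, axis=0)) < full_value - 1e-9]
print(len(support), "support constraints, with d =", d)
```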

To prove it, we need two theorems: Radon's theorem and Helly's theorem. The first says that in $\mathbb R^d$, any $(d+2)$ points admit a partition into two sets $I_1, I_2$ such that the convex hulls of $I_1$ and $I_2$ have non-empty intersection. The second says that for convex sets $\{C_i\}_{1 \leq i \leq m}$ in $\mathbb R^d$, if any $(d+1)$ of them have non-empty intersection, then $\cap_{i \leq m} C_i$ is non-empty. (Helly's theorem is classically deduced from Radon's.)
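Radon's theorem is in fact constructive: for points $p_1, \dots, p_{d+2}$, the homogeneous system $\sum_i \lambda_i p_i = 0$, $\sum_i \lambda_i = 0$ has $d+1$ equations in $d+2$ unknowns, hence a nonzero solution, and splitting the indices by the sign of $\lambda_i$ gives the partition. A small numerical check of this construction (random points; the names are illustrative):

```python
# Constructive check of Radon's theorem: for d+2 points in R^d, solve
# sum_i lambda_i p_i = 0, sum_i lambda_i = 0, and split indices by the
# sign of lambda_i to get the partition I_1, I_2.
import numpy as np

rng = np.random.default_rng(1)
d = 3
points = rng.normal(size=(d + 2, d))            # d+2 points in R^d

# (d+1) x (d+2) homogeneous system: d coordinate rows plus the row sum = 0.
M = np.vstack([points.T, np.ones(d + 2)])
lam = np.linalg.svd(M)[2][-1]                   # null-space vector (nonzero)

I1, I2 = lam > 0, lam < 0
# Normalizing the positive and negative parts gives the same point,
# which lies in both convex hulls.
common = points[I1].T @ (lam[I1] / lam[I1].sum())
check = points[I2].T @ (-lam[I2] / (-lam[I2]).sum())
print(np.allclose(common, check))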

Using the two theorems, one assumes for contradiction that every choice of $d$ constraints still leaves a strictly better minimizer, passes to a level hyperplane of the objective direction $c$ (which drops the dimension by one), and applies Helly's theorem there to reach a contradiction.
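To spell out the projection step a bit (my reconstruction of the argument, under the standard assumption that each $h(\cdot, \zeta_i)$ is convex): let $x^*$ be the minimizer with all $n$ constraints and $v^* = c \cdot x^*$. If every $d$-subset $S$ of the constraints admitted a strictly better minimizer, then since there are finitely many subsets, there is an $\epsilon > 0$ and, for each $S$, a point $x_S \in \cap_{i \in S}\{h(\cdot,\zeta_i) \leq 0\}$ with $c \cdot x_S \leq v^* - \epsilon$. Sliding $x_S$ along the segment towards $x^*$ (which stays in the same constraint sets by convexity), we may take $c \cdot x_S = v^* - \epsilon$ exactly. Then the convex sets
$$C_i' = \{x : h(x,\zeta_i) \leq 0\} \cap \{x : c \cdot x = v^* - \epsilon\}$$
live in a hyperplane of dimension $d-1$, and any $d$ of them intersect, so Helly's theorem in dimension $d-1$ gives a point satisfying all $n$ constraints with objective value $v^* - \epsilon < v^*$, contradicting the optimality of $x^*$.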
