
Sunday, July 5, 2020

Optimization with random constraints

This week I heard a very nice talk from my classmate Huajie Qian. The question can be stated as solving $\min_x c^\top x$ over $x \in \mathbb{R}^d$ under the constraint $h(x,\zeta) \le 0$, where $\zeta$ is a random variable. A very naive method is the following: given data $\{\zeta_i\}_{1 \le i \le n}$, solve the problem so that these $n$ sampled constraints are all satisfied, i.e. minimize $c^\top x$ subject to $h(x,\zeta_i) \le 0$ for every $1 \le i \le n$. For the resulting minimizer $x^*$, this gives the nice estimate
$$\mathbb{P}_{\{\zeta_i\}_{1 \le i \le n}}\left[\mathbb{P}_{\zeta}\big(h(x^*,\zeta) \le 0\big) < 1-\alpha\right] \le \binom{n}{d}(1-\alpha)^{n-d},$$
which is a very nice result.
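To make this concrete, here is a minimal numerical sketch of the sampled approach. The linear constraint family $h(x,\zeta) = \zeta^\top x - 1$ and all variable names are my own choices for illustration, not from the talk: we solve the sampled LP, then estimate the violation probability of the minimizer on fresh samples and compare with the bound above.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.special import comb

rng = np.random.default_rng(0)
d, n, alpha = 2, 200, 0.05

# Objective: minimize c^T x; with c = (-1, -1) the optimizer is pushed
# against the sampled constraints.
c = np.array([-1.0, -1.0])

# Sampled constraints h(x, zeta_i) = zeta_i^T x - 1 <= 0, where zeta_i
# are random unit vectors in the positive orthant (a toy model).
Z = np.abs(rng.normal(size=(n, d)))
Z /= np.linalg.norm(Z, axis=1, keepdims=True)

# Scenario problem: min c^T x  s.t.  Z x <= 1 (box bounds keep it bounded).
res = linprog(c, A_ub=Z, b_ub=np.ones(n), bounds=[(-10, 10)] * d)
x_star = res.x

# Monte Carlo estimate of the violation probability P_zeta(h(x*, zeta) > 0)
# on fresh samples, to compare against the theoretical guarantee.
Z_test = np.abs(rng.normal(size=(100_000, d)))
Z_test /= np.linalg.norm(Z_test, axis=1, keepdims=True)
violation = (Z_test @ x_star > 1.0).mean()

bound = comb(n, d) * (1 - alpha) ** (n - d)
print(f"empirical violation probability: {violation:.4f}")
print(f"bound on P(violation prob. > {alpha}): {bound:.3g}")
```

With $n = 200$ and $d = 2$ the right-hand side evaluates to about $0.77$, so the guarantee is non-vacuous but loose here; in fact one expects the violation probability of the scenario solution to be on the order of $d/(n+1) \approx 0.01$, well below $\alpha$.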

The key to proving the result above is a very nice lemma: in fact, one can find $d$ constraints among the $n$ that already determine the solution. That is to say, only $d$ constraints matter: solving the problem with just these $d$ constraints gives the same minimizer as solving it with all $n$.
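One can see the lemma in action numerically. Below is a small self-contained sketch (same toy linear constraint family as above, names again my own): we solve the full sampled LP, keep only the constraints active at the optimum, and check that they alone reproduce the optimal value.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
d, n = 2, 50
c = np.array([-1.0, -1.0])

# n random linear constraints zeta_i^T x <= 1.
Z = np.abs(rng.normal(size=(n, d)))
Z /= np.linalg.norm(Z, axis=1, keepdims=True)

full = linprog(c, A_ub=Z, b_ub=np.ones(n), bounds=[(-10, 10)] * d)

# Keep only the constraints active at the optimum: generically there are
# at most d of them, and they alone determine the solution.
active = np.isclose(Z @ full.x, 1.0, atol=1e-6)
print("active constraints:", active.sum(), "(expect <= d =", d, ")")

reduced = linprog(c, A_ub=Z[active], b_ub=np.ones(int(active.sum())),
                  bounds=[(-10, 10)] * d)
print("same optimal value:", np.isclose(full.fun, reduced.fun))
```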

To prove it, we need two theorems: Radon's theorem and Helly's theorem. The first one says that in the space $\mathbb{R}^d$, any $(d+2)$ points can be partitioned into two sets $I_1, I_2$ whose convex hulls have non-empty intersection. The second one says that for convex sets $\{C_i\}_{1 \le i \le m}$ in $\mathbb{R}^d$, if any $(d+1)$ of them have non-empty intersection, then $\bigcap_{i \le m} C_i$ is non-empty.
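Radon's theorem is constructive: any $d+2$ points are affinely dependent, and splitting the coefficients of the dependence by sign yields the two parts of the partition, with the common point (the Radon point) given by the corresponding convex combination. A small sketch for $d=2$ (the four points are arbitrary choices of mine):

```python
import numpy as np

# Four points in R^2 (d + 2 = 4); Radon: they split into two groups
# whose convex hulls intersect.
P = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [1.0, 1.0]])

# Find an affine dependence: sum_i lam_i * p_i = 0 with sum_i lam_i = 0.
# The (d+1) x (d+2) system below always has a nontrivial null vector.
A = np.vstack([P.T, np.ones(4)])   # shape (3, 4)
_, _, Vt = np.linalg.svd(A)
lam = Vt[-1]                       # null-space vector of A

pos, neg = lam > 0, lam < 0
# Radon point: the same convex combination arises from each group.
radon_point = (lam[pos] @ P[pos]) / lam[pos].sum()
print("partition:", np.where(pos)[0], "vs", np.where(neg)[0])
print("Radon point (in both convex hulls):", radon_point)
```

For these points the partition is $\{(1,1)\}$ against the other three, since $(1,1)$ lies inside their triangle and is therefore the Radon point itself.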

Using the two theorems, we argue by contradiction: suppose that any $d$ constraints always give a strictly better minimizer than all $n$ together. We then project onto the hyperplane orthogonal to the direction $c$ and apply Helly's theorem there to obtain a contradiction.
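To spell the argument out a bit more (my own reading of the sketch, assuming $h(\cdot,\zeta)$ is convex in $x$ so that each constraint set is convex): let $v^*$ be the optimal value with all $n$ constraints, let $C_i = \{x \in \mathbb{R}^d : h(x,\zeta_i) \le 0\}$, and for $\delta > 0$ set
$$D_i = C_i \cap \{x : c^\top x \le v^* - \delta\}.$$
Under the contradiction hypothesis, for $\delta$ smaller than the (finitely many) improvement gaps, any $d$ of the convex sets $D_i$ have a common point, namely the minimizer over those $d$ constraints. Projecting the $D_i$ along $c$ onto the $(d-1)$-dimensional hyperplane $c^\perp$, any $d$ of the projections still intersect, so Helly's theorem in dimension $d-1$ gives a point common to all of the projections; lifting back along the direction $c$ (the step I am glossing over here) produces a point feasible for all $n$ constraints with objective value at most $v^* - \delta$, contradicting the optimality of $v^*$.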
