Consistency of global optimization algorithms

Gaëtan Serré

In 2017, the paper “Global optimization of Lipschitz functions” by Cédric Malherbe and Nicolas Vayatis (International Conference on Machine Learning, 2017) introduced two global optimization algorithms that are consistent on any Lipschitz function, i.e., they converge to the global maximum of any Lipschitz function. To prove this result, the authors established a proposition on meta-optimization, which states that, for any global optimization algorithm, being consistent on any Lipschitz function is equivalent to sampling the whole search space.

More formally, Proposition 3 states that, for any stochastic iterative global optimization algorithm A, the following two statements are equivalent:

  1. For any Lipschitz function f defined on \mathcal{X}, \sup_{x \in \mathcal{X}} \min_{i = 0 \dots n} d(X_i, x) \xrightarrow{p} 0.

  2. For any Lipschitz function f defined on \mathcal{X}, \max_{i = 0 \dots n} f(X_i) \xrightarrow{p} \max_{x \in \mathcal{X}} f(x).

Here \mathcal{X} \subset \mathbb{R}^d is compact, (X_i)_{0 \le i \le n} are the samples produced by the algorithm A after n iterations, and \xrightarrow{p} denotes convergence in probability.

Statement (2) is the usual definition of consistency for a stochastic iterative global optimization algorithm, while statement (1) means that the samples produced by A become dense in the whole domain \mathcal{X}.
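To give a flavour of what such a formalization can look like, here is a minimal Lean sketch of the two statements, assuming the algorithm is modeled as a map `sample` from an objective function and an abstract probability space to a sequence of points. The names `sample`, `ConvInProb`, `SamplesDense`, and `Consistent` are illustrative placeholders, not the definitions developed in the chapters below.

```lean
import Mathlib

open Filter MeasureTheory Topology
open scoped NNReal

variable {d : ℕ} {Ω : Type*} [MeasurableSpace Ω]
variable (P : Measure Ω) [IsProbabilityMeasure P]
-- The compact search space `𝒳 ⊂ ℝ^d`.
variable (K : Set (EuclideanSpace ℝ (Fin d)))
-- Hypothetical model of the algorithm: `sample f i ω` is the `i`-th point
-- queried on objective `f`, for the random outcome `ω`.
variable (sample : (EuclideanSpace ℝ (Fin d) → ℝ) → ℕ → Ω → EuclideanSpace ℝ (Fin d))

/-- `Y n → c` in probability: for every `ε > 0`, `P(|Y n - c| ≥ ε) → 0`. -/
def ConvInProb (Y : ℕ → Ω → ℝ) (c : ℝ) : Prop :=
  ∀ ε > (0 : ℝ), Tendsto (fun n => P {ω | ε ≤ |Y n ω - c|}) atTop (𝓝 0)

/-- Statement (1): on every Lipschitz objective, the fill distance
`sup_{x ∈ 𝒳} min_{0 ≤ i ≤ n} d(X_i, x)` tends to `0` in probability. -/
def SamplesDense : Prop :=
  ∀ (L : ℝ≥0) (f : EuclideanSpace ℝ (Fin d) → ℝ), LipschitzOnWith L f K →
    ConvInProb P
      (fun n ω => ⨆ x : K, (Finset.range (n + 1)).inf'
        Finset.nonempty_range_succ fun i => dist (sample f i ω) ↑x)
      0

/-- Statement (2): on every Lipschitz objective, the best value found after
`n` iterations tends in probability to the global maximum over `𝒳`. -/
def Consistent : Prop :=
  ∀ (L : ℝ≥0) (f : EuclideanSpace ℝ (Fin d) → ℝ), LipschitzOnWith L f K →
    ConvInProb P
      (fun n ω => (Finset.range (n + 1)).sup'
        Finset.nonempty_range_succ fun i => f (sample f i ω))
      (⨆ x : K, f ↑x)
```

With these placeholders, Proposition 3 roughly reads as the equivalence `SamplesDense P K sample ↔ Consistent P K sample`; the chapters below develop the actual definitions and the proof.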

This manual is dedicated to the L∃∀N formalization of this proposition.

Contents

  1. Stochastic iterative algorithm
  2. Consistency and sampling
  3. Consistency equivalence
  4. Indistinguishable function