Consistency of global optimization algorithms

1. Stochastic iterative algorithm🔗

To formalize Proposition 3 of Malherbe and Vayatis (2017) (Cédric Malherbe and Nicolas Vayatis, “Global optimization of Lipschitz functions”, International Conference on Machine Learning, 2017), we need to abstract the notion of a stochastic iterative global optimization algorithm. The full documentation of this framework can be found here. We give a brief overview of the definition below.

1.1. Definition🔗

A stochastic iterative global optimization algorithm can be seen as a stochastic process that iteratively samples from a search space, producing a sequence of samples. The first sample is drawn from a probability measure inherent to the algorithm, and subsequent samples are generated based on Markov kernels that depend on the previous samples and their associated evaluations by a measurable function. We define the following structure to represent such an algorithm.

🔗structure
Algorithm.{u_1, u_2} (α : Type u_1) (β : Type u_2) [MeasurableSpace α] [MeasurableSpace β] : Type (max u_1 u_2)

Algorithm α β represents a general iterative stochastic optimization algorithm.

It models a sequence of updates where:

  • α is the search space (e.g., parameters, candidate solutions),

  • β is the evaluation space (e.g., objective values, feedback).

It allows formal reasoning over joint distributions of evaluated points and convergence properties.

Constructor

Algorithm.mk.{u_1, u_2}

Fields

ν : MeasureTheory.Measure α

The initial measure over the search space α.

prob_measure : MeasureTheory.IsProbabilityMeasure self.ν

The initial measure ν must be a probability measure.

kernel_iter : (n : ℕ) → ProbabilityTheory.Kernel (prod_iter_image α β n) α

The kernels that define how to sample the next point based on the history of points and evaluations.

markov_kernel : ∀ (n : ℕ), ProbabilityTheory.IsMarkovKernel (self.kernel_iter n)

The kernels must be Markov kernels.

The type prod_iter_image is the product type (Iic n → α) × (Iic n → β) which represents the sequence of samples and their evaluations at each iteration. The measure ν is the initial probability measure from which the first sample is drawn, and kernel_iter is the Markov kernel that defines how to sample the next element based on the previous ones.
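As an illustration, the simplest instance of this structure is pure random search, which ignores the history entirely and resamples from a fixed probability measure at every iteration. The following is a minimal sketch, assuming the `Algorithm` structure and `prod_iter_image` as defined above, and using Mathlib's constant kernel `ProbabilityTheory.Kernel.const` (which is a Markov kernel whenever the underlying measure is a probability measure):

```lean
-- Sketch: pure random search as an `Algorithm`.
-- Assumes `Algorithm` and `prod_iter_image` from this framework are in scope.
noncomputable def pureRandomSearch {α β : Type*}
    [MeasurableSpace α] [MeasurableSpace β]
    (μ : MeasureTheory.Measure α) [MeasureTheory.IsProbabilityMeasure μ] :
    Algorithm α β where
  -- The first sample is drawn from `μ`.
  ν := μ
  prob_measure := inferInstance
  -- Each subsequent kernel ignores the history and samples afresh from `μ`.
  kernel_iter n := ProbabilityTheory.Kernel.const (prod_iter_image α β n) μ
  -- The constant kernel of a probability measure is a Markov kernel.
  markov_kernel n := inferInstance
```

This degenerate example shows how the fields fit together; a genuine optimization algorithm would instead build each `kernel_iter n` from the evaluated history in `prod_iter_image α β n`.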