1. Stochastic iterative algorithm
To formalize Proposition 3 of Malherbe and Vayatis (2017, “Global optimization of Lipschitz functions”, ICML), we need to abstract the notion of a stochastic iterative global optimization algorithm. The full documentation of this framework can be found here. Below we give a brief overview of the definition.
1.1. Definition
A stochastic iterative global optimization algorithm can be seen as a stochastic process that iteratively samples from a search space. The first sample is drawn from a probability measure inherent to the algorithm; each subsequent sample is drawn from a Markov kernel applied to the previous samples and their evaluations under a measurable objective function. We define the following structure to represent such an algorithm.
structure Algorithm (α β : Type*) [MeasurableSpace α] [MeasurableSpace β] where
  /-- Distribution of the first sample. -/
  ν : Measure α
  prob_measure : IsProbabilityMeasure ν
  /-- Kernel drawing the next sample from the first `n + 1` samples and their evaluations. -/
  kernel_iter (n : ℕ) : Kernel (prod_iter_image α β n) α
  markov_kernel (n : ℕ) : IsMarkovKernel (kernel_iter n)
The type prod_iter_image α β n is the product type (Iic n → α) × (Iic n → β), representing the first n + 1 samples together with their evaluations. The measure ν is the initial probability measure from which the first sample is drawn, and kernel_iter n is the Markov kernel that draws the next sample from the previous ones.
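To make this concrete, here is a minimal sketch of one way to define prod_iter_image and to instantiate the structure, assuming the Algorithm structure above is available. We take Iic n to be Set.Iic n (an assumption; the framework could equally use Finset.Iic), and randomSearch is our own illustrative name, not part of the framework: it models pure random search, which ignores the history and draws every sample from ν.

import Mathlib.Probability.Kernel.Basic

open MeasureTheory ProbabilityTheory Set

-- Assumed definition, following the description above: the first `n + 1`
-- samples in `α` paired with their evaluations in `β`.
abbrev prod_iter_image (α β : Type*) (n : ℕ) :=
  (Iic n → α) × (Iic n → β)

-- Illustrative instance (not from the framework): pure random search.
-- Each iterate is drawn from `ν` independently of the history, via the
-- constant kernel `Kernel.const`.
noncomputable def randomSearch (α β : Type*) [MeasurableSpace α] [MeasurableSpace β]
    (ν : Measure α) [hν : IsProbabilityMeasure ν] : Algorithm α β where
  ν := ν
  prob_measure := hν
  kernel_iter n := Kernel.const (prod_iter_image α β n) ν
  markov_kernel n := inferInstance

Since ν is a probability measure, Mathlib's instance for constant kernels shows that each Kernel.const (prod_iter_image α β n) ν is a Markov kernel, which is what inferInstance supplies for the markov_kernel field.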