\displaystyle f_{24}(\mathbf{x}) = \min\left(\sum_{i = 1}^{D}(\hat{x}_i - \mu_0)^2,\; d\,D + s\sum_{i = 1}^{D}(\hat{x}_i - \mu_1)^2\right) + 10\left(D - \sum_{i=1}^D\cos(2\pi z_i)\right) + 10^4\,f_{\mathrm{pen}}(\mathbf{x}) + f_{\mathrm{opt}}
\hat{\mathbf{x}} = 2\,\mathrm{sign}(\mathbf{x}^{\mathrm{opt}})\otimes\mathbf{x}, \quad \mathbf{x}^{\mathrm{opt}} = \frac{\mu_0}{2}\,\mathbf{1_-^+}
\mathbf{z}= \mathbf{Q}\Lambda^{\!100}\mathbf{R}(\hat{\mathbf{x}}-\mu_0\,\mathbf{1})
\mu_0 = 2.5, \mu_1 = -\sqrt{\dfrac{\mu_0^2-d}{s}}, s = 1 - \dfrac{1}{2\sqrt{D+20}-8.2}, d=1
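As an illustration, the following minimal NumPy sketch composes f_{24} from the above definitions (assuming D \ge 2). It is not the reference implementation: the rotation matrices Q and R, the sign vector one_pm (a realization of \mathbf{1_-^+}), and f_opt are assumed to be supplied per instance.

    import numpy as np

    def f24(x, Q, R, one_pm, f_opt=0.0):
        # Sketch of f24 (Lunacek bi-Rastrigin); Q, R, one_pm and f_opt are
        # assumed per-instance inputs, see the symbol definitions below.
        x = np.asarray(x, dtype=float)
        D = len(x)
        mu0, d = 2.5, 1.0
        s = 1.0 - 1.0 / (2.0 * np.sqrt(D + 20.0) - 8.2)
        mu1 = -np.sqrt((mu0**2 - d) / s)
        x_hat = 2.0 * one_pm * x   # 2 sign(x_opt) (element-wise) x; sign(x_opt) = one_pm
        lam = np.diag(100.0 ** (0.5 * np.arange(D) / (D - 1)))   # Lambda^100
        z = Q @ lam @ R @ (x_hat - mu0)
        sphere0 = np.sum((x_hat - mu0) ** 2)
        sphere1 = d * D + s * np.sum((x_hat - mu1) ** 2)
        rastrigin = 10.0 * (D - np.sum(np.cos(2.0 * np.pi * z)))
        pen = np.sum(np.maximum(0.0, np.abs(x) - 5.0) ** 2)
        return min(sphere0, sphere1) + rastrigin + 1e4 * pen + f_opt

At \mathbf{x} = \mathbf{x}^{\mathrm{opt}} = \frac{\mu_0}{2}\mathbf{1_-^+} this gives \hat{\mathbf{x}} = \mu_0\mathbf{1} and \mathbf{z} = \mathbf{0}, so every term except f_{\mathrm{opt}} vanishes.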
Symbols and definitions:
The symbols used and the definitions of, e.g., auxiliary functions are given in the following. Vectors are typeset in bold and refer to column vectors.
\otimes indicates element-wise multiplication of two D-dimensional vectors, \otimes:\mathcal{R}^D\times\mathcal{R}^D\to\mathcal{R}^D, (\mathbf{x},\mathbf{y})\mapsto\mathrm{diag}(\mathbf{x})\times\mathbf{y}=(x_i\times y_i)_{i=1,\dots,D}.
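For example, for D=3, (1,2,3)^{\mathrm{T}}\otimes(4,5,6)^{\mathrm{T}} = (1\times4,\,2\times5,\,3\times6)^{\mathrm{T}} = (4,10,18)^{\mathrm{T}}.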
\|.\| denotes the Euclidean norm, \|\mathbf{x}\|^2=\sum_i x_i^2.
[.] denotes the nearest integer value
\mathbf{0} =(0,\dots,0)^{\mathrm{T}} all zero vector
\mathbf{1} =(1,\dots,1)^{\mathrm{T}} all one vector
\Lambda^{\!\alpha} is a diagonal matrix in D dimensions with the ith diagonal element \lambda_{ii} = \alpha^{\frac{1}{2}\frac{i-1}{D-1}}, for i=1,\dots,D.
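For instance, for \alpha = 100 and D = 3 the diagonal reads (\lambda_{11}, \lambda_{22}, \lambda_{33}) = (100^{0}, 100^{1/4}, 100^{1/2}) \approx (1, 3.16, 10); the largest and smallest diagonal elements always differ by the factor \sqrt{\alpha}.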
f_{\mathrm{pen}} :\mathcal{R}^D\to\mathcal{R}, \mathbf{x}\mapsto\sum_{i=1}^D\max(0,|x_i| - 5)^2
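For example, for D = 3, f_{\mathrm{pen}}((6,0,-7)^{\mathrm{T}}) = (6-5)^2 + 0 + (7-5)^2 = 5, while f_{\mathrm{pen}} vanishes everywhere in [-5,5]^D.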
\mathbf{1}_-^+ a D-dimensional vector whose entries are -1 or 1, drawn independently with equal probability.
\mathbf{Q}, \mathbf{R} orthogonal (rotation) matrices. For a given function and dimension, a different realization of \mathbf{Q} and \mathbf{R} is used for each instantiation of the function. Orthogonal matrices are generated from standard normally distributed entries by Gram-Schmidt orthonormalization (a sketch follows below). Columns and rows of an orthogonal matrix form an orthonormal basis.
\mathbf{R} see \mathbf{Q}
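A possible NumPy sketch of this construction, orthonormalizing the rows of a standard normal matrix by classical Gram-Schmidt (an illustration of the procedure named above, not necessarily the benchmark code itself):

    import numpy as np

    def random_orthogonal(D, rng=None):
        # Gram-Schmidt orthonormalization of a matrix with standard
        # normally distributed entries; the rows end up orthonormal.
        rng = np.random.default_rng() if rng is None else rng
        B = rng.standard_normal((D, D))
        for i in range(D):
            for j in range(i):
                B[i] -= (B[i] @ B[j]) * B[j]   # remove component along row j
            B[i] /= np.linalg.norm(B[i])       # normalize row i
        return B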
T^{\beta}_{\mathrm{asy}} :\mathcal{R}^D\to\mathcal{R}^D, x_i\mapsto
\begin{cases}
x_i^{1+\beta\frac{i-1}{D-1}\sqrt{x_i}} & \text{if } x_i>0\\
x_i & \text{otherwise}
\end{cases}, for i=1,\dots,D. See Figure 1.
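An element-wise NumPy sketch of T^{\beta}_{\mathrm{asy}}, assuming a 1-D input array of length D \ge 2 (an illustration, not the benchmark code):

    import numpy as np

    def t_asy(x, beta):
        # exponent 1 + beta*(i-1)/(D-1)*sqrt(x_i) for x_i > 0; np.maximum
        # makes the exponent exactly 1 for x_i <= 0, so non-positive
        # entries pass through unchanged
        x = np.asarray(x, dtype=float)
        D = len(x)
        expo = 1.0 + beta * (np.arange(D) / (D - 1)) * np.sqrt(np.maximum(x, 0.0))
        return np.where(x > 0.0, x ** expo, x)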
T_{\mathrm{osz}} :\mathcal{R}^n\to\mathcal{R}^n, for any positive integer n (n=1 and n=D are used in the following), maps element-wise x\mapsto\mathrm{sign}(x)\exp\left(\hat{x} + 0.049\left(\sin(c_1\hat{x}) + \sin(c_2\hat{x})\right)\right) with
\hat{x} = \begin{cases} \log(|x|) & \text{if } x\neq 0\\ 0 & \text{otherwise} \end{cases}, \quad
\mathrm{sign}(x) = \begin{cases} -1 & \text{if } x<0\\ 0 & \text{if } x=0\\ 1 & \text{otherwise} \end{cases}, \quad
c_1 = \begin{cases} 10 & \text{if } x>0\\ 5.5 & \text{otherwise} \end{cases} \quad\text{and}\quad
c_2 = \begin{cases} 7.9 & \text{if } x>0\\ 3.1 & \text{otherwise} \end{cases}. See Figure 1.
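A NumPy sketch of T_{\mathrm{osz}}, applied element-wise to a vector (an illustration; zero entries map to zero via the \mathrm{sign}(x)=0 convention):

    import numpy as np

    def t_osz(x):
        # x_hat = log|x| for nonzero entries, 0 otherwise; c1 and c2
        # switch on the sign of x as in the case distinctions above
        x = np.asarray(x, dtype=float)
        x_hat = np.zeros_like(x)
        nz = x != 0.0
        x_hat[nz] = np.log(np.abs(x[nz]))
        c1 = np.where(x > 0.0, 10.0, 5.5)
        c2 = np.where(x > 0.0, 7.9, 3.1)
        return np.sign(x) * np.exp(x_hat + 0.049 * (np.sin(c1 * x_hat)
                                                    + np.sin(c2 * x_hat)))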
\mathbf{x}^\mathrm{opt} optimal solution vector, such that f(\mathbf{x^\mathrm{opt}}) is minimal.
Properties:
Highly multimodal function based on (Lunacek et al. 2008) with two funnels, around \frac{\mu_0}{2}\mathbf{1_-^+} and \frac{\mu_1}{2}\mathbf{1_-^+}, superimposed with the cosine term. Presumably, different approaches are needed for “selecting the funnel” and for searching the highly multimodal landscape “within” the funnel. The function was constructed to be deceptive for evolutionary algorithms with large population size.
- The funnel of the local optimum at \frac{\mu_1}{2}\mathbf{1_-^+} covers roughly 70\% of the search space volume within [-5,5]^D.
Information gained from this function: Can the search behavior be local on the global scale but global on a local scale?