CMA-ES
Theoretical Foundations, Convergence, and Information-Geometric Analysis
Examines the mathematical foundations of CMA-ES and its variants, including linear and non-asymptotic convergence proofs, information-geometric interpretations, drift analysis, quality-gain computations, and the dynamics on specific function landscapes (e.g., the sphere, linear, and Rastrigin functions).
- Global linear convergence of evolution strategies with recombination on scaling-invariant functions(Cheikh Toure, A. Auger, N. Hansen, 2021, Journal of Global Optimization)
- Theoretical Foundation for CMA-ES from Information Geometry Perspective(Youhei Akimoto, Y. Nagata, I. Ono, S. Kobayashi, 2012, Algorithmica)
- On the behaviour of weighted multi-recombination evolution strategies optimising noisy cigar functions(D. Arnold, H. Beyer, A. Melkozerov, 2009, Proceedings of the 11th Annual conference on Genetic and evolutionary computation)
- Impacts of invariance in search: When CMA-ES and PSO face ill-conditioned and non-separable problems(N. Hansen, Raymond Ros, Nikolas Mauny, Marc Schoenauer, A. Auger, 2011, Appl. Soft Comput.)
- CMA-ES: evolution strategies and covariance matrix adaptation(N. Hansen, A. Auger, 2011, Proceedings of the 13th annual conference companion on Genetic and evolutionary computation)
- Evolution strategies for multivariate-to-anything partially specified random vector generation(S. Stanhope, 2004, Proceedings of the 2004 Congress on Evolutionary Computation (IEEE Cat. No.04TH8753))
- Explicit and Non-asymptotic Query Complexities of Rank-Based Zeroth-order Algorithms on Smooth Functions(Haishan Ye, 2025, ArXiv)
- Progress analysis of a multi-recombinative evolution strategy on the highly multimodal Rastrigin function(Amir Omeradzic, Hans-Georg Beyer, 2023, Theoretical computer science)
- The Dynamics of Cumulative Step Size Adaptation on the Ellipsoid Model(H. Beyer, Michael Hellwig, 2016, Evolutionary Computation)
- Markov Chain Analysis of Cumulative Step-Size Adaptation on a Linear Constrained Problem(A. Chotard, A. Auger, N. Hansen, 2015, Evolutionary Computation)
- On Proving Linear Convergence of Comparison-based Step-size Adaptive Randomized Search on Scaling-Invariant Functions via Stability of Markov Chains(A. Auger, N. Hansen, 2013, ArXiv)
- Analysis of evolution strategies with the optimal weighted recombination(Chun-Kit Au, Ho-fung Leung, 2018, Proceedings of the Genetic and Evolutionary Computation Conference)
- Qualms Regarding the Optimality of Cumulative Path Length Control in CSA/CMA-Evolution Strategies(H. Beyer, D. Arnold, 2003, Evolutionary Computation)
- Weighted recombination evolution strategy on a class of PDQF's(Steffen Finck, H. Beyer, 2009, No journal)
- Natural Evolution Strategies(Daan Wierstra, T. Schaul, Jan Peters, J. Schmidhuber, 2008, 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence))
- Quality Gain Analysis of the Weighted Recombination Evolution Strategy on General Convex Quadratic Functions(Youhei Akimoto, A. Auger, N. Hansen, 2016, Theor. Comput. Sci.)
- Mirror Natural Evolution Strategies(Haishan Ye, Tong Zhang, 2019, ArXiv)
- Bias in Standard Self-Adaptive Evolution Strategies(Amir Omeradzic, Hans-Georg Beyer, 2024, 2024 IEEE Congress on Evolutionary Computation (CEC))
- Weighted Multirecombination Evolution Strategies on the Parabolic Ridge(D. Arnold, D. MacDonald, 2006, 2006 IEEE International Conference on Evolutionary Computation)
- Optimal Weighted Recombination(D. Arnold, 2005, No journal)
- Cumulative Step-Size Adaptation on Linear Functions(A. Chotard, A. Auger, N. Hansen, 2012, ArXiv)
- Gaussian Adaptation as a unifying framework for continuous black-box optimization and adaptive Monte Carlo sampling(Christian L. Müller, I. Sbalzarini, 2010, IEEE Congress on Evolutionary Computation)
- Additive drift is all you need - if you are an evolution strategy(Tobias Glasmachers, 2025, Proceedings of the 18th ACM/SIGEVO Conference on Foundations of Genetic Algorithms)
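As a concrete illustration of the linear-convergence results surveyed above (e.g., on scaling-invariant functions), the following self-contained Python sketch runs a (μ/μ_w, λ)-ES with log-rank recombination weights and an idealized scale-invariant step size σ = σ*·‖m‖/n on the sphere. Under this rule, log ‖m‖ decreases at a roughly constant rate per generation, which is the behaviour the cited proofs formalize. All parameter values are illustrative defaults, not taken from any of the papers.

```python
import math
import random

def sphere(x):
    return sum(xi * xi for xi in x)

def weighted_es(n=10, lam=12, mu=6, sigma_star=1.0, iters=60, seed=1):
    """(mu/mu_w, lambda)-ES with log-rank recombination weights and the
    scale-invariant step size sigma = sigma_star * ||m|| / n.  On the
    sphere this rule yields linear (geometric) convergence of ||m||."""
    rng = random.Random(seed)
    w = [math.log(mu + 0.5) - math.log(i + 1) for i in range(mu)]
    total = sum(w)
    w = [wi / total for wi in w]
    m = [1.0] * n
    log_norms = []
    for _ in range(iters):
        sigma = sigma_star * math.sqrt(sphere(m)) / n
        pop = []
        for _ in range(lam):
            x = [mj + sigma * rng.gauss(0.0, 1.0) for mj in m]
            pop.append((sphere(x), x))
        pop.sort(key=lambda t: t[0])   # comparison-based: only ranks matter
        m = [sum(w[i] * pop[i][1][j] for i in range(mu)) for j in range(n)]
        log_norms.append(math.log(math.sqrt(sphere(m))))
    return log_norms

log_norms = weighted_es()
# log ||m|| decreases roughly linearly with the generation count
```

Plotting `log_norms` against the generation index gives an approximately straight line, the empirical signature of linear convergence.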
Core Mechanism Improvements and Parameter Self-Adaptation Strategies
Studies improvements to core components, including step-size control (CSA/TPA), dynamic population-size adjustment (IPOP/BIPOP), learning-rate adaptation, active covariance matrix updates, and mirrored sampling, to improve search efficiency and robustness.
- CMA-ES with Learning Rate Adaptation(Masahiro Nomura, Youhei Akimoto, Isao Ono, 2024, ACM Transactions on Evolutionary Learning)
- Improving population size adapting CMA-ES algorithm on step-size blow-up in weakly-structured multimodal functions(Chandula Fernando, Kushani De Silva, 2025, ArXiv)
- A New Step-Size Adaptation Rule for CMA-ES Based on the Population Midpoint Fitness(Eryk Warchulski, J. Arabas, 2021, 2021 IEEE Congress on Evolutionary Computation (CEC))
- Maximum Likelihood-Based Online Adaptation of Hyper-Parameters in CMA-ES(I. Loshchilov, Marc Schoenauer, M. Sebag, N. Hansen, 2014, No journal)
- CMA-ES with Two-Point Step-Size Adaptation(Nikolaus Hansen, 2008, ArXiv)
- Population Size Adaptation for the CMA-ES Based on the Estimation Accuracy of the Natural Gradient(K. Nishida, Youhei Akimoto, 2016, Proceedings of the Genetic and Evolutionary Computation Conference 2016)
- Bounding the population size of IPOP-CMA-ES on the noiseless BBOB testbed(T. Liao, T. Stützle, 2013, Proceedings of the 15th annual conference companion on Genetic and evolutionary computation)
- CMA-ES with restarts for solving CEC 2013 benchmark problems(I. Loshchilov, 2013, 2013 IEEE Congress on Evolutionary Computation)
- On the impact of a small initial population size in the IPOP active CMA-ES with mirrored mutations on the noiseless BBOB testbed(D. Brockhoff, A. Auger, N. Hansen, 2012, Proceedings of the 14th annual conference companion on Genetic and evolutionary computation)
- Comparing natural evolution strategies to BIPOP-CMA-ES on noiseless and noisy black-box optimization testbeds(T. Schaul, 2012, Proceedings of the 14th annual conference companion on Genetic and evolutionary computation)
- An Empirical Comparison of CMA-ES in Dynamic Environments(Chun-Kit Au, Ho-fung Leung, 2012, No journal)
- On the effect of mirroring in the IPOP active CMA-ES on the noiseless BBOB testbed(D. Brockhoff, A. Auger, N. Hansen, 2012, Proceedings of the 14th annual conference companion on Genetic and evolutionary computation)
- Comparing the (1+1)-CMA-ES with a mirrored (1+2)-CMA-ES with sequential selection on the noiseless BBOB-2010 testbed(A. Auger, D. Brockhoff, N. Hansen, 2010, No journal)
- Forced optimal covariance adaptive learning: modified CMA-ES for efficient hessian determination(O. M. Shir, J. Roslund, H. Rabitz, 2010, No journal)
- Evolution of Affine Transformations and Iterated Function Systems Using Hierarchical Evolution Strategy(Anargyros Sarafopoulos, 2001, No journal)
- Decoupling of Direction and Length for Cumulative Step Size Adaptation(Shuo Zhang, Zhenhua Li, Detian Yang, Shuo Wang, Xinye Cai, 2021, 2021 17th International Conference on Computational Intelligence and Security (CIS))
- Active Update of Mutation Matrix Adaptation for Variable Metric Evolution Strategy(Huilin Cheng, Zhenhua Li, Detian Yang, Shu You, Xinye Cai, 2021, 2021 17th International Conference on Computational Intelligence and Security (CIS))
- On the Interaction of Adaptive Population Control with Cumulative Step-Size Adaptation(Amir Omeradzic, Hans-Georg Beyer, 2024, ArXiv)
- Investigating Adaptive Population Control Strategies With Cumulative Step-Size Adaptation(Amir Omeradzic, Hans-Georg Beyer, 2026, IEEE Transactions on Evolutionary Computation)
- CMA-ES with Learning Rate Adaptation: Can CMA-ES with Default Population Size Solve Multimodal and Noisy Problems?(M. Nomura, Youhei Akimoto, I. Ono, 2023, Proceedings of the Genetic and Evolutionary Computation Conference)
- PSA-CMA-ES: CMA-ES with population size adaptation(K. Nishida, Youhei Akimoto, 2018, Proceedings of the Genetic and Evolutionary Computation Conference)
- Evaluating the Population Size Adaptation Mechanism for CMA-ES on the BBOB Noiseless Testbed(K. Nishida, Youhei Akimoto, 2016, Proceedings of the 2016 on Genetic and Evolutionary Computation Conference Companion)
- Shrinking Mechanism as an Alternative to Cumulative Step-size Adaptation for Evolution Strategy(Bo Zhang, Lin Wang, Bo Yang, 2023, 2023 International Conference on Networks, Communications and Intelligent Computing (NCIC))
- Empirical Investigation of Simplified Step-Size Control in Metaheuristics with a View to Theory(J. Jägersküpper, M. Preuss, 2008, No journal)
- Parameter Setting of CMA-ES: A Numerical Study on CEC2019 100-Digit Challenge(Ting-Yu Chen, Jia-Fong Yeh, Tsung-Su Yeh, Tsung-Che Chiang, 2019, 2019 International Conference on Technologies and Applications of Artificial Intelligence (TAAI))
- Tuning the hyper-parameters of CMA-ES with tree-structured Parzen estimators(Mengxue Zhao, Jinlong Li, 2018, 2018 Tenth International Conference on Advanced Computational Intelligence (ICACI))
- Active covariance matrix adaptation for the (1+1)-CMA-ES(Dirk V. Arnold, Nikolaus Hansen, 2010, No journal)
- Learning Step-Size Adaptation in CMA-ES(Gresa Shala, André Biedenkapp, Noor H. Awad, Steven Adriaensen, M. Lindauer, F. Hutter, 2020, No journal)
- Functionally specialized CMA-ES: a modification of CMA-ES based on the specialization of the functions of covariance matrix adaptation and step size adaptation(Youhei Akimoto, J. Sakuma, I. Ono, S. Kobayashi, 2008, No journal)
- Enhancing covariance matrix adaptation evolution strategy through fitness inheritance(Rung-Tzuo Liaw, Chuan-Kang Ting, 2016, 2016 IEEE Congress on Evolutionary Computation (CEC))
- A multi-recombinative active matrix adaptation evolution strategy for constrained optimization(Patrick Spettel, H. Beyer, 2019, Soft Computing)
- Evolution Strategy with Covariance Matrix and decreasing step-size Adaptation (CMDSA-ES)(Zhuo Yang, Xi Chen, 2016, 2016 IEEE International Conference on Automation Science and Engineering (CASE))
- Mirrored sampling in evolution strategies with weighted recombination(A. Auger, D. Brockhoff, N. Hansen, 2011, No journal)
- CMA-ES with Optimal Covariance Update and Storage Complexity(Oswin Krause, D. R. Arbonès, C. Igel, 2016, No journal)
- Comparing mirrored mutations and active covariance matrix adaptation in the IPOP-CMA-ES on the noiseless BBOB testbed(D. Brockhoff, A. Auger, N. Hansen, 2012, Proceedings of the 14th annual conference companion on Genetic and evolutionary computation)
- Evaluating the Population Size Adaptation Mechanism for CMA-ES on the BBOB Noisy Testbed(K. Nishida, Youhei Akimoto, 2016, Proceedings of the 2016 on Genetic and Evolutionary Computation Conference Companion)
- A New Step Size Update Strategy for CMA-ES in Multi-objective Optimisation(Zheng Tan, Boyang Yuan, Hao Wang, Miqing Li, 2024, No journal)
- sigma-Self-Adaptive Weighted Multirecombination Evolution Strategy with Scaled Weights on the Noisy Sphere(H. Beyer, A. Melkozerov, 2008, No journal)
- Cumulative Step Size Adaptation for Adaptive SEMO in Integer Space(Günter Rudolph, M. Wagner, 2025, No journal)
- A Covariance Matrix Adaptation Evolution Strategy for Direct Policy Search in Reproducing Kernel Hilbert Space(Ngo Anh Vien, Viet-Hung Dang, TaeChoong Chung, 2017, No journal)
- Investigating the impact of sequential selection in the (1,4)-CMA-ES on the noiseless BBOB-2010 testbed(A. Auger, D. Brockhoff, N. Hansen, 2010, No journal)
- Improved step size adaptation for the MO-CMA-ES(T. Voss, N. Hansen, C. Igel, 2010, No journal)
- Improving Evolution Strategies through Active Covariance Matrix Adaptation(Grahame A. Jastrebski, D. Arnold, 2006, 2006 IEEE International Conference on Evolutionary Computation)
- Mirrored variants of the (1,4)-CMA-ES compared on the noisy BBOB-2010 testbed(A. Auger, D. Brockhoff, N. Hansen, 2010, No journal)
- Mutative σ-self-adaptation can beat cumulative step size adaptation when using weighted recombination(H. Beyer, A. Melkozerov, 2008, No journal)
- Effect of the mean vector learning rate in CMA-ES(H. Miyazawa, Youhei Akimoto, 2017, Proceedings of the Genetic and Evolutionary Computation Conference)
- On the impact of active covariance matrix adaptation in the CMA-ES with mirrored mutations and small initial population size on the noiseless BBOB testbed(D. Brockhoff, A. Auger, N. Hansen, 2012, Proceedings of the 14th annual conference companion on Genetic and evolutionary computation)
- A CMA-ES with Multiplicative Covariance Matrix Updates(Oswin Krause, T. Glasmachers, 2015, Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation)
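The cumulative step-size adaptation (CSA) rule at the centre of many entries above can be sketched as follows: an evolution path accumulates consecutive selected mean shifts, and σ grows when the path is longer than expected under random selection and shrinks otherwise. This is a didactic, isotropic variant (no covariance adaptation) with default-style parameter formulas and a simplified damping term, not a faithful reimplementation of any cited paper.

```python
import math
import random

def csa_es(f, x0, sigma0=1.0, lam=10, iters=200, seed=3):
    """Isotropic (mu/mu_w, lambda)-ES with cumulative step-size
    adaptation: the evolution path p_s smooths selected mean shifts;
    sigma is scaled up when ||p_s|| exceeds its expected length under
    random selection (approx. chi_n) and scaled down otherwise."""
    rng = random.Random(seed)
    n = len(x0)
    mu = lam // 2
    w = [math.log(mu + 0.5) - math.log(i + 1) for i in range(mu)]
    total = sum(w)
    w = [wi / total for wi in w]
    mu_eff = 1.0 / sum(wi * wi for wi in w)
    c_s = (mu_eff + 2.0) / (n + mu_eff + 5.0)   # path smoothing rate
    d_s = 1.0 + c_s                             # simplified damping
    chi_n = math.sqrt(n) * (1 - 1 / (4.0 * n) + 1 / (21.0 * n * n))
    m, sigma = list(x0), sigma0
    p_s = [0.0] * n
    for _ in range(iters):
        samples = []
        for _ in range(lam):
            z = [rng.gauss(0.0, 1.0) for _ in range(n)]
            x = [m[j] + sigma * z[j] for j in range(n)]
            samples.append((f(x), z))
        samples.sort(key=lambda t: t[0])
        z_w = [sum(w[i] * samples[i][1][j] for i in range(mu)) for j in range(n)]
        m = [m[j] + sigma * z_w[j] for j in range(n)]
        # evolution path update and exponential step-size change
        coef = math.sqrt(c_s * (2.0 - c_s) * mu_eff)
        p_s = [(1.0 - c_s) * p_s[j] + coef * z_w[j] for j in range(n)]
        norm_p = math.sqrt(sum(p * p for p in p_s))
        sigma *= math.exp((c_s / d_s) * (norm_p / chi_n - 1.0))
    return m, sigma

m, sigma = csa_es(lambda v: sum(t * t for t in v), [3.0] * 5)
```

On the sphere, σ tracks the shrinking distance to the optimum, so both the incumbent and the step size decay geometrically.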
High-Dimensional Optimization, Large-Scale Computation, and Efficiency Enhancements
Targets high-dimensional (large-scale) black-box problems, massively parallel computation, and the computational cost of covariance matrix updates, proposing diagonal acceleration (sep-CMA), limited-memory, cooperative-coevolution, and matrix-free variants.
- Distributed Evolution Strategies With Multi-Level Learning for Large-Scale Black-Box Optimization(Qiqi Duan, Chang Shao, Guochen Zhou, Minghan Zhang, Qi Zhao, Yuhui Shi, 2023, IEEE Transactions on Parallel and Distributed Systems)
- Cooperative Coevolution for Non-Separable Large-Scale Black-Box Optimization: Convergence Analyses and Distributed Accelerations(Qiqi Duan, Chang Shao, Guochen Zhou, Hao Yang, Qi Zhao, Yuhui Shi, 2023, Appl. Soft Comput.)
- A parallel implementation of the covariance matrix adaptation evolution strategy(Najeeb Khan, 2018, ArXiv)
- Covariance Matrix Adaptation Evolution Strategy without a matrix(J. Arabas, Adam Stelmaszczyk, Eryk Warchulski, Dariusz Jagodziński, R. Biedrzycki, 2025, Proceedings of the Genetic and Evolutionary Computation Conference Companion)
- On Low Complexity Acceleration Techniques for Randomized Optimization(Sebastian U. Stich, 2014, No journal)
- Improvement of sep-CMA-ES for Optimization of High-Dimensional Functions with Low Effective Dimensionality(Teppei Yamaguchi, Kento Uchida, Shinichi Shirakawa, 2022, 2022 IEEE Symposium Series on Computational Intelligence (SSCI))
- A computational efficient covariance matrix update and a (1+1)-CMA for evolution strategies(C. Igel, T. Suttorp, N. Hansen, 2006, Proceedings of the 8th annual conference on Genetic and evolutionary computation)
- Simplify Your Covariance Matrix Adaptation Evolution Strategy(H. Beyer, B. Sendhoff, 2017, IEEE Transactions on Evolutionary Computation)
- Sparse Inverse Covariance Learning for CMA-ES with Graphical Lasso(Konstantinos Varelas, A. Auger, N. Hansen, 2020, No journal)
- Fast Moving Natural Evolution Strategy for High-Dimensional Problems(M. Nomura, I. Ono, 2022, 2022 IEEE Congress on Evolutionary Computation (CEC))
- Covariance Matrix Adaptation Evolution Strategy for Low Effective Dimensionality(Kento Uchida, Teppei Yamaguchi, Shinichi Shirakawa, 2024, ArXiv)
- Solving large-scale many-objective optimization problems by covariance matrix adaptation evolution strategy with scalable small subpopulations(Huangke Chen, Ran Cheng, Jinming Wen, Haifeng Li, Jian Weng, 2020, Inf. Sci.)
- On the Parallel Speed-Up of Estimation of Multivariate Normal Algorithm and Evolution Strategies(F. Teytaud, O. Teytaud, 2009, No journal)
- Reducing the Time Complexity of the Derandomized Evolution Strategy with Covariance Matrix Adaptation (CMA-ES)(Nikolaus Hansen, Sibylle D. Müller, P. Koumoutsakos, 2003, Evolutionary Computation)
- Covariance Matrix Adaptation Evolution Strategy Assisted by Principal Component Analysis(Yang Mei, Hao Wang, 2021, ArXiv)
- Alternative Step-Size Adaptation Rule for the Matrix Adaptation Evolution Strategy(Eryk Warchulski, J. Arabas, 2024, No journal)
- Diagonal Acceleration for Covariance Matrix Adaptation Evolution Strategies(Youhei Akimoto, N. Hansen, 2019, Evolutionary Computation)
- Cooperative-Coevolution-CMA-ES with Two-Stage Grouping(Dani Irawan, B. Naujoks, M. Emmerich, 2020, 2020 IEEE Congress on Evolutionary Computation (CEC))
- Investigating the Local-Meta-Model CMA-ES for Large Population Sizes(Z. Bouzarkouna, A. Auger, D. Ding, 2010, No journal)
- Large-Scale Evolution Strategy Based on Search Direction Adaptation(Xiaoyu He, Yuren Zhou, Zefeng Chen, Jun Zhang, Wei-neng Chen, 2019, IEEE Transactions on Cybernetics)
- A computationally efficient limited memory CMA-ES for large scale optimization(I. Loshchilov, 2014, Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation)
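The diagonal (sep-CMA) idea behind several of the above entries, adapting only a per-coordinate variance vector at O(n) cost instead of a full O(n²) covariance matrix, can be sketched as follows. The learning rate, fixed step size, and test function are illustrative assumptions; a real sep-CMA additionally adapts σ and uses carefully derived rates.

```python
import math
import random

def ellipsoid(x):
    # axis-aligned ill-conditioned quadratic: coefficient 10**j on coordinate j
    return sum((10 ** j) * xj * xj for j, xj in enumerate(x))

def diag_es(n=5, lam=12, mu=6, sigma=0.1, iters=300, seed=5):
    """Toy sketch of the sep-CMA idea: keep a diagonal variance vector d
    (O(n) time and memory per update) instead of a full covariance matrix."""
    rng = random.Random(seed)
    w = [math.log(mu + 0.5) - math.log(i + 1) for i in range(mu)]
    total = sum(w)
    w = [wi / total for wi in w]
    c_d = 0.1                      # learning rate for the diagonal (assumed)
    m = [1.0] * n
    d = [1.0] * n                  # per-coordinate variances (diagonal of C)
    for _ in range(iters):
        pop = []
        for _ in range(lam):
            z = [rng.gauss(0.0, 1.0) for _ in range(n)]
            x = [m[j] + sigma * math.sqrt(d[j]) * z[j] for j in range(n)]
            pop.append((ellipsoid(x), x, z))
        pop.sort(key=lambda t: t[0])
        m = [sum(w[i] * pop[i][1][j] for i in range(mu)) for j in range(n)]
        # rank-mu-style update restricted to the diagonal: coordinates whose
        # selected steps are consistently short have their variance shrunk
        for j in range(n):
            zw2 = sum(w[i] * pop[i][2][j] ** 2 for i in range(mu))
            d[j] = (1.0 - c_d) * d[j] + c_d * d[j] * zw2
    return d

d = diag_es()
# the cheap coordinate (coefficient 1) keeps a much larger variance than
# the expensive one (coefficient 10**4)
```

The learned diagonal approaches the inverse scaling of the ellipsoid, which is exactly what makes sep-CMA effective on separable ill-conditioned problems.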
Multi-Objective, Multimodal, and Complex Constraint Handling
Extends CMA-ES to problems with multiple conflicting objectives (MO-CMA-ES), multiple local optima (multimodality), noise, and various linear and nonlinear constraints, including niching techniques and repelling mechanisms.
- The effect of mirrored sampling with active CMA and sample reuse in the CMAES-APOP algorithm(Duc Manh Nguyen, 2022, Proceedings of the Genetic and Evolutionary Computation Conference Companion)
- Static and Dynamic Multimodal Optimization by Improved Covariance Matrix Self-Adaptation Evolution Strategy With Repelling Subpopulations(A. Ahrari, S. Elsayed, R. Sarker, D. Essam, C. Coello, 2021, IEEE Transactions on Evolutionary Computation)
- Modified CMA-ES Algorithm for Multi-Modal Optimization: Incorporating Niching Strategies and Dynamic Adaptation Mechanism(W.G.L. Karunarathne, Indu Bala, Dikshit Chauhan, Matthew Roughan, Lewis Mitchell, 2024, ArXiv)
- Multi-Objective Covariance Matrix Adaptation MAP-Annealing(Shihan Zhao, S. Nikolaidis, 2025, Proceedings of the Genetic and Evolutionary Computation Conference)
- Adapting MOEA/D to CMA-ES for Dealing with Ill-conditioned Multiobjective Problems.(Chengyu Lu, Zhenhua Li, Qingfu Zhang, 2026, Evolutionary computation)
- Active covariance matrix adaptation for multi-objective CMA-ES(Christoph Krimpmann, Jan Braun, F. Hoffmann, T. Bertram, 2013, 2013 Sixth International Conference on Advanced Computational Intelligence (ICACI))
- Unbounded Population MO-CMA-ES for the Bi-Objective BBOB Test Suite(Oswin Krause, T. Glasmachers, N. Hansen, C. Igel, 2016, Proceedings of the 2016 on Genetic and Evolutionary Computation Conference Companion)
- A Reference Vector-Based Simplified Covariance Matrix Adaptation Evolution Strategy for Constrained Global Optimization(Abhishek Kumar, Swagatam Das, R. Mallipeddi, 2020, IEEE Transactions on Cybernetics)
- Sampling large hyperplane-truncated multivariate normal distributions(H. Maatouk, Didier Rullière, X. Bay, 2023, Computational Statistics)
- Sign-Averaging Covariance Matrix Adaptation Evolution Strategy(Daiki Morinaga, Youhei Akimoto, 2024, Proceedings of the Genetic and Evolutionary Computation Conference)
- CMA-ES with Adaptive Reevaluation for Multiplicative Noise(Kento Uchida, Kentaro Nishihara, Shinichi Shirakawa, 2024, Proceedings of the Genetic and Evolutionary Computation Conference)
- Robust Covariance Matrix Adaptation Evolution Strategy: Optimal Design of Magnetic Devices Considering Material Variation(Akito Maruo, H. Igarashi, 2023, IEEE Access)
- Natural Evolution Strategy for Mixed-Integer Black-Box Optimization(Koki Ikeda, I. Ono, 2023, Proceedings of the Genetic and Evolutionary Computation Conference)
- On the Application of Danskin's Theorem to Derivative-Free Minimax Optimization(Abdullah Al-Dujaili, Shashank Srikant, Erik Hemberg, Una-May O’Reilly, 2018, ArXiv)
- A Simple Modification in CMA-ES Achieving Linear Time and Space Complexity(Raymond Ros, Nikolaus Hansen, 2008, No journal)
- Distance-weighted Exponential Natural Evolution Strategy for Implicitly Constrained Black-Box Function Optimization(M. Nomura, N. Sakai, Nobusumi Fukushima, I. Ono, 2021, 2021 IEEE Congress on Evolutionary Computation (CEC))
- A covariance matrix adaptation evolution strategy variant and its engineering application(Yajun Liang, Xiaofei Wang, Hui Zhao, Tong Han, Zhenglei Wei, Yintong Li, 2019, Appl. Soft Comput.)
- Extending distance-weighted exponential natural evolution strategy for function optimization in uncertain environments(Kazuyuki Masutomi, Y. Nagata, I. Ono, 2013, 2013 IEEE Congress on Evolutionary Computation)
- A modified covariance matrix adaptation evolution strategy for real-world constrained optimization problems(Abhishek Kumar, Swagatam Das, I. Zelinka, 2020, Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion)
- Benchmarking several strategies to update the penalty parameters in AL-CMA-ES on the bbob-constrained testbed(Paul Dufossé, Asma Atamna, 2022, Proceedings of the Genetic and Evolutionary Computation Conference Companion)
- Fuzzy cognitive maps learning algorithm based on modified covariance matrix adaptation evolution strategy(Zhuofan Li, Jingping Wang, Yingjun Zhang, Hua Huang, Hui Yin, Hongli Xu, 2023, No journal)
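One of the simplest constraint-handling schemes used with evolution strategies, resampling infeasible candidates so selection only ever sees feasible points, can be sketched as follows. The geometric step-size decay stands in for proper adaptation purely to keep the sketch short (adapting σ near a constraint boundary is precisely what the Markov-chain analyses above study); problem, parameters, and schedule are illustrative assumptions.

```python
import math
import random

def sphere(x):
    return sum(xi * xi for xi in x)

def constrained_es(n=3, lam=10, mu=5, sigma=0.5, decay=0.98, iters=400, seed=9):
    """Minimize the sphere subject to the linear constraint x[0] >= 1 by
    redrawing any infeasible candidate until it is feasible.  Since all
    parents are feasible and weights are positive, the recombined mean
    stays feasible too.  The constrained optimum is (1, 0, ..., 0)."""
    rng = random.Random(seed)
    w = [math.log(mu + 0.5) - math.log(i + 1) for i in range(mu)]
    total = sum(w)
    w = [wi / total for wi in w]
    m = [2.0] * n
    for _ in range(iters):
        pop = []
        for _ in range(lam):
            while True:            # resample until feasible
                x = [mj + sigma * rng.gauss(0.0, 1.0) for mj in m]
                if x[0] >= 1.0:
                    break
            pop.append((sphere(x), x))
        pop.sort(key=lambda t: t[0])
        m = [sum(w[i] * pop[i][1][j] for i in range(mu)) for j in range(n)]
        sigma *= decay             # crude schedule in lieu of CSA
    return m

m = constrained_es()
```

The final mean sits on the active constraint with the free coordinates driven toward zero, so the objective value approaches the constrained optimum f = 1.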
Surrogate-Assisted, Hybrid, and Cross-Domain Extensions
Uses surrogate models such as Gaussian processes and neural networks to reduce evaluation cost; combines CMA-ES with DE, PSO, and reinforcement learning; and extends its scope to discrete spaces, function spaces, and quality-diversity (QD) search.
- SCR, an efficient global optimization algorithm for constrained black-box problems(Syed Ali Zaryab, Andrea Manno, Emanuele Martelli, 2025, Optimization and Engineering)
- Landscape analysis of gaussian process surrogates for the covariance matrix adaptation evolution strategy(Z. Pitra, Jakub Repický, M. Holeňa, 2019, Proceedings of the Genetic and Evolutionary Computation Conference)
- Using Evolution Strategy with Meta-models for Well Placement Optimization(Z. Bouzarkouna, D. Ding, A. Auger, 2010, ArXiv)
- Improving Optimization with Gaussian Processes in the Covariance Matrix Adaptation Evolution Strategy(Jiří Tumpach, J. Koza, M. Holeňa, 2023, No journal)
- Combining Gaussian Processes and Neural Networks in Surrogate Modelling for Covariance Matrix Adaptation Evolution Strategy(J. Koza, Jiří Tumpach, Z. Pitra, M. Holeňa, 2021, No journal)
- A Superlinearly Convergent Evolution Strategy(Tobias Glasmachers, 2025, ArXiv)
- Individuals redistribution based on differential evolution for covariance matrix adaptation evolution strategy(Zhe Chen, Yuan Liu, 2022, Scientific Reports)
- A Covariance Matrix Adaptation Evolution Strategy-based Manta Ray Foraging optimization(Qianqian Li, Tongyan Liu, Houtian He, Kaiyu Wang, Shangce Gao, 2022, 2022 Joint 12th International Conference on Soft Computing and Intelligent Systems and 23rd International Symposium on Advanced Intelligent Systems (SCIS&ISIS))
- Accelerate Evolution Strategy by Proximal Policy Optimization(Tao Xu, Hongyang Chen, Jun He, 2024, Proceedings of the Genetic and Evolutionary Computation Conference)
- The Hessian Estimation Evolution Strategy(T. Glasmachers, Oswin Krause, 2020, ArXiv)
- Toward a Matrix-Free Covariance Matrix Adaptation Evolution Strategy(J. Arabas, Dariusz Jagodziński, 2020, IEEE Transactions on Evolutionary Computation)
- Hybrid biogeography-based optimization with enhanced mutation and CMA-ES for global optimization problem(Fuqing Zhao, Songling Du, Yi Zhang, Weimin Ma, Houbin Song, 2020, Service Oriented Computing and Applications)
- Bi-population CMA-ES algorithms with surrogate models and line searches(I. Loshchilov, Marc Schoenauer, M. Sebag, 2013, Proceedings of the 15th annual conference companion on Genetic and evolutionary computation)
- Local-meta-model CMA-ES for partially separable functions(Z. Bouzarkouna, A. Auger, D. Ding, 2011, No journal)
- High-dimensional Bayesian Optimization via Covariance Matrix Adaptation Strategy(Lam Ngo, Huong Ha, Jeffrey Chan, Vu Nguyen, Hongyu Zhang, 2024, Trans. Mach. Learn. Res.)
- Effective Training of PINNs by Combining CMA-ES with Gradient Descent(Lin Liu, Yuan Yuan, 2024, 2024 IEEE Congress on Evolutionary Computation (CEC))
- CMA-MAPPO: Integrating Covariance Matrix Adaptation Evolution Strategy with Multi-Agent Proximal Policy Optimization for enhanced exploration in sparse-reward environments(A.H. Khatami, 2026, Swarm and Evolutionary Computation)
- Geometric Learning in Black-Box Optimization: A GNN Framework for Algorithm Performance Prediction(Ana Kostovska, C. Doerr, S. Džeroski, P. Panov, T. Eftimov, 2025, Proceedings of the Genetic and Evolutionary Computation Conference Companion)
- Transfer weight functions for injecting problem information in the multi-objective CMA-ES(Olacir R. Castro, A. Pozo, J. A. Lozano, R. Santana, 2016, Memetic Computing)
- Adaptive landscape-aware repelling restart covariance matrix adaptation-evolution strategy for multimodal and global optimization(Xikang Wang, Tongxi Wang, Hua Xiang, 2025, Swarm Evol. Comput.)
- Memetic Covariance Matrix Adaptation Evolution Strategy for Bilinear Matrix Inequality Problems in Control System Design(Syue-Cian Lin, Wei-Yu Chiu, Chien-Feng Wu, 2025, ArXiv)
- Improved grey wolf optimization based on the two-stage search of hybrid CMA-ES(Yun-tao Zhao, Weigang Li, Ao Liu, 2019, Soft Computing)
- Coevolutionary CMA-ES for Knowledge-Free Learning of Game Position Evaluation(Wojciech Jaśkowski, M. Szubert, 2016, IEEE Transactions on Computational Intelligence and AI in Games)
- A competitive evolution strategy memetic algorithm for unrelated parallel machine scheduling to minimize total weighted tardiness and flow time(Chiuh-Cheng Chyu, Wei-Shung Chang, 2010, The 40th International Conference on Computers & Industrial Engineering)
- Enhancing slime mould algorithm for engineering optimization: leveraging covariance matrix adaptation and best position management(Jinpeng Huang, Yi Chen, A. Heidari, Lei Liu, Huiling Chen, Guoxi Liang, 2024, J. Comput. Des. Eng.)
- MOEA/D using covariance matrix adaptation evolution strategy for complex multi-objective optimization problems(Ting-Chen Wang, Rung-Tzuo Liaw, Chuan-Kang Ting, 2016, 2016 IEEE Congress on Evolutionary Computation (CEC))
- A hybrid self-adapting multi-swarm algorithm based on PSO and CMA-ES for continuous dynamic optimization(S. Akhmedova, V. Stanovov, Aleksei Vakhnin, 2022, Proceedings of the Genetic and Evolutionary Computation Conference Companion)
- A discrete version of CMA-ES(E. Benhamou, J. Atif, R. Laraki, A. Auger, 2018, ArXiv)
- A covariance matrix adaptation evolution strategy in reproducing kernel Hilbert space(Viet-Hung Dang, Ngo Anh Vien, TaeChoong Chung, 2019, Genetic Programming and Evolvable Machines)
- A Functional Optimization Method for Continuous Domains(Viet-Hung Dang, Ngo Anh Vien, Pham Le-Tuyen, TaeChoong Chung, 2017, No journal)
- Covariance Matrix Adaptation MAP-Annealing: Theory and Experiments(Shihan Zhao, Bryon Tjanaka, Matthew C. Fontaine, S. Nikolaidis, 2024, ACM Transactions on Evolutionary Learning)
- A Quantum Field Evolution Strategy - An Adaptive Surrogate Approach(Jörg Bremer, S. Lehnhoff, 2016, No journal)
- Covariance matrix adaptation evolution strategy based on ensemble of mutations for parking navigation and maneuver of autonomous vehicles(Esther Aboyeji, Oladayo S. Ajani, Rammohan Mallipeddi, 2024, Expert Syst. Appl.)
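The surrogate-assisted pattern common to several entries above, generate many candidates, rank them with a cheap model, and spend true evaluations only on the most promising, can be sketched with a deliberately crude 1-nearest-neighbour surrogate. The surrogate choice, sampling scheme, and step-size rule are all illustrative assumptions, not the mechanism of any cited paper (which typically use Gaussian processes or ranking SVMs).

```python
import math
import random

def sphere(x):
    return sum(xi * xi for xi in x)

def nn_surrogate(archive, x):
    """Cheap 1-NN surrogate: predict f(x) as the value of the nearest
    already-evaluated point (squared Euclidean distance)."""
    rec = min(archive, key=lambda r: sum((a - b) ** 2 for a, b in zip(r[0], x)))
    return rec[1]

def surrogate_es(n=8, lam_big=30, lam=10, mu=5, iters=40, seed=7):
    """ES with surrogate pre-screening: lam_big candidates are drawn,
    ranked by the surrogate, and only the lam most promising receive a
    true evaluation; recombination uses the best mu of those."""
    rng = random.Random(seed)
    m = [2.0] * n
    archive = [(list(m), sphere(m))]
    evals = 1
    for _ in range(iters):
        sigma = 0.5 * math.sqrt(sphere(m)) / n   # scale-invariant step
        cand = [[mj + sigma * rng.gauss(0.0, 1.0) for mj in m]
                for _ in range(lam_big)]
        cand.sort(key=lambda x: nn_surrogate(archive, x))
        scored = []
        for x in cand[:lam]:                     # true evaluations only here
            fx = sphere(x)
            evals += 1
            archive.append((x, fx))
            scored.append((fx, x))
        scored.sort(key=lambda t: t[0])
        m = [sum(scored[i][1][j] for i in range(mu)) / mu for j in range(n)]
    return m, evals

m, evals = surrogate_es()
```

Per generation the true-evaluation budget is λ = 10 rather than λ_big = 30; the archive doubles as the surrogate's training set.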
Benchmarking, Empirical Evaluation, and Structural Bias Analysis
Systematically evaluates CMA-ES variants on standard test suites (BBOB, CEC, COCO), analyzing structural bias, parameter sensitivity, and performance under different noise conditions.
- Benchmarking CMA-ES under Additive and Subtractive Noise on the BBOB Testbed(Oskar Girardin, 2025, Proceedings of the Genetic and Evolutionary Computation Conference Companion)
- Benchmarking IPOP-CMA-ES-TPA and IPOP-CMA-ES-MSR on the BBOB Noiseless Testbed(Asma Atamna, 2015, Proceedings of the Companion Publication of the 2015 Annual Conference on Genetic and Evolutionary Computation)
- Benchmarking of two implementations of CMA-ES with diagonal decoding on the bbob test suite(Mohamed Gharafi, 2022, Proceedings of the Genetic and Evolutionary Computation Conference Companion)
- Evaluating the CMA Evolution Strategy on Multimodal Test Functions(N. Hansen, Stefan Kern, 2004, No journal)
- A Deep Dive into Effects of Structural Bias on CMA-ES Performance along Affine Trajectories(N. V. Stein, Sarah L. Thomson, Anna V. Kononova, 2024, No journal)
- Mirrored variants of the (1,2)-CMA-ES compared on the noisy BBOB-2010 testbed(A. Auger, D. Brockhoff, N. Hansen, 2010, No journal)
- Zeroth-order covariance matrix adaptation evolution strategy for single objective bound constrained numerical optimization competition(Yue Ning, Daohong Jian, Hua Wu, Jun Zhou, 2022, Proceedings of the Genetic and Evolutionary Computation Conference Companion)
- Experimental Comparisons of Derivative Free Optimization Algorithms(A. Auger, Nikolaus Hansen, J. M. P. Zerpa, Raymond Ros, Marc Schoenauer, 2009, ArXiv)
- A Comparative Study of CMA-ES on Large Scale Global Optimisation(M. Omidvar, Xiaodong Li, 2010, No journal)
- Using Machine Learning Methods to Assess Module Performance Contribution in Modular Optimization Frameworks.(Ana Kostovska, Diederick Vermetten, Peter Korošec, S. Džeroski, C. Doerr, T. Eftimov, 2024, Evolutionary computation)
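A key metric behind the BBOB/COCO comparisons listed above is the expected running time (ERT, also called aRT): the average number of function evaluations needed to reach a target, with unsuccessful runs charged their full budget. A minimal implementation, with made-up run data for illustration:

```python
def expected_running_time(runs, budget):
    """runs: list of (success: bool, evals_used: int) for one target.
    ERT = total evaluations spent across all runs (failures charged the
    full budget) divided by the number of successful runs."""
    total = sum(evals if success else budget for success, evals in runs)
    successes = sum(1 for success, _ in runs if success)
    return float('inf') if successes == 0 else total / successes

# three runs against one target: two hits, one timeout at the budget
ert = expected_running_time([(True, 100), (False, 500), (True, 300)], budget=500)
# → 450.0  ( (100 + 500 + 300) / 2 successes )
```

Because failed runs inflate the numerator without adding to the denominator, ERT penalizes unreliable algorithms more heavily than average-of-successes statistics do.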
Engineering Practice and Multi-Domain Applications
Demonstrates the broad use of CMA-ES in industrial and research settings, including aerospace, power electronics, robot control, machine-learning hyperparameter optimization (AutoML), neural architecture search, bioinformatics, and network security.
- Trajectory Correction of the Rocket for Aerodynamic Load Shedding Based on Deep Neural Network and the Chaotic Evolution Strategy With Covariance Matrix Adaptation(Guanghui Cheng, Yudon Hu, W. Jing, Ruoming An, C. Gao, 2024, IEEE Transactions on Aerospace and Electronic Systems)
- CMA-ES for Hyperparameter Optimization of Deep Neural Networks(I. Loshchilov, Frank Hutter, 2016, ArXiv)
- Nonlinear Acoustic Tomography for Measuring the Temperature and Velocity Fields by Using the Covariance Matrix Adaptation Evolution Strategy Algorithm(Juqi Zhang, H. Qi, Yukun Ji, Yatao Ren, Mingjian He, M. Su, Xiaoshu Cai, 2022, IEEE Transactions on Instrumentation and Measurement)
- CMA-ES for Post Hoc Ensembling in AutoML: A Great Success and Salvageable Failure(Lennart Purucker, Joeran Beel, 2023, No journal)
- Efficient Power Optimization for $C+L+S$ Band Transmission Using Covariance Matrix Adaptation Evolution Strategy(Miao Gong, Min Ran, Xiao Xiao, Tianye Huang, Xiang Li, Zelin Gan, 2025, 2025 30th OptoElectronics and Communications Conference (OECC) and 2025 International Conference on Photonics in Switching and Computing (PSC))
- Identification of the isotherm function in chromatography using CMA-ES(Mohamed Jebalia, A. Auger, Marc Schoenauer, F. James, M. Postel, 2007, 2007 IEEE Congress on Evolutionary Computation)
- Automatic Gate Pattern Optimization by Covariance Matrix Adaptation Evolution Strategy in Active Gate Control(Masataka Ando, H. Obara, Yasutaka Fujimoto, 2025, 2025 IEEE Energy Conversion Conference Congress and Exposition (ECCE))
- Multiagent Evolution Strategy With Cooperative and Cumulative Step Adaptation for Black-Box Distributed Optimization(Tai-You Chen, Wei-neng Chen, J. Hao, Yang Wang, Jun Zhang, 2025, IEEE Transactions on Evolutionary Computation)
- Ship Autonomous Berthing Simulation Based on Covariance Matrix Adaptation Evolution Strategy(Guoquan Chen, Jian Yin, Shenhua Yang, 2023, Journal of Marine Science and Engineering)
- Load frequency and voltage control of two area interconnected power system using covariance matrix adaptation evolution strategy based Aquila optimization optimized fast fuzzy-fractional order tilt integral derivative controller(Ananta Kumar Sahoo, Ashwin Kumar Sahoo, Smrutiranjan Nayak, Sanjeeb Kumar Kar, 2024, e-Prime - Advances in Electrical Engineering, Electronics and Energy)
- Scaffold Matcher: A CMA‐ES based algorithm for identifying hotspot aligned peptidomimetic scaffolds(Erin R. Claussen, P. Douglas Renfrew, Christian L. Müller, Kevin Drew, 2023, Proteins: Structure)
- Covariance matrix adaptation for the rapid illumination of behavior space(Matthew C. Fontaine, J. Togelius, S. Nikolaidis, Amy K. Hoover, 2019, Proceedings of the 2020 Genetic and Evolutionary Computation Conference)
- Covariance matrix adaptation MAP-Elites for video game level generation(Mengqing Sun, Guifei Jiang, Yuzhi Zhang, 2024, No journal)
- Hybrid RRT–Lévy–CMA Enhanced Multi-Objective PSO for Quadrotor UAV Path Planning(Hongjing Liu, Bingtao Zhu, Gang Yao, Xun Zhu, 2025, 2025 7th International Conference on Robotics, Intelligent Control and Artificial Intelligence (RICAI))
- GRAsPAD: Generalized framework for optimal grasp key-points active detection(Nikolaos Efstathopoulos, Nikolaos Kounalakis, Vasiliki E. Balaska, J. Fasoulas, Dimitrios Papageorgiou, 2025, 2025 34th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN))
- Optimisation for 3D Integrated Circuits using Continuously Improving Evolutionary Strategy Machine Learning Techniques(R. Rohilla, 2025, INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT)
- Calibrating Dynamic Traffic Assignment Models by Parallel Search using Active-CMA-ES(Jae-Seong Hwang, Sungahn Ko, T. Au, 2021, 2021 IEEE International Intelligent Transportation Systems Conference (ITSC))
- Neural Architecture Search Using Covariance Matrix Adaptation Evolution Strategy(Nilotpal Sinha, Kuan-Wen Chen, 2021, Evolutionary Computation)
- Optimum Coordination of overcurrent relays using CMA-ES algorithm(Manohar Singh, B. K. Panigrahi, Rohan Mukherjee, 2012, 2012 IEEE International Conference on Power Electronics, Drives and Energy Systems (PEDES))
- Short-term solar radiation forecasting using hybrid deep residual learning and gated LSTM recurrent network with differential covariance matrix adaptation evolution strategy(M. Neshat, M. M. Nezhad, S. Mirjalili, D. Garcia, E. Dahlquist, Amir H. Gandomi, 2023, Energy)
- CMAES-WFD: Adversarial Website Fingerprinting Defense Based on Covariance Matrix Adaptation Evolution Strategy(Di Wang, Yuefei Zhu, Jin-long Fei, Maohua Guo, 2024, Computers, Materials & Continua)
- Optimizing multilayer perceptron (MLP) hyperparameters via covariance matrix adaptation evolution strategy (CMA-ES) for predicting composite bending behavior(Fatma Bakal Gumus, H. Yıldırım, 2026, International Journal of Mechanics and Materials in Design)
- Empirical evaluation of contextual policy search with a comparison-based surrogate model and active covariance matrix adaptation(Alexander Fabisch, 2018, Proceedings of the Genetic and Evolutionary Computation Conference Companion)
- Joint Active and Passive Beamforming Optimization for Multi-IRS-Assisted Wireless Communication Systems: A Covariance Matrix Adaptation Evolution Strategy(Tongxin Yin, Lixin Li, Wensheng Lin, Haixin Hu, Donghui Ma, Junli Liang, Tong Bai, Cunhua Pan, Zhu Han, 2023, IEEE Transactions on Vehicular Technology)
- Well Control Optimization using Derivative-Free Algorithms and a Multiscale Approach(Xiang Wang, Ronald D. Haynes, Q. Feng, 2015, Comput. Chem. Eng.)
- Computationally optimized multi‐port antenna systems for WLAN/Wi‐Fi (IEEE 802.11a/h/j/n/ac/ax), 5G (mid‐band), and UWB applications(Pradnya A. Gajbhiye, S. P. Singh, M. Sharma, 2024, International Journal of Communication Systems)
- Affine-Invariant Robust Training(Oriol Barbany, 2020, ArXiv)
- Algorithm Design for Kicking by Bipedal Robots Based on Improved CMA-ES(Chunlei Lu, Na Sun, 2024, 2024 International Conference on Intelligent Computing and Robotics (ICICR))
- An intelligent protection framework for intrusion detection in cloud environment based on covariance matrix self-adaptation evolution strategy and multi-criteria decision-making(Mohamad Mulham Belal, Dr. Divya Meena Sundaram, 2023, Journal of Intelligent & Fuzzy Systems)
- Using CMA-ES for tuning coupled PID controllers within models of combustion engines(Katerina Marova, 2016, ArXiv)
- Effective design of semiconductor devices using evolutionary-based derivative free optimization(Giovanni Stracquadanio, C. Drago, V. Romano, Giuseppe Nicosia, 2010, IEEE Congress on Evolutionary Computation)
- Introducing Learning Rate Adaptation CMA-ES into Rigid Intraoperative 2D/3D Registration for Spinal Surgery(Minheng Chen, Zhirun Zhang, 2025, 2025 IEEE 22nd International Symposium on Biomedical Imaging (ISBI))
- Autonomous control of soft robots using safe reinforcement learning and covariance matrix adaptation(Shaswat Garg, Masoud Goharimanesh, S. Sajjadi, Farrokh Janabi-Sharifi, 2025, Eng. Appl. Artif. Intell.)
- Transfer learning based covariance matrix adaptation for evolutionary many-objective optimization(Tingting Li, Lei Chen, Yutao Lai, Hai-Lin Liu, 2024, Expert Syst. Appl.)
- Correlated Geometric Mutations for Integer Evolution Strategies(O. M. Shir, Michael Emmerich, 2025, Proceedings of the Genetic and Evolutionary Computation Conference Companion)
This report synthesizes a comprehensive research map of CMA-ES (Covariance Matrix Adaptation Evolution Strategy). The surveyed work not only probes the underlying mathematical theory, from information geometry to drift analysis, but also continually refines the core algorithm through step-size, population, and active-update mechanisms. To meet modern computational challenges, high-dimensional large-scale variants and parallelization techniques have markedly improved the efficiency of handling complex problems. Furthermore, through surrogate models, hybrid metaheuristic frameworks, and extensions to multi-objective, multimodal, and constrained settings, CMA-ES has evolved into an exceptionally flexible optimization tool. Finally, its applications ranging from deep-learning hyperparameter tuning to precision engineering design confirm its status as an industry standard in black-box optimization.
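As a concrete reference point for the algorithm this report surveys, here is a minimal, self-contained sketch of a CMA-ES-style loop in Python. It keeps only the rank-μ covariance update with log-linear recombination weights; evolution paths and step-size adaptation are omitted, and the learning rate `cmu` is a crude illustrative choice, not Hansen's default.

```python
import numpy as np

def cma_es_sketch(f, x0, sigma=0.3, lam=12, iters=200, seed=0):
    """Simplified CMA-ES: rank-mu covariance update only (illustrative)."""
    rng = np.random.default_rng(seed)
    n = len(x0)
    mu = lam // 2
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))  # recombination weights
    w /= w.sum()
    mueff = 1.0 / np.sum(w**2)
    cmu = min(1.0, mueff / n**2)          # crude rank-mu learning rate
    m, C = np.asarray(x0, float), np.eye(n)
    for _ in range(iters):
        A = np.linalg.cholesky(C)         # decompose C for sampling
        z = rng.standard_normal((lam, n))
        X = m + sigma * z @ A.T           # sample lambda offspring
        idx = np.argsort([f(x) for x in X])[:mu]   # select mu best
        y = (X[idx] - m) / sigma          # normalized selected steps
        m = m + sigma * (w @ y)           # recombination: shift the mean
        C = (1 - cmu) * C + cmu * (y.T * w) @ y    # rank-mu update
    return m

best = cma_es_sketch(lambda x: np.sum(x**2), [3.0, -2.0, 1.0])
```

On the sphere function this stripped-down loop already adapts the sampling ellipsoid toward the selected steps; the full algorithm adds the rank-one path update and cumulative step-size adaptation discussed throughout the surveyed papers.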
A total of 194 related references.
No abstract available
Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is a highly effective optimization technique. A primary challenge when applying CMA-ES in high dimensionality is sampling from a multivariate normal distribution with an arbitrary covariance matrix, which involves its decomposition. The cubic complexity of this process is the main obstacle to applying CMA-ES in high-dimensional spaces. We introduce a version of CMA-ES that uses no covariance matrix at all. In the proposed matrix-free CMA-ES, an archive stores the vectors of differences between individuals and the midpoint, normalized by the step size. New individuals are generated as the weighted combinations of the vectors from the archive. The probability distribution of individuals generated by the proposed method is identical to that of the standard CMA-ES. Experimental results show that reducing the archive size to store only a fixed number of the most recent populations is sufficient, without compromising optimization efficiency. The matrix-free and matrix-based CMA-ES achieve comparable results on the quadratic function when the step-size adaptation is turned off. When coupled with the step-size adaptation method, the matrix-free CMA-ES yields comparable results to those of plain CMA-ES, according to the results obtained for the CEC'2017 benchmark suite.
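The identity underlying such matrix-free sampling can be checked directly: if the covariance is implicitly a weighted sum of outer products of archived difference vectors, C = Σₖ wₖ dₖ dₖᵀ, then x = m + σ Σₖ √wₖ ξₖ dₖ with ξₖ ~ N(0,1) already follows N(m, σ²C), so no decomposition of C is needed. A minimal numerical check (variable names illustrative, not the paper's notation):

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 4, 64
D = rng.standard_normal((K, n))   # archived difference vectors d_k
w = np.full(K, 1.0 / K)           # weights (uniform for simplicity)
m, sigma = np.zeros(n), 0.5

# Implicit covariance C = sum_k w_k d_k d_k^T -- never decomposed below.
C = (D.T * w) @ D

def sample(num):
    """x = m + sigma * sum_k sqrt(w_k) * xi_k * d_k,  xi_k ~ N(0,1)."""
    xi = rng.standard_normal((num, K))
    return m + sigma * (xi * np.sqrt(w)) @ D

X = sample(200_000)
emp_cov = np.cov(X.T)             # empirical covariance of the samples
```

The empirical covariance of the samples matches σ²C even though the sampler only ever touched the archived vectors, which is the cubic-complexity bottleneck the matrix-free variant removes.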
Bilinear Matrix Inequalities (BMIs) are fundamental to control system design but are notoriously difficult to solve due to their nonconvexity. This study addresses BMI-based control optimization problems by adapting and integrating advanced evolutionary strategies. Specifically, a memetic Covariance Matrix Adaptation Evolution Strategy (memetic CMA-ES) is proposed, which incorporates a local refinement phase via a (1+1)-CMA-ES within the global search process. While these algorithmic components are established in evolutionary computing, their tailored integration and specific tuning for control design tasks represent a novel application in this context. Experimental evaluations on $H_{\infty}$ controller synthesis and spectral abscissa optimization demonstrate that the proposed method achieves superior performance compared to existing BMI solvers in terms of both solution quality and robustness. This work bridges the gap between evolutionary computation and control theory, providing a practical and effective approach to tackling challenging BMI-constrained problems.
We propose using a covariance matrix adaptation evolution strategy (CMA-ES) to optimize launch power in the $C+L+S$ band long-haul system, significantly reducing computation time while maintaining performance, and enabling faster response to channel variations.
In recent years, with the practical application of next-generation power semiconductors such as SiC and GaN, suppressing current and voltage overshoot during high-speed switching and reducing switching losses have become key issues. One solution to this problem is active gate control. In this paper, we propose an automatic gate pattern optimization method based on the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). Compared to a fixed gate pattern, this method reduces switching losses by 25.7% and overshoot by 29.2%. CMA-ES does not require prior adjustment of hyperparameters and can be applied without readjustment, even when changing devices or expanding to high-dimensional design spaces.
No abstract available
Despite the recent development of evolutionary multi- and many-objective optimization, problems with large-scale decision variables remain challenging. In this work, we propose a scalable small-subpopulation-based covariance matrix adaptation evolution strategy, namely S3-CMA-ES, for solving many-objective optimization problems with large-scale decision variables. The proposed S3-CMA-ES approximates the set of Pareto-optimal solutions using a series of small subpopulations instead of a whole population, where each subpopulation converges to only one solution. In S3-CMA-ES, a diversity improvement strategy is designed to generate and select new solutions. The performance of S3-CMA-ES is compared with five representative algorithms on 36 test instances with 5–15 objectives and 500–1500 decision variables. The empirical results demonstrate the superiority of the proposed S3-CMA-ES.
Despite the state-of-the-art performance of the covariance matrix adaptation evolution strategy (CMA-ES), high-dimensional black-box optimization problems remain challenging. Such problems often exhibit a property called low effective dimensionality (LED), in which the objective function is formulated with redundant dimensions relative to the intrinsic objective function and a rotation transformation of the search space. The CMA-ES suffers from LED for two reasons: the default hyperparameter setting is determined by the total number of dimensions, and the norm calculations in step-size adaptation include elements on the redundant dimensions. In this paper, we incorporate countermeasures for LED into the CMA-ES and propose CMA-ES-LED. We handle the rotation transformation using the eigenvectors of the covariance matrix. We estimate the effectiveness of each dimension in the rotated search space using the element-wise signal-to-noise ratios of the mean vector update and the rank-$\mu$ update, both of which can be interpreted as natural gradient ascent. We then adapt the hyperparameters using the estimated number of effective dimensions. In addition, we refine the cumulative step-size adaptation and the two-point step-size adaptation to measure norms only on the effective dimensions. The experimental results show that CMA-ES-LED outperforms the CMA-ES on benchmark functions with LED.
Intelligent reflecting surfaces (IRSs) are applied in 5G and 6G to improve communication performance by reconfiguring signal propagation environments while maintaining low computational complexity and high energy efficiency. This paper proposes a novel methodology for joint active and passive beamforming optimization for wireless communications aided by multiple IRSs. In the considered system, the IRSs are deployed to assist the communication link from a multi-antenna access point (AP) to single-antenna users. We maximize the received signal-to-noise ratio (SNR) and the sum rate of the multi-IRS-assisted communication system for single-user and multi-user cases, respectively, by solving a joint optimization problem over the active beamforming at the AP and the reflection beamforming at the IRSs. Specifically, we develop a mechanism of joint beamforming optimization for multi-IRS-assisted communications based on the covariance matrix adaptation evolution strategy (CMA-ES), namely the BO-CMA algorithm, to jointly consider both the AP active beamforming and the IRS reflection beamforming in the dynamic signal propagation environment. Simulation results show that the BO-CMA algorithm significantly outperforms the benchmark schemes. Consequently, the BO-CMA-based solution can dynamically rebuild the signal propagation environment and improve the received SNR and the sum rate for single-user and multi-user multi-IRS-assisted communications, respectively.
No abstract available
No abstract available
This article studies the problem of trajectory correction optimization for aerodynamic load shedding in the rocket's ascending phase considering the wind. Based on the statistical horizontal wind field, the weakest wind field is used to plan the nominal trajectory and the strongest wind field is used to test the performance of the designed trajectory. Considering the weakest wind field, two time-varying correction coefficients are calculated by a deep neural network, unlike the traditional methods, and used to plan the rocket's flight attitudes when optimizing the trajectory. There are a large number of parameters to be optimized in this problem, so traditional trajectory optimization techniques may suffer from poor convergence. By introducing a chaotic function into the traditional covariance matrix adaptation evolution strategy, a novel variant, C-CMA-ES, is proposed to solve the trajectory optimization problem, and its efficiency is demonstrated on several popular test functions. The cost function to be minimized is shaped by the rocket's maximal normal aerodynamic load and final state deviations. Finally, compared with three other strategies, the efficiency of the proposed trajectory correction strategy is demonstrated in multiple simulation scenarios considering the strongest wind field and by the Monte Carlo method considering random wind fields.
No abstract available
In black-box optimization, the user inevitably encounters noise in the objective function. Many noise treatment techniques have been developed to properly evaluate the effectiveness of solutions during optimization. A noise treatment called sign averaging was proposed recently, and it has been proved that it reduces the adverse effects of noise and estimates the ranking of candidate solutions with respect to the median of the objective function, even when the mean of the objective function is not well-defined. Although a theoretical guarantee exists, empirical studies on sign averaging have not yet been conducted. In this study, we implemented sign averaging in a covariance matrix adaptation evolution strategy, named SA-CMA-ES, with an adaptive mechanism controlling the strength of the noise treatment. We experimentally demonstrate that 1) the SA-CMA-ES successfully continues to lower the median of the objective function given a larger budget and is more sample efficient than the variant without the adaptive mechanism for sign averaging; and 2) the SA-CMA-ES is competitive with the UH-CMA-ES with Monte-Carlo median estimation and with conventional averaging for optimizing the mean and median, and it is better than the variant with conventional averaging when the variance of the objective function is not well-defined.
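The sign-averaging idea above can be illustrated with a toy pairwise comparator (an illustrative sketch, not the SA-CMA-ES implementation): averaging the sign of repeated noisy comparisons recovers the median ordering even under Cauchy noise, where the mean of the objective is undefined.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_f(x):
    # Sphere + Cauchy noise: the mean of f(x) is undefined, but the
    # median is still x**2 (Cauchy noise has median 0).
    return x**2 + rng.standard_cauchy()

def sign_average_compare(x, y, k=301):
    """True if x is estimated better (lower median f) than y, based on
    the average sign of k paired noisy evaluations."""
    s = np.mean([np.sign(noisy_f(x) - noisy_f(y)) for _ in range(k)])
    return s < 0

# The point closer to the optimum has the lower median objective value.
better = sign_average_compare(0.5, 2.0)
```

Conventional averaging would diverge here because Cauchy samples have no mean, which is exactly the regime the abstract highlights.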
No abstract available
Accurate measurement of temperature and velocity fields in combustion and flow diagnostics plays an important role in analyzing heat transfer mechanisms and ensuring the safe operation of equipment. In this work, considering the refraction effect of sound waves, temperature and velocity fields are reconstructed simultaneously via nonlinear acoustic tomography (NAT) using the covariance matrix adaptation evolution strategy (CMA-ES) algorithm. In the forward problem, a fast two-point ray-tracing algorithm is introduced to track curved sound waves more efficiently. Because the measurement signals are sparse, owing to the limited space and curved acoustic paths, Tikhonov regularization is introduced as a smoothing prior constraint to reduce the ill-posedness of the problem. The influence of the velocity and temperature regularization parameters on simultaneously visualizing the temperature and corresponding velocity fields is investigated, and selection strategies for these two parameters are given and shown to be efficient. On this basis, inhomogeneous temperature and velocity fields are reconstructed simultaneously using the CMA-ES algorithm, simulated annealing, and genetic algorithms. By solving the inverse problem of the NAT, which is highly nonlinear and ill-posed, the feasibility and effectiveness of the CMA-ES algorithm in the simultaneous reconstruction of velocity and temperature fields are confirmed. The proposed visualization method will supply key feedback for combustion process control.
Existing research on auto-berthing of ships has mainly focused on the design and implementation of controllers for automatic berthing. In real automatic docking processes, not only external environmental perturbations but also motion paths, docking strategies, and the ship's mechanical constraints must be taken into account, as these are important factors for assessing autonomous docking methods. A literature review of ship path planning and motion control for automatic berthing shows that many studies ignore interference from the actual navigational environment, especially for ships sailing at slow speed when berthing, or do not consider the physical constraints of the steering gear and the main engine. In this paper, we propose a hybrid approach for autonomous berthing control systems based on a Linear Quadratic Regulator (LQR) and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), which systematically addresses the problems involved in the berthing process: path planning, optimal control, adaptive berthing strategies, dynamic environmental perturbations, and physically enforced structural constraints. The berthing control system based on the LQR and the modified LQR-CMA-ES has been validated in simulation. The simulation results show that the proposed method achieves automatic docking of the ship well, and that the system is robust and well adapted to environmental disturbances at slow docking speeds.
Most real-world black-box optimization problems involve multiple nonlinear and non-convex constraints, making them difficult to solve. In this work, we introduce a variant of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) with linear time complexity that handles the constraints of Constrained Optimization Problems (COPs). CMA-ES is well known as a powerful algorithm for solving continuous, non-convex, black-box optimization problems by fitting a second-order model to the underlying objective function (similar in spirit to the Hessian approximation used by quasi-Newton methods in mathematical programming). The proposed algorithm utilizes an ε-constraint-based ranking and a repair method to handle constraint violations. Experimental results on a group of real-world optimization problems show that the proposed algorithm outperforms several other state-of-the-art algorithms in terms of constraint handling and robustness.
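The ε-constraint-based ranking mentioned in the abstract can be sketched as a pairwise comparator in the style of Takahama and Sakai's ε-constrained method (the paper's exact ranking and repair step may differ; the population below is illustrative): when both candidates violate the constraints by at most ε, they are ranked by objective value, otherwise by violation.

```python
import functools

def eps_compare(a, b, eps):
    """Epsilon-constrained comparison. a and b are
    (objective_value, constraint_violation) pairs."""
    fa, va = a
    fb, vb = b
    if (va <= eps and vb <= eps) or va == vb:
        return -1 if fa < fb else (1 if fa > fb else 0)  # rank by objective
    return -1 if va < vb else 1                          # rank by violation

# Hypothetical (objective, violation) pairs for four candidates.
pop = [(1.0, 0.0), (0.5, 0.2), (0.9, 0.05), (2.0, 0.0)]
ranked = sorted(pop, key=functools.cmp_to_key(lambda a, b: eps_compare(a, b, eps=0.1)))
```

With ε = 0.1 the candidate with violation 0.2 is ranked last despite its best objective value, while the ε-feasible candidates are ordered purely by objective; driving ε to 0 over the run recovers strict feasibility-first ranking.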
No abstract available
Security defenses that cannot match sophisticated adversary tools leave the cloud an open environment for attacks and intrusions. In this paper, an intelligent protection framework for intrusion detection in a cloud computing environment based on a covariance matrix self-adaptation evolution strategy (CMSA-ES) and multi-criteria decision-making (MCDM) is proposed. The proposed framework constructs an optimal intrusion detector by using the CMSA-ES algorithm, which adjusts the best parameter set for the attack detector. Moreover, the framework uses MEREC-VIKOR, a hybrid standardized evaluation technique, which generates its own performance metrics (S, R, and Q) combining multiple conflicting criteria. The framework is evaluated for attack detection using the CICIDS 2017 dataset. The experiments show that it can detect cloud attacks accurately with low S (utility), R (regret), and Q (integration of S and R). The framework is analyzed against several evolutionary algorithms such as GA, IGASAA, and CMA-ES. The performance analysis demonstrates that the framework, which depends on CMSA-ES, converges faster than these other evolutionary algorithms. The outcomes also demonstrate that the proposed model is comparable to state-of-the-art techniques.
Uncertainties caused by material variation can significantly impair the characteristics of devices. Therefore, it is important to design devices whose performance is not significantly degraded even when material variations occur. Robust optimization seeks optimal solutions that are robust to fluctuations due to uncertainties caused by material variation, geometrical variation due to assembly tolerances, and changes in physical properties over time in real-world problems. However, naive robust optimization requires iterative calculations to compute expected values, which imposes a huge computational burden. This paper introduces a novel robust optimization method for magnetic devices using the covariance matrix adaptation evolution strategy (CMA-ES). In this method, called RCMA-ES (robust CMA-ES), the expected value of the objective function is evaluated using the local average of neighboring individuals without increasing the computational cost. For validation, RCMA-ES and the robust genetic algorithm (RGA), another robust optimization method that does not increase the computational load, were applied to the topology optimization of a magnetic shield and an actuator, considering uncertainty in the BH characteristics. RCMA-ES was demonstrated to be particularly effective for topology optimization with a large number of dimensions compared with RGA, and it provides robust optimal shapes that are insensitive to variations in BH characteristics.
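The local-average idea described above can be sketched in a few lines (an illustrative sketch under the assumption that "local average of neighboring individuals" means a k-nearest-neighbour average over the already-evaluated population; the paper's exact neighbourhood definition may differ):

```python
import numpy as np

def robust_fitness(X, f_vals, k=3):
    """Estimate each individual's expected fitness under perturbations by
    averaging the fitness values of its k nearest neighbours (including
    itself) in the current population. Existing evaluations are reused,
    so no extra objective-function calls are made."""
    X = np.asarray(X, float)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    nn = np.argsort(d, axis=1)[:, :k]                           # k nearest neighbours
    return np.asarray(f_vals)[nn].mean(axis=1)

# Two nearby points smooth each other's fitness; the distant outlier is
# averaged with its closest (still far) neighbour.
rf = robust_fitness([[0.0], [0.1], [5.0]], [0.0, 1.0, 100.0], k=2)
```

Selecting on `rf` instead of the raw fitness penalizes solutions whose neighbourhoods contain poor values, which is what makes the resulting optima insensitive to small parameter fluctuations.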
No abstract available
Evolution-based neural architecture search methods have shown promising results, but they require high computational resources because they train each candidate architecture from scratch and then evaluate its fitness, resulting in long search times. The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) has shown promising results in tuning hyperparameters of neural networks but has not been used for neural architecture search. In this work, we propose a framework called CMANAS, which applies the faster convergence property of CMA-ES to the deep neural architecture search problem. Instead of training each individual architecture separately, we use the accuracy of a trained one-shot model (OSM) on the validation data as a prediction of the fitness of the architecture, reducing the search time. We also use an architecture-fitness table (AF table) to keep a record of already-evaluated architectures, further reducing the search time. The architectures are modeled using a normal distribution, which is updated by CMA-ES based on the fitness of the sampled population. Experimentally, CMANAS achieves better results than previous evolution-based methods while reducing the search time significantly. The effectiveness of CMANAS is shown on two different search spaces using four datasets: CIFAR-10, CIFAR-100, ImageNet, and ImageNet16-120. All results show that CMANAS is a viable alternative to previous evolution-based methods and extends the application of CMA-ES to the deep neural architecture search field.
The covariance matrix self-adaptation evolution strategy with repelling subpopulations (RS-CMSA-ES) is one of the most successful multimodal optimization (MMO) methods currently available. However, some of its components may become inefficient in certain situations. This study introduces the second variant of this method, called RS-CMSA-ESII. It improves the adaptation schemes for the normalized taboo distances of the archived solutions and the covariance matrix of the subpopulation, the termination criteria for the subpopulations, and the way in which the infeasible solutions are treated. It also improves the time complexity of RS-CMSA-ES by updating the initialization procedure of a subpopulation and developing a more accurate metric for determining critical taboo regions. The effects of these modifications are illustrated by designing controlled numerical simulations. RS-CMSA-ESII is then compared with the most successful and recent niching methods for MMO on a widely adopted test suite. The results obtained reveal the superiority of RS-CMSA-ESII over these methods, including the winners of the competition on niching methods for MMO in previous years. Besides, this study extends RS-CMSA-ESII to dynamic MMO and compares it with a few recently proposed methods on the modified moving peak benchmark functions.
Among population-based metaheuristics, both Differential Evolution (DE) and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) perform outstandingly on real-parameter single-objective optimization. Compared with DE, CMA-ES stagnates much earlier on many occasions. In this paper, we propose CMA-ES with individual redistribution based on DE, IR-CMA-ES, to address stagnation in CMA-ES. We run experiments on two benchmark test suites to compare our algorithm with nine peers. Experimental results show that IR-CMA-ES is competitive in the field of real-parameter single-objective optimization.
This paper presents an entry to the competition on bound-constrained single-objective numerical optimization at the Genetic and Evolutionary Computation Conference (GECCO) 2022: a hybrid algorithm named Zeroth-Order Covariance Matrix Adaptation Evolution Strategy (ZO-CMAES).
During the last two decades, the notion of multiobjective optimization (MOO) has been successfully adopted to solve the nonconvex constrained optimization problems (COPs) in their most general forms. However, such works mainly utilized the Pareto dominance-based MOO framework while the other successful MOO frameworks, such as the reference vector (RV) and the decomposition-based ones, have not drawn sufficient attention from the COP researchers. In this article, we utilize the concepts of the RV-based MOO to design a ranking strategy for the solutions of a COP. We first transform the COP into a biobjective optimization problem (BOP) and then solve it by using the covariance matrix adaptation evolution strategy (CMA-ES), which is arguably one of the most competitive evolutionary algorithms of current interest. We propose an RV-based ranking strategy to calculate the mean and update the covariance matrix in CMA-ES. Besides, the RV is explicitly tuned during the optimization process based on the characteristics of COPs in a RV-based MOO framework. We also propose a repair mechanism for the infeasible solutions and a restart strategy to facilitate the population to escape from the infeasible region. We test the proposal extensively on two well-known benchmark suites comprised of 36 and 112 test problems (at different scales) from the IEEE CEC (Congress on Evolutionary Computation) 2010 and 2017 competitions along with a real-world problem related to power flow. Our experimental results suggest that the proposed algorithm can meet or beat several other state-of-the-art constrained optimizers in terms of the performance on a wide variety of problems.
No abstract available
No abstract available
Fuzzy cognitive maps (FCMs) are an efficient soft computing tool commonly used for modeling and analyzing complex systems, with significant advantages in knowledge representation and fast reasoning. Many researchers have devoted themselves to the learning problem of FCMs. However, most existing learning algorithms are prone to local convergence and are not suitable for large-scale FCM learning problems. Moreover, learning FCMs is often a complex non-convex optimization problem, so the final learning result is easily only a locally optimal solution. To address these limitations, this paper proposes a novel algorithm for learning FCMs that combines multimodal optimization with a modified covariance matrix adaptation evolution strategy (CMA-ES), termed MCMA-ES. First, the learning problem of FCMs is modeled as a multimodal optimization problem (MMOP); then a multimodal optimization algorithm is constructed by combining detect multimodal clustering (DMC) with CMA-ES, and a cubic chaos factor is incorporated to enhance local search. Finally, the proposed algorithm is combined with a decomposition strategy to learn FCMs. Experiments are carried out on 5 benchmark datasets, and MCMA-ES is also applied to 15 groups of large-scale gene regulatory system reconstruction problems. The experimental results show that MCMA-ES achieves high learning generalization and accuracy.
No abstract available
In this paper, we discuss a method for generating new individuals such that their mean vector and the covariance matrix are defined by formulas analogous to the covariance matrix adaptation evolution strategy (CMA-ES). In contrast to CMA-ES, which generates new individuals using multivariate Gaussian distribution with an explicitly defined covariance matrix, the introduced method uses combinations of difference vectors between archived individuals and univariate Gaussian random vectors along directions of past shifts of the population midpoints. We use this method to formulate the differential evolution strategy (DES)—an algorithm that is a crossover between differential evolution (DE) and CMA-ES. The numerical results presented in this paper indicate that DES is competitive against CMA-ES in performing both local and global optimization.
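The sampling mechanism described above can be sketched roughly as follows (an illustrative sketch only, not the paper's exact DES operator; the choice of two random archive members and a single past shift is an assumption for brevity):

```python
import numpy as np

rng = np.random.default_rng(3)

def des_offspring(archive, midpoints, scale=1.0):
    """One DES-style offspring: a difference of two archived individuals
    plus a univariate Gaussian perturbation along a past midpoint shift."""
    i, j = rng.choice(len(archive), size=2, replace=False)
    diff = archive[i] - archive[j]              # archived difference vector
    t = rng.integers(1, len(midpoints))
    shift = midpoints[t] - midpoints[t - 1]     # past shift of the midpoint
    return midpoints[-1] + scale * (diff + rng.standard_normal() * shift)

x = des_offspring([np.zeros(3), np.ones(3), np.full(3, 2.0)],
                  [np.zeros(3), np.ones(3)])
```

As in DE, the difference vectors implicitly encode the population's covariance, while the perturbation along past midpoint shifts plays a role analogous to the evolution path in CMA-ES, so no explicit covariance matrix is maintained.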
This paper proposes a novel covariance matrix adaptation evolution strategy (CMA-ES) variant, named AEALSCE, for single-objective numerical optimization problems in the continuous domain. To avoid premature convergence and strengthen the exploration capacity of the basic CMA-ES, AEALSCE integrates the CMA-ES with two strategies that adjust the evolutionary directions and enrich the population diversity. The first is the anisotropic eigenvalue adaptation (AEA) technique, which adapts the search scope toward the optimal evolutionary directions by scaling the eigenvalues of the covariance matrix anisotropically based on local fitness-landscape detection. The second is a local search (LS) strategy executed in the eigen coordinate system, with two parts: in the first, new candidates for superior solutions are sampled around the best solution to perform local exploration; in the second, new candidates for inferior solutions are generated using a modified mean point along the fitness descent direction. The proposed AEALSCE algorithm is compared with other top competitors, including the CEC 2014 champion L-SHADE and the promising NBIPOP-aCMA-ES, on the CEC 2014 testbed. Moreover, AEALSCE is applied to three constrained engineering design problems and to parameter estimation of photovoltaic (PV) models. According to the statistical results, AEALSCE is competitive with other algorithms in convergence efficiency and accuracy. It benefits from a good balance of exploration and exploitation and shows potential for real-world optimization problems.
Over the past decades, many methods have advanced greatly with the development of technology, and evolutionary algorithms are widely used as heuristic methods. However, their computational budget grows exponentially as the dimensionality increases. In this paper, we use principal component analysis (PCA), a dimensionality-reduction method, to reduce the dimension during the iterations of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), a strong numerical evolutionary algorithm applicable to many kinds of problems. We assess the performance of the new method in terms of convergence rate on multimodal problems from the Black-Box Optimization Benchmarking (BBOB) problem set, and we use the COmparing Continuous Optimisers (COCO) framework to evaluate the new method and compare it with other algorithms.
No abstract available
Gaussian processes modeling technique has been shown as a valuable surrogate model for the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) in continuous single-objective black-box optimization tasks, where the optimized function is expensive. In this paper, we investigate how different Gaussian process settings influence the error between the predicted and genuine population ordering in connection with features representing the fitness landscape. Apart from using features for landscape analysis known from the literature, we propose a new set of features based on CMA-ES state variables. We perform the landscape analysis of a large set of data generated using runs of a surrogate-assisted version of the CMA-ES on the noiseless part of the Comparing Continuous Optimisers benchmark function testbed.
The covariance matrix adaptation evolution strategy (CMA-ES) is an efficient derivative-free optimization algorithm. It optimizes a black-box objective function over a well-defined parameter space in which feature functions are often defined manually. Therefore, the performance of such techniques strongly depends on the quality of the chosen features or the underlying parametric function space. Hence, enabling CMA-ES to optimize over a more complex and general function class has long been desired. In this paper, we consider modeling the input spaces in black-box optimization non-parametrically in reproducing kernel Hilbert spaces (RKHS). This modeling leads to a functional optimization problem whose domain is an RKHS function space, which enables optimization over a very rich function class. We propose CMA-ES-RKHS, a generalized CMA-ES framework that is able to carry out black-box functional optimization in RKHS. A search distribution on non-parametric function spaces, represented as a Gaussian process, is adapted by updating both its mean function and covariance operator. Adaptive and sparse representations of the mean function and the covariance operator can be retained for efficient computation in the updates and evaluations of CMA-ES-RKHS by resorting to sparsification. We also show how to apply our new black-box framework to search for an optimal policy in reinforcement learning, where policies are represented as functions in an RKHS. CMA-ES-RKHS is evaluated on two functional optimization problems and two benchmark reinforcement learning domains.
The covariance matrix adaptation evolution strategy (CMA-ES) is an excellent local search strategy that does not rely on gradient information. It is widely used in real-valued optimization because it converges quickly to a local optimum. In addition, manta ray foraging optimization (MRFO) is an evolutionary algorithm inspired by the manta rays' three particular foraging strategies, which can produce good solutions for global optimization problems. In this paper, we use CMA-ES to strengthen the local search ability of MRFO and propose CMAES-MRFO to generate better solutions. The CEC'2017 benchmark functions are employed to test the hybrid algorithm. Experimental results show that the proposed CMAES-MRFO significantly outperforms its original component algorithms and several more recent ones.
Hyperparameters of deep neural networks are often optimized by grid search, random search or Bayesian optimization. As an alternative, we propose to use the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), which is known for its state-of-the-art performance in derivative-free optimization. CMA-ES has some useful invariance properties and is friendly to parallel evaluations of solutions. We provide a toy example comparing CMA-ES and state-of-the-art Bayesian optimization algorithms for tuning the hyperparameters of a convolutional neural network for the MNIST dataset on 30 GPUs in parallel.
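As a rough illustration of how such derivative-free hyperparameter tuning works (and of why the λ evaluations per generation parallelize naturally), here is a stripped-down (μ/μ, λ)-style sketch in log10 hyperparameter space. The loss function, parameter names, and the crude geometric step-size decay are illustrative stand-ins, not the actual CMA-ES adaptation mechanism or any real training pipeline:

```python
import random

def toy_val_loss(log_lr, log_wd):
    """Hypothetical stand-in for an expensive training-plus-validation run;
    minimum at log10(lr) = -2, log10(wd) = -4."""
    return (log_lr + 2.0) ** 2 + 0.5 * (log_wd + 4.0) ** 2

rng = random.Random(0)
mean, sigma, lam = [0.0, 0.0], 1.0, 8   # search in log10 space
for gen in range(80):
    # Sample lam candidates around the mean (isotropic, unlike full CMA-ES),
    # rank them by loss, and recombine the better half into the new mean.
    pop = [[m + sigma * rng.gauss(0.0, 1.0) for m in mean] for _ in range(lam)]
    pop.sort(key=lambda x: toy_val_loss(*x))
    mu = lam // 2
    mean = [sum(x[i] for x in pop[:mu]) / mu for i in range(2)]
    sigma *= 0.95   # crude decay instead of CMA's path-based adaptation

print([round(v, 2) for v in mean])   # should end near [-2, -4]
```

In a real setting, the lam candidate evaluations inside each generation are independent and can be dispatched to separate workers, which is the parallelism the abstract refers to.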
The design of protein interaction inhibitors is a promising approach to address aberrant protein interactions that cause disease. One strategy in designing inhibitors is to use peptidomimetic scaffolds that mimic the natural interaction interface. A central challenge in using peptidomimetics as protein interaction inhibitors, however, is determining how best the molecular scaffold aligns to the residues of the interface it is attempting to mimic. Here we present the Scaffold Matcher algorithm, which aligns a given molecular scaffold onto hotspot residues from a protein interaction interface. To optimize the degrees of freedom of the molecular scaffold we implement the covariance matrix adaptation evolution strategy (CMA-ES), a state-of-the-art derivative-free optimization algorithm, in Rosetta. To evaluate the performance of CMA-ES, we used 26 peptides from the FlexPepDock Benchmark and compared with three other algorithms in Rosetta, specifically, Rosetta's default minimizer, a Monte Carlo protocol of small backbone perturbations, and a genetic algorithm. We test the algorithms' performance on their ability to align a molecular scaffold to a series of hotspot residues (i.e., constraints) along native peptides. Of the four methods, CMA-ES was able to find the lowest energy conformation for all 26 benchmark peptides. Additionally, as a proof of concept, we apply the Scaffold Matcher algorithm with CMA-ES to align a peptidomimetic oligooxopiperazine scaffold to the hotspot residues of the substrate of the main protease of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Our implementation of CMA-ES in Rosetta allows for an alternative optimization method to be used on macromolecular modeling problems with rough energy landscapes. Finally, our Scaffold Matcher algorithm allows for the identification of initial conformations of interaction inhibitors that can be further designed and optimized as high-affinity reagents.
In this paper, we benchmark several versions of a population-based evolution strategy with covariance matrix adaptation, handling constraints with an Augmented Lagrangian fitness function. The versions only differ in the strategy to adapt the penalty parameter of the fitness function. We compare the resulting algorithms, AL-CMA-ES, with random search and Powell's derivative-free COBYLA on the recently released bbob-constrained test suite for constrained continuous optimization in dimensions ranging from 2 to 40. The experimental results allow identifying classes of problems where one algorithm is more advantageous to use. They also reveal features of the merit function used for performance assessment and in particular situations where even on simple problems the targets can be hard to meet for algorithms based on Lagrange multipliers.
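The augmented Lagrangian fitness mentioned above can be made concrete for a single inequality constraint g(x) ≤ 0. The clamped form below, with multiplier mu and penalty parameter omega, is the standard textbook construction, not necessarily the exact merit function used in AL-CMA-ES:

```python
def augmented_lagrangian(f, g, mu, omega):
    """Build the merit function h(x) = f(x) + mu*c + (omega/2)*c^2 for the
    inequality constraint g(x) <= 0, with c = max(g(x), -mu/omega) so the
    quadratic penalty flattens out for sufficiently feasible points."""
    def merit(x):
        c = max(g(x), -mu / omega)
        return f(x) + mu * c + 0.5 * omega * c * c
    return merit

# Toy problem: minimize x^2 subject to 1 - x <= 0 (i.e., x >= 1).
h = augmented_lagrangian(lambda x: x * x, lambda x: 1.0 - x, mu=1.0, omega=10.0)
feasible = h(2.0)     # g = -1 is clamped at -mu/omega = -0.1
infeasible = h(0.0)   # g = 1 incurs the full linear + quadratic penalty
```

The ES then minimizes `merit` as an ordinary unconstrained fitness, with mu and omega updated between iterations; the versions benchmarked in the paper differ precisely in how the penalty parameter is adapted.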
We propose a computationally efficient limited memory Covariance Matrix Adaptation Evolution Strategy for large scale optimization, which we call the LM-CMA-ES. The LM-CMA-ES is a stochastic, derivative-free algorithm for numerical optimization of non-linear, non-convex problems in the continuous domain. Inspired by the limited memory BFGS method of Liu and Nocedal (1989), the LM-CMA-ES samples candidate solutions according to a covariance matrix reproduced from m direction vectors selected during the optimization process. The decomposition of the covariance matrix into Cholesky factors reduces the time and memory complexity of the sampling to O(mn), where n is the number of decision variables. When n is large (e.g., n > 1000), even relatively small values of m (e.g., m = 20 or 30) are sufficient to efficiently solve fully non-separable problems and to reduce the overall run-time.
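The identity-plus-low-rank covariance idea behind O(mn) sampling can be sketched as follows. This only illustrates the complexity argument, not the actual LM-CMA-ES reconstruction rule, which combines the m stored directions through Cholesky-factor updates with learned weights:

```python
import random

def sample_low_rank(mean, sigma, dirs, rng):
    """Sample x = mean + sigma * (z + sum_j w_j v_j) with z ~ N(0, I) and
    independent w_j ~ N(0, 1), so x is distributed with covariance
    sigma^2 * (I + V V^T), where the rows of V are the m direction
    vectors. Cost per sample is O(m n) instead of O(n^2)."""
    n = len(mean)
    x = [mean[i] + sigma * rng.gauss(0.0, 1.0) for i in range(n)]
    for v in dirs:
        w = sigma * rng.gauss(0.0, 1.0)
        for i in range(n):
            x[i] += w * v[i]
    return x

rng = random.Random(3)
dirs = [[1.0, 0.0, 0.0, 0.0]]                # one stored direction, n = 4
xs = [sample_low_rank([0.0] * 4, 1.0, dirs, rng) for _ in range(20000)]
var0 = sum(x[0] ** 2 for x in xs) / len(xs)  # mathematically 2.0 along v
var1 = sum(x[1] ** 2 for x in xs) / len(xs)  # mathematically 1.0 elsewhere
```

The stored direction inflates the sampling variance along itself while the remaining axes stay isotropic, which is how a small memory m can still capture the dominant curvature directions of the problem.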
In this paper, we use numerical optimization algorithms and a multiscale approach in order to find an optimal well management strategy over the life of the reservoir. The large number of well rates for each control step makes the optimization problem more difficult and at high risk of reaching a suboptimal solution. Moreover, the optimal number of adjustments is not known a priori. Adjusting well controls too frequently will increase unnecessary well management and operation cost, and an excessively low number of control adjustments may not be enough to obtain a good yield. We investigate three derivative-free optimization algorithms, chosen for their robust and parallel nature, to determine optimal well control strategies. The algorithms chosen include generalized pattern search (GPS), particle swarm optimization (PSO) and covariance matrix adaptation evolution strategy (CMA-ES). These three algorithms encompass the breadth of available black-box optimization strategies: deterministic local search, stochastic global search and stochastic local search. In addition, we hybridize the three derivative-free algorithms with a multiscale regularization approach. Starting with a reasonably small number of control steps, the control intervals are subsequently refined during the optimization. Results for the experiments studied indicate that CMA-ES performs best among the three algorithms in solving both small and large scale problems. When hybridized with a multiscale regularization approach, the ability to find the optimal solution is further enhanced, with the performance of GPS improving the most. Factors affecting the performance of the multiscale approach are discussed in this paper, including the effect of control frequency on the well control problem. The parameter settings for GPS, PSO, and CMA-ES within the multiscale approach are also considered.
No abstract available
In this paper, the performances of the quasi-Newton BFGS algorithm, the NEWUOA derivative-free optimizer, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), the Differential Evolution (DE) algorithm and Particle Swarm Optimizers (PSO) are compared experimentally on benchmark functions reflecting important challenges encountered in real-world optimization problems. The dependence of performance on the conditioning of the problem and on the rotational invariance of the algorithms is investigated in particular.
No abstract available
The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is widely accepted as a robust derivative-free continuous optimization algorithm for non-linear and non-convex optimization problems. CMA-ES is well known to be almost parameterless, meaning that only one hyper-parameter, the population size, is proposed to be tuned by the user. In this paper, we propose a principled approach called self-CMA-ES to achieve the online adaptation of CMA-ES hyper-parameters in order to improve its overall performance. Experimental results show that for larger-than-default population size, the default settings of hyper-parameters of CMA-ES are far from being optimal, and that self-CMA-ES allows for dynamically approaching optimal settings.
Automated algorithm performance prediction in numerical blackbox optimization often relies on problem characterizations, such as exploratory landscape analysis features. These features are typically used as inputs to machine learning models and are represented in a tabular format. However, such approaches often overlook algorithm configurations, a key factor influencing performance. The relationships between algorithm operators, parameters, problem characteristics, and performance outcomes form a complex structure best represented as a graph. This work explores the use of heterogeneous graph data structures and graph neural networks to predict the performance of optimization algorithms by capturing the complex dependencies between problems, algorithm configurations, and performance outcomes. We focus on two modular frameworks, modCMA-ES and modDE, which decompose two widely used derivative-free optimization algorithms: the covariance matrix adaptation evolution strategy (CMA-ES) and differential evolution (DE). We evaluate 324 modCMA-ES and 576 modDE variants on 24 BBOB problems across six runtime budgets and two problem dimensions. Achieving up to 36.6% improvement in MSE over traditional tabular-based methods, this work highlights the potential of geometric learning in black-box optimization.
This paper proposes a surrogate based, global-search derivative free algorithm which is specifically designed for computationally expensive black-box (simulation-based) optimization problems with constraints. The algorithm, called SCR (Surrogate-CMAES-RQLIF), uses kriging to generate surrogate models of the black-box objective function and black-box constraints. These surrogate models are optimized using the global-search algorithm CMA-ES with the quadratic penalty approach for the constraints. The quality of the kriging surrogates is checked, and the surrogates are updated at each iteration by adding the point found by CMA-ES and additional training points. Once the region of the global optimum has been approximately defined, local search is performed using the hybrid direct-search/model based algorithm RQLIF. After each iteration the points sampled by RQLIF and some additional points found within the optimal region are used to update the surrogate model. Tests on 25 unconstrained and 21 constrained literature test problems show that SCR outperforms benchmark optimization algorithms. The outstanding performance of SCR is also confirmed on two real-world black-box problems arising in process engineering with computationally expensive simulations: the techno-economic optimization of a CO2 Purification Unit (CPU) and a Vacuum Pressure Swing Adsorption Unit (VPSA).
Modular algorithm frameworks not only allow for combinations never tested in manually selected algorithm portfolios, but they also provide a structured approach to assess which algorithmic ideas are crucial for the observed performance of algorithms. In this study, we propose a methodology for analyzing the impact of the different modules on the overall performance. We consider modular frameworks for two widely used families of derivative-free black-box optimization algorithms, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) and differential evolution (DE). More specifically, we use performance data of 324 modCMA-ES and 576 modDE algorithm variants (with each variant corresponding to a specific configuration of modules) obtained on the 24 BBOB problems for 6 different runtime budgets in 2 dimensions. Our analysis of these data reveals that the impact of individual modules on overall algorithm performance varies significantly. Notably, among the examined modules, the elitism module in CMA-ES and the linear population size reduction module in DE exhibit the most significant impact on performance. Furthermore, our exploratory data analysis of problem landscape data suggests that the most relevant landscape features remain consistent regardless of the configuration of individual modules, but the influence that these features have on regression accuracy varies. In addition, we apply classifiers that exploit feature importance with respect to the trained models for performance prediction and performance data, to predict the modular configurations of CMA-ES and DE algorithm variants. The results show that the predicted configurations do not exhibit a statistically significant difference in performance compared to the true configurations, with the percentage varying depending on the setup (from 49.1% to 95.5% for mod-CMA and 21.7% to 77.1% for DE).
We focus on the challenge of finding a diverse collection of quality solutions on complex continuous domains. While quality diversity (QD) algorithms like Novelty Search with Local Competition (NSLC) and MAP-Elites are designed to generate a diverse range of solutions, these algorithms require a large number of evaluations for exploration of continuous spaces. Meanwhile, variants of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) are among the best-performing derivative-free optimizers in single-objective continuous domains. This paper proposes a new QD algorithm called Covariance Matrix Adaptation MAP-Elites (CMA-ME). Our new algorithm combines the self-adaptation techniques of CMA-ES with archiving and mapping techniques for maintaining diversity in QD. Results from experiments based on standard continuous optimization benchmarks show that CMA-ME finds better-quality solutions than MAP-Elites; similarly, results on the strategic game Hearthstone show that CMA-ME finds both a higher overall quality and broader diversity of strategies than both CMA-ES and MAP-Elites. Overall, CMA-ME more than doubles the performance of MAP-Elites using standard QD performance metrics. These results suggest that QD algorithms augmented by operators from state-of-the-art optimization algorithms can yield high-performing methods for simultaneously exploring and optimizing continuous search spaces, with significant applications to design, testing, and reinforcement learning among other domains.
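The archiving half of such a quality-diversity algorithm is easy to sketch. The cell resolution and the improvement signal below are illustrative choices, not CMA-ME's exact implementation:

```python
def cell(behavior, resolution=0.5):
    """Discretize a behavior descriptor into a MAP-Elites archive cell."""
    return tuple(int(b // resolution) for b in behavior)

def insert_into_archive(archive, solution, behavior, fitness):
    """MAP-Elites-style update: keep only the best solution per cell.
    Returns True on an improvement, the kind of signal a CMA-ES-style
    emitter could use to rank its samples."""
    key = cell(behavior)
    incumbent = archive.get(key)
    if incumbent is None or fitness > incumbent[1]:
        archive[key] = (solution, fitness)
        return True
    return False

archive = {}
insert_into_archive(archive, [0.1, 0.2], (0.3, 0.7), fitness=1.0)  # new cell
insert_into_archive(archive, [0.2, 0.2], (0.4, 0.6), fitness=2.0)  # same cell, better
```

The combination described in the abstract then replaces random mutation with CMA-ES-style sampling, adapting the search distribution toward regions of the behavior space where archive improvements occur.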
No abstract available
Recently it was shown by Nesterov (2011) that techniques from convex optimization can be used to successfully accelerate simple derivative-free randomized optimization methods. The appeal of those schemes lies in their low complexity, which is only Θ(n) per iteration, compared to Θ(n^2) for algorithms storing second-order information or covariance matrices. From a high-level point of view, those accelerated schemes employ correlations between successive iterates, a concept similar to the evolution path used in Covariance Matrix Adaptation Evolution Strategies (CMA-ES). In this contribution, we (i) implement and empirically test a simple accelerated random search scheme (SARP). Our study is the first to provide numerical evidence that SARP can effectively be implemented with adaptive step-size control and does not require access to gradient or advanced line-search oracles. We (ii) try to empirically verify the supposed analogy between the evolution path and SARP. We propose an algorithm, CMA-EP, that uses only the evolution path to bias the search. This algorithm can be generalized to a family of low-memory schemes with complexity Θ(mn) per iteration, following a recent approach by Loshchilov (2014). The study shows that the performance of CMA-EP heavily depends on the spectrum of the objective function, and thus it cannot accelerate as consistently as SARP.
Optimal implementation of non-conventional wells can considerably increase hydrocarbon recovery. Given the high drilling cost and the potential improvement in well productivity, well placement is an important decision in field development. Considering complex reservoir geology and high reservoir heterogeneity, stochastic optimization methods are the most suitable approaches for optimal well placement. This paper proposes an optimization methodology to determine optimal well location and trajectory based upon the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), a variant of Evolution Strategies recognized as one of the most powerful derivative-free optimizers for continuous optimization. To improve the optimization procedure, two new techniques are investigated: (1) adaptive penalization with rejection is developed to handle well placement constraints; (2) a meta-model, based on locally weighted regression, is incorporated into CMA-ES using an approximate ranking procedure. We can thereby reduce the number of reservoir simulations, which are computationally expensive. Several examples are presented. Our new approach is compared with a Genetic Algorithm incorporating the Genocop III technique. It is shown that our approach outperforms the genetic algorithm: it leads in general to both a higher NPV and a significant reduction in the number of reservoir simulations.
Evolution Strategies (ESs) are stochastic derivative-free optimization algorithms whose most prominent representative, the CMA-ES algorithm, is widely used to solve difficult numerical optimization problems. We provide the first rigorous investigation of the linear convergence of step-size adaptive ESs involving a population and recombination, two ingredients crucially important in practice for robustness to local irregularities or multimodality. We investigate the convergence of step-size adaptive ESs with weighted recombination on composites of strictly increasing functions with continuously differentiable scaling-invariant functions with a global optimum. This function class includes functions with non-convex sublevel sets and discontinuous functions. We prove the existence of a constant r such that the logarithm of the distance to the optimum divided by the number of iterations converges to r. The constant is given as an expectation with respect to the stationary distribution of a Markov chain; its sign allows us to infer linear convergence or divergence of the ES and is found numerically. Our main condition for convergence is the increase of the expected log step-size on linear functions. In contrast to previous results, our condition is equivalent to the almost sure geometric divergence of the step-size on linear functions.
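Linear convergence of the kind proven here, a log-distance to the optimum that decreases at an asymptotically constant rate per iteration, is easy to observe empirically even for the simplest step-size adaptive ES. The sketch below uses a (1+1)-ES with the classical 1/5th-success rule on the sphere function, a far simpler setting than the recombination ESs analyzed in the paper:

```python
import math
import random

def one_plus_one_es(f, x, sigma, iters, rng):
    """(1+1)-ES with the 1/5th-success rule; returns the final point and
    the trace of log distances to the optimum (assumed at the origin)."""
    fx = f(x)
    trace = []
    for _ in range(iters):
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        if fy <= fx:
            x, fx = y, fy
            sigma *= 1.5                # expand step size on success
        else:
            sigma *= 1.5 ** -0.25       # shrink on failure (1/5th rule)
        dist = math.sqrt(sum(xi * xi for xi in x))
        trace.append(math.log(max(dist, 1e-300)))
    return x, trace

rng = random.Random(7)
sphere = lambda x: sum(xi * xi for xi in x)
_, trace = one_plus_one_es(sphere, [1.0] * 5, 0.3, 600, rng)
r = (trace[-1] - trace[0]) / len(trace)   # empirical convergence rate (negative)
```

The empirical rate r plays the role of the constant in the theorem: the log distance decreases roughly linearly in the iteration count, because the adapted step size tracks the remaining distance to the optimum.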
Evolution Strategies such as CMA-ES (covariance matrix adaptation evolution strategy) and NES (natural evolution strategy) have been widely used in machine learning applications, where an objective function is optimized without using its derivatives. However, the convergence behaviors of these algorithms have not been carefully studied. In particular, there is no rigorous analysis of the convergence of the estimated covariance matrix, and it is unclear how the estimated covariance matrix helps the algorithm converge. The relationship between Evolution Strategies and derivative-free optimization algorithms is also not clear. In this paper, we propose a new algorithm closely related to NES, which we call MiNES (mirror descent natural evolution strategy), for which we can establish rigorous convergence results. We show that the estimated covariance matrix of MiNES converges to the inverse of the Hessian matrix of the objective function with a sublinear convergence rate. Moreover, we show that some derivative-free optimization algorithms are special cases of MiNES. Our empirical studies demonstrate that MiNES is a query-efficient optimization algorithm competitive with classical algorithms including NES and CMA-ES.
In many practical optimization problems, the derivatives of the functions to be optimized are unavailable or unreliable. Such optimization problems are solved using derivative-free optimization techniques. One of the state-of-the-art techniques for derivative-free optimization is the covariance matrix adaptation evolution strategy (CMA-ES). However, the complexity of the CMA-ES algorithm makes it undesirable for tasks where fast optimization is needed. To reduce the execution time of CMA-ES, a parallel implementation is proposed, and its performance is analyzed using the benchmark problems in the PythOPT optimization environment.
No abstract available
Evolution strategies have been successfully applied to optimization problems with rugged, multi-modal fitness landscapes, to nonlinear problems, and to derivative-free optimization. Usually evolution is performed by exploiting the structure of the objective function. In this paper, we present an approach that harnesses the adapting quantum potential field determined by the spatial distribution of elitist solutions as guidance for the next generation. The potential field evolves toward a smoother surface, leveling local optima while keeping the global structure, which in turn allows for faster convergence of the solution set. We demonstrate the applicability and the competitiveness of our approach compared with particle swarm optimization and the well-established evolution strategy CMA-ES.
No abstract available
Many state-of-the-art automated machine learning (AutoML) systems use greedy ensemble selection (GES) by Caruana et al. (2004) to ensemble models found during model selection post hoc. This boosts predictive performance and likely follows Auto-Sklearn 1's insight that alternatives, like stacking or gradient-free numerical optimization, overfit. Overfitting in Auto-Sklearn 1 is much more likely than in other AutoML systems because it uses only low-quality validation data for post hoc ensembling. Therefore, we were motivated to analyze whether Auto-Sklearn 1's insight holds true for systems with higher-quality validation data. Consequently, we compared the performance of the covariance matrix adaptation evolution strategy (CMA-ES), a state-of-the-art gradient-free numerical optimizer, to GES on the 71 classification datasets from the AutoML benchmark for AutoGluon. We found that Auto-Sklearn's insight depends on the chosen metric. For the metric ROC AUC, CMA-ES overfits drastically and is outperformed by GES, statistically significantly for multi-class classification. For the metric balanced accuracy, CMA-ES does not overfit and outperforms GES significantly. Motivated by the successful application of CMA-ES for balanced accuracy, we explored methods to stop CMA-ES from overfitting for ROC AUC. We propose a method to normalize the weights produced by CMA-ES, inspired by GES, that avoids overfitting for CMA-ES and makes CMA-ES perform better than or similar to GES for ROC AUC.
No abstract available
Motivated by Danskin's theorem, gradient-based methods have been applied with empirical success to solve minimax problems that involve non-convex outer minimization and non-concave inner maximization. On the other hand, recent work has demonstrated that Evolution Strategies (ES) algorithms are stochastic gradient approximators that seek robust solutions. In this paper, we address black-box (gradient-free) minimax problems that have long been tackled in a coevolutionary setup. To this end and guaranteed by Danskin's theorem, we employ ES as a stochastic estimator for the descent direction. The proposed approach is validated on a collection of black-box minimax problems. Based on our experiments, our method's performance is comparable with its coevolutionary counterparts and favorable for high-dimensional problems. Its efficacy is demonstrated on a real-world application.
No abstract available
Proportional integral derivative (PID) controllers are important and widely used tools in system control. Tuning of the controller gains is a laborious task, especially for complex systems such as combustion engines. To minimize the time of an engineer for tuning of the gains in a simulation software, we propose to formulate a part of the problem as a black-box optimization task. In this paper, we summarize the properties and practical limitations of tuning of the gains in this particular application. We investigate the latest methods of black-box optimization and conclude that the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) with bi-population restart strategy, elitist parent selection and active covariance matrix adaptation is best suited for this task. Details of the algorithm's experiment-based calibration are explained as well as derivation of a suitable objective function. The method's performance is compared with that of PSO and SHADE. Finally, its usability is verified on six models of real engines.
This study modifies the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) algorithm for multi-modal optimization problems. The enhancements focus on addressing the challenges of multiple global minima, improving the algorithm's ability to maintain diversity and explore complex fitness landscapes. We incorporate niching strategies and dynamic adaptation mechanisms to refine the algorithm's performance in identifying and optimizing multiple global optima. The algorithm generates a population of candidate solutions by sampling from a multivariate normal distribution centered around the current mean vector, with the spread determined by the step size and covariance matrix. Each solution's fitness is evaluated as a weighted sum of its contributions to all global minima, maintaining population diversity and preventing premature convergence. We implemented the algorithm on 8 tunable composite functions for the GECCO 2024 Competition on Benchmarking Niching Methods for Multi-Modal Optimization (MMO), adhering to the competition's benchmarking framework. The results are presented using several measures, such as peak ratio and F1 score across various dimensions. They demonstrate the algorithm's robustness and effectiveness in handling both global optimization and MMO-specific challenges, providing a comprehensive solution for complex multi-modal optimization problems.
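The sampling step described in the abstract, candidates drawn from a multivariate normal centered at the mean with spread set by the step size and covariance matrix, can be sketched via a Cholesky factor of the covariance. The 2-D numbers below are illustrative:

```python
import random

def sample_population(mean, sigma, chol, lam, rng):
    """Draw lam candidates x = mean + sigma * L z with z ~ N(0, I), where
    L (lower-triangular) is a Cholesky factor of the covariance matrix C,
    so the candidates are distributed as N(mean, sigma^2 * C)."""
    n = len(mean)
    pop = []
    for _ in range(lam):
        z = [rng.gauss(0.0, 1.0) for _ in range(n)]
        x = [mean[i] + sigma * sum(chol[i][j] * z[j] for j in range(i + 1))
             for i in range(n)]
        pop.append(x)
    return pop

rng = random.Random(0)
L = [[1.0, 0.0], [0.5, 1.0]]      # C = L L^T = [[1, 0.5], [0.5, 1.25]]
pop = sample_population([0.0, 0.0], 0.3, L, 5000, rng)
var0 = sum(x[0] ** 2 for x in pop) / len(pop)  # mathematically sigma^2 = 0.09
```

In full CMA-ES the factor L (or an eigen-decomposition of C) is updated each generation from the selected samples; a niching variant additionally maintains several such distributions, one per basin.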
No abstract available
This paper explores the theoretical basis of the covariance matrix adaptation evolution strategy (CMA-ES) from the information geometry viewpoint. To establish a theoretical foundation for the CMA-ES, we focus on a geometric structure of a Riemannian manifold of probability distributions equipped with the Fisher metric. We define a function on the manifold which is the expectation of fitness over the sampling distribution, and regard the goal of the update of the sampling distribution parameters in the CMA-ES as maximization of the expected fitness. We investigate steepest-ascent learning for expected fitness maximization, where the steepest ascent direction is given by the natural gradient: the product of the inverse of the Fisher information matrix and the conventional gradient of the function. Our first result is that, under some types of parameterization of the multivariate normal distribution, we can obtain the natural gradient of the expected fitness without inverting the Fisher information matrix. We find that the update of the distribution parameters in the CMA-ES is the same as natural gradient learning for expected fitness maximization. Our second result is that we derive the range of learning rates such that a step in the direction of the exact natural gradient improves the parameters in the expected fitness. We see from the close relation between the CMA-ES and natural gradient learning that the default setting of learning rates in the CMA-ES seems suitable in terms of monotone improvement in expected fitness. Then, we discuss the relation to the expectation-maximization framework and provide an information geometric interpretation of the CMA-ES.
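For a Gaussian search distribution N(m, C) with samples x_i and fitness-based weights w_i, the inversion-free natural gradient step can be written schematically (with η denoting the learning rates) as:

```latex
m \leftarrow m + \eta_m \sum_{i=1}^{\lambda} w_i \,(x_i - m),
\qquad
C \leftarrow C + \eta_C \sum_{i=1}^{\lambda} w_i \left( (x_i - m)(x_i - m)^{\top} - C \right),
```

which coincides with the mean update and the rank-μ covariance update of CMA-ES, the identification the paper establishes.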
No abstract available
In this work, we propose a new variant of natural evolution strategies (NES) for high-dimensional black-box optimization problems. The proposed method, CR-FM-NES, extends a recently proposed state-of-the-art NES, Fast Moving Natural Evolution Strategy (FM-NES), to be applicable to high-dimensional problems. CR-FM-NES builds on the idea of using a restricted representation of the covariance matrix instead of a full covariance matrix, while inheriting the efficiency of FM-NES. The restricted representation of the covariance matrix enables CR-FM-NES to update the parameters of a multivariate normal distribution in linear time and space complexity, which can be applied to high-dimensional problems. Our experimental results reveal that CR-FM-NES does not lose the efficiency of FM-NES; on the contrary, CR-FM-NES achieves significant speedups compared to FM-NES on some benchmark problems. Furthermore, our numerical experiments using 200-, 600-, and 1000-dimensional benchmark problems demonstrate that CR-FM-NES is effective compared with the scalable baseline methods VD-CMA and Sep-CMA.
This paper presents a natural evolution strategy for implicitly constrained black-box function optimization. Black-box function optimization is challenging because explicit representations of objective functions are not given, and only evaluation values of solutions can be used. In implicitly constrained black-box problems, constraints are not explicitly given, and only the feasibility of a solution is obtained when the objective function is evaluated. In other words, the amount of constraint violation cannot be obtained, which makes the optimization difficult. Natural Evolution Strategies (NES) are a promising framework for black-box function optimization. DX-NES is an improved version of xNES, a promising NES that uses a multivariate normal distribution as the probability distribution. DX-NES has been reported to show good performance on unconstrained black-box function optimization problems. However, DX-NES has a serious problem in that its performance degrades when applied to implicitly constrained problems. In order to address this problem, we propose DX-NES taking account of Implicit Constraints (DX-NES-IC). In experiments using benchmark problems and a lens system design problem, DX-NES-IC showed better performance than DX-NES, xNES, CMA-ES, and their variants with the resampling technique, in terms of the number of successful trials and the number of evaluations, where the resampling technique is a constraint handling method applicable to implicitly constrained problems.
The mutation strength ($\sigma$) adaptation of a multi-recombinative self-adapting Evolution Strategy is investigated on the Rastrigin test function by theoretical and experimental means. Sampling $\sigma$ from a log-normal distribution reveals the occurrence of an undesired steady state under high multimodality, which halts the $\sigma$-adaptation and prevents convergence. It is shown that an inherent bias is the reason for this steady state when sampling log-normal mutations. Therefore, sampling from a normal distribution is introduced as an alternative. Normal sampling does not exhibit the steady-state behavior and is more stable when optimizing functions under high multimodality.
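One simple manifestation of the sampling bias (a minimal numerical sketch, not the paper's full analysis; `tau` is an illustrative learning parameter) is that log-normal perturbation of $\sigma$ is biased upward in expectation before selection acts, while the additive normal variant is mean-neutral:

```python
import math
import random

random.seed(1)
tau, n = 0.3, 200_000

# Log-normal self-adaptation: offspring step size sigma' = sigma * exp(tau*N(0,1)).
# Since E[exp(tau*N)] = exp(tau^2/2) > 1, the sampled step size is biased
# upward in expectation before selection acts.
lognormal_mean = sum(math.exp(tau * random.gauss(0, 1)) for _ in range(n)) / n

# Normal (additive) sampling: sigma' = sigma * (1 + tau*N(0,1)) keeps mean sigma.
normal_mean = sum(1.0 + tau * random.gauss(0, 1) for _ in range(n)) / n

print(round(lognormal_mean, 3))   # close to exp(tau^2/2) ~ 1.046
print(round(normal_mean, 3))      # close to 1.0
```

Whether this particular multiplicative bias is the one driving the steady state analyzed in the paper is not claimed here; the snippet only shows that the two sampling schemes differ in mean behavior.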
Motivated by parallel optimization, we experiment with EDA-like adaptation rules in the case of large ***. The rule we use, essentially based on the estimation of multivariate normal algorithm, is (i) compliant with all families of distributions for which a density estimation algorithm exists, (ii) simple, (iii) parameter-free, and (iv) better than current rules in this framework of large ***. The speed-up as a function of *** is consistent with theoretical bounds.
Drift analysis is a great tool for proving that optimization algorithms work the way we think they do, and for analyzing them, potentially in great detail. In this talk, I will discuss drift analysis for evolution strategies. These algorithms exhibit linear convergence on a wide range of problems, which corresponds to a linear decrease of the logarithmic distance of the best-so-far sample from the optimum, giving rise to simple additive drift. That behavior is enabled by online adaptation of the step size, which decays at the same rate as the distance to the optimum. Moreover, modern evolution strategies like CMA-ES adapt not only the step size, but rather the full covariance matrix of their sampling distribution. The mechanism enables convergence at a problem-independent rate that depends only on the dimension of the search space. The primary challenge of proving the convergence of CMA-ES lies in establishing the stability of the adaptation process, which was recently achieved by analyzing the invariant Markov chain that describes the parameter adaptation process. Yet, a drift-based analysis is still desirable because it can yield much more fine-grained results. For instance, it can provide details about the transient adaptation phase, which often takes up the lion's share of the time for solving the problem. To achieve this, we need a potential function that appropriately penalizes unsuitable parameter configurations, or more precisely, configurations the algorithm tends to move away from. Designing a potential function that captures the dynamics of covariance matrix adaptation is an ongoing challenge. I will present our recent research efforts towards this goal and emphasize why relatively simple additive drift offers a powerful framework for achieving it.
A first and second order progress rate analysis was conducted for the intermediate multi-recombinative Evolution Strategy $(\mu/\mu_I, \lambda)$-ES with isotropic scale-invariant mutations on the highly multimodal Rastrigin test function. Closed-form analytic solutions for the progress rates are obtained in the limit of large dimensionality and large populations. The first order results are able to model the one-generation progress, including local attraction phenomena. Furthermore, a second order progress rate is derived, yielding additional correction terms and further improving the progress model. The obtained results are compared to simulations and show good agreement, even for moderately large populations and dimensionality. The progress rates are applied within a dynamical systems approach, which models the evolution using difference equations. The obtained dynamics are compared to real averaged optimization runs and yield good agreement. The results improve further when dimensionality and population size are increased. Local and global convergence is investigated within the given model, showing that large mutations are needed to maximize the probability of global convergence, which comes at the expense of efficiency. An outlook on future research goals is provided.
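An empirical counterpart to the first-order progress rate is easy to set up: simulate one generation of a $(\mu/\mu_I, \lambda)$-ES with isotropic mutations on Rastrigin and average the one-generation fitness gain (the start point, mutation strength, and population sizes below are illustrative choices, not the paper's settings; with small mutations the strategy progresses toward the nearest local attractor, echoing the local attraction phenomena mentioned above):

```python
import math
import random

random.seed(2)

def rastrigin(x, A=10.0):
    return A * len(x) + sum(xi * xi - A * math.cos(2 * math.pi * xi) for xi in x)

def one_generation(parent, sigma, mu, lam):
    # One (mu/mu_I, lambda)-ES step: isotropic mutation, truncation
    # selection, intermediate (centroid) recombination.
    offspring = [[p + sigma * random.gauss(0, 1) for p in parent] for _ in range(lam)]
    offspring.sort(key=rastrigin)
    return [sum(col) / mu for col in zip(*offspring[:mu])]

n, trials = 10, 200
parent = [4.7] * n              # a point on a ripple slope, far from the optimum
f_parent = rastrigin(parent)
progress = 0.0
for _ in range(trials):
    child = one_generation(parent, sigma=0.05, mu=10, lam=40)
    progress += f_parent - rastrigin(child)
progress /= trials
print(progress > 0)   # positive one-generation progress toward the local attractor
```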
The field of adversarial robustness has attracted significant attention in machine learning. Contrary to the common approach of training models that are accurate in average case, it aims at training models that are accurate for worst case inputs, hence it yields more robust and reliable models. Put differently, it tries to prevent an adversary from fooling a model. The study of adversarial robustness is largely focused on $\ell_p-$bounded adversarial perturbations, i.e. modifications of the inputs, bounded in some $\ell_p$ norm. Nevertheless, it has been shown that state-of-the-art models are also vulnerable to other more natural perturbations such as affine transformations, which were already considered in machine learning within data augmentation. This project reviews previous work in spatial robustness methods and proposes evolution strategies as zeroth order optimization algorithms to find the worst affine transforms for each input. The proposed method effectively yields robust models and allows introducing non-parametric adversarial perturbations.
The covariance matrix adaptation evolution strategy (CMA-ES) is a powerful evolutionary algorithm for single-objective real-valued optimization. However, the time and space complexity may preclude its use in high-dimensional decision space. Recent studies suggest that putting sparse or low-rank constraints on the structure of the covariance matrix can improve the efficiency of CMA-ES in handling large-scale problems. Following this idea, this paper proposes a search direction adaptation evolution strategy (SDA-ES) which achieves linear time and space complexity. SDA-ES models the covariance matrix with an identity matrix and multiple search directions, and uses a heuristic to update the search directions in a way similar to the principal component analysis. We also generalize the traditional 1/5th success rule to adapt the mutation strength which exhibits the derandomization property. Numerical comparisons with nine state-of-the-art algorithms are carried out on 31 test problems. The experimental results have shown that SDA-ES is invariant under search-space rotational transformations, and is scalable with respect to the number of variables. It also achieves competitive performance on generic black-box problems, demonstrating its effectiveness in keeping a good tradeoff between solution quality and computational efficiency.
To guide the design of better iterative optimisation heuristics, it is imperative to understand how inherent structural biases within algorithm components affect the performance on a wide variety of search landscapes. This study explores the impact of structural bias in the modular Covariance Matrix Adaptation Evolution Strategy (modCMA), focusing on the roles of the various modules within the algorithm. Through an extensive investigation involving 435,456 configurations of modCMA, we identified key modules that significantly influence structural bias of various classes. Our analysis utilized the Deep-BIAS toolbox for structural bias detection and classification, complemented by SHAP analysis for quantifying module contributions. The performance of these configurations was tested on a sequence of affine-recombined functions, maintaining fixed optimum locations while gradually varying the landscape features. Our results demonstrate an interplay between module-induced structural bias and algorithm performance across different landscape characteristics.
We combine a refined version of two-point step-size adaptation with the covariance matrix adaptation evolution strategy (CMA-ES). Additionally, we suggest polished formulae for the learning rate of the covariance matrix and the recombination weights. In contrast to cumulative step-size adaptation or to the 1/5-th success rule, the refined two-point adaptation (TPA) does not rely on any internal model of optimality. In contrast to conventional self-adaptation, the TPA will achieve a better target step-size, in particular with large populations. The disadvantage of TPA is that it relies on two additional objective function evaluations per iteration.
In this paper, we present the CMA-ES algorithm with a step-size adaptation rule which is inspired by the 1/5-th success rule. The method, called PPMF (Previous Population Midpoint Fitness), adjusts the step-size multiplier σ using the comparison of fitness values of the current population of points and the midpoint of the previous population. In the paper we compare the performance of CMA-ES coupled with PPMF and with two other step-size adaptation rules: Cumulative Step-size Adaptation (CSA) and Median Success Rule (MSR). For the comparison we apply a version of the IPOP-CMA-ES strategy and we test its performance using three benchmark suites: CEC 2013, CEC 2017 and CEC 2021. The results show that the efficiency of PPMF is comparable with CSA and superior to MSR.
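A schematic sketch in the spirit of such generalized 1/5-th success rules (success is counted against the fitness of the previous population's midpoint; the target rate and damping constant below are hypothetical, not the PPMF constants from the paper) shows the mechanism driving a simple recombinative ES to the optimum:

```python
import math
import random

random.seed(3)

def sphere(x):
    return sum(xi * xi for xi in x)

n, lam, target, damping = 5, 10, 0.25, 0.3
mean = [1.0] * n
sigma = 1.0
midpoint_f = sphere(mean)      # fitness of the previous population's midpoint
for _ in range(400):
    pop = [[m + sigma * random.gauss(0, 1) for m in mean] for _ in range(lam)]
    # Success rate: fraction of current offspring beating the previous midpoint.
    rate = sum(sphere(x) < midpoint_f for x in pop) / lam
    sigma *= math.exp(damping * (rate - target))   # generalized 1/5-th rule
    pop.sort(key=sphere)
    mean = [sum(col) / (lam // 2) for col in zip(*pop[: lam // 2])]
    midpoint_f = sphere(mean)
print(sphere(mean) < 1e-6)
```

A success rate above the target indicates the step size is too timid and σ is inflated; a rate below the target shrinks σ, which yields log-linear convergence on the sphere.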
In this paper, we present a comparison of various step-size adaptation rules for the Matrix Adaptation Evolution Strategy (MA-ES) algorithm, which is a lightweight version of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). In contrast to CMA-ES, MA-ES does not require numerically complex covariance matrix factorization. We take a step further in this direction and provide a quantitative assessment of alternative step-size rules to Cumulative Step Adaptation (CSA), which is considered to be a state-of-the-art method. Our study shows that generalized 1/5-th success rules like the Previous Population Midpoint Fitness rule (PPMF) or the Population Success Rule (PSR) exhibit comparable or superior performance to the CSA rule on standard benchmark problems, including the CEC benchmark suites.
Covariance matrix adaptation evolution strategy (CMA-ES) is a successful evolutionary algorithm for continuous optimization. Cumulative step size adaptation (CSA) is the default method for adapting the step size in CMA-ES and achieves remarkable performance in practice. It follows the idea of exploiting the correlations between search directions in consecutive generations. In this paper, we investigate the effects of the update directions and their length in CSA. We find that sampling from a uniform sphere distribution can yield comparable performance to the default setting, and that normalizing the update direction in CSA does not hurt performance. Further, CSA with the actual search direction does not perform well, and the correlation between the default direction of CSA and the actual search direction is rather small. This indicates that CSA does not accumulate the actual search directions. We argue that the step size and the covariance matrix should be updated using decoupled search information, and that accumulating the actual search direction results in inferior performance because the two evolution paths for the step-size and covariance matrix updates then share the same update direction.
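The path-length signal that CSA relies on can be isolated in a few lines. A sketch (the cumulation constant and thresholds are illustrative, and `y` stands for an already-normalized selected step rather than a full CMA-ES update):

```python
import math
import random

random.seed(4)

# Core CSA mechanism: an evolution path p <- (1-c)*p + sqrt(c*(2-c))*y
# accumulates the selected steps y. Under random selection (y ~ N(0, I))
# ||p|| stays near E||N(0, I)||; systematically aligned steps make the
# path long, which is the signal CSA uses to enlarge the step size.
n, c, gens = 20, 0.3, 400
chi_n = math.sqrt(n) * (1 - 1 / (4 * n) + 1 / (21 * n * n))  # approx E||N(0, I)||

def final_path_length(correlated):
    p = [0.0] * n
    for _ in range(gens):
        y = [1.0] * n if correlated else [random.gauss(0, 1) for _ in range(n)]
        p = [(1 - c) * pi + math.sqrt(c * (2 - c)) * yi for pi, yi in zip(p, y)]
    return math.sqrt(sum(pi * pi for pi in p))

random_len = final_path_length(False)
aligned_len = final_path_length(True)
print(random_len < 1.8 * chi_n)    # near chi_n: no step-size change signal
print(aligned_len > 1.8 * chi_n)   # inflated path: sigma would be increased
```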
High-dimensional black-box optimization problems often involve a property referred to as low effective dimensionality (LED), in which design variables contain many redundant elements that scarcely affect the objective function value. The covariance matrix adaptation evolution strategy (CMA-ES) suffers a performance deterioration on objective functions with LED because the redundant dimensions lead to modest hyperparameter settings and slow down the adaptation of the step-size. In this study, we focus on the separable CMA-ES (sep-CMA-ES), a variant of CMA-ES that restricts the covariance matrix to be diagonal, and propose a method to estimate the effectiveness of each dimension using the element-wise signal-to-noise ratios in the mean vector update and rank-µ update. Using the estimated effectiveness, we construct two countermeasures for LED, including an adaptation of hyperparameters and a refinement of the update rule in the cumulative step-size adaptation and two-point step-size adaptation, proposing sep-CMA-ES-LED. We experimentally showed that sep-CMA-ES-LED performed well on several benchmark functions with LED compared to the original sep-CMA-ES.
The evolution strategy with covariance matrix adaptation (CMA-ES) is a well-known algorithm in the family of evolution strategies. It consists of three main mechanisms: derandomized adaptation, cumulative step size, and covariance matrix adaptation. In this paper we aim to improve CMA-ES and its performance through proper parameter values, a better repair mechanism, removal of the rank-one update, and a spread-based step-size adaptation mechanism. We tested the proposed algorithm on the ten test functions of the CEC2019 100-Digit Challenge. The results showed that our algorithm achieves similar solution quality to two recent hybrid adaptive evolutionary algorithms while needing fewer fitness function evaluations.
A pivotal challenge in meta-heuristic optimization is the lack of knowledge inheritance in heuristic rules. For example, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) requires generating historical data from scratch for its adaptation mechanisms for each new instance, demanding an extensive number of fitness evaluations. This severely limits the practicality of meta-heuristics, especially under restricted evaluation budgets, hindering efficient navigation in high-dimensional spaces. To overcome this, we integrate Proximal Policy Optimization (PPO) with a vanilla Evolution Strategy (ES), forming the novel PPO-ES approach. Distinct from other adaptive ES variants like CMA-ES with cumulative path calculation and covariance matrix adaptation, it leverages PPO's capability for dynamic step-size adjustment. Our method streamlines the optimization process and incorporates a meticulously designed reward system to adeptly navigate the scalability challenge, significantly enhancing the adaptability and efficiency of ES. PPO-ES, trained on part of the bbob benchmarks, was tested on these and on the remaining, unseen problems, and further validated on bbob-largescale benchmarks with much higher dimensions. Results show that PPO-ES achieves faster or comparable convergence to CMA-ES. Our code can be found online: https://github.com/burningxt/PPO-ES_GECCO24.
In the post-Moore era, the main performance gains of black-box optimizers increasingly depend on parallelism, especially for large-scale optimization (LSO). Here we propose to parallelize the well-established covariance matrix adaptation evolution strategy (CMA-ES), in particular its latest LSO variant, the limited-memory CMA-ES (LM-CMA). To achieve efficiency while approximating its powerful invariance property, we present a multilevel learning-based meta-framework for distributed LM-CMA. Owing to its hierarchically organized structure, Meta-ES is well-suited to implement our distributed meta-framework, wherein the outer ES controls strategy parameters while all parallel inner ESs run the serial LM-CMA with different settings. For the distribution mean update of the outer ES, the elitist and multi-recombination strategies are used in parallel to avoid stagnation and regression, respectively. To exploit spatiotemporal information, the global step-size adaptation combines Meta-ES with parallel cumulative step-size adaptation. After each isolation period, our meta-framework employs both structure and parameter learning to combine aligned evolution paths for CMA reconstruction. Experiments on a set of large-scale benchmark functions with memory-intensive evaluations, arguably reflecting many data-driven optimization problems, validate the benefits (e.g., effectiveness w.r.t. solution quality and adaptability w.r.t. second-order learning) and costs of our meta-framework.
We present a novel black box optimization algorithm called the Hessian Estimation Evolution Strategy. The algorithm updates the covariance matrix of its sampling distribution by directly estimating the curvature of the objective function. This algorithm design is targeted at twice continuously differentiable problems. For this, we extend the cumulative step-size adaptation algorithm of the CMA-ES to mirrored sampling. We demonstrate that our approach to covariance matrix adaptation is efficient by evaluating it on the BBOB/COCO testbed. We also show that the algorithm is surprisingly robust when its core assumption of a twice continuously differentiable objective function is violated. The approach yields a new evolution strategy with competitive performance, and at the same time it also offers an interesting alternative to the usual covariance matrix update mechanism.
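The curvature information that mirrored sampling exposes is exactly recoverable on quadratic functions, which is the intuition behind the update. A minimal check (illustrative 2-D quadratic, not the algorithm's actual covariance update):

```python
import random

random.seed(6)

# Mirrored finite difference: for a quadratic f(x) = x^T H x / 2,
# f(m + s*d) + f(m - s*d) - 2*f(m) equals s^2 * d^T H d exactly,
# so a mirrored pair reveals the curvature along direction d.
H = [[4.0, 1.0], [1.0, 9.0]]

def f(x):
    return 0.5 * sum(x[i] * H[i][j] * x[j] for i in range(2) for j in range(2))

m = [0.7, -1.2]
s = 0.5
d = [random.gauss(0, 1), random.gauss(0, 1)]
plus = [m[i] + s * d[i] for i in range(2)]
minus = [m[i] - s * d[i] for i in range(2)]
estimate = (f(plus) + f(minus) - 2.0 * f(m)) / (s * s)
exact = sum(d[i] * H[i][j] * d[j] for i in range(2) for j in range(2))
print(abs(estimate - exact) < 1e-9)
```

On merely twice continuously differentiable functions the same expression yields the curvature only up to higher-order terms, which is consistent with the robustness discussion above.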
Multimodal optimization requires both exploration and exploitation. Exploration identifies promising attraction basins, while exploitation finds the best solutions within these basins. The balance between exploration and exploitation can be maintained by adjusting parameter settings. The population size adaptation covariance matrix adaptation evolution strategy (PSA-CMA-ES) achieves this balance by dynamically adjusting the population size. PSA-CMA-ES performs well on well-structured multimodal benchmark problems. In weakly structured multimodal problems, however, the algorithm struggles to effectively manage step-size increases, resulting in uncontrolled step-size blow-ups that impede convergence near the global optimum. In this study, we reformulated the step-size correction strategy to overcome this limitation. We analytically identified the cause of the step-size blow-up and demonstrate the existence of a significance level for population size change guiding a safe passage to step-size correction. These insights were incorporated to form the reformulation. Through computer experiments on two weakly structured multimodal benchmark problems, we evaluated the performance of the new approach and compared the results with the state-of-the-art algorithm. The improved algorithm successfully mitigates step-size blow-up, enabling a better balance between exploration and exploitation near the global optimum and enhancing convergence.
Three state-of-the-art adaptive population control strategies (PCS) are theoretically and empirically investigated for a multi-recombinative, cumulative step-size adaptation Evolution Strategy $(\mu/\mu_I, \lambda)$-CSA-ES. First, scaling properties for the generation number and mutation strength rescaling are derived on the sphere in the limit of large population sizes. Then, the adaptation properties of three standard CSA-variants are studied as a function of the population size and dimensionality, and compared to the predicted scaling results. Thereafter, three PCS are implemented along the CSA-ES and studied on a test bed of sphere, random, and Rastrigin functions. The CSA-adaptation properties significantly influence the performance of the PCS, which is shown in more detail. Given the test bed, well-performing parameter sets (in terms of scaling, efficiency, and success rate) for both the CSA- and PCS-subroutines are identified.
The covariance matrix adaptation evolution strategy (CMA-ES) has been widely used in the field of 2D/3D registration in recent years. This optimization method exhibits exceptional robustness and usability for complex surgical scenarios. However, due to the inherent ill-posed nature of the 2D/3D registration task and the presence of numerous local minima in the landscape of similarity measures, evolution strategies often require a larger population size in each generation to ensure the stability of registration and the globality and effectiveness of the search, which makes the entire process computationally expensive. In this paper, we build a 2D/3D registration framework based on a learning-rate-adaptive CMA-ES. The framework employs a fixed and small population size, leading to minimized runtime and optimal utilization of computing resources. We conduct experimental comparisons between the proposed framework and other intensity-based baselines using a substantial volume of synthetic data. The results suggest that our method demonstrates superiority in both registration accuracy and running time compared with other optimization-based baselines. Code is available at github.com/m1nhengChen/CMAES-reg.
The covariance matrix adaptation evolution strategy (CMA-ES) is one of the most successful methods for solving black-box continuous optimization problems. One practically useful aspect of the CMA-ES is that it can be used without hyperparameter tuning. However, the hyperparameter settings still have a considerable impact, especially for difficult tasks such as solving multimodal or noisy problems. In this study, we investigate whether the CMA-ES with default population size can solve multimodal and noisy problems. To perform this investigation, we develop a novel learning rate adaptation mechanism for the CMA-ES, such that the learning rate is adapted so as to maintain a constant signal-to-noise ratio. We investigate the behavior of the CMA-ES with the proposed learning rate adaptation mechanism through numerical experiments, and compare the results with those obtained for the CMA-ES with a fixed learning rate. The results demonstrate that, when the proposed learning rate adaptation is used, the CMA-ES with default population size works well on multimodal and/or noisy problems, without the need for extremely expensive learning rate tuning.
In this paper, an adaptive step-size decay mechanism based on the required accuracy of the problem is proposed to replace the traditional cumulative step-size adaptation (CSA). By simply setting the accuracy requirement of the target problem, our method achieves better results than the existing CSA methods. Compared with CSA, our decay mechanism pays more attention to exploration and avoids falling into local optima. In the mid and late stages, we smooth the decay step to converge to the threshold set by the expert, thus progressively refining the search process. We validate this on the last 10 complex multimodal test problems of the COCO dataset.
In recent years, distributed black-box optimization (DBO) has been widely studied to solve complex optimization problems in multiagent systems (MASs), such as hyperparameter optimization of distributed machine learning. However, most existing methods use a fixed or diminishing step size to sample and search in the black-box optimization space, which makes it challenging to maintain optimization efficiency on different optimization problems. In this work, we propose a multiagent evolution strategy with cooperative and cumulative step adaptation (CCSA-DES). In CCSA-DES, each agent executes the algorithm to sample and explore its local objective function, and communicates with other agents to cooperatively optimize the global objective function, which is the sum of the local objective functions. To improve the sampling adaptability, we design a cooperative and cumulative step adaptation method (CCSA) consisting of inner adaptation and outer adaptation. By detecting the evolution path of the MAS, CCSA decreases the step size when the evolution directions of the agents conflict and increases it when they are consistent. In terms of theoretical analysis, we first discuss the working principle of CCSA, and then discuss the system consensus of CCSA-DES. In terms of experimental verification, CCSA-DES achieves better consensus performance and competitive solution quality compared with state-of-the-art algorithms for DBO.
The population size, i.e., the number of candidate solutions generated at each iteration, is the most critical strategy parameter in the covariance matrix adaptation evolution strategy, CMA-ES, which is one of the state-of-the-art search algorithms for black-box continuous optimization. The population size is required to be larger than its default value when the objective function is well-structured multimodal and/or noisy, while we want to keep it as small as possible for optimization speed. However, strategy parameter tuning based on trial and error is, in general, prohibitively expensive in a black-box optimization scenario. This paper proposes a novel strategy to adapt the population size for CMA-ES. The population size is adapted based on the estimated accuracy of the update of the normal distribution parameters. The CMA-ES with the proposed population size adaptation mechanism, PSA-CMA-ES, is tested both on noiseless and noisy benchmark functions, and compared with existing strategies. The results revealed that PSA-CMA-ES works well on well-structured multimodal and/or noisy functions, but causes an inefficient increase of the population size on unimodal functions. Furthermore, it is shown that PSA-CMA-ES can tackle noise and multimodality at the same time.
The CSA-ES is an Evolution Strategy with Cumulative Step size Adaptation, where the step size is adapted measuring the length of a so-called cumulative path. The cumulative path is a combination of the previous steps realized by the algorithm, where the importance of each step decreases with time. This article studies the CSA-ES on composites of strictly increasing functions with affine linear functions through the investigation of its underlying Markov chains. Rigorous results on the change and the variation of the step size are derived with and without cumulation. The step-size diverges geometrically fast in most cases. Furthermore, the influence of the cumulation parameter is studied.
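The geometric step-size divergence on linear functions can be reproduced with a bare-bones (1,λ)-ES and a simplified CSA rule on f(x) = x[0] (the cumulation and damping constants below are illustrative, not the default CMA-ES settings):

```python
import math
import random

random.seed(5)

# (1,lambda)-ES with a bare-bones CSA on the linear function f(x) = x[0].
# On a linear function the selected steps keep pointing the same way, the
# evolution path grows long, and the step size diverges geometrically,
# which is the desired behaviour in this setting.
n, lam, c, damping = 10, 10, 0.3, 1.0
chi_n = math.sqrt(n) * (1 - 1 / (4 * n) + 1 / (21 * n * n))  # approx E||N(0, I)||
x, sigma = [0.0] * n, 1.0
p = [0.0] * n
for _ in range(300):
    zs = [[random.gauss(0, 1) for _ in range(n)] for _ in range(lam)]
    z = min(zs, key=lambda zz: zz[0])      # ranking f(x + sigma*z) = ranking z[0]
    x = [xi + sigma * zi for xi, zi in zip(x, z)]
    p = [(1 - c) * pi + math.sqrt(c * (2 - c)) * zi for pi, zi in zip(p, z)]
    norm = math.sqrt(sum(pi * pi for pi in p))
    sigma *= math.exp((c / damping) * (norm / chi_n - 1.0))
print(sigma > 1e3, x[0] < 0)
```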
The covariance matrix adaptation evolution strategy (CMA-ES) is one of the most successful methods for solving continuous black-box optimization problems. A practically useful aspect of CMA-ES is that it can be used without hyperparameter tuning. However, the hyperparameter settings still have a considerable impact on performance, especially for difficult tasks, such as solving multimodal or noisy problems. This study comprehensively explores the impact of learning rate on CMA-ES performance and demonstrates the necessity of a small learning rate by considering ordinary differential equations. Thereafter, it discusses the setting of an ideal learning rate. Based on these discussions, we develop a novel learning rate adaptation mechanism for CMA-ES that maintains a constant signal-to-noise ratio. Additionally, we investigate the behavior of CMA-ES with the proposed learning rate adaptation mechanism through numerical experiments and compare the results with those obtained for CMA-ES with a fixed learning rate and with population size adaptation. The results show that CMA-ES with the proposed learning rate adaptation works well for multimodal and/or noisy problems without extremely expensive learning rate tuning.
This paper analyzes a $(1,\lambda)$-Evolution Strategy, a randomized comparison-based adaptive search algorithm, optimizing a linear function with a linear constraint. The algorithm uses resampling to handle the constraint. Two cases are investigated: first, the case where the step-size is constant, and second, the case where the step-size is adapted using cumulative step-size adaptation. We exhibit for each case a Markov chain describing the behavior of the algorithm. Stability of the chain implies, by applying a law of large numbers, either convergence or divergence of the algorithm. Divergence is the desired behavior. In the constant step-size case, we show stability of the Markov chain and prove the divergence of the algorithm. In the cumulative step-size adaptation case, we prove stability of the Markov chain in the simplified case where the cumulation parameter equals 1, and discuss steps to obtain similar results for the full (default) algorithm where the cumulation parameter is smaller than 1. The stability of the Markov chain allows us to deduce geometric divergence or convergence, depending on the dimension, constraint angle, population size, and damping parameter, at a rate that we estimate. Our results complement previous studies where stability was assumed.
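The constant step-size case is easy to simulate. A sketch of a (1,λ)-ES on f(x) = x[0] with the linear constraint x[1] <= 1 handled by resampling (constants are illustrative; divergence of f, i.e. x[0] running off to minus infinity, is the desired behaviour):

```python
import random

random.seed(8)

# (1,lambda)-ES, constant step size, on f(x) = x[0] subject to x[1] <= 1.
# Infeasible offspring are simply resampled until lambda feasible ones exist.
lam, sigma = 10, 1.0
x = [0.0, 0.0]
for _ in range(200):
    offspring = []
    while len(offspring) < lam:
        y = [x[0] + sigma * random.gauss(0, 1), x[1] + sigma * random.gauss(0, 1)]
        if y[1] <= 1.0:                      # resampling: keep feasible only
            offspring.append(y)
    x = min(offspring, key=lambda y: y[0])   # select the best on f(x) = x[0]
print(x[0] < -50)   # f diverges: steady progress along the linear objective
```

Because feasibility here depends only on the second coordinate, selection on the first coordinate is the plain minimum of λ normals, and the first coordinate drifts down by roughly 1.5σ per generation.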
The covariance matrix adaptation evolution strategy (CMA-ES) is a powerful optimization method for continuous black-box optimization problems. Several noise-handling methods have been proposed to bring out the optimization performance of the CMA-ES on noisy objective functions. The adaptations of the population size and the learning rate are two major approaches that perform well under additive Gaussian noise. The reevaluation technique is another technique that evaluates each solution multiple times. In this paper, we discuss the difference between those methods from the perspective of stochastic relaxation, which considers the maximization of the expected utility function. We derive that the set of maximizers of the noise-independent utility, which is used in the reevaluation technique, certainly contains the optimal solution, while the noise-dependent utility, which is used in the population size and learning rate adaptations, does not satisfy this under multiplicative noise. Based on this discussion, we develop the reevaluation adaptation CMA-ES (RA-CMA-ES), which computes two update directions using half of the evaluations and adapts the number of reevaluations based on the estimated correlation of those two update directions. Numerical simulation shows that the RA-CMA-ES outperforms the comparative method under multiplicative noise, maintaining competitive performance under additive noise.
In response to the issue of slow attack speed and low success rate of football robots' kicking in dynamic environments during RoboCup competitions, an algorithm for bipedal robot kicking was devised using an enhanced Covariance Matrix Adaptation Evolution Strategy (CMA-ES) algorithm. To tackle the problem of CMA-ES getting trapped in local optima, a combined approach of Tent chaotic mapping and Lévy flight random numbers was introduced to widen the search range and increase the population size, thus enhancing the algorithm's ability for global exploration. The refined CMA-ES algorithm was employed to optimize kicking parameters, with parameter feasibility assessed using inverse kinematics. Moreover, a kicking cost function was utilized to determine optimal kicking point coordinates, and trajectory optimization was performed using Bézier curve interpolation. Finally, the correctness and effectiveness of the method were validated through simulation experiments using SimRobot and physical experiments with NAO robots.
We investigate evolution strategies with weighted recombination on general convex quadratic functions. We derive the asymptotic quality gain in the limit as the dimension goes to infinity, and derive the optimal recombination weights and the optimal step-size. This work extends previous works in which the asymptotic quality gain of evolution strategies with weighted recombination was derived on the infinite-dimensional sphere function. Moreover, for a finite-dimensional search space, we derive rigorous bounds for the quality gain on a general quadratic function. They reveal the dependency of the quality gain both on the eigenvalue distribution of the Hessian matrix and on the recombination weights. Taking the search space dimension to infinity, it turns out that the optimal recombination weights are independent of the Hessian matrix, i.e., the recombination weights optimal for the sphere function are optimal for convex quadratic functions.
No abstract available
No abstract available
No abstract available
No abstract available
No abstract available
We present a hybrid algorithm between an evolution strategy and a quasi Newton method. The design is based on the Hessian Estimation Evolution Strategy, which iteratively estimates the inverse square root of the Hessian matrix of the problem. This is akin to a quasi-Newton method and corresponding derivative-free trust-region algorithms like NEWUOA. The proposed method therefore replaces the global recombination step commonly found in non-elitist evolution strategies with a quasi-Newton step. Numerical results show superlinear convergence, resulting in improved performance in particular on smooth convex problems.
This paper studies the performance of evolution strategies with optimal weighted recombination on spherical problems in finite dimensions. We first discuss the different forms of functions that are used to derive the optimal recombination weights and step size, and then derive an inequality that establishes the relationship between these functions. We prove that using the expectation of random variables to derive the optimal recombination weights and step size can be disappointing in terms of the expected performance of evolution strategies. We show that using the realizations of random variables is a better choice. We generalize the results to any convex functions and establish an inequality for the normalized quality gain. We prove that the normalized quality gain of the evolution strategies has a better and more robust performance when the optimal recombination weights and the optimal step size are derived from the realizations of random variables rather than from their expectations.
This paper proposes a natural evolution strategy (NES) for mixed-integer black-box optimization (MI-BBO) that appears in real-world problems such as hyperparameter optimization of machine learning and materials design. This problem is difficult to optimize because plateaus where the values do not change appear when the integer variables are relaxed to the continuous ones. CMA-ES w. Margin that addresses the plateaus reportedly showed good performance on MI-BBO benchmark problems. However, it has been observed that the search performance of CMA-ES w. Margin deteriorates when continuous variables contribute more to the objective function value than integer ones. In order to address the problem of CMA-ES w. Margin, we propose Distance-weighted eXponential Natural Evolution Strategy taking account of Implicit Constraint and Integer (DX-NES-ICI). We compare the search performance of DX-NES-ICI with that of CMA-ES w. Margin through numerical experiments. As a result, DX-NES-ICI was up to 3.7 times better than CMA-ES w. Margin in terms of a rate of finding the optimal solutions on benchmark problems where continuous variables contribute more to the objective function value than integer ones. DX-NES-ICI also outperformed CMA-ES w. Margin on problems where CMA-ES w. Margin originally showed good performance.
No abstract available
No abstract available
No abstract available
No abstract available
Impacts of invariance in search: When CMA-ES and PSO face ill-conditioned and non-separable problems
No abstract available
No abstract available
No abstract available
We introduce an acceleration for covariance matrix adaptation evolution strategies (CMA-ES) by means of adaptive diagonal decoding (dd-CMA). This diagonal acceleration endows the default CMA-ES with the advantages of separable CMA-ES without inheriting its drawbacks. Technically, we introduce a diagonal matrix D that expresses coordinate-wise variances of the sampling distribution in DCD form. The diagonal matrix can learn a rescaling of the problem in the coordinates within a linear number of function evaluations. Diagonal decoding can also exploit separability of the problem, but, crucially, does not compromise the performance on nonseparable problems. The latter is accomplished by modulating the learning rate for the diagonal matrix based on the condition number of the underlying correlation matrix. dd-CMA-ES not only combines the advantages of default and separable CMA-ES, but may achieve overadditive speedup: it improves the performance, and even the scaling, of the better of default and separable CMA-ES on classes of nonseparable test functions that reflect, arguably, a landscape feature commonly observed in practice. The article makes two further secondary contributions: we introduce two different approaches to guarantee positive definiteness of the covariance matrix with active CMA, which is valuable in particular with large population size; we revise the default parameter setting in CMA-ES, proposing accelerated settings in particular for large dimension. All our contributions can be viewed as independent improvements of CMA-ES, yet they are also complementary and can be seamlessly combined. In numerical experiments with dd-CMA-ES up to dimension 5120, we observe remarkable improvements over the original covariance matrix adaptation on functions with coordinate-wise ill-conditioning. The improvement is observed also for large population sizes up to about dimension squared.
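The sampling step with diagonal decoding amounts to drawing from N(m, sigma^2 * D C D). A minimal sketch, assuming D is stored as a vector of coordinate-wise scales; the adaptation of D and the condition-number-based learning-rate modulation described in the abstract are omitted, so this is not the reference implementation.

```python
import numpy as np

def sample_dd(m, sigma, D, C, n, rng):
    # Sample n candidates from N(m, sigma^2 * D C D), the diagonally
    # decoded distribution: D rescales coordinates, C captures the
    # remaining (correlation) structure.
    A = D[:, None] * np.linalg.cholesky(C)   # diag(D) @ L with L L^T = C
    z = rng.standard_normal((n, len(m)))
    return m + sigma * z @ A.T               # rows ~ N(m, sigma^2 D C D)
```

With C = I this reduces to separable-CMA-ES-style sampling, which is exactly the regime where the diagonal matrix can learn a coordinate-wise rescaling in a linear number of evaluations.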
No abstract available
Ill-conditioned problems are widely acknowledged as a major challenge in single-objective optimization, yet they remain largely unexplored in evolutionary multiobjective optimization. In this paper, we introduce a decomposition-based multiobjective evolution strategy (MOES/D) for optimizing non-separable and ill-conditioned multiobjective problems. In contrast to most existing approaches, which integrate evolution strategies while potentially compromising their essential features, we develop novel, tailored strategies to coordinate evolution strategies and maximize their strengths. These strategies collectively contribute to the efficiency of MOES/D; they include an importance-mixing algorithm that enhances sample efficiency in an unbiased manner, a collaborative ascent method that optimizes multiple subproblems simultaneously, and a principled resource allocation based on expectation-maximization that prioritizes the evolution strategy models. To bridge the gap in the field, we propose a novel benchmark suite in which all instances are non-separable and either moderately or ill-conditioned. Extensive experiments on the suite demonstrate that MOES/D excels at solving moderately or ill-conditioned multiobjective problems, outperforming most state-of-the-art algorithms by a significant margin.
Physics-Informed Neural Networks (PINNs) have recently received increasing attention; however, optimizing the loss function of PINNs is notoriously difficult, as the landscape of the loss function is often highly non-convex and rugged. Local optimization methods based on gradient information can converge quickly but are prone to being trapped in local minima when training PINNs. Evolutionary algorithms (EAs) are well known for their global search ability, which can help escape local minima. It has been reported in the literature that EAs show some advantages over gradient-based methods in training PINNs. Inspired by the Memetic Algorithm, we combine global-search-based EAs and local-search-based batch gradient descent to make the best of both worlds. In addition, since the PINN loss function is composed of multiple terms, balancing these terms is also a challenging issue. Therefore, we also attempt to combine EAs with the multiple-gradient descent algorithm for multi-objective optimization. Our experiments provide strong evidence for the superiority of the above algorithms.
No abstract available
No abstract available
No abstract available
No abstract available
In this paper, we present a comparative benchmark of two implementations of CMA-ES, both with and without diagonal decoding. The benchmarked variants of CMA-ES with diagonal decoding adaptively split the update of the covariance matrix into an update with the original CMA-ES method and an update with the separable-CMA-ES method. Thus, the diagonal decoding should allow for improved performance on separable functions with minimal loss on nonseparable ones. To gain insight into how diagonal decoding impacts CMA-ES runs, an assessment of the performance gain or loss due to the use of diagonal decoding relative to the original CMA-ES, was performed on bbob problems using the COCO platform. We were also interested in variances that might emerge from the difference in the implementations. The data presented in this paper shows improved performance of the CMA-ES on separable functions when using diagonal decoding, without any apparent loss on nonseparable ones. In addition, a few performance variances were spotted in the weakly structured functions, which appeared uncorrelated with the use of diagonal decoding. However, they can be traced back to implementation differences, such as the stopping conditions that may result in different runs, as the data suggests.
No abstract available
Modern machine learning uses increasingly advanced optimization techniques to find optimal hyperparameters. Whenever the objective function is non-convex, non-continuous, and has potentially multiple local minima, standard gradient-descent optimization methods fail. A very different last-resort method is to assume that the optima, not necessarily unique, are distributed according to a distribution, and to iteratively adapt that distribution according to tested points. These strategies, originating in the early 1960s under the name Evolution Strategies (ES), culminated in CMA-ES (Covariance Matrix Adaptation ES). It relies on a multivariate normal distribution and is considered state of the art for general optimization problems. However, it is far from optimal for discrete variables. In this paper, we extend the method to multivariate correlated binomial distributions. For such a distribution, we show that it shares key features with the multivariate normal: uncorrelatedness is equivalent to independence, and correlation is efficiently modeled by interaction between different variables. We discuss this distribution in the framework of the exponential family. We prove that the model can estimate not only pairwise interactions between variables but is also capable of modeling higher-order interactions. This allows creating a version of CMA-ES that can accommodate discrete variables efficiently. We provide the corresponding algorithm and conclude.
Path planning for quadrotor unmanned aerial vehicles (UAVs) in complex three-dimensional (3D) environments is challenged by nonlinear dynamics, dense obstacles, and conflicting objectives. This paper proposes a Hybrid Non-dominated Multi-Objective Particle Swarm Optimization algorithm (HNMOPSO) to enhance path feasibility, convergence, and energy efficiency. A Rapidly-exploring Random Tree (RRT*) initializes collision-free seed paths, improving population quality. Lévy flight perturbations enhance global exploration and prevent premature convergence, while the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) refines elite solutions for smoother, more accurate trajectories. An energy-aware objective function models propulsion power under cruise, climb, and turning states, extending optimization toward real energy efficiency. Simulations in complex 3D terrain show that HNMOPSO surpasses Genetic Algorithm (GA), Differential Evolution (DE), and Particle Swarm Optimization (PSO) in convergence, smoothness, and robustness, demonstrating strong potential for practical UAV path planning.
Given the ubiquity of non-separable optimization problems in the real world, in this paper we analyze and extend the large-scale version of the well-known cooperative coevolution (CC), a divide-and-conquer black-box optimization framework, on non-separable functions. First, we reveal empirical reasons for when decomposition-based methods are or are not preferred in practice on some non-separable large-scale problems, which had not been clearly pointed out in many previous CC papers. Then, we formalize CC as a continuous-game model via simplification, without losing its essential property. Different from previous evolutionary game theory for CC, our new model provides a much simpler but useful viewpoint for analyzing its convergence, since only the pure Nash equilibrium concept is needed and more general fitness landscapes can be explicitly considered. Based on convergence analyses, we propose a hierarchical decomposition strategy for better generalization, as for any decomposition there is a risk of getting trapped in a suboptimal Nash equilibrium. Finally, we use powerful distributed computing to accelerate it under the recent multi-level learning framework, which combines the fine-tuning ability of decomposition with the invariance property of CMA-ES. Experiments on a set of high-dimensional test functions validate both its search performance and scalability (w.r.t. CPU cores) on a cluster computing platform with 400 CPU cores.
No abstract available
The Cooperative Coevolution (CC) framework is the state of the art for solving large-scale global optimization (LSGO) problems. A particular challenge in using CC lies in the decomposition of variables and resource allocation. In this work, the decomposition phase of the framework is performed in two stages to address both variable interaction and efficient resource allocation. The algorithm starts with differential analysis followed by differential grouping. The differential analysis allows efficient resource allocation, while the differential grouping detects variable interactions. The differential grouping acts on a small number of variables and does not consume as much computational budget as a single-stage grouping. While not all variable interactions are detected, separable variables are recognized, so specialized solvers for separable problems can be employed on these subproblems. In this work, the two-stage grouping CC (TSCC) is paired with a hybrid algorithm in which sep-CMA-ES is used to solve the separable subproblem and CMA-ES is used to solve the non-separable subproblems; the algorithm is referred to as TSCC-CMAES. A comparison between TSCC and CC with groups based on either differential analysis or differential grouping is carried out. In general, TSCC could outperform the two single-stage grouping methods. Additionally, the TSCC-CMAES shows a competitive advantage on a number of more complex problems against state-of-the-art algorithms, and it is shown that the effect of the population size and group size is crucial in achieving these results.
This paper deals with the identification of the flux for a system of conservation laws in the specific example of analytic chromatography. The fundamental equations of chromatographic processes are highly nonlinear. The state-of-the-art evolution strategy, CMA-ES (the covariance matrix adaptation evolution strategy), is used to identify the parameters of the so-called isotherm function. The approach was validated on different configurations of simulated data using one-, two- or three-component mixtures. CMA-ES is then applied to real data cases and its results are compared to those of a gradient-based strategy.
No abstract available
No abstract available
No abstract available
Rank-based zeroth-order (ZO) optimization, which relies only on the ordering of function evaluations, offers strong robustness to noise and monotone transformations, and underlies many successful algorithms such as CMA-ES, natural evolution strategies, and rank-based genetic algorithms. Despite its widespread use, the theoretical understanding of rank-based ZO methods remains limited: existing analyses provide only asymptotic insights and do not yield explicit convergence rates for algorithms selecting the top-$k$ directions. This work closes this gap by analyzing a simple rank-based ZO algorithm and establishing the first explicit and non-asymptotic query complexities. For a $d$-dimensional problem, if the function is $L$-smooth and $\mu$-strongly convex, the algorithm needs $\widetilde{\mathcal{O}}\!\left(\frac{dL}{\mu}\log\!\frac{dL}{\mu\delta}\log\!\frac{1}{\varepsilon}\right)$ queries to find an $\varepsilon$-suboptimal solution, and for smooth nonconvex objectives it reaches $\mathcal{O}\!\left(\frac{dL}{\varepsilon}\log\!\frac{1}{\varepsilon}\right)$. The notation $\mathcal{O}(\cdot)$ hides constant factors and $\widetilde{\mathcal{O}}(\cdot)$ hides an extra $\log\log\frac{1}{\varepsilon}$ term. These query complexities hold with probability at least $1-\delta$, where $0<\delta<1$. The analysis in this paper is novel and avoids classical drift and information-geometric techniques. Our analysis offers new insight into why rank-based heuristics lead to efficient ZO optimization.
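A top-$k$ rank-based ZO step of the kind analyzed can be sketched as follows; the helper name `rank_zo_step` and the parameter choices are illustrative assumptions, not the paper's algorithm specification. Only comparisons of function values are used, so the step is invariant to monotone transformations of $f$.

```python
import numpy as np

def rank_zo_step(f, x, sigma, lr, m, k, rng):
    # Draw m Gaussian directions, rank them by the function value at the
    # perturbed points, and move along the average of the k best
    # directions. Only the ordering of f-values is used.
    U = rng.standard_normal((m, len(x)))
    vals = np.array([f(x + sigma * u) for u in U])
    top = U[np.argsort(vals)[:k]]        # k directions with lowest values
    return x + lr * sigma * top.mean(axis=0)
```

On a smooth strongly convex function such as the sphere, iterating this step drives the iterate toward the minimizer until progress stalls at the sampling scale sigma, which matches the role the step-size plays in the stated complexities.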
No abstract available
No abstract available
No abstract available
No abstract available
No abstract available
No abstract available
No abstract available
No abstract available
No abstract available
No abstract available
Contextual policy search (CPS) is a class of multi-task reinforcement learning algorithms that is particularly useful for robotic applications. A recent state-of-the-art method is Contextual Covariance Matrix Adaptation Evolution Strategies (C-CMA-ES). It is based on the standard black-box optimization algorithm CMA-ES. There are two useful extensions of CMA-ES that we will transfer to C-CMA-ES and evaluate empirically: ACM-ES, which uses a comparison-based surrogate model, and aCMA-ES, which uses an active update of the covariance matrix. We will show that improvements with these methods can be impressive in terms of sample-efficiency, although this is not relevant any more for the robotic domain.
No abstract available
No abstract available
No abstract available
No abstract available
Covariance matrix adaptation evolution strategy (CMA-ES) is one of the most successful evolutionary algorithms. To improve the efficiency of CMA-ES and make sufficient use of the sampled points, we propose an active update scheme for the mutation matrix which approximates the factorization of the covariance matrix. The active update effectively reduces the variance along undesired search directions with negative weights. We design a set of symmetric weights for all the sampled points as well as a larger learning rate. We conduct extensive experiments and show that the proposed active update scheme effectively accelerates the convergence to the optimum.
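The signed-weight idea can be illustrated with a plain rank-mu style covariance update. This is a sketch of the principle only: the paper updates a factorized mutation matrix with a specially designed symmetric weight set and a larger learning rate, which is not reproduced here.

```python
import numpy as np

def active_cov_update(C, steps, weights, c=0.2):
    # Rank-mu update with signed weights: positive weights for the best
    # steps reinforce variance along their directions; negative weights
    # for the worst steps actively shrink variance along undesired
    # search directions.
    rank_mu = sum(w * np.outer(y, y) for w, y in zip(weights, steps))
    return (1.0 - c * sum(weights)) * C + c * rank_mu
```

Starting from the identity, rewarding a step along the first axis and penalizing one along the second yields a covariance stretched along the good direction and contracted along the bad one.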
No abstract available
No abstract available
In the challenging context of human-robot collaboration, the ability to capture critical human-grasp-related features in a short period is essential for the transfer of skills/experience and the effective imitation learning of human grasping strategies to robotic systems. The paper at hand introduces a novel system for pose optimization of an active (moving) sensor, reducing uncertainty in grasp perception and enabling a more accurate observation of the human grasp pose. The proposed framework, coined as "GRAsPAD" (Generalized fRAmework for optimal graSp key-Points Active Detection), leverages a keypoint detection model to identify grasp-related features in a tomato fruit. The camera pose is iteratively adjusted through the Covariance Matrix Adaptation with Margin - Evolution Strategy (CMAwM-ES) to minimize uncertainty in detected keypoints, enhancing the accuracy of grasp perception and ensuring a clearer, more stable grasp representation. To mitigate sensor noise and variability, a Kalman filter is applied to refine keypoint detections over time. The experimental results demonstrate the significant improvement achieved by the pose optimization of the observer, in terms of grasp perception quality, which is critical for robotic skill learning and effective transfer of human grasping strategies to robotic systems.
In this paper, we evaluate some variants of the CMAES-APOP using two additional methods. The first method is the combination of mirrored sampling with active covariance matrix adaptation. The second one is to keep some search points of the previous iteration for the evolution of the current iteration when the population size is large. Moreover, we propose a strategy to reduce the step-size when the population size attains its upper bound for several consecutive iterations. We will show that the techniques can enhance the performance on some multi-modal functions, especially on the weakly-structured multi-modal ones, and the overall performance in all dimensions.
The CMAES-APOP is a variation of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) which adjusts the population size in the CMA-ES to deal with multi-modal functions. In this paper, we investigate two methods to improve the performance of this algorithm. The first one is based on the mirrored sampling with active covariance matrix adaptation. The second one is to keep some search points of the previous iteration for the evolution of the current iteration when the population size is large. We will show that the techniques can enhance the performance of the CMAES-APOP on some benchmark multi-modal functions, especially when they are used simultaneously.
Three-Dimensional Integrated Circuits (3D ICs) are a revolutionary advancement in semiconductor technology, allowing vertical stacking of multiple active device layers to overcome the scaling challenges faced by conventional Two-Dimensional Integrated Circuits (2D ICs). While 3D ICs offer substantial advantages in integration density, interconnect length, and performance, they also present daunting challenges, primarily the strong coupling between thermal and timing constraints. These challenges demand strong multi-objective optimization techniques that can manage the highly non-convex and multi-modal design space of 3D ICs. In this paper, we introduce a novel optimization framework based on the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) integrated with a physics-based electro-thermal-timing simulation framework. By comparing CMA-ES with Bayesian optimization, we show that CMA-ES yields superior convergence, robustness, and solution quality, especially under noisy and high-dimensional objective conditions. Our results demonstrate significant improvements in key figures of merit such as peak temperature, thermal gradients, clock skew, and overall simulation efficiency.
The widespread deployment of inductive-loop traffic detectors allows us to obtain a massive amount of traffic data in real time. A key step of utilizing the data is to use the data to fit a traffic model. Simultaneous perturbation stochastic approximation (SPSA) and its variants are popular techniques for the calibration of dynamic traffic assignment (DTA) models by searching for Origin-Destination (OD) matrices that fit the data. However, the performance of SPSA cannot scale with modern multi-core CPU architectures due to its sequential nature. This paper proposes the use of active covariance matrix adaptation evolution strategy (Active-CMA-ES) for optimizing OD matrices in microscopic traffic simulation. CMA-ES is a blackbox optimization algorithm that was found highly effective in many domains. According to our case study in Ulsan, South Korea, Active-CMA-ES outperforms Restart-WSPSA, one of the best SPSA algorithms, in terms of the calibration error and the running time. Moreover, the running time of Active-CMA-ES decreases as the number of parallel simulation processes increases.
This paper presents computationally optimized 2-element and 4-element Multiple-Input, Multiple-Output (MIMO) antennas for WLAN/Wi-Fi, 5G, and UWB applications. The antenna configuration is constructed with the orthogonal placement of sawtooth-shaped circular monopole radiating elements. The Particle Swarm Optimization (PSO) and Covariance Matrix Adaptation Evolution Strategy (CMA-ES) optimization techniques are employed to achieve the best performance and size of the proposed antenna. Of these optimization techniques, CMA-ES is identified as the better approach. The final optimized geometry of the 2-element and 4-element MIMO antennas is 49.40 mm × 24.22 mm and 49.44 mm × 45 mm, respectively. Both optimized antennas are fabricated and experimentally verified. The fractional bandwidth of the antenna is more than 110.3%, and more than −20 dB isolation is attained without employing any decoupling method. The Envelope Correlation Coefficient (ECC), Directivity Gain (DG), Total Active Reflection Coefficient (TARC), and Channel Capacity Limit (CCL) are 0.0001, 9.99, < −15 dB, and 0.1 bits/s/Hz, respectively. The proposed antenna is a good candidate for numerous current wireless applications due to its size and performance.
No abstract available
No abstract available
In this study, a new multi-swarm hybrid algorithm is designed for dynamic optimization problems. It is based on two well-known stochastic approaches, Particle Swarm Optimization (PSO) and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), applied with additional modifications that enable them to find better solutions before the environment changes. Additionally, an adaptation strategy is proposed to regulate the number of active swarms, so that new swarms are brought into existence or redundant swarms removed to maintain multi-swarm diversity. The Generalized Moving Peak Benchmark is used to evaluate the performance of the proposed algorithm and compare it to alternative approaches. The obtained results demonstrate the usefulness of the new algorithm, as it outperformed the alternative approaches.
No abstract available
Autonomous control of soft robots using safe reinforcement learning and covariance matrix adaptation
No abstract available
Quality-Diversity (QD) optimization is an emerging field that focuses on finding a set of behaviorally diverse and high-quality solutions. While the quality is typically defined w.r.t. a single objective function, recent work on Multi-Objective Quality-Diversity (MOQD) extends QD optimization to simultaneously optimize multiple objective functions. This opens up multi-objective applications for QD, such as generating a diverse set of game maps that maximize difficulty, realism, or other properties. Existing MOQD algorithms use non-adaptive methods such as mutation and crossover to search for non-dominated solutions and construct an archive of Pareto Sets (PS). However, recent work in QD has demonstrated enhanced performance through the use of covariance-based evolution strategies for adaptive solution search. We propose bringing this insight into the MOQD problem, and introduce MO-CMA-MAE, a new MOQD algorithm that leverages Covariance Matrix Adaptation Evolution Strategies (CMA-ES) to optimize the hypervolume associated with every PS within the archive. We test MO-CMA-MAE on three MOQD domains, and for generating maps of a co-operative video game, showing significant improvements in performance.
Bayesian Optimization (BO) is an effective method for finding the global optimum of expensive black-box functions. However, it is well known that applying BO to high-dimensional optimization problems is challenging. To address this issue, a promising solution is to use a local search strategy that partitions the search domain into local regions with high likelihood of containing the global optimum, and then use BO to optimize the objective function within these regions. In this paper, we propose a novel technique for defining the local regions using the Covariance Matrix Adaptation (CMA) strategy. Specifically, we use CMA to learn a search distribution that can estimate the probabilities of data points being the global optimum of the objective function. Based on this search distribution, we then define the local regions consisting of data points with high probabilities of being the global optimum. Our approach serves as a meta-algorithm as it can incorporate existing black-box BO optimizers, such as BO, TuRBO, and BAxUS, to find the global optimum of the objective function within our derived local regions. We evaluate our proposed method on various benchmark synthetic and real-world problems. The results demonstrate that our method outperforms existing state-of-the-art techniques.
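One plausible reading of a CMA-derived local region is a Mahalanobis ball under the current search distribution N(mean, sigma^2 C): points the distribution rates as likely locations of the optimum. The function name and the radius rule below are assumptions for illustration, not the paper's definitions.

```python
import numpy as np

def in_cma_region(x, mean, sigma, C, radius=2.0):
    # A point belongs to the local region when its Mahalanobis distance
    # from the distribution mean (under covariance sigma^2 C) is within
    # the given radius. A BO optimizer would then be restricted to this
    # region when searching for the global optimum.
    d = np.asarray(x) - mean
    maha2 = float(d @ np.linalg.solve(C, d)) / sigma**2
    return bool(np.sqrt(maha2) <= radius)
```

As CMA adapts mean, sigma, and C across iterations, the region shrinks and reorients, concentrating the inner BO search on increasingly promising parts of the domain.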
How to explore the continuous latent space of Generative Adversarial Networks to extract interesting levels has been a challenge in video game level generation. To take on this challenge, the Covariance Matrix Adaptation MAP-Elites (CMA-ME) algorithm focuses on finding a diverse collection of quality solutions on complex continuous domains. By combining self-adaptation techniques from Covariance Matrix Adaptation Evolution Strategy (CMA-ES) with archiving and mapping techniques, CMA-ME effectively maintains the quality diversity of solutions. In order to obtain various playable levels that are aesthetically similar to human-made ones, this paper defines a new set of behavioral characteristics and fitness functions for the CMA-ME algorithm. It also employs the CMA-ME algorithm to explore the latent space, while simultaneously utilizing the Mixed Integer Linear Programming (MILP) method to ensure the playability of the generated levels. We apply our new method to generate diverse playable levels for several 2D tile-based games: Zelda-v0, Zelda-v1, and Pacman. The experimental results show the high quality of the generated levels compared with the baselines and thus justify the effectiveness of the proposed methods.
Single-objective optimization algorithms search for the single highest quality solution with respect to an objective. Quality diversity (QD) optimization algorithms, such as Covariance Matrix Adaptation MAP-Elites (CMA-ME), search for a collection of solutions that are both high quality with respect to an objective and diverse with respect to specified measure functions. However, CMA-ME suffers from three major limitations highlighted by the QD community: prematurely abandoning the objective in favor of exploration, struggling to explore flat objectives, and having poor performance for low-resolution archives. We propose a new QD algorithm, CMA MAP-Annealing (CMA-MAE), and its differentiable QD variant, CMA-MAE via a Gradient Arborescence (CMA-MAEGA), that address all three limitations. We provide theoretical justifications for the new algorithm with respect to each limitation. Our theory informs our experiments, which support the theory and show that CMA-MAE achieves state-of-the-art performance and robustness on standard QD benchmark and reinforcement learning domains.
No abstract available
No abstract available
This report synthesizes a comprehensive research map of CMA-ES (the Covariance Matrix Adaptation Evolution Strategy). The surveyed research not only explores the underlying mathematical theory, from information geometry to drift analysis, but also steadily improves the core algorithm through iterative refinement of step-size, population, and active-update mechanisms. To meet modern computational challenges, high-dimensional large-scale variants and parallelization techniques have significantly raised efficiency on complex problems. Moreover, through surrogate models, hybrid metaheuristic frameworks, and extensions to multi-objective, multi-modal, and constrained settings, CMA-ES has evolved into an exceptionally flexible optimization tool. Finally, its broad range of applications, from deep-learning hyperparameter tuning to precision engineering design, confirms its status as an industry standard in black-box optimization.