Performance Visualizations for the bbob-biobj test suite
Below, we provide postprocessed data showing the performance of all 30+ officially supported algorithm data sets for the bbob-biobj test suite.
Due to the large number of algorithms (and the limited space in the figures), we currently group algorithm data sets by year of publication.
Performance Comparisons per Year
2016: DEMO, HMO-CMA-ES, MAT-DIRECT, MAT-SMS, MO-DIRECT-HV-Rank, MO-DIRECT-ND, MO-DIRECT-Rank, NSGA-II-MATLAB, RANDOMSEARCH-100, RANDOMSEARCH-4, RANDOMSEARCH-5, RM-MEDA, RS-100.tgz, RS-4.tgz, SMS-EMOA-DE, SMS-EMOA-PM, UP-MO-CMA-ES
2019: COMO-100, COMO-10, COMO-1e3, COMO-316, COMO-32, COMO-3, GDE3-platypus, IBEA-platypus, MO-CMA-ES-10-autoref, MO-CMA-ES-100-autoref, MO-CMA-ES-32-autoref, MOEAD-platypus, N-III-11-platypus, N-III-111-platypus, NSGA-II-platypus, SPEA2-platypus
2021: DMS, MultiGLODS
2022: K-RVEA, MOTPE, TPB
2023: Borg-adaptive, Borg-eps-1e-4
Example code to produce the figures
The Python code to locally generate the second entry (2019) above reads as follows; the other entries work analogously.
import cocopp  # see https://pypi.org/project/cocopp
cocopp.genericsettings.background = {None: cocopp.archives.bbob_biobj.get_all('2016')}
cocopp.main('bbob-biobj/2019/*')  # will take several minutes to process
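To regenerate every yearly entry rather than only 2019, the call above can be wrapped in a loop. The helper `year_pattern` below is our own naming for illustration, not part of cocopp; the actual processing call is left commented out since it requires cocopp to be installed and takes several minutes per year.

```python
# Sketch: build the archive wildcard for each year listed above.
# year_pattern is a hypothetical helper, not a cocopp function.
def year_pattern(year):
    """Return the bbob-biobj archive wildcard selecting one year's data sets."""
    return 'bbob-biobj/{}/*'.format(year)

for year in ('2016', '2019', '2021', '2022', '2023'):
    pattern = year_pattern(year)
    print(pattern)  # e.g. bbob-biobj/2019/*
    # import cocopp
    # cocopp.main(pattern)  # each year takes several minutes to process
```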