Authors
Juan J Durillo, Antonio J Nebro, Carlos A Coello Coello, José García-Nieto, Francisco Luna, Enrique Alba
Publication date
2010/2/17
Journal
IEEE Transactions on Evolutionary Computation
Volume
14
Issue
4
Pages
618-635
Publisher
IEEE
Description
To evaluate the search capabilities of a multiobjective algorithm, the usual approach is to choose a benchmark of known problems, to perform a fixed number of function evaluations, and to apply a set of quality indicators. However, while real problems can have hundreds or even thousands of decision variables, current benchmarks are normally adopted with relatively few decision variables (typically from 10 to 30). Furthermore, performing a constant number of evaluations provides no information about the effort an algorithm requires to reach a satisfactory set of solutions; this information would also be of interest in real scenarios, where evaluating the functions that define the problem can be computationally expensive. In this paper, we study the effect of parameter scalability on a number of state-of-the-art multiobjective metaheuristics. We adopt a benchmark of parameter-wise scalable problems (the Zitzler …
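
Below is a minimal sketch, in Python, of the evaluation methodology the abstract describes: a parameter-wise scalable benchmark problem (ZDT1, written out directly here rather than taken from the paper's experimental framework), a 2-D hypervolume quality indicator, and a stopping rule that counts the function evaluations needed to reach a target indicator value instead of running a fixed budget. The random sampler is only a placeholder "algorithm", and the reference point (1, 10) and the 50% hypervolume target are illustrative assumptions, not values or methods from the paper.

import math
import random


def zdt1(x):
    """ZDT1 objectives (both minimised) for a decision vector of length n >= 2."""
    n = len(x)
    f1 = x[0]
    g = 1.0 + 9.0 * sum(x[1:]) / (n - 1)
    f2 = g * (1.0 - math.sqrt(f1 / g))
    return (f1, f2)


def hypervolume_2d(points, ref):
    """Hypervolume dominated by a 2-D minimisation point set w.r.t. ref."""
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    front, best_f2 = [], float("inf")
    for f1, f2 in pts:  # extract the non-dominated front (f1 ascending, f2 descending)
        if f2 < best_f2:
            front.append((f1, f2))
            best_f2 = f2
    hv = 0.0
    for i, (f1, f2) in enumerate(front):
        next_f1 = front[i + 1][0] if i + 1 < len(front) else ref[0]
        hv += (next_f1 - f1) * (ref[1] - f2)  # one rectangular slab per front point
    return hv


def add_nondominated(archive, p):
    """Insert p into the archive, keeping only mutually non-dominated points."""
    if any(a[0] <= p[0] and a[1] <= p[1] for a in archive):
        return archive  # p is dominated or duplicated
    return [a for a in archive if not (p[0] <= a[0] and p[1] <= a[1])] + [p]


def evaluations_to_target(n_vars, target_fraction=0.5, max_evals=50_000,
                          ref=(1.0, 10.0), seed=1):
    """Function evaluations a placeholder random search needs on ZDT1 with
    n_vars decision variables to reach target_fraction of the hypervolume of
    the true Pareto front; returns None if the budget is exhausted first.
    (Target fraction, reference point, and the search itself are illustrative.)"""
    rng = random.Random(seed)
    # True ZDT1 front: f2 = 1 - sqrt(f1); sample it densely for the reference value.
    true_front = [(i / 1000.0, 1.0 - math.sqrt(i / 1000.0)) for i in range(1001)]
    target_hv = target_fraction * hypervolume_2d(true_front, ref)
    archive = []
    for evals in range(1, max_evals + 1):
        x = [rng.random() for _ in range(n_vars)]
        archive = add_nondominated(archive, zdt1(x))
        if hypervolume_2d(archive, ref) >= target_hv:
            return evals
    return None


if __name__ == "__main__":
    # The same problem instantiated with more and more decision variables,
    # mirroring the parameter-wise scaling studied in the paper.
    for n in (10, 30, 100, 500):
        print(n, "variables ->", evaluations_to_target(n))

In this sketch the reported number is the effort (evaluations) to reach a fixed quality level, which is the kind of information the abstract argues a constant evaluation budget cannot provide; a run that returns None simply did not reach the target within the budget.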
Total citations
[Citations-per-year chart, 2010–2024]
Scholar articles
JJ Durillo, AJ Nebro, CAC Coello, J García-Nieto… - IEEE Transactions on Evolutionary Computation, 2010